Science.gov

Sample records for accuracy relative error

  1. Accuracy of SMOS Level 3 SSS products related to observational errors

    NASA Astrophysics Data System (ADS)

    Jordà, G.; Gomis, D.

    2009-04-01

    The Soil Moisture and Ocean Salinity (SMOS) mission is the European Space Agency's (ESA) second Earth Explorer Opportunity mission, with an expected launch during 2009. The satellite will be equipped with a new type of sensor: MIRAS (Microwave Imaging Radiometer using Aperture Synthesis, Kerr 1998). This new instrument acquires brightness temperature (Tb), which can be transformed into soil moisture data over land and sea surface salinity (SSS) data over the ocean. SMOS Tb images will have a mean pixel size of about 40 km and a revisiting time of about 3 days, which ensure a large SSS dataset. On the other hand, due to observational limitations on the measured Tb, the observational RMS error for a single SSS measurement is expected to be large. Different studies predict an SSS observational RMS error of 1 to 4 psu (Philipps et al., 2007; Sabia et al., 2008). However, those are theoretical estimates and it is possible that real errors could be even larger. The goal of this contribution is to quantify the impact of the observational error on the accuracy of L3 gridded products and to establish how a scale selection can increase that accuracy. The L3 mapping algorithm is based on the Optimal Interpolation method (OI, Gandin, 1963). Following Pedder (2003), this formalism has been extended to the convolution of OI with a normal error filter, so that when selecting the smallest scale to be resolved by the analysis the formalism yields an estimate of the resulting analysis error. We therefore have an appropriate theoretical framework to explore the reduction of the analysis error induced by the scale selection in the particular case of SMOS data. Some examples of the expected accuracy of SMOS L3 products as a function of observational error and the selected spatial scales will be presented.
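
The record above rests on the Optimal Interpolation (OI) formalism, in which the analysis error variance at a grid point falls as observational error decreases. A minimal 1-D sketch, assuming a Gaussian background-error correlation model; the length scale, variances, and observation positions are illustrative assumptions, not values from the study:

```python
import numpy as np

def oi_analysis_error(grid_x, obs_x, obs_var, bg_var=1.0, length=100.0):
    """Analysis error variance at grid_x given observations at obs_x (OI)."""
    def corr(a, b):  # assumed Gaussian background-error correlation model
        return np.exp(-0.5 * ((a - b) / length) ** 2)
    B = bg_var * corr(obs_x[:, None], obs_x[None, :])  # background cov at obs points
    R = obs_var * np.eye(len(obs_x))                   # observational error cov
    b = bg_var * corr(grid_x, obs_x)                   # grid-to-obs covariances
    w = np.linalg.solve(B + R, b)                      # OI weights
    return bg_var - b @ w                              # posterior (analysis) variance

obs = np.array([-50.0, 0.0, 40.0])  # made-up observation positions (km)
print(oi_analysis_error(0.0, obs, obs_var=1.0))   # noisy observations
print(oi_analysis_error(0.0, obs, obs_var=0.01))  # accurate observations
```

Running it with noisy versus accurate observations shows the analysis error variance shrinking, which is the dependence the contribution quantifies for SMOS L3 products.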

  2. Error propagation in relative real-time reverse transcription polymerase chain reaction quantification models: the balance between accuracy and precision.

    PubMed

    Nordgård, Oddmund; Kvaløy, Jan Terje; Farmen, Ragne Kristin; Heikkilä, Reino

    2006-09-15

    Real-time reverse transcription polymerase chain reaction (RT-PCR) has gained wide popularity as a sensitive and reliable technique for mRNA quantification. The development of new mathematical models for such quantifications has generally paid little attention to the aspect of error propagation. In this study we evaluate, both theoretically and experimentally, several recent models for relative real-time RT-PCR quantification of mRNA with respect to random error accumulation. We present error propagation expressions for the most common quantification models and discuss the influence of the various components on the total random error. Normalization against a calibrator sample to improve comparability between different runs is shown to increase the overall random error in our system. On the other hand, normalization against multiple reference genes, introduced to improve accuracy, does not increase error propagation compared to normalization against a single reference gene. Finally, we present evidence that sample-specific amplification efficiencies determined from individual amplification curves primarily increase the random error of real-time RT-PCR quantifications and should be avoided. Our data emphasize that the gain of accuracy associated with new quantification models should be validated against the corresponding loss of precision. PMID:16899212
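
As a hedged illustration of the kind of error propagation the abstract discusses, the sketch below applies first-order (delta-method) propagation to a Pfaffl-style relative quantification ratio; the model choice, function names, and numbers are assumptions for illustration, not taken from the paper. Efficiencies E are treated as exact and only the delta-Ct values carry random error:

```python
import math

def relative_expression(e_t, dct_t, e_r, dct_r):
    """Pfaffl-style ratio R = E_target^dCt_target / E_ref^dCt_ref."""
    return e_t ** dct_t / e_r ** dct_r

def relative_sd(e_t, sd_dct_t, e_r, sd_dct_r):
    """Coefficient of variation of R via first-order propagation on ln R."""
    var_ln_r = (math.log(e_t) * sd_dct_t) ** 2 + (math.log(e_r) * sd_dct_r) ** 2
    return math.sqrt(var_ln_r)  # ~ sigma_R / R for small errors

r = relative_expression(2.0, 3.0, 2.0, 1.0)  # 2^3 / 2^1 = 4.0
cv = relative_sd(2.0, 0.2, 2.0, 0.2)         # both Ct SDs contribute in quadrature
print(r, cv)
```

The quadrature sum makes the paper's point concrete: every extra normalization term adds its own variance contribution to the total random error.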

  3. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the problem that the accuracy of a whole data set may be low while that of a useful part may be high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
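
A toy example of the precision/recall view of relative accuracy described above; the query result and reference sets are invented for illustration, not the paper's data:

```python
def precision_recall(result, reference):
    """Precision and recall of a query result against a trusted reference set."""
    result, reference = set(result), set(reference)
    tp = len(result & reference)  # rows that are both returned and correct
    precision = tp / len(result) if result else 1.0
    recall = tp / len(reference) if reference else 1.0
    return precision, recall

p, r = precision_recall(result={1, 2, 3, 4}, reference={2, 3, 4, 5, 6})
print(p, r)  # 0.75 0.6
```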

  5. Ethanol, errors, and the speed-accuracy trade-off.

    PubMed

    Tiplady, B; Drummond, G B; Cameron, E; Gray, E; Hendry, J; Sinclair, W; Wright, P

    2001-01-01

    Ethanol has been shown to have a relatively greater effect on error rates in speeded tasks than temazepam, and this may be due to a differential effect on the speed-accuracy trade-off (SATO). This study used different instruction sets to influence the SATO. Forty-nine healthy volunteers (24 males, aged 18-41 years) were allocated at random to one of three instruction conditions--emphasising accuracy, neutral, and emphasising speed. After familiarisation, they took part in two sessions spaced at least 4 days apart in which they received either ethanol (0.8 g/kg, max 60 g males, 50 g females) or placebo in randomised order. Tests were administered starting at 30 and 75 min postdrug. Instructions significantly affected performance. In two maze tasks, one on paper, the other on a pen computer, the pattern of instruction effects was as expected. A significant increase in errors with ethanol was seen for both maze tasks, and there was a tendency to speed up with ethanol (significant only for the pen computer task). Responses to fixed stimulus sequences on the Four-Choice Reaction Test also showed a tendency to speed up and an increase in errors with ethanol, while all other tests showed both slowing and increases in errors with ethanol compared to placebo. Error scores are consistently increased by ethanol in all test situations, while the effects of ethanol on speed are variable across tests. PMID:11509226

  6. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can extend up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
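
The Yule-Walker step described above can be sketched as follows; the AR(1) synthetic trace, noise level, and random seed are illustrative assumptions, not the paper's vehicle traces:

```python
import numpy as np

def yule_walker(x, p):
    """Fit AR(p) (p-order Gauss-Markov) coefficients via the Yule-Walker equations."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])  # autocovariances
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])  # coefficients a_1..a_p

def predict_next(x, coeffs):
    """Predict the next sample from the last p samples."""
    p = len(coeffs)
    return float(coeffs @ np.asarray(x[-1:-p - 1:-1], float))

# Strongly correlated synthetic trace: AR(1) with a = 0.95 (made-up numbers)
rng = np.random.default_rng(0)
x = [0.0]
for _ in range(999):
    x.append(0.95 * x[-1] + rng.normal(scale=0.1))
a = yule_walker(x, p=1)
print(a)  # estimated coefficient, close to 0.95
```

The recovered first-order coefficient near 0.95 mirrors the paper's finding that a first-order Gauss-Markov model already captures most of the usable correlation.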

  7. ASRM accuracy improvement with error isolation

    NASA Astrophysics Data System (ADS)

    Watson, T. J.; Jordan, F. W.

    1993-11-01

    The Aerojet Aerotherm and Ballistics Group uses a technique on the Advanced Solid Rocket Motor (ASRM) program called Error Isolation to verify data measurements. This technique requires two basic parts: 1) a reference data set and 2) a set of redundant equations. It is primarily used in verifying ballistics data used to obtain accurate solid propellant burn rates. Hence, the reference data set may be a block of sub-scale test motors cast from a single propellant batch or cast concurrently with an ASRM segment. The set of redundant equations comprises those normally used to predict or analyze solid propellant rocket motor ballistics performance. Although the concept is universal and can be used to evaluate any set of data subject to prediction by a set of redundant mathematical expressions, it is used in this paper only in the evaluation of data collected for sub-scale test motors. The mathematics consists of a set of equations used to predict interior ballistics for those motors. The sub-scale test motor contains a five inch diameter center perforated (5 inch CP) grain that burns on the bore and both ends but not on the outside surface. This motor configuration is variously called the 5C3-9 or 5 inch CP.

  8. Alterations in Error-Related Brain Activity and Post-Error Behavior over Time

    ERIC Educational Resources Information Center

    Themanson, Jason R.; Rosen, Peter J.; Pontifex, Matthew B.; Hillman, Charles H.; McAuley, Edward

    2012-01-01

    This study examines the relation between the error-related negativity (ERN) and post-error behavior over time in healthy young adults (N = 61). Event-related brain potentials were collected during two sessions of an identical flanker task. Results indicated changes in ERN and post-error accuracy were related across task sessions, with more…

  9. Phase error compensation methods for high-accuracy profile measurement

    NASA Astrophysics Data System (ADS)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Zhang, Zonghua; Jiang, Hao; Yin, Yongkai; Huang, Shujun

    2016-04-01

    In phase-shifting algorithm-based fringe projection profilometry, the nonlinear intensity response, called the gamma effect, of the projector-camera setup is a major source of error in phase retrieval. This paper proposes two novel, accurate approaches to realize both active and passive phase error compensation based on a universal phase error model which is suitable for an arbitrary phase-shifting step. The experimental results on phase error compensation and profile measurement of standard components verified the validity and accuracy of the two proposed approaches, which remain robust under changing measurement conditions.
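
For context, a minimal sketch of the N-step phase-shifting retrieval that underlies this setting; the fringes simulated below are ideal (gamma-free), so none of the compensation the paper proposes is needed here, and all numbers are invented:

```python
import numpy as np

def retrieve_phase(frames):
    """Wrapped phase from N fringe intensities with shifts delta_n = 2*pi*n/N."""
    n = len(frames)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(i * np.sin(d) for i, d in zip(frames, deltas))
    den = sum(i * np.cos(d) for i, d in zip(frames, deltas))
    return np.arctan2(-num, den)  # wrapped phase in (-pi, pi]

# Ideal synthetic fringes: I_n = a + b*cos(phi + delta_n), no gamma distortion
phi_true = 1.0
frames = [5 + 2 * np.cos(phi_true + 2 * np.pi * k / 4) for k in range(4)]
print(retrieve_phase(frames))  # recovers phi_true for ideal fringes
```

A projector gamma would distort each intensity nonlinearly before this arctangent, producing exactly the periodic phase error the paper's model describes and compensates.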

  10. Does naming accuracy improve through self-monitoring of errors?

    PubMed

    Schwartz, Myrna F; Middleton, Erica L; Brecher, Adelyn; Gagliardi, Maureen; Garvey, Kelly

    2016-04-01

    This study examined spontaneous self-monitoring of picture naming in people with aphasia (PWA). Of primary interest was whether spontaneous detection or repair of an error constitutes an error signal or other feedback that tunes the production system to the desired outcome. In other words, do acts of monitoring cause adaptive change in the language system? A second possibility, not incompatible with the first, is that monitoring is indicative of an item's representational strength, and strength is a causal factor in language change. Twelve PWA performed a 615-item naming test twice, in separate sessions, without extrinsic feedback. At each timepoint, we scored the first complete response for accuracy and error type and the remainder of the trial for verbalizations consistent with detection (e.g., "no, not that") and successful repair (i.e., correction). Data analysis centered on: (a) how often an item that was misnamed at one timepoint changed to correct at the other timepoint, as a function of monitoring; and (b) how monitoring impacted change scores in the Forward (Time 1 to Time 2) compared to Backward (Time 2 to Time 1) direction. The Strength hypothesis predicts significant effects of monitoring in both directions. The Learning hypothesis predicts greater effects in the Forward direction. These predictions were evaluated for three types of errors--Semantic errors, Phonological errors, and Fragments--using mixed-effects regression modeling with crossed random effects. Support for the Strength hypothesis was found for all three error types. Support for the Learning hypothesis was found for Semantic errors. All effects were due to error repair, not error detection. We discuss the theoretical and clinical implications of these novel findings. PMID:26863091

  11. Entropic error-disturbance relations

    NASA Astrophysics Data System (ADS)

    Coles, Patrick; Furrer, Fabian

    2014-03-01

    We derive an entropic error-disturbance relation for a sequential measurement scenario as originally considered by Heisenberg, and we discuss how our relation could be tested using existing experimental setups. Our relation is valid for discrete observables, such as spin, as well as continuous observables, such as position and momentum. The novel aspect of our relation compared to earlier versions is its clear operational interpretation and the quantification of error and disturbance using entropic quantities. This directly relates the measurement uncertainty, a fundamental property of quantum mechanics, to information theoretical limitations and offers potential applications in, for instance, quantum cryptography. PC is funded by National Research Foundation Singapore and Ministry of Education Tier 3 Grant ``Random numbers from quantum processes'' (MOE2012-T3-1-009). FF is funded by Japan Society for the Promotion of Science, KAKENHI grant No. 24-02793.

  12. Theoretical Accuracy for ESTL Bit Error Rate Tests

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin

    1998-01-01

    "Bit error rate" [BER] for the purposes of this paper is the fraction of binary bits which are inverted by passage through a communication system. BER can be measured for a block of sample bits by comparing a received block with the transmitted block and counting the erroneous bits. Bit Error Rate [BER] tests are the most common type of test used by the ESTL for evaluating system-level performance. The resolution of the test is obvious: the measurement cannot be resolved more finely than 1/N, where N is the number of bits tested. The tolerance is not. This paper examines the measurement accuracy of the bit error rate test. It is intended that this information will be useful in analyzing data taken in the ESTL. This paper is divided into four sections and follows a logically ordered presentation, with results developed before they are evaluated. However, first-time readers will derive the greatest benefit from this paper by skipping the lengthy section devoted to analysis, and treating it as reference material. The analysis performed in this paper is based on a Probability Density Function [PDF] which is developed with greater detail in a past paper, Theoretical Accuracy for ESTL Probability of Acquisition Tests, EV4-98-609.
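
A back-of-the-envelope version of the question the paper analyzes: the tolerance of a measured BER. The normal-approximation (Wald) interval below is a simplification of the PDF-based analysis the paper performs, and the counts are made up:

```python
import math

def ber_confidence(k, n, z=1.96):
    """BER estimate k/n with an approximate 95% (z=1.96) Wald interval."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)  # binomial standard error of the estimate
    return p, max(0.0, p - z * se), p + z * se

p, lo, hi = ber_confidence(k=20, n=1_000_000)
print(p, lo, hi)
```

Even with a million bits, 20 observed errors pin the BER down only to within roughly a factor of two, which is why the paper's tolerance analysis matters when interpreting test results.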

  13. The effects of noise masking and required accuracy on speech errors, disfluencies, and self-repairs.

    PubMed

    Postma, A; Kolk, H

    1992-06-01

    The covert repair hypothesis views disfluencies as by-products of covert self-repairs applied to internal speech errors. To test this hypothesis we examined effects of noise masking and accuracy emphasis on speech error, disfluency, and self-repair rates. Noise reduced the numbers of disfluencies and self-repairs but did not affect speech error rates significantly. With accuracy emphasis, speech error rates decreased considerably, but disfluency and self-repair rates did not. With respect to these findings, it is argued that subjects monitor errors with less scrutiny under noise and when accuracy of speaking is unimportant. Consequently, covert and overt repair tendencies drop, a fact that is reflected by changes in disfluency and self-repair rates relative to speech error rates. Self-repair occurrence may be additionally reduced under noise because the information available for error detection--that is, the auditory signal--has also decreased. A qualitative analysis of self-repair patterns revealed that phonemic errors were usually repaired immediately after their intrusion. PMID:1608244

  14. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  15. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  16. Quantum rms error and Heisenberg's error-disturbance relation

    NASA Astrophysics Data System (ADS)

    Busch, Paul

    2014-09-01

    Reports on experiments recently performed in Vienna [Erhard et al., Nature Phys. 8, 185 (2012)] and Toronto [Rozema et al., Phys. Rev. Lett. 109, 100404 (2012)] include claims of a violation of Heisenberg's error-disturbance relation. In contrast, a Heisenberg-type tradeoff relation for joint measurements of position and momentum has been formulated and proven in [Phys. Rev. Lett. 111, 160405 (2013)]. Here I show how the apparent conflict is resolved by a careful consideration of the quantum generalization of the notion of root-mean-square error. The claim of a violation of Heisenberg's principle is untenable as it is based on a historically wrong attribution of an incorrect relation to Heisenberg, which is in fact trivially violated. We review a new general trade-off relation for the necessary errors in approximate joint measurements of incompatible qubit observables that is in the spirit of Heisenberg's intuitions. The experiments mentioned may directly be used to test this new error inequality.

  17. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build intuitively and systematically an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  19. Evaluating point cloud accuracy of static three-dimensional laser scanning based on point cloud error ellipsoid model

    NASA Astrophysics Data System (ADS)

    Chen, Xijiang; Hua, Xianghong; Zhang, Guang; Wu, Hao; Xuan, Wei; Li, Moxiao

    2015-01-01

    Evaluation of static three-dimensional (3-D) laser scanning point cloud accuracy has become a topical research issue. Point cloud accuracy is typically estimated by comparing terrestrial laser scanning data related to a finite number of check point coordinates against those obtained by an independent source of higher accuracy. These methods can only estimate the point accuracy but not the point cloud accuracy, which is influenced by the positional error and sampling interval. It is proposed that the point cloud error ellipsoid is favorable for inspecting the point cloud accuracy, which is determined by the individual point error ellipsoid volume. The kernel of this method is the computation of the point cloud error ellipsoid volume and the determination of the functional relationship between the error ellipsoid and accuracy. The proposed point cloud accuracy evaluation method is particularly suited for small sampling intervals when there exists an intersection of two error ellipsoids, and is suited not only for planar but also for nonplanar target surfaces. The performance of the proposed method (PM) is verified using both planar and nonplanar board point clouds. The results demonstrate that the proposed evaluation method significantly outperforms the existing methods when the target surface is nonplanar or there exists an intersection of two error ellipsoids. The PM therefore has the potential for improving the reliability of point cloud digital elevation models and static 3-D laser scanning-based deformation monitoring.
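
The core quantity of the proposed method, the error ellipsoid volume, can be sketched directly from a point's positional covariance; the covariance values below are an invented error budget, not the paper's data:

```python
import numpy as np

def error_ellipsoid_volume(cov, chi=1.0):
    """Volume of the error ellipsoid for a 3-D positional covariance matrix.

    Semi-axes are chi * sqrt(eigenvalues); chi scales the confidence level.
    """
    eigvals = np.linalg.eigvalsh(cov)  # variances along the principal axes
    semi_axes = chi * np.sqrt(eigvals)
    return 4.0 / 3.0 * np.pi * np.prod(semi_axes)

# Made-up scanner error budget: 1 mm SD in-plane, 2 mm SD along the range axis
cov = np.diag([1e-6, 1e-6, 4e-6])  # metres^2
print(error_ellipsoid_volume(cov))
```

Summing (and, at small sampling intervals, correcting for intersections of) these per-point volumes is what lets the method rate the cloud as a whole rather than individual points.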

  20. Accuracy and Repeatability of Refractive Error Measurements by Photorefractometry

    PubMed Central

    Rajavi, Zhale; Sabbaghi, Hamideh; Baghini, Ahmad Shojaei; Yaseri, Mehdi; Sheibani, Koroush; Norouzi, Ghazal

    2015-01-01

    Purpose: To determine the accuracy of photorefraction and autorefraction as compared to cycloautorefraction and to detect the repeatability of photorefraction. Methods: This diagnostic study included the right eyes of 86 children aged 7-12 years. Refractive status was measured using photorefraction (PlusoptiX SO4, GmbH, Nürnberg, Germany) and autorefraction (Topcon RM800, USA) with and without cycloplegia. Photorefraction for each eye was performed three times to assess repeatability. Results: The overall agreement between photorefraction and cycloautorefraction was over 81% for all refractive errors. Photorefractometry had acceptable sensitivity and specificity for myopia and astigmatism. There was no statistically significant difference considering myopia and astigmatism in all comparisons, while the difference was significant for hyperopia using both amblyogenic (P = 0.006) and nonamblyogenic criteria (P = 0.001). A myopic shift of 1.21 diopter (D) and 1.58 D occurred with photorefraction in nonamblyogenic and amblyogenic hyperopia, respectively. Using revised cut-off points of + 1.12 D and + 2.6 D instead of + 2.00 D and + 3.50 D improved the sensitivity of photorefractometry to 84.62% and 69.23%, respectively. The repeatability of photorefraction for measurement of myopia, astigmatism and hyperopia was acceptable (intra-cluster correlation [ICC]: 0.98, 0.94 and 0.77, respectively). Autorefraction results were significantly different from cycloautorefraction in hyperopia (P < 0.0001), but comparable in myopia and astigmatism. Also, noncycloglegic autorefraction results were similar to photorefraction in this study. Conclusion: Although photorefraction was accurate for measurement of myopia and astigmatism, its sensitivity for hyperopia was low which could be improved by considering revised cut-off points. Considering cut-off points, photorefraction can be used as a screening method. PMID:26730305

  1. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920
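
For orientation, a toy version of the centroiding step that the systematic errors above bias: an intensity-weighted centroid of a synthetic star spot. The spot parameters are made up and no FOFP modulation is modeled, so the estimate here is nearly unbiased:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted (center-of-mass) subpixel centroid of an image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Gaussian star spot centred at (4.3, 5.1) on an 11x9 grid (invented numbers)
ys, xs = np.indices((11, 9))
spot = np.exp(-((xs - 4.3) ** 2 + (ys - 5.1) ** 2) / (2 * 1.5 ** 2))
print(centroid(spot))  # close to (4.3, 5.1)
```

Multiplying `spot` by a periodic fixed-pattern transmission (as an FOFP imposes) before centroiding would shift this estimate systematically, which is the error the paper's frequency-domain analysis characterizes.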

  3. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    NASA Astrophysics Data System (ADS)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the main error source of an industrial robot, and it plays a significantly more important role than other error sources. A compensation model for kinematic geometric error is proposed in this article. Many methods can be used to test robot accuracy, which raises the question of how to determine which method is better. In this article, two methods for robot accuracy testing are compared: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) are each used to test the robot accuracy according to the relevant standard. Based on the compensation results, the better method, which improves the robot accuracy appreciably, is identified.

  4. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion.

    PubMed

    Ricci, Luca; Taffoni, Fabrizio; Formica, Domenico

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements that differ in frequency and amplitude. We found that dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it is dependent on the amplitude and frequency of the movement, and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Absolute and relative error upper bounds are found respectively in the ranges [0.7° ÷ 8.2°] and [1.0° ÷ 10.3°]. Alongside dynamic accuracy, static accuracy is thoroughly investigated, also with an emphasis on convergence behavior of the different algorithms. Reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for human motion-related applications. PMID:27612100
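
The absolute orientation error reported above is conventionally the rotation angle of the relative quaternion between reference and estimate; a minimal sketch (the quaternion convention and test values are assumptions, not the paper's data):

```python
import numpy as np

def orientation_error_deg(q_ref, q_est):
    """Angle (degrees) of the relative rotation between two unit quaternions.

    Quaternions are (w, x, y, z); only the scalar part of conj(q_ref)*q_est
    is needed for the angle.
    """
    w1, x1, y1, z1 = q_ref
    w = w1 * q_est[0] + x1 * q_est[1] + y1 * q_est[2] + z1 * q_est[3]
    return float(np.degrees(2 * np.arccos(np.clip(abs(w), 0.0, 1.0))))

identity = (1.0, 0.0, 0.0, 0.0)
ten_deg = (np.cos(np.radians(5)), np.sin(np.radians(5)), 0.0, 0.0)  # 10 deg about x
print(orientation_error_deg(identity, ten_deg))  # ~10.0
```

Comparing such angles against a ground-truth trajectory (here, the robotic arm) is what yields error bounds of the kind quoted in the abstract.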

  5. Relative errors can cue absolute visuomotor mappings.

    PubMed

    van Dam, Loes C J; Ernst, Marc O

    2015-12-01

    When repeatedly switching between two visuomotor mappings, e.g. in a reaching or pointing task, adaptation tends to speed up over time. That is, when the error in the feedback corresponds to a mapping switch, fast adaptation occurs. Yet, what is learned, the relative error or the absolute mappings? When switching between mappings, errors with a size corresponding to the relative difference between the mappings will occur more often than other large errors. Thus, we could learn to correct more for errors with this familiar size (Error Learning). On the other hand, it has been shown that the human visuomotor system can store several absolute visuomotor mappings (Mapping Learning) and can use associated contextual cues to retrieve them. Thus, when contextual information is present, no error feedback is needed to switch between mappings. Using a rapid pointing task, we investigated how these two types of learning may each contribute when repeatedly switching between mappings in the absence of task-irrelevant contextual cues. After training, we examined how participants changed their behaviour when a single error probe indicated either the often-experienced error (Error Learning) or one of the previously experienced absolute mappings (Mapping Learning). Results were consistent with Mapping Learning despite the relative nature of the error information in the feedback. This shows that errors in the feedback can have a double role in visuomotor behaviour: they drive the general adaptation process by making corrections possible on subsequent movements, as well as serve as contextual cues that can signal a learned absolute mapping. PMID:26280315

  6. Analysis of instrumentation error effects on the identification accuracy of aircraft parameters

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.

    1972-01-01

    An analytical investigation is presented of the effect of unmodeled measurement system errors on the accuracy of aircraft stability and control derivatives identified from flight test data. Such error sources include biases, scale factor errors, instrument position errors, misalignments, and instrument dynamics. Two techniques (ensemble analysis and simulated data analysis) are formulated to determine the quantitative variations in the identified parameters resulting from the unmodeled instrumentation errors. The parameter accuracy that would result from flight tests of the F-4C aircraft with typical-quality instrumentation is determined using these techniques. It is shown that unmodeled instrument errors can greatly increase the uncertainty in the value of the identified parameters. General recommendations are made regarding procedures to be followed to ensure that the measurement system used when identifying stability and control derivatives from flight test provides sufficient accuracy.

  7. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  8. Spacecraft-spacecraft very long baseline interferometry. Part 1: Error modeling and observable accuracy

    NASA Technical Reports Server (NTRS)

    Edwards, C. D., Jr.; Border, J. S.

    1992-01-01

    In Part 1 of this two-part article, an error budget is presented for Earth-based delta differential one-way range (delta DOR) measurements between two spacecraft. Such observations, made between a planetary orbiter (or lander) and another spacecraft approaching that planet, would provide a powerful target-relative angular tracking data type for approach navigation. Accuracies of better than 5 nrad should be possible for a pair of spacecraft with 8.4-GHz downlinks, incorporating 40-MHz DOR tone spacings, while accuracies approaching 1 nrad will be possible if the spacecraft incorporate 32-GHz downlinks with DOR tone spacing on the order of 250 MHz; these accuracies will be available for the last few weeks or months of planetary approach for typical Earth-Mars trajectories. Operational advantages of this data type are discussed, and ground system requirements needed to enable spacecraft-spacecraft delta DOR observations are outlined. This tracking technique could be demonstrated during the final approach phase of the Mars '94 mission, using Mars Observer as the in-orbit reference spacecraft, if the Russian spacecraft includes an 8.4-GHz downlink incorporating DOR tones. Part 2 of this article will present an analysis of predicted targeting accuracy for this scenario.

  9. Morphological Awareness and Children's Writing: Accuracy, Error, and Invention

    ERIC Educational Resources Information Center

    McCutchen, Deborah; Stull, Sara

    2015-01-01

    This study examined the relationship between children's morphological awareness and their ability to produce accurate morphological derivations in writing. Fifth-grade US students (n = 175) completed two writing tasks that invited or required morphological manipulation of words. We examined both accuracy and error, specifically errors in…

  10. Speed and Accuracy of Rapid Speech Output by Adolescents with Residual Speech Sound Errors Including Rhotics

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Edwards, Mary Louise

    2009-01-01

    Children with residual speech sound errors are often underserved clinically, yet there has been a lack of recent research elucidating the specific deficits in this population. Adolescents aged 10-14 with residual speech sound errors (RE) that included rhotics were compared to normally speaking peers on tasks assessing speed and accuracy of speech…

  11. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  12. The Effects of Noise Masking and Required Accuracy on Speech Errors, Disfluencies, and Self-Repairs.

    ERIC Educational Resources Information Center

    Postma, Albert; Kolk, Herman

    1992-01-01

    This study, involving 32 adult speakers of Dutch, strengthens the covert repair hypothesis of disfluency. It found that emphasis on speech accuracy causes lower speech error rates but does not affect disfluency and self-repair rates, noise masking reduces disfluency and self-repair rates but does not affect speech error numbers, and internal…

  13. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  14. Preschoolers Monitor the Relative Accuracy of Informants

    ERIC Educational Resources Information Center

    Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Koenig, Melissa; Harris, Paul L.

    2007-01-01

    In 2 studies, the sensitivity of 3- and 4-year-olds to the previous accuracy of informants was assessed. Children viewed films in which 2 informants labeled familiar objects with differential accuracy (across the 2 experiments, children were exposed to the following rates of accuracy by the more and less accurate informants, respectively: 100% vs.…

  15. Capturing L2 Accuracy Developmental Patterns: Insights from an Error-Tagged EFL Learner Corpus

    ERIC Educational Resources Information Center

    Thewissen, Jennifer

    2013-01-01

    The present article addresses the issue of second language accuracy developmental trajectories and shows how they can be captured via an error-tagged version of an English as a Foreign Language (EFL) learner corpus. The data used in this study were extracted from the International Corpus of Learner English (Granger et al., 2009) and consist of a…

  16. Accuracies and conservation errors of various ghost fluid methods for multi-medium Riemann problem

    NASA Astrophysics Data System (ADS)

    Xu, Liang; Liu, Tiegang

    2011-06-01

    Since the (original) ghost fluid method (OGFM) was proposed by Fedkiw et al. in 1999 [5], a series of other GFM-based methods, such as the gas-water version GFM (GWGFM), the modified GFM (MGFM) and the real GFM (RGFM), have been developed subsequently. Systematic analysis, however, has yet to be carried out for the various GFMs on their accuracies and conservation errors. In this paper, we develop a technique to rigorously analyze the accuracies and conservation errors of these different GFMs when applied to the multi-medium Riemann problem with a general equation of state (EOS). By analyzing and comparing the interfacial state provided by each GFM to the exact one of the original multi-medium Riemann problem, we show that the interfacial treatment can achieve "third-order accuracy" relative to the exact solution of the original multi-medium Riemann problem for the MGFM and the RGFM, while it is of at most "first-order accuracy" for the OGFM and the GWGFM when the interface is actually nearly in balance. Similar conclusions are also obtained in association with the local conservation errors. A special test method is exploited to validate these theoretical conclusions from the numerical viewpoint.

  17. Network Dynamics Underlying Speed-Accuracy Trade-Offs in Response to Errors

    PubMed Central

    Agam, Yigal; Carey, Caitlin; Barton, Jason J. S.; Dyckman, Kara A.; Lee, Adrian K. C.; Vangel, Mark; Manoach, Dara S.

    2013-01-01

    The ability to dynamically and rapidly adjust task performance based on its outcome is fundamental to adaptive, flexible behavior. Over trials of a task, responses speed up until an error is committed and after the error responses slow down. These dynamic adjustments serve to optimize performance and are well-described by the speed-accuracy trade-off (SATO) function. We hypothesized that SATOs based on outcomes reflect reciprocal changes in the allocation of attention between the internal milieu and the task-at-hand, as indexed by reciprocal changes in activity between the default and dorsal attention brain networks. We tested this hypothesis using functional MRI to examine the pattern of network activation over a series of trials surrounding and including an error. We further hypothesized that these reciprocal changes in network activity are coordinated by the posterior cingulate cortex (PCC) and would rely on the structural integrity of its white matter connections. Using diffusion tensor imaging, we examined whether fractional anisotropy of the posterior cingulum bundle correlated with the magnitude of reciprocal changes in network activation around errors. As expected, reaction time (RT) in trials surrounding errors was consistent with predictions from the SATO function. Activation in the default network was: (i) inversely correlated with RT, (ii) greater on trials before than after an error and (iii) maximal at the error. In contrast, activation in the right intraparietal sulcus of the dorsal attention network was (i) positively correlated with RT and showed the opposite pattern: (ii) less activation before than after an error and (iii) the least activation on the error. Greater integrity of the posterior cingulum bundle was associated with greater reciprocity in network activation around errors. These findings suggest that dynamic changes in attention to the internal versus external milieu in response to errors underlie SATOs in RT and are mediated by the PCC.

  18. Accuracy of image-plane holographic tomography with filtered backprojection: random and systematic errors.

    PubMed

    Belashov, A V; Petrov, N V; Semenova, I V

    2016-01-01

    This paper explores the concept of image-plane holographic tomography applied to the measurements of laser-induced thermal gradients in an aqueous solution of a photosensitizer with respect to the reconstruction accuracy of three-dimensional variations of the refractive index. It uses the least-squares estimation algorithm to reconstruct refractive index variations in each holographic projection. Along with the bitelecentric optical system, transferring focused projection to the sensor plane, it facilitates the elimination of diffraction artifacts and noise suppression. This work estimates the influence of typical random and systematic errors in experiments and concludes that random errors such as accidental measurement errors or noise presence can be significantly suppressed by increasing the number of recorded digital holograms. On the contrary, even comparatively small systematic errors such as a displacement of the rotation axis projection in the course of a reconstruction procedure can significantly distort the results. PMID:26835625

  19. Electronic Inventory Systems and Barcode Technology: Impact on Pharmacy Technical Accuracy and Error Liability

    PubMed Central

    Oldland, Alan R.; May, Sondra K.; Barber, Gerard R.; Stolpman, Nancy M.

    2015-01-01

    Purpose: To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. Methods: During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Results: Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Conclusions: Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training. PMID:25684799
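    The reported P values compare potential-error proportions between observation periods. A minimal sketch of one such comparison, using a pooled two-proportion z-test (an assumption for illustration; the authors' exact statistical test may differ):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing two error proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, pval

# Conventional unit dose system vs ADCs + barcode + training (counts above)
z, pval = two_proportion_z(24, 15271, 13, 26200)
print(f"z = {z:.2f}, p = {pval:.4f}")
```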

  20. Proof of Heisenberg's Error-Disturbance Relation

    NASA Astrophysics Data System (ADS)

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F.

    2013-10-01

    While the slogan “no measurement without disturbance” has established itself under the name of the Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results. Here we show that despite recent claims to the contrary [L. Rozema et al, Phys. Rev. Lett. 109, 100404 (2012)], Heisenberg-type inequalities can be proven that describe a tradeoff between the precision of a position measurement and the necessary resulting disturbance of momentum (and vice versa). More generally, these inequalities are instances of an uncertainty relation for the imprecisions of any joint measurement of position and momentum. Measures of error and disturbance are here defined as figures of merit characteristic of measuring devices. As such they are state independent, each giving worst-case estimates across all states, in contrast to previous work that is concerned with the relationship between error and disturbance in an individual state.
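    The trade-off proven in the paper can be stated compactly. With ε(Q) the worst-case error of an approximate position measurement and η(P) the worst-case disturbance it imparts to momentum, the Heisenberg-type inequality takes the form (notation simplified here for illustration):

```latex
\varepsilon(Q)\,\eta(P) \;\ge\; \frac{\hbar}{2}
```

    Both quantities are figures of merit of the measuring device, evaluated across all input states, which is what distinguishes this formulation from the state-dependent measures used in the work the paper rebuts.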

  1. Proof of Heisenberg's error-disturbance relation.

    PubMed

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F

    2013-10-18

    While the slogan "no measurement without disturbance" has established itself under the name of the Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results. Here we show that despite recent claims to the contrary [L. Rozema et al, Phys. Rev. Lett. 109, 100404 (2012)], Heisenberg-type inequalities can be proven that describe a tradeoff between the precision of a position measurement and the necessary resulting disturbance of momentum (and vice versa). More generally, these inequalities are instances of an uncertainty relation for the imprecisions of any joint measurement of position and momentum. Measures of error and disturbance are here defined as figures of merit characteristic of measuring devices. As such they are state independent, each giving worst-case estimates across all states, in contrast to previous work that is concerned with the relationship between error and disturbance in an individual state. PMID:24182239

  2. Factoring Algebraic Error for Relative Pose Estimation

    SciTech Connect

    Lindstrom, P; Duchaineau, M

    2009-03-09

    We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can directly be determined, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.
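    The core subproblem named above, finding the least eigenvector of a symmetric tensor, is direct to solve in the second-order case. A minimal sketch (the matrix S stands in for the data tensor built from point correspondences, not the paper's exact construction):

```python
import numpy as np

def least_eigenvector(S):
    """Unit eigenvector of a symmetric matrix S with the smallest eigenvalue.

    For a fixed translation (or rotation), the optimal counterpart
    minimizes a quadratic form v^T S v over unit vectors v; the
    minimizer is the least eigenvector.
    """
    w, V = np.linalg.eigh(np.asarray(S, dtype=float))  # ascending eigenvalues
    return V[:, 0]

# Toy symmetric matrix: the smallest eigenvalue (0.25) belongs to the z axis
v = least_eigenvector(np.diag([3.0, 1.0, 0.25]))
print(np.abs(v))
```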

  3. Measurement accuracy analysis and error correction of CCD light-projection diameter measurement system

    NASA Astrophysics Data System (ADS)

    Song, Qing; Zhang, Chunsong; Huang, Jiayong; Wu, Di; Liu, Jing

    2009-11-01

    The error sources of the external diameter measurement system based on the double-optical-path parallel light projection method are the non-parallelism of the two optical paths, aberration distortion of the projection lens, blurring of the projected edge of the cylinder caused by the aperture size of the illuminating beam, light intensity variation, and the counting error in the circuit. A screw pair drive is used to achieve the up-and-down movement in the system. The precision of the up-and-down movement depends mainly on the Abbe error caused by the offset between the centerline of the capacitive-gate ruler and the line of motion, the tilt error of the guide mechanism, and the error caused by thermal expansion of parts with temperature change. The rotary motion is achieved by a stepper motor and gear drive; its precision is determined by the stepping angle error of the stepper motor, the gear transmission error, and the tilt error of the piston relative to the rotation axis. The error correction method is to place a component in the optical path to obtain an error curve, which is then used for point-by-point correction through software compensation.

  4. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
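    As a concrete example of how an analytical TTD model is applied, the exponential (well-mixed) model predicts the outflow concentration by convolving the input history with the travel time density g(t) = exp(-t/τ)/τ. This is a deliberately simple sketch; the models compared in the study, including the scale-dependent dispersivity model, are more elaborate:

```python
import numpy as np

def predict_concentration(c_in, tau, dt=1.0):
    """Outflow concentration under an exponential travel time distribution.

    c_in : input (recharge) concentration history, oldest value first
    tau  : mean travel time, in the same time units as dt
    """
    c_in = np.asarray(c_in, dtype=float)
    t = np.arange(len(c_in)) * dt
    g = np.exp(-t / tau) / tau
    g = g / g.sum()  # normalize the discretized travel time density
    # Each output value mixes all earlier inputs, weighted by g
    return np.convolve(c_in, g)[: len(c_in)]

# A step input: the predicted concentration lags and smooths the input
c_out = predict_concentration(np.ones(50), tau=5.0)
print(round(float(c_out[0]), 3), round(float(c_out[-1]), 3))
```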

  5. The effect of clock, media, and station location errors on Doppler measurement accuracy

    NASA Technical Reports Server (NTRS)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  6. A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.

    PubMed

    Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W

    2012-09-01

    In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. PMID:22284909

  7. Objective Error Criterion for Evaluation of Mapping Accuracy Based on Sensor Time-of-Flight Measurements

    PubMed Central

    Barshan, Billur

    2008-01-01

    An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used for the assessment of goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
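    For comparison purposes, the Hausdorff metric mentioned above can be computed directly from two discrete point maps. A small sketch of the symmetric Hausdorff distance (an illustration of the baseline metric, not the proposed criterion itself):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets.

    A, B : (n, d) and (m, d) arrays of map points. The result is the
    largest distance from any point in one set to its nearest
    neighbor in the other set.
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # (n, m)
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))

# The point (3, 0) lies 2 units from its nearest neighbor in the first set
print(hausdorff([[0, 0], [1, 0]], [[0, 0], [3, 0]]))  # -> 2.0
```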

  8. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments. PMID:23270978

  9. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    PubMed Central

    Hwang, Andrew B; Franc, Benjamin L; Gullberg, Grant T; Hasegawa, Bruce H

    2009-01-01

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the use of resolution

  10. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    SciTech Connect

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50percent when imaging with iodine-125, and up to 25percent when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30percent, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50percent) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. 
We also found that an improvement of the spatial resolution through the
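The difference between iodine-125 and technetium-99m attenuation losses described above can be reproduced to first order with the Beer-Lambert law. This is a hedged sketch only: the linear attenuation coefficients for water below are approximate textbook values (assumptions, not figures from the study), and the calculation ignores scatter buildup and detector geometry.

```python
import math

# Approximate linear attenuation coefficients of water (cm^-1);
# assumed values, not taken from the abstract above.
MU_WATER = {
    "I-125 (~30 keV)": 0.38,
    "Tc-99m (140 keV)": 0.15,
}

def transmitted_fraction(mu_cm, path_cm):
    """Fraction of primary photons surviving a water path (Beer-Lambert)."""
    return math.exp(-mu_cm * path_cm)

# Source at the centre of a rat-sized water cylinder (~2.5 cm radius):
for label, mu in MU_WATER.items():
    f = transmitted_fraction(mu, 2.5)
    print(f"{label}: {f:.0%} of primaries transmitted "
          f"({1 - f:.0%} lost to attenuation)")
```

The lower-energy iodine-125 photons are attenuated far more strongly, which is consistent with the larger underestimation the abstract reports for that isotope.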

  11. Volumetric compensation of accuracy errors in a multi-robot surgical platform.

    PubMed

    Vicentini, Federico; Magnoni, Paolo; Giussani, Matteo; Tosatti, Lorenzo Molinari

    2015-08-01

    A multi-robot platform, made of a hybrid parallel kinematic machine and 2 KUKA LWR arms, is dedicated to open skull neuro-surgical tasks. Sub-millimeter accuracy is clearly required for both the absolute tool tracking and for good performance in motion compensation when the head is set free to move. An analysis of the sources of inaccuracies, mostly derived from the calibration phase, illustrates that errors are insufficiently reduced by stand-alone calibrations of the single robots. A method for volumetric compensation of errors is reported. A compensation transform is, in fact, computed during an offline training phase for a set of discretized subregions of the constrained head workspace. At runtime, a compensation motion is applied to the robots so as to reach the desired real targets on anatomical parts. The resulting end-to-end static accuracy is distributed with median 0.75 mm and below 1 mm for 95% of tests, with a 1:36 reduction factor from the starting conditions. The accuracy is also evaluated in dynamic tests with mild oscillatory patterns. PMID:26737394
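The per-subregion compensation idea described in this abstract can be sketched as a lookup table of rigid transforms: each discretized subregion of the workspace stores a 4x4 homogeneous compensation matrix estimated offline, and at runtime the transform of the nearest subregion centre is applied to the commanded target. All names and the grid layout here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nearest_region(target, centres):
    """Index of the subregion centre closest to the target point."""
    d = np.linalg.norm(centres - target, axis=1)
    return int(np.argmin(d))

def compensate(target, centres, transforms):
    """Apply the stored compensation transform of the nearest subregion."""
    T = transforms[nearest_region(target, centres)]
    p = np.append(target, 1.0)          # homogeneous coordinates
    return (T @ p)[:3]

# Toy example: one subregion whose offline training found a +0.5 mm
# x-bias, compensated by a -0.5 mm offset.
centres = np.array([[0.0, 0.0, 0.0]])
T = np.eye(4)
T[0, 3] = -0.5
transforms = [T]
print(compensate(np.array([10.0, 0.0, 0.0]), centres, transforms))
```

A real system would interpolate between neighbouring subregions rather than snap to the nearest one, but the lookup-and-apply structure is the same.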

  12. Accuracy Analysis of Anisotropic Yield Functions based on the Root-Mean Square Error

    NASA Astrophysics Data System (ADS)

    Huh, Hoon; Lou, Yanshan; Bae, Gihyun; Lee, Changsoo

    2010-06-01

    This paper evaluates the accuracy of popular anisotropic yield functions based on the root-mean square error (RMSE) of the yield stresses and the R-values. The yield functions include the Hill48, Yld89, Yld91, Yld96, Yld2000-2d, BBC2000 and Yld2000-18p yield criteria. Two kinds of steel and five kinds of aluminum alloy are selected for the accuracy evaluation. The anisotropic coefficients in the yield functions are computed from the experimental data. The downhill simplex method is utilized for parameter evaluation for each yield function except the Hill48 and Yld89 yield functions, after the error functions are constructed. The yield stresses and the R-values at every 15° from the rolling direction (RD) and the yield stress and R-value at equibiaxial tension conditions are predicted from each yield function. The predicted yield stresses and R-values are then compared with the experimental data. The root-mean square errors (RMSE) are computed to quantitatively evaluate each yield function. The RMSEs are calculated for the yield stresses and the R-values separately because the yield stress differences are much smaller than the differences in the R-values. The RMSEs of the different yield functions are compared for each material. The Hill48 and Yld89 yield functions are the worst choices for the description of the yield stress anisotropy, while the Yld91 yield function is the last choice for the modeling of the R-value directionality. The Yld2000-2d and BBC2000 yield functions have the same accuracy in modeling both the yield stress anisotropy and the R-value anisotropy. The Yld2000-18p yield function is the best choice to accurately describe the yield stress and R-value directionalities of sheet metals.
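The evaluation metric used throughout this abstract is a plain root-mean-square error between predicted and measured values at each orientation, computed separately for yield stresses and R-values because of their different scales. A minimal sketch, with fabricated example data:

```python
import numpy as np

def rmse(predicted, measured):
    """Root-mean-square error between two equal-length sequences."""
    predicted, measured = np.asarray(predicted), np.asarray(measured)
    return float(np.sqrt(np.mean((predicted - measured) ** 2)))

# Normalised yield stresses every 15 deg from RD (0, 15, ..., 90 deg);
# these numbers are made up for illustration, not experimental data.
measured_sigma  = [1.00, 0.99, 0.98, 0.97, 0.97, 0.98, 1.00]
predicted_sigma = [1.00, 0.98, 0.97, 0.97, 0.98, 0.99, 1.01]

print(f"RMSE (normalised yield stress): {rmse(predicted_sigma, measured_sigma):.4f}")
```

Computing the stress RMSE and the R-value RMSE separately, as the paper does, prevents the larger R-value residuals from dominating a combined score.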

  13. An assessment of accuracy, error, and conflict with support values from genome-scale phylogenetic data.

    PubMed

    Taylor, Derek J; Piel, William H

    2004-08-01

    Despite the importance of molecular phylogenetics, few of its assumptions have been tested with real data. It is commonly assumed that nonparametric bootstrap values are an underestimate of the actual support, Bayesian posterior probabilities are an overestimate of the actual support, and among-gene phylogenetic conflict is low. We directly tested these assumptions by using a well-supported yeast reference tree. We found that bootstrap values were not significantly different from accuracy. Bayesian support values were, however, significant overestimates of accuracy but still had low false-positive error rates (0% to 2.8%) at the highest values (>99%). Although we found evidence for a branch-length bias contributing to conflict, there was little evidence for widespread, strongly supported among-gene conflict from bootstraps. The results demonstrate that caution is warranted concerning conclusions of conflict based on the assumption of underestimation for support values in real data. PMID:15140947

  14. Error compensation method for improving the accuracy of biomodels obtained from CBCT data.

    PubMed

    Santolaria, J; Jiménez, R; Rada, M; Loscos, F

    2014-03-01

    This paper presents a method of improving the accuracy of the three-dimensional reconstruction of human bone biomodels by means of tomography, with a view to finite element modelling or surgical planning, and the subsequent manufacturing using rapid prototyping technologies. It is focused on the analysis and correction of the results obtained by means of cone beam computed tomography (CBCT), which is used to digitize non-superficial biological parts along with a gauge part with calibrated dimensions. A correction of both the threshold and the voxel size in the tomographic images and the final reconstruction is proposed. Finally, a comparison between a reconstruction of a gauge part using the proposed method and the reconstruction of that same gauge part using a standard method is shown. The increase in accuracy in the biomodel allows an improvement in medical applications based on image diagnosis, more accurate results in computational modelling, and improvements in surgical planning in situations in which the required accuracy directly affects the procedure's results. Thus, the subsequent constructed biomodel will be affected mainly by dimensional errors due to the additive manufacturing technology utilized, not because of the 3D reconstruction or the image acquisition technology. PMID:24080232

  15. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 13 2011-04-01 2011-04-01 false Accuracy-related penalty. 1.6662-2 Section 1.6662-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Additions to the Tax, Additional Amounts, and Assessable Penalties § 1.6662-2 Accuracy-related penalty. (a)...

  16. Dissociable correlates of response conflict and error awareness in error-related brain activity

    PubMed Central

    Hughes, Gethin; Yeung, Nick

    2010-01-01

    Errors in speeded decision tasks are associated with characteristic patterns of brain activity. In the scalp-recorded EEG, error processing is reflected in two components, the error-related negativity (ERN) and the error positivity (Pe). These components have been widely studied, but debate remains regarding the precise aspects of error processing they reflect. The present study investigated the relation between the ERN and Pe using a novel version of the flanker task to allow a comparison between errors reflecting different causes—response conflict versus stimulus masking. The conflict and mask conditions were matched for overall behavioural performance but differed in underlying response dynamics, as indexed by response time distributions and measures of lateralised motor activity. ERN amplitude varied in relation to these differing response dynamics, being significantly larger in the conflict condition compared to the mask condition. Furthermore, differences in response dynamics between participants were predictive of modulations in ERN amplitude. In contrast, Pe activity varied little between conditions, but varied across trials in relation to participants' awareness of their errors. Taken together, these findings suggest a dissociation between the ERN and Pe, with the former reflecting the dynamics of response selection and conflict, and the latter reflecting conscious recognition of an error. PMID:21130788

  17. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

    Most age estimation methods prove problematic when applied to highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques, particularly when used in legal settings (Crowder and Rosella, 2007). This study tested the Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a Modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments. Two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact and fragmented osteon density and osteon population density) were calculated for each section and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of age (by up to 43 years), although a general improvement in accuracy levels was observed when applying the Stout et al. (1994) formula. Error rates increase with age, with the oldest individual showing extreme differences between real and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet, a larger sample should be used to confirm these results. PMID:26698389

  18. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  19. Standard errors of non-standardised and age-standardised relative survival of cancer patients

    PubMed Central

    Jansen, L; Hakulinen, T; Brenner, H

    2012-01-01

    Background: Relative survival estimates cancer survival in the absence of other causes of death. Previous work has shown that standard errors of non-standardised relative survival may be substantially overestimated by the conventionally used method. However, evidence was restricted to non-standardised relative survival estimates using Hakulinen's method. Here, we provide a more comprehensive evaluation of the accuracy of standard errors including age-standardised survival and estimation by the Ederer II method. Methods: Five- and ten-year non-standardised and age-standardised relative survival was estimated for patients diagnosed with 25 common forms of cancer in Finland in 1989–1993, using data from the nationwide Finnish Cancer Registry. Standard errors of mutually comparable non-standardised and age-standardised relative survival were computed by the conventionally used method and compared with bootstrap standard errors. Results: When using Hakulinen's method, standard errors of non-standardised relative survival were overestimated by up to 28%. In contrast, standard errors of age-standardised relative survival were accurately estimated. When using the Ederer II method, deviations of the standard errors of non-standardised and age-standardised relative survival were generally small to negligible. Conclusion: In most cases, overestimations of standard errors are effectively overcome by age standardisation and by using Ederer II rather than Hakulinen's method. PMID:22173672
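The benchmark used in this abstract, comparing a conventionally computed standard error against a bootstrap standard error obtained by resampling patients with replacement, can be sketched in a few lines. This is a hedged toy: the "survival estimator" here is just a proportion, whereas the real Hakulinen and Ederer II relative-survival estimators are considerably more involved.

```python
import random
import statistics

def bootstrap_se(data, estimator, n_boot=2000, seed=42):
    """Bootstrap standard error of an estimator over resampled cohorts."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]   # resample with replacement
        reps.append(estimator(sample))
    return statistics.stdev(reps)

# Fabricated cohort: 100 patients, 60 alive at five years.
alive_at_5y = [1] * 60 + [0] * 40
se = bootstrap_se(alive_at_5y, lambda s: sum(s) / len(s))
print(f"bootstrap SE of 5-year survival proportion: {se:.4f}")
```

For this toy proportion the bootstrap SE should land near the analytic value sqrt(0.6 * 0.4 / 100) ≈ 0.049; in the paper, the interesting quantity is the ratio of the conventional SE to such a bootstrap SE.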

  20. Scaling Relation for Occulter Manufacturing Errors

    NASA Technical Reports Server (NTRS)

    Sirbu, Dan; Shaklan, Stuart B.; Kasdin, N. Jeremy; Vanderbei, Robert J.

    2015-01-01

    An external occulter is a spacecraft flown along the line-of-sight of a space telescope to suppress starlight and enable high-contrast direct imaging of exoplanets. The shape of an external occulter must be specially designed to optimally suppress starlight, and deviations from the ideal shape due to manufacturing errors can result in loss of suppression in the shadow. Due to the long separation distances and large dimensions involved for a space occulter, laboratory testing is conducted with scaled versions of occulters etched on silicon wafers. Using numerical simulations for a flight Fresnel occulter design, we show how the suppression performance of an occulter mask scales with the available propagation distance for expected random manufacturing defects along the edge of the occulter petal. We derive an analytical model for predicting performance due to such manufacturing defects across the petal edges of an occulter mask and compare this with the numerical simulations. We discuss the scaling of an extended occulter test-bed.

  1. Perfect error processing: Perfectionism-related variations in action monitoring and error processing mechanisms.

    PubMed

    Stahl, Jutta; Acharki, Manuela; Kresimon, Miriam; Völler, Frederike; Gibbons, Henning

    2015-08-01

    Showing excellent performance and avoiding poor performance are the main characteristics of perfectionists. Perfectionism-related variations (N=94) in neural correlates of performance monitoring were investigated in a flanker task by assessing two perfectionism-related trait dimensions: Personal standard perfectionism (PSP), reflecting intrinsic motivation to show error-free performance, and evaluative concern perfectionism (ECP), representing the worry of being poorly evaluated based on bad performance. A moderating effect of ECP and PSP on error processing - an important performance monitoring system - was investigated by examining the error (-related) negativity (Ne/ERN) and the error positivity (Pe). The smallest Ne/ERN difference (error-correct) was obtained for pure-ECP participants (high-ECP-low-PSP), whereas the highest difference was shown for those with high-ECP-high-PSP (i.e., mixed perfectionists). Pe was positively correlated with PSP only. Our results support the cognitive-bias hypothesis, suggesting that pure-ECP participants reduce response-related attention to avoid intense error processing by minimising the subjective threat of negative evaluations. The PSP-related variations in late error processing are consistent with the goal-oriented tendency of participants high in PSP to optimise their behaviour. PMID:26071226

  2. Potential errors associated with stage-discharge relations for selected streamflow-gaging stations, Maricopa County, Arizona

    USGS Publications Warehouse

    Tillery, Anne C.; Phillips, Jeff V.; Capesius, Joseph P.

    2001-01-01

    Potential errors were derived for individual discharge measurements and stage-discharge relations for 17 streamflow-gaging stations in Maricopa County. Information presented primarily consists of stage and discharge data that were used to develop the stage-discharge relations that were in effect for water year 1998. Accuracy of the discharge measurements directly relates to the accuracy of the stage-discharge relation developed for each site. Stage-discharge relations generally are developed using direct measurements of stage and discharge, indirect measurements of peak discharge, and theoretical weir and culvert computations. Accuracy of current-meter measurements of discharge (direct measurements) depends on factors such as the number of subsections in the measurement, stability of the channel, changes in flow conditions, and accuracy of the equipment. Accuracy of indirect measurements of peak discharge is determined by the accuracy of discharge coefficients and flow type selected for the computations. The accuracy of indirect peak-discharge computations generally is less than the accuracy associated with current-meter measurements. Current-meter measurements, indirect measurements of discharge, weir and culvert computations, and step-backwater computations are graphically represented on plots of the stage-discharge relations. Potential errors associated with the discharge measurements at selected sites are depicted as error bars on the plots. Potential errors derived for discharge measurements at 17 sites range from 5 to 25 percent. Errors generally are greater for indirect measurements of large flows in channels with unstable controls.
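Stage-discharge relations of the kind described here are commonly fitted as a power law, Q = C * (h - h0)**b, and the potential measurement errors can then be shown as +/- percentage error bars around each predicted discharge. The coefficients and the 5-25% error assignments below are illustrative assumptions, not values from the report.

```python
def rating_curve(stage, C=35.0, h0=0.2, b=1.6):
    """Discharge predicted from stage by an assumed power-law rating."""
    return C * (stage - h0) ** b

# Hypothetical stages (ft) with assumed percent errors in the 5-25% range:
for stage, pct_err in [(1.0, 5), (3.0, 10), (6.0, 25)]:
    q = rating_curve(stage)
    lo, hi = q * (1 - pct_err / 100), q * (1 + pct_err / 100)
    print(f"stage {stage:.1f} ft: Q = {q:7.1f} cfs  "
          f"(+/-{pct_err}% -> {lo:.1f}-{hi:.1f})")
```

The widening absolute error bars at high stage mirror the report's observation that errors are greatest for large flows measured by indirect methods.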

  3. Prediction error and accuracy of intraocular lens power calculation in pediatric patient comparing SRK II and Pediatric IOL Calculator

    PubMed Central

    2010-01-01

    data were analysed to compare the mean prediction error and the accuracy of predictability of intraocular lens power calculation between SRK II and Pediatric IOL Calculator. Results There were 16 eyes in the SRK II group and 15 eyes in the Pediatric IOL Calculator group. The mean prediction error in the SRK II group was 1.03 D (SD, 0.69 D) while in the Pediatric IOL Calculator group it was 1.14 D (SD, 1.19 D). The SRK II group showed a lower prediction error of 0.11 D compared to the Pediatric IOL Calculator group, but this was not statistically significant (p = 0.74). There were 3 eyes (18.75%) in the SRK II group that achieved accurate predictability, where the postoperative refraction was within ± 0.5 D of the predicted refraction, compared to 7 eyes (46.67%) in the Pediatric IOL Calculator group. However, the difference in the accuracy of predictability of postoperative refraction between the two formulas was also not statistically significant (p = 0.097). Conclusions The prediction error and the accuracy of predictability of postoperative refraction in pediatric cataract surgery are comparable between SRK II and Pediatric IOL Calculator. The existence of the Pediatric IOL Calculator provides an alternative to the ophthalmologist for intraocular lens calculation in pediatric patients. Limitations include a relatively small sample size, an unequal distribution of patients (especially younger children, less than 3 years), a short follow-up (3 months), and the use of spherical equivalent only. PMID:20738840

  4. Reward value enhances post-decision error-related activity in the cingulate cortex.

    PubMed

    Taylor, Jessica E; Ogawa, Akitoshi; Sakagami, Masamichi

    2016-06-01

    By saying "Anyone who has never made a mistake has never tried anything new", Albert Einstein himself allegedly implied that the making and processing of errors are essential for behavioral adaption to a new or changing environment. These essential error-related cognitive and neural processes are likely influenced by reward value. However, previous studies have not dissociated accuracy and value and so the distinct effect of reward on error processing in the brain remained unknown. Therefore, we set out to investigate this at various points in decision-making. We used functional magnetic resonance imaging to scan participants while they completed a random dot motion discrimination task where reward and non-reward were associated with stimuli via classical conditioning. Pre-error activity was found in the medial frontal cortex prior to response but this was not related to reward value. At response time, error-related activity was found to be significantly greater in reward than non-reward trials in the midcingulate cortex. Finally at outcome time, error-related activity was found in the anterior cingulate cortex in non-reward trials. These results show that reward value enhances post-decision but not pre-decision error-related activities and these results therefore have implications for theories of error correction and confidence. PMID:26739226

  5. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors. PMID:26592783

  6. The Relative Error Magnitude in Three Measures of Change.

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.; Williams, Richard H.

    1982-01-01

    Formulas for the standard error of measurement of three measures of change (simple differences; residualized difference scores; and a measure introduced by Tucker, Damarin, and Messick) are derived. A practical guide for determining the relative error of the three measures is developed. (Author/JKS)

  7. The uncertainty of errors: Intolerance of uncertainty is associated with error-related brain activity.

    PubMed

    Jackson, Felicia; Nelson, Brady D; Hajcak, Greg

    2016-01-01

    Errors are unpredictable events that have the potential to cause harm. The error-related negativity (ERN) is the electrophysiological index of errors and has been posited to reflect sensitivity to threat. Intolerance of uncertainty (IU) is the tendency to perceive uncertain events as threatening. In the present study, 61 participants completed a self-report measure of IU and a flanker task designed to elicit the ERN. Results indicated that IU subscales were associated with the ERN in opposite directions. Cognitive distress in the face of uncertainty (Prospective IU) was associated with a larger ERN and slower reaction time. Inhibition in response to uncertainty (Inhibitory IU) was associated with a smaller ERN and faster reaction time. This study suggests that sensitivity to the uncertainty of errors contributes to the magnitude of the ERN. Furthermore, these findings highlight the importance of considering the heterogeneity of anxiety phenotypes in relation to measures of threat sensitivity. PMID:26607441

  8. High accuracy acoustic relative humidity measurement in duct flow with air.

    PubMed

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments. PMID:22163610
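The time-of-flight principle behind this sensor can be sketched directly: two ultrasonic transducers a distance L apart measure the transit times with (t_down) and against (t_up) the flow, from which both the speed of sound c and the line-averaged gas velocity v follow in closed form; c in turn depends on temperature and vapour content, which is how RH is extracted. The separation and transit times below are made-up example values, not the paper's.

```python
def sound_speed_and_velocity(L, t_down, t_up):
    """Recover speed of sound c and flow velocity v from transit times."""
    c = 0.5 * L * (1.0 / t_down + 1.0 / t_up)
    v = 0.5 * L * (1.0 / t_down - 1.0 / t_up)
    return c, v

L = 0.5                                   # m, assumed transducer separation
c_true, v_true = 348.0, 6.0               # m/s, assumed duct conditions
t_down = L / (c_true + v_true)            # transit time with the flow
t_up   = L / (c_true - v_true)            # transit time against the flow

c, v = sound_speed_and_velocity(L, t_down, t_up)
print(f"c = {c:.1f} m/s, v = {v:.1f} m/s")
```

Because the two transit times enter symmetrically, the velocity estimate cancels the temperature dependence and the sound-speed estimate cancels the flow dependence, which is what makes the simultaneous measurement possible.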


  10. Refractive Error, Axial Length, and Relative Peripheral Refractive Error before and after the Onset of Myopia

    PubMed Central

    Mutti, Donald O.; Hayes, John R.; Mitchell, G. Lynn; Jones, Lisa A.; Moeschberger, Melvin L.; Cotter, Susan A.; Kleinstein, Robert N.; Manny, Ruth E.; Twelker, J. Daniel; Zadnik, Karla

    2009-01-01

    Purpose To evaluate refractive error, axial length, and relative peripheral refractive error before, during the year of, and after the onset of myopia in children who became myopic compared with emmetropes. Methods Subjects were 605 children 6 to 14 years of age who became myopic (at least −0.75 D in each meridian) and 374 emmetropic (between −0.25 D and + 1.00 D in each meridian at all visits) children participating between 1995 and 2003 in the Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error (CLEERE) Study. Axial length was measured annually by A-scan ultrasonography. Relative peripheral refractive error (the difference between the spherical equivalent cycloplegic autorefraction 30° in the nasal visual field and in primary gaze) was measured using either of two autorefractors (R-1; Canon, Lake Success, NY [no longer manufactured] or WR 5100-K; Grand Seiko, Hiroshima, Japan). Refractive error was measured with the same autorefractor with the subjects under cycloplegia. Each variable in children who became myopic was compared to age-, gender-, and ethnicity-matched model estimates of emmetrope values for each annual visit from 5 years before through 5 years after the onset of myopia. Results In the sample as a whole, children who became myopic had less hyperopia and longer axial lengths than did emmetropes before and after the onset of myopia (4 years before through 5 years after for refractive error and 3 years before through 5 years after for axial length; P < 0.0001 for each year). Children who became myopic had more hyperopic relative peripheral refractive errors than did emmetropes from 2 years before onset through 5 years after onset of myopia (P < 0.002 for each year). The fastest rate of change in refractive error, axial length, and relative peripheral refractive error occurred during the year before onset rather than in any year after onset. Relative peripheral refractive error remained at a consistent level of hyperopia each

  11. Lexical Errors and Accuracy in Foreign Language Writing. Second Language Acquisition

    ERIC Educational Resources Information Center

    del Pilar Agustin Llach, Maria

    2011-01-01

    Lexical errors are a determinant in gaining insight into vocabulary acquisition, vocabulary use and writing quality assessment. Lexical errors are very frequent in the written production of young EFL learners, but they decrease as learners gain proficiency. Misspellings are the most common category, but formal errors give way to semantic-based…

  12. Assessing the Accuracy and Feasibility of a Refractive Error Screening Program Conducted by School Teachers in Pre-Primary and Primary Schools in Thailand

    PubMed Central

    Teerawattananon, Kanlaya; Myint, Chaw-Yin; Wongkittirux, Kwanjai; Teerawattananon, Yot; Chinkulkitnivat, Bunyong; Orprayoon, Surapong; Kusakul, Suwat; Tengtrisorn, Supaporn; Jenchitr, Watanee

    2014-01-01

    Introduction As part of the development of a system for the screening of refractive error in Thai children, this study describes the accuracy and feasibility of establishing a program conducted by teachers. Objective To assess the accuracy and feasibility of screening by teachers. Methods A cross-sectional descriptive and analytical study was conducted in 17 schools in four provinces representing four geographic regions in Thailand. A two-staged cluster sampling was employed to compare the detection rate of refractive error among eligible students between trained teachers and health professionals. Serial focus group discussions were held for teachers and parents in order to understand their attitude towards refractive error screening at schools and the potential success factors and barriers. Results The detection rate of refractive error screening by teachers among pre-primary school children is relatively low (21%) for mild visual impairment but higher for moderate visual impairment (44%). The detection rate for primary school children is high for both levels of visual impairment (52% for mild and 74% for moderate). The focus group discussions reveal that both teachers and parents would benefit from further education regarding refractive errors and that the vast majority of teachers are willing to conduct a school-based screening program. Conclusion Refractive error screening by health professionals in pre-primary and primary school children is not currently implemented in Thailand due to resource limitations. However, evidence suggests that a refractive error screening program conducted in schools by teachers in the country is reasonable and feasible because the detection and treatment of refractive error in very young generations is important and the screening program can be implemented and conducted with relatively low costs. PMID:24926993

  13. Individual Differences in Absolute and Relative Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Maki, Ruth H.; Shields, Micheal; Wheeler, Amanda Easton; Zacchilli, Tammy Lowery

    2005-01-01

    The authors investigated absolute and relative metacomprehension accuracy as a function of verbal ability in college students. Students read hard texts, revised texts, or a mixed set of texts. They then predicted their performance, took a multiple-choice test on the texts, and made posttest judgments about their performance. With hard texts,…

  14. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

    Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millenium120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per 56 image pairs. Also, 560 positive tests (with error) were performed, with randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if percent pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
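The ROC construction described in this abstract reduces to sweeping the pass-rate threshold τ over two populations of gamma pass rates: negative (error-free) tests and positive (errored) tests, flagging an image as errored when its pass rate falls below τ. A hedged sketch with fabricated pass rates:

```python
def roc_points(neg_pass_rates, pos_pass_rates):
    """(tau, false-positive rate, true-positive rate) for each threshold."""
    taus = sorted(set(neg_pass_rates) | set(pos_pass_rates) | {0.0, 100.0})
    pts = []
    for tau in taus:
        # An image is flagged as errored when its pass rate is below tau.
        fpr = sum(p < tau for p in neg_pass_rates) / len(neg_pass_rates)
        tpr = sum(p < tau for p in pos_pass_rates) / len(pos_pass_rates)
        pts.append((tau, fpr, tpr))
    return pts

# Fabricated gamma pass rates (percent of pixels with gamma < kappa):
neg = [97.0, 98.5, 99.2, 96.8]   # error-free fields: high pass rates
pos = [80.0, 85.5, 92.0, 95.0]   # errored fields: lower pass rates

for tau, fpr, tpr in roc_points(neg, pos):
    print(f"tau={tau:5.1f}%  FPR={fpr:.2f}  TPR={tpr:.2f}")
```

With well-separated populations, some τ yields FPR 0 and TPR 1; the study's point is that for small or low-contrast errors the two populations overlap, so no τ achieves the required 95% correct classification.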

  15. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario. PMID:23564344

  16. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates.

    PubMed

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-01-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended. PMID:25670023
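    The positional errors reported above are distances between geocoded coordinates and a GPS reference. A minimal great-circle (haversine) sketch; the coordinates shown are invented, not from the study:

    ```python
    import math

    def positional_error_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between a geocoded point and a
        GPS reference point (haversine formula, mean Earth radius)."""
        r = 6371000.0  # mean Earth radius, metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical Detroit-area home: geocoder output vs handheld GPS fix
    err = positional_error_m(42.3314, -83.0458, 42.3317, -83.0462)
    ```
    
    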

  17. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    PubMed Central

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-01-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended. PMID:25670023

  18. The Influence of Tonal and Atonal Contexts on Error Detection Accuracy

    ERIC Educational Resources Information Center

    Groulx, Timothy J.

    2013-01-01

    Music education students ("N" = 21) at a university in the southeastern United States took an error detection test that had been designed for this study to determine the effects of tonal contexts versus atonal contexts on the ability to detect performance errors. The investigator composed 16 melodies, 8 of which were tonal and 8 of which…

  19. An Examination of Error-Related Brain Activity and its Modulation by Error Value in Young Children

    PubMed Central

    Torpey, Dana C.; Hajcak, Greg; Klein, Daniel N.

    2009-01-01

    The error-related negativity (ERN) is an event-related brain potential observed in adults when errors are committed, and which appears to be sensitive to error value. Recent work suggests that the ERN can also be elicited in relatively young children using simple tasks and that ERN amplitude might be sensitive to error value. The current study employed a Go/No-Go paradigm in which 5–7year old children (N=18) earned low or high points for correct responses. Results indicated that errors were associated with an ERN; however, the size was not reliably moderated by error value. PMID:20183731

  20. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    NASA Technical Reports Server (NTRS)

    Olson, Corwin; Long, Anne; Carpenter, Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  1. Implicationally Related Error Patterns and the Selection of Treatment Targets.

    ERIC Educational Resources Information Center

    Dinnsen, Daniel A.; O'Connor, Kathleen M.

    2001-01-01

    This article compares different claims that have been made concerning acquisition by transitional rule-based derivation theories and by optimality theory. Case studies of children with phonological delays are examined. Error patterns are argued to be implicationally related and optimality theory is shown to offer a principled explanation.…

  2. The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.

    2015-01-01

    Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…

  3. The effect of biomechanical variables on force sensitive resistor error: Implications for calibration and improved accuracy.

    PubMed

    Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P

    2016-03-21

    Force Sensitive Resistors (FSRs) are commercially available thin film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such wide spread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and resulting force readings. The effect of these variables and degree to which they interact has yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo/mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible and if multiple FSRs are used in a system, they must be calibrated independently. PMID:26903413
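    The three error measures quantified in the study can be computed directly from calibration residuals. An illustrative sketch with invented FSR readings, not the authors' code:

    ```python
    import math

    def error_metrics(measured, reference):
        """Root mean squared error, mean absolute error, and maximum error
        between sensor readings and reference forces."""
        residuals = [m - r for m, r in zip(measured, reference)]
        rmse = math.sqrt(sum(e * e for e in residuals) / len(residuals))
        mae = sum(abs(e) for e in residuals) / len(residuals)
        emax = max(abs(e) for e in residuals)
        return rmse, mae, emax

    # Hypothetical FSR readings (N) against the applied forces (N)
    rmse, mae, emax = error_metrics([9.8, 20.5, 31.2], [10.0, 20.0, 30.0])
    ```
    
    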

  4. Accuracy of the Generalizability-Model Standard Errors for the Percents of Examinees Reaching Standards.

    ERIC Educational Resources Information Center

    Li, Yuan H.; Schafer, William D.

    An empirical study of the Yen (W. Yen, 1997) analytic formula for the standard error of a percent-above-cut [SE(PAC)] was conducted. This formula was derived from variance component information gathered in the context of generalizability theory. SE(PAC)s were estimated by different methods of estimating variance components (e.g., W. Yens…

  5. A Monte Carlo Comparison of Measures of Relative and Absolute Monitoring Accuracy

    ERIC Educational Resources Information Center

    Nietfeld, John L.; Enders, Craig K.; Schraw, Gregory

    2006-01-01

    Researchers studying monitoring accuracy currently use two different indexes to estimate accuracy: relative accuracy and absolute accuracy. The authors compared the distributional properties of two measures of monitoring accuracy using Monte Carlo procedures that fit within these categories. They manipulated the accuracy of judgments (i.e., chance…
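    The two index families compared above can be sketched as follows: the Goodman-Kruskal gamma correlation between item-by-item confidence judgments and outcomes is one common relative-accuracy index, and the mean absolute deviation between judged and actual performance is one common absolute-accuracy index. The study's exact measures may differ; the values below are invented:

    ```python
    def gamma_correlation(judgments, outcomes):
        """Goodman-Kruskal gamma: (concordant - discordant) pairs over
        (concordant + discordant) pairs. A relative-accuracy index."""
        concordant = discordant = 0
        n = len(judgments)
        for i in range(n):
            for j in range(i + 1, n):
                sign = (judgments[i] - judgments[j]) * (outcomes[i] - outcomes[j])
                if sign > 0:
                    concordant += 1
                elif sign < 0:
                    discordant += 1
        if concordant + discordant == 0:
            return 0.0
        return (concordant - discordant) / (concordant + discordant)

    def absolute_accuracy(judgments, outcomes):
        """Mean absolute deviation between judged and actual performance
        (0 = perfect calibration). An absolute-accuracy index."""
        return sum(abs(j - o) for j, o in zip(judgments, outcomes)) / len(judgments)

    # Hypothetical per-item confidence (0-1) and correctness (0/1)
    conf = [0.9, 0.8, 0.6, 0.4, 0.2]
    correct = [1, 1, 1, 0, 0]
    ```
    
    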

  6. Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2011-01-01

    Errors generate typical brain responses, characterized by two successive event-related potentials (ERPs) following an incorrect action: the error-related negativity (ERN) and the error positivity (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…

  7. The impact of theoretical errors on velocity estimation and accuracy of duplex grading of carotid stenosis.

    PubMed

    Thomas, Nicholas; Taylor, Peter; Padayachee, Soundrie

    2002-02-01

    Two potential errors in velocity estimation, Doppler angle misalignment and intrinsic spectral broadening (ISB), were determined and used to correct recorded blood velocities obtained from 20 patients (38 bifurcations). The recorded and corrected velocities were used to grade stenoses of greater than 70% using two duplex classification schemes. The first scheme used a peak systolic velocity (PSV) of > 250 cm/s in the internal carotid artery (ICA), and the second a PSV ratio of > 3.4 (ICA PSV/common carotid artery PSV). The "gold standard" was digital subtraction angiography (DSA). The maximum error in velocity estimation due to Doppler angle misalignment was 33 cm/s, but this did not alter sensitivity of stenosis detection. ISB correction caused a reduction in PSV that decreased the sensitivity of the PSV scheme from 65% to 45%. The PSV ratio classification was not affected by ISB errors. Centres using a PSV criterion for grading stenosis should use a fixed Doppler angle and should establish velocity thresholds in-house. PMID:11937281
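    The two duplex classification schemes can be sketched as a simple rule. The correction factor below is a hypothetical multiplicative ISB adjustment (<1); note that the PSV-ratio criterion is largely insensitive to ISB because the broadening inflates both the ICA and CCA velocities similarly:

    ```python
    def grade_stenosis(ica_psv, cca_psv, isb_correction=1.0):
        """Flag a >70% ICA stenosis under the two schemes described above.

        ica_psv, cca_psv: peak systolic velocities (cm/s) in the internal
        and common carotid arteries. isb_correction: hypothetical factor
        applied to remove intrinsic spectral broadening from the PSV.
        Returns (flagged_by_psv, flagged_by_ratio).
        """
        corrected_psv = ica_psv * isb_correction
        by_psv = corrected_psv > 250.0         # PSV criterion
        by_ratio = ica_psv / cca_psv > 3.4     # ratio criterion; ISB cancels
        return by_psv, by_ratio
    ```

    With invented velocities of 300/80 cm/s and a 0.8 correction, the PSV scheme no longer flags the stenosis (240 cm/s) while the ratio scheme still does (3.75), mirroring the sensitivity drop reported for the PSV scheme after ISB correction.
    
    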

  8. Error-related negativity predicts reinforcement learning and conflict biases.

    PubMed

    Frank, Michael J; Woroch, Brion S; Curran, Tim

    2005-08-18

    The error-related negativity (ERN) is an electrophysiological marker thought to reflect changes in dopamine when participants make errors in cognitive tasks. Our computational model further predicts that larger ERNs should be associated with better learning to avoid maladaptive responses. Here we show that participants who avoided negative events had larger ERNs than those who were biased to learn more from positive outcomes. We also tested for effects of response conflict on ERN magnitude. While there was no overall effect of conflict, positive learners had larger ERNs when having to choose among two good options (win/win decisions) compared with two bad options (lose/lose decisions), whereas negative learners exhibited the opposite pattern. These results demonstrate that the ERN predicts the degree to which participants are biased to learn more from their mistakes than their correct choices and clarify the extent to which it indexes decision conflict. PMID:16102533

  9. An analysis of pilot error-related aircraft accidents

    NASA Technical Reports Server (NTRS)

    Kowalsky, N. B.; Masters, R. L.; Stone, R. B.; Babcock, G. L.; Rypka, E. W.

    1974-01-01

    A multidisciplinary team approach to pilot error-related U.S. air carrier jet aircraft accident investigation records successfully reclaimed hidden human error information not shown in statistical studies. New analytic techniques were developed and applied to the data to discover and identify multiple elements of commonality and shared characteristics within this group of accidents. Three techniques of analysis were used: Critical element analysis, which demonstrated the importance of a subjective qualitative approach to raw accident data and surfaced information heretofore unavailable. Cluster analysis, which was an exploratory research tool that will lead to increased understanding and improved organization of facts, the discovery of new meaning in large data sets, and the generation of explanatory hypotheses. Pattern recognition, by which accidents can be categorized by pattern conformity after critical element identification by cluster analysis.

  10. Technical Errors May Affect Accuracy of Torque Limiter in Locking Plate Osteosynthesis.

    PubMed

    Savin, David D; Lee, Simon; Bohnenkamp, Frank C; Pastor, Andrew; Garapati, Rajeev; Goldberg, Benjamin A

    2016-01-01

    In locking plate osteosynthesis, proper surgical technique is crucial in reducing potential pitfalls, and use of a torque limiter makes it possible to control insertion torque. We conducted a study of the ways in which different techniques can alter the accuracy of torque limiters. We tested 22 torque limiters (1.5 Nm) for accuracy using hand and power tools under different rotational scenarios: hand power at low and high velocity and drill power at low and high velocity. We recorded the maximum torque reached after each torque-limiting event. Use of torque limiters under hand power at low velocity and high velocity resulted in significantly (P < .0001) different mean (SD) measurements: 1.49 (0.15) Nm and 3.73 (0.79) Nm. Use under drill power at controlled low velocity and at high velocity also resulted in significantly (P < .0001) different mean (SD) measurements: 1.47 (0.14) Nm and 5.37 (0.90) Nm. Maximum single measurement obtained was 9.0 Nm using drill power at high velocity. Locking screw insertion with improper technique may result in higher than expected torque and subsequent complications. For torque limiters, the most reliable technique involves hand power at slow velocity or drill power with careful control of insertion speed until 1 torque-limiting event occurs. PMID:26991576

  11. Assessing Accuracy of Waveform Models against Numerical Relativity Waveforms

    NASA Astrophysics Data System (ADS)

    Pürrer, Michael; LVC Collaboration

    2016-03-01

    We compare currently available phenomenological and effective-one-body inspiral-merger-ringdown models for gravitational waves (GW) emitted from coalescing black hole binaries against a set of numerical relativity waveforms from the SXS collaboration. Simplifications are used in the construction of some waveform models, such as restriction to spins aligned with the orbital angular momentum, no inclusion of higher harmonics in the GW radiation, no modeling of eccentricity and the use of effective parameters to describe spin precession. In contrast, NR waveforms provide us with a high fidelity representation of the ``true'' waveform modulo small numerical errors. To focus on systematics we inject NR waveforms into zero noise for early advanced LIGO detector sensitivity at a moderately optimistic signal-to-noise ratio. We discuss where in the parameter space the above modeling assumptions lead to noticeable biases in recovered parameters.

  12. TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW

    PubMed Central

    Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

    2012-01-01

    Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well-supported data on which training errors relate to or cause running related injuries are highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large-scale prospective study, and explore cause-and-effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

  13. On estimating the accuracy of monitoring methods using Bayesian error propagation technique

    NASA Astrophysics Data System (ADS)

    Zonta, Daniele; Bruschetta, Federico; Cappello, Carlo; Zandonini, R.; Pozzi, Matteo; Wang, Ming; Glisic, B.; Inaudi, D.; Posenato, D.; Zhao, Y.

    2014-04-01

    This paper illustrates an application of Bayesian logic to monitoring data analysis and structural condition state inference. The case study is a 260 m long cable-stayed bridge spanning the Adige River 10 km north of the town of Trento, Italy. This is a statically indeterminate structure, having a composite steel-concrete deck, supported by 12 stay cables. Structural redundancy, possible relaxation losses, and an as-built condition differing from design suggest that long-term load redistribution between cables can be expected. To monitor load redistribution, the owner decided to install a monitoring system which combines built-on-site elasto-magnetic (EM) and fiber-optic (FOS) sensors. In this note, we discuss a rational way to improve the accuracy of the load estimate from the EM sensors by taking advantage of the FOS information. More specifically, we use a multi-sensor Bayesian data fusion approach which combines the information from the two sensing systems with the prior knowledge, including design information and the outcomes of laboratory calibration. Using the data acquired to date, we demonstrate that combining the two measurements allows a more accurate estimate of the cable load, to better than 50 kN.
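    For two independent Gaussian estimates of the same cable load, Bayesian fusion reduces to a precision-weighted average whose posterior variance is smaller than either input's. A minimal sketch; the EM/FOS uncertainties shown are invented, and the paper's actual multi-sensor fusion model is richer than this:

    ```python
    def fuse_gaussian(mu1, var1, mu2, var2):
        """Fuse two independent Gaussian estimates of one quantity.

        The posterior mean is the precision-weighted average of the two
        means; the posterior variance is the inverse of the summed
        precisions, so it is always below min(var1, var2).
        """
        w1, w2 = 1.0 / var1, 1.0 / var2
        mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
        var = 1.0 / (w1 + w2)
        return mu, var

    # Hypothetical cable-load estimates (kN): EM sensor (sigma = 80 kN)
    # fused with an FOS-derived estimate (sigma = 40 kN)
    mu, var = fuse_gaussian(1450.0, 80.0 ** 2, 1500.0, 40.0 ** 2)
    ```
    
    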

  14. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    PubMed Central

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204
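    Discriminating ErrPs of different origin is, at its core, supervised classification of single-trial feature vectors. A minimal nearest-class-mean sketch with invented features, far simpler than the machine-learning pipeline actually used in the study:

    ```python
    def class_means(features, labels):
        """Mean feature vector per class (e.g. execution vs outcome error)."""
        sums, counts = {}, {}
        for x, y in zip(features, labels):
            if y not in sums:
                sums[y] = [0.0] * len(x)
                counts[y] = 0
            sums[y] = [s + v for s, v in zip(sums[y], x)]
            counts[y] += 1
        return {y: [s / counts[y] for s in sums[y]] for y in sums}

    def classify(x, means):
        """Assign a single-trial feature vector to the nearest class mean."""
        def dist2(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))
        return min(means, key=lambda y: dist2(x, means[y]))

    # Invented 2-D features (e.g. ErrP amplitude, spectral power)
    feats = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]]
    labels = ["execution", "execution", "outcome", "outcome"]
    means = class_means(feats, labels)
    ```
    
    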

  15. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity.

    PubMed

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  16. Error-Related Negativities During Spelling Judgments Expose Orthographic Knowledge

    PubMed Central

    Harris, Lindsay N.; Perfetti, Charles A.; Rickles, Benjamin

    2014-01-01

    In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects’ spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. PMID:24389506

  17. Short-term solar pressure effect and GM uncertainty on TDRS orbital accuracy: A study of the interaction of modeling error with tracking and orbit determination

    NASA Technical Reports Server (NTRS)

    Fang, B. T.

    1979-01-01

    The TDRS was modeled as a combination of a sun-pointing solar panel and earth-pointing plate. Based on this model, explanations are given for the following orbit determination error characteristics: inherent limits in orbital accuracy, the variation of solar pressure induced orbital error with time of the day of epoch, the insensitivity of range-rate orbits to GM error, and optimum bilateration baseline.

  18. Predicting errors from patterns of event-related potentials preceding an overt response.

    PubMed

    Bode, Stefan; Stahl, Jutta

    2014-12-01

    Everyday actions often require fast and efficient error detection and error correction. For this, the brain has to accumulate evidence for errors as soon as it becomes available. This study used multivariate pattern classification techniques for event-related potentials to track the accumulation of error-related brain activity before an overt response was made. Upcoming errors in a digit-flanker task could be predicted after the initiation of an erroneous motor response, approximately 90 ms before response execution. Channels over motor and parieto-occipital cortices were most important for error prediction, suggesting ongoing perceptual analyses and comparisons of initiated and appropriate motor programmes. Lower response force on error trials as compared to correct trials was observed, which indicates that this early error information was used for attempts to correct for errors before the overt response was made. In summary, our results suggest an early, automatic accumulation of error-related information, providing input for fast correction processes. PMID:25450163

  19. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model

    PubMed Central

    Song, Jin-Myoung; Cho, Jin-Hyoung

    2016-01-01

    Purpose The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scan. Materials and Methods Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during CBCT scan: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered to occur at the start of the scan for 1 second by remote control. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Results Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors showed larger values in models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). Statistically significant difference in the errors between the right and left side landmark was present in the left side rotation which was opposite direction to the scanner rotation (P < .05). Conclusions Patient movement during CBCT scan might cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration to prevent patient movement during CBCT scan, particularly horizontal movement. PMID:27065238
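    The landmark identification error compared in the study is the 3D Euclidean distance between the same landmark located on a motion model and on the control model. A minimal sketch with invented coordinates:

    ```python
    import math

    def landmark_error(p_motion, p_control):
        """3D identification error: Euclidean distance between the same
        landmark on the head-motion model and the motion-free control."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_motion, p_control)))

    # Hypothetical Porion coordinates (mm) on the two surface models
    err = landmark_error((102.4, 55.1, 80.0), (101.9, 55.6, 79.3))
    ```
    
    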

  20. Error-Related Functional Connectivity of the Habenula in Humans

    PubMed Central

    Ide, Jaime S.; Li, Chiang-Shan R.

    2011-01-01

    Error detection is critical to the shaping of goal-oriented behavior. Recent studies in non-human primates delineated a circuit involving the lateral habenula (LH) and ventral tegmental area (VTA) in error detection. Neurons in the LH increased activity, preceding decreased activity in the VTA, to a missing reward, indicating a feedforward signal from the LH to VTA. In the current study we used connectivity analyses to reveal this pathway in humans. In 59 adults performing a stop signal task during functional magnetic resonance imaging, we identified brain regions showing greater psychophysiological interaction with the habenula during stop error as compared to stop success trials. These regions included a cluster in the VTA/substantia nigra (SN), internal segment of globus pallidus, bilateral amygdala, and insula. Furthermore, using Granger causality and mediation analyses, we showed that the habenula Granger caused the VTA/SN, establishing the direction of this interaction, and that the habenula mediated the functional connectivity between the amygdala and VTA/SN during error processing. To our knowledge, these findings are the first to demonstrate a feedforward influence of the habenula on the VTA/SN during error detection in humans. PMID:21441989

  1. The content of lexical stimuli and self-reported physiological state modulate error-related negativity amplitude.

    PubMed

    Benau, Erik M; Moelter, Stephen T

    2016-09-01

    The Error-Related Negativity (ERN) and Correct-Response Negativity (CRN) are brief event-related potential (ERP) components, elicited after the commission of a response, that are associated with motivation, emotion, and affect. The Error Positivity (Pe) typically appears after the ERN and corresponds to awareness of having committed an error. Although motivation has long been established as an important factor in the expression and morphology of the ERN, physiological state has rarely been explored as a variable in these investigations. In the present study, we investigated whether self-reported physiological state (SRPS; wakefulness, hunger, or thirst) corresponds with ERN amplitude and type of lexical stimuli. Participants completed an SRPS questionnaire and then completed a speeded Lexical Decision Task with words and pseudowords that were either food-related or neutral. Though similar in frequency and length, food-related stimuli elicited increased accuracy and faster errors, and generated a larger ERN and smaller CRN than neutral words. Self-reported thirst correlated with improved accuracy and with smaller ERN and CRN amplitudes. The Pe and Pc (correct positivity) were not affected by physiological state or by stimulus content. The results indicate that physiological state and manipulations of lexical content may serve as important avenues for future research. Studies that apply more sensitive measures of physiological and motivational state (e.g., biomarkers for satiety) or direct manipulations of satiety may prove useful for research into response monitoring. PMID:27129675

  2. Activation of the human sensorimotor cortex during error-related processing: a magnetoencephalography study.

    PubMed

    Stemmer, Brigitte; Vihla, Minna; Salmelin, Riitta

    2004-05-13

    We studied error-related processing using magnetoencephalography (MEG). Previous event-related potential studies have documented error negativity or error-related negativity after incorrect responses, with a suggested source in the anterior cingulate cortex or supplementary motor area. We compared activation elicited by correct and incorrect trials using auditory and visual choice-reaction time tasks. Source areas showing different activation patterns in correct and error conditions were mainly located in sensorimotor areas, both ipsi- and contralateral to the response, suggesting that activation of sensorimotor circuits accompanies error processing. Additional activation at various other locations suggests a distributed network of brain regions active during error-related processing. Activation specific to incorrect trials tended to occur later in MEG than EEG data, possibly indicating that EEG and MEG detect different neural networks involved in error-related processes. PMID:15147777

  3. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... underpayment of tax on which the fraud penalty set forth in section 6663 is imposed. (b) Amount of penalty—(1... in 26 CFR part 1 revised April 1, 1995) apply to returns the due date of which (determined without....6662-3 (as contained in 26 CFR part 1 revised April 1, 1995) relating to those penalties will apply...

  4. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... portion of an underpayment of tax on which the fraud penalty set forth in section 6663 is imposed. (b... contained in 26 CFR part 1 revised April 1, 1995) apply to returns the due date of which (determined without....6662-3 (as contained in 26 CFR part 1 revised April 1, 1995) relating to those penalties will apply...

  5. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... portion of an underpayment of tax on which the fraud penalty set forth in section 6663 is imposed. (b... contained in 26 CFR part 1 revised April 1, 1995) apply to returns the due date of which (determined without....6662-3 (as contained in 26 CFR part 1 revised April 1, 1995) relating to those penalties will apply...

  6. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... portion of an underpayment of tax on which the fraud penalty set forth in section 6663 is imposed. (b... contained in 26 CFR part 1 revised April 1, 1995) apply to returns the due date of which (determined without....6662-3 (as contained in 26 CFR part 1 revised April 1, 1995) relating to those penalties will apply...

  7. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. At about the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects.
The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  8. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for each EBI method. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as methods. The limits of agreement produced from the Bland-Altman analysis also indicated that the performance of the single frequency (Sun's) prediction equations at the population level was close to that of both BIS methods; however, when the Mean Absolute Percentage Error values of the single frequency prediction equations and the BIS methods were compared, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of the BIS methods over the 50 kHz prediction equations at both the population and the individual level, the magnitude of the improvement was small. This slight improvement in accuracy is suggested to be insufficient to warrant clinical use of BIS methods where the most accurate predictions of TBW are required, for example, when assessing fluid-overload status in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489

  9. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations.

    PubMed

    Seoane, Fernando; Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar; Ward, Leigh C

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for each EBI method. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as methods. The limits of agreement produced from the Bland-Altman analysis also indicated that the performance of the single frequency (Sun's) prediction equations at the population level was close to that of both BIS methods; however, when the Mean Absolute Percentage Error values of the single frequency prediction equations and the BIS methods were compared, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of the BIS methods over the 50 kHz prediction equations at both the population and the individual level, the magnitude of the improvement was small. This slight improvement in accuracy is suggested to be insufficient to warrant clinical use of BIS methods where the most accurate predictions of TBW are required, for example, when assessing fluid-overload status in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
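    As an illustration of the comparison metric used in this record, a Mean Absolute Percentage Error can be computed against a reference measurement. The sketch below uses synthetic TBW values (hypothetical numbers, not the study's data) for a single-frequency method and a BIS method:

    ```python
    import numpy as np

    def mape(reference, predicted):
        """Mean Absolute Percentage Error, in percent."""
        reference = np.asarray(reference, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return 100.0 * np.mean(np.abs((predicted - reference) / reference))

    # Synthetic example (not the study's data): dilution-reference TBW in litres
    # versus predictions from two hypothetical methods.
    tbw_ref = np.array([35.0, 42.0, 38.5, 50.2, 44.1])
    tbw_single_freq = np.array([33.1, 44.5, 40.0, 47.8, 46.0])
    tbw_bis = np.array([34.2, 42.9, 39.1, 49.0, 44.9])

    print(f"single-frequency MAPE: {mape(tbw_ref, tbw_single_freq):.2f}%")
    print(f"BIS MAPE:              {mape(tbw_ref, tbw_bis):.2f}%")
    ```

    A lower MAPE for one method over the other, as in the study's comparison, indicates smaller typical relative error for a single measurement.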

  10. Relation between minimum-error discrimination and optimum unambiguous discrimination

    SciTech Connect

    Qiu Daowen; Li Lvjun

    2010-09-15

    In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. It is known that, for discriminating two states, the inequality Q_U >= 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U >= 2Q_E may no longer hold; the infimum of Q_U/Q_E is 1, and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas for the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship Q_U >= (m/(m-1))Q_E.
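    The two-state bound can be checked numerically. The sketch below assumes equal prior probabilities and pure qubit states, and uses the standard Helstrom expression for the minimum-error probability and the Ivanovic-Dieks-Peres overlap bound for the optimal unambiguous failure probability (textbook results for this special case, not formulas from this record):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def random_qubit():
        """Random normalized pure state in C^2."""
        v = rng.normal(size=2) + 1j * rng.normal(size=2)
        return v / np.linalg.norm(v)

    for _ in range(1000):
        psi1, psi2 = random_qubit(), random_qubit()
        s = abs(np.vdot(psi1, psi2))           # overlap |<psi1|psi2>|
        # Helstrom minimum-error probability, equal priors p1 = p2 = 1/2
        Q_E = 0.5 * (1.0 - np.sqrt(1.0 - s**2))
        # IDP optimal failure probability of unambiguous discrimination, equal priors
        Q_U = s
        assert Q_U >= 2.0 * Q_E - 1e-12        # the two-state bound Q_U >= 2 Q_E
    ```

    Every random pair satisfies Q_U >= 2Q_E, consistent with the two-state inequality quoted above; the record's point is that this bound can fail once more than two states are discriminated.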

  11. A high-accuracy roundness measurement for cylindrical components by a morphological filter considering eccentricity, probe offset, tip head radius and tilt error

    NASA Astrophysics Data System (ADS)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Zhou, Tong; Kuang, Ye

    2016-08-01

    A morphological filter is proposed to obtain a high-accuracy roundness measurement based on the four-parameter roundness measurement model, which takes into account eccentricity, probe offset, probe tip head radius and tilt error. This paper analyses the sample angle deviations caused by the four systematic errors to design a morphological filter based on the distribution of the sample angle. The effectiveness of the proposed method is verified through simulations and experiments performed with a roundness measuring machine. Compared to the morphological filter with the uniform sample angle, the accuracy of the roundness measurement can be increased by approximately 0.09 μm using the morphological filter with a non-uniform sample angle based on the four-parameter roundness measurement model, when eccentricity is above 16 μm, probe offset is approximately 1000 μm, tilt error is approximately 1″, the probe tip head radius is 1 mm and the cylindrical component radius is approximately 37 mm. The accuracy and reliability of roundness measurements are improved by using the proposed method for cylindrical components with a small radius, especially if the eccentricity and probe offset are large, and the tilt error and probe tip head radius are small. The proposed morphological filter method can be used for precision and ultra-precision roundness measurements, especially for functional assessments of roundness profiles.

  12. Decreasing Errors in Reading-Related Matching to Sample Using a Delayed-Sample Procedure

    ERIC Educational Resources Information Center

    Doughty, Adam H.; Saunders, Kathryn J.

    2009-01-01

    Two men with intellectual disabilities initially demonstrated intermediate accuracy in two-choice matching-to-sample (MTS) procedures. A printed-letter identity MTS procedure was used with 1 participant, and a spoken-to-printed-word MTS procedure was used with the other participant. Errors decreased substantially under a delayed-sample procedure,…

  13. Effect of radiometric errors on accuracy of temperature-profile measurement by spectral scanning using absorption-emission pyrometry

    NASA Technical Reports Server (NTRS)

    Buchele, D. R.

    1972-01-01

    The spectral-scanning method may be used to determine the temperature profile of a jet- or rocket-engine exhaust stream by measurements of gas radiation and transmittance at two or more wavelengths. A single fixed line of sight is used, with immobile radiators outside the gas stream, and there is no interference with the flow. At least two sets of measurements are made, each set consisting of the conventional three radiometric measurements of absorption-emission pyrometry, but each set is taken over a different spectral interval that gives different weight to the radiation from a different portion of the optical path. Thereby, discrimination is obtained with respect to location along the path. A given radiometric error causes an error in computed temperatures. The ratio between the temperature error and the radiometric error depends on the profile shape, path length, temperature level, strength of line absorption, and on the absorption coefficient and its temperature dependence. These factors influence the choice of wavelengths for any given gas. Conditions for minimum temperature error are derived. Numerical results are presented for a two-wavelength measurement on a family of profiles that may be expected in a practical case of hydrogen-oxygen combustion. Under favorable conditions, the fractional error in temperature approximates the fractional error in the radiant-flux measurement.

  14. Accuracy assessment system and operation

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Houston, A. G.; Badhwar, G.; Bender, M. J.; Rader, M. L.; Eppler, W. G.; Ahlers, C. W.; White, W. P.; Vela, R. R.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    The accuracy and reliability of LACIE estimates of wheat production, area, and yield are determined at regular intervals throughout the year by the accuracy assessment subsystem, which also investigates the various LACIE error sources, quantifies the errors, and relates them to their causes. Timely feedback of these error evaluations to the LACIE project was the only mechanism by which improvements in the crop estimation system could be made during the short 3-year experiment.

  15. Research Into the Collimation and Horizontal Axis Errors Influence on the Z+F Laser Scanner Accuracy of Verticality Measurement

    NASA Astrophysics Data System (ADS)

    Sawicki, J.; Kowalczyk, M.

    2016-06-01

    The aim of this study was to determine the values of the collimation and horizontal axis errors of the Z+F 5006h laser scanner owned by the Department of Geodesy and Cartography, Warsaw University of Technology, and then to determine the effect of those errors on the results of measurements. An experiment was performed, involving measurement of the test field established in the Main Hall of the Main Building of the Warsaw University of Technology, during which the values of the instrumental errors of interest were determined. A universal computer program was then developed that automates the proposed algorithm and is capable of applying corrections to measured target coordinates, or even to entire point clouds from individual stations.

  16. 47 CFR 1.1167 - Error claims related to regulatory fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Error claims related to regulatory fees. 1.1167... of Statutory Charges and Procedures for Payment § 1.1167 Error claims related to regulatory fees. (a... to ARINQUIRIES@fcc.gov. (b) The filing of a petition for reconsideration or an application for...

  17. 47 CFR 1.1167 - Error claims related to regulatory fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Error claims related to regulatory fees. 1.1167... of Statutory Charges and Procedures for Payment § 1.1167 Error claims related to regulatory fees. (a... to ARINQUIRIES@fcc.gov. (b) The filing of a petition for reconsideration or an application for...

  18. Using brain potentials to understand prism adaptation: the error-related negativity and the P300.

    PubMed

    MacLean, Stephane J; Hassall, Cameron D; Ishigami, Yoko; Krigolson, Olav E; Eskes, Gail A

    2015-01-01

    Prism adaptation (PA) is both a perceptual-motor learning task and a promising rehabilitation tool for visuo-spatial neglect (VSN), a spatial attention disorder often experienced after stroke that results in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuomotor responding is shifted in the direction opposite to that initially induced by the prisms. This visuomotor aftereffect has been used to study visuomotor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA's use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an error-related negativity (ERN) and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may subserve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error-processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN. PMID:26124715

  19. Using brain potentials to understand prism adaptation: the error-related negativity and the P300

    PubMed Central

    MacLean, Stephane J.; Hassall, Cameron D.; Ishigami, Yoko; Krigolson, Olav E.; Eskes, Gail A.

    2015-01-01

    Prism adaptation (PA) is both a perceptual-motor learning task and a promising rehabilitation tool for visuo-spatial neglect (VSN), a spatial attention disorder often experienced after stroke that results in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuomotor responding is shifted in the direction opposite to that initially induced by the prisms. This visuomotor aftereffect has been used to study visuomotor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA's use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an error-related negativity (ERN) and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may subserve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error-processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN. PMID:26124715

  20. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-06-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors either or both in the range measurements and their treatment. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.

  1. Colloquium: Quantum root-mean-square error and measurement uncertainty relations

    NASA Astrophysics Data System (ADS)

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F.

    2014-10-01

    Recent years have witnessed a controversy over Heisenberg's famous error-disturbance relation. Here the conflict is resolved by way of an analysis of the possible conceptualizations of measurement error and disturbance in quantum mechanics. Two approaches to adapting the classic notion of root-mean-square error to quantum measurements are discussed. One is based on the concept of a noise operator; its natural operational content is that of a mean deviation of the values of two observables measured jointly, and thus its applicability is limited to cases where such joint measurements are available. The second error measure quantifies the differences between two probability distributions obtained in separate runs of measurements and is of unrestricted applicability. We show that there are no nontrivial unconditional joint-measurement bounds for state-dependent errors in the conceptual framework discussed here, while Heisenberg-type measurement uncertainty relations for state-independent errors have been proven.

  2. Error-related electromyographic activity over the corrugator supercilii is associated with neural performance monitoring.

    PubMed

    Elkins-Brown, Nathaniel; Saunders, Blair; Inzlicht, Michael

    2016-02-01

    Emerging research in social and affective neuroscience has implicated a role for affect and motivation in performance monitoring and cognitive control. No study, however, has investigated whether facial electromyography (EMG) over the corrugator supercilii-a measure associated with negative affect and the exertion of effort-is related to neural performance monitoring. Here, we explored these potential relationships by simultaneously measuring the error-related negativity, error positivity (Pe), and facial EMG over the corrugator supercilii muscle during a punished, inhibitory control task. We found evidence for increased facial EMG activity over the corrugator immediately following error responses, and this activity was related to the Pe for both between- and within-subject analyses. These results are consistent with the idea that early, avoidance-motivated processes are associated with performance monitoring, and that such processes may also be related to orienting toward errors, the emergence of error awareness, or both. PMID:26470645

  3. [Learning from errors after a care-related adverse event].

    PubMed

    Richard, Christian; Pibarot, Marie-Laure; Zantman, Françoise

    2016-04-01

    The mobilisation of all health professionals with regard to the detection and analysis of care-related adverse events is an essential element in the improvement of the safety of care. This approach is required by the authorities and justifiably expected by users. PMID:27085926

  4. Error-Related Activity and Correlates of Grammatical Plasticity

    PubMed Central

    Davidson, Doug J.; Indefrey, Peter

    2011-01-01

    Cognitive control involves not only the ability to manage competing task demands, but also the ability to adapt task performance during learning. This study investigated how violation-, response-, and feedback-related electrophysiological (EEG) activity changes over time during language learning. Twenty-two Dutch learners of German classified short prepositional phrases presented serially as text. The phrases were initially presented without feedback during a pre-test phase, and then with feedback in a training phase on two separate days spaced 1 week apart. The stimuli included grammatically correct phrases, as well as grammatical violations of gender and declension. Without feedback, participants’ classification was near chance and did not improve over trials. During training with feedback, behavioral classification improved, and violation responses in the form of a P600 emerged for both types of violation. Feedback-related negative and positive components were also present from the first day of training. The results show changes in the electrophysiological responses in concert with improving behavioral discrimination, suggesting that the activity is related to grammar learning. PMID:21960979

  5. Spouses' Effectiveness as End-of-Life Health Care Surrogates: Accuracy, Uncertainty, and Errors of Overtreatment or Undertreatment

    ERIC Educational Resources Information Center

    Moorman, Sara M.; Carr, Deborah

    2008-01-01

    Purpose: We document the extent to which older adults accurately report their spouses' end-of-life treatment preferences, in the hypothetical scenarios of terminal illness with severe physical pain and terminal illness with severe cognitive impairment. We investigate the extent to which accurate reports, inaccurate reports (i.e., errors of…

  6. Error Self-Correction and Spelling: Improving the Spelling Accuracy of Secondary Students with Disabilities in Written Expression

    ERIC Educational Resources Information Center

    Viel-Ruma, Kim; Houchins, David; Fredrick, Laura

    2007-01-01

    In order to improve the spelling performance of high school students with deficits in written expression, an error self-correction procedure was implemented. The participants were two tenth-grade students and one twelfth-grade student in a program for individuals with learning disabilities. Using an alternating treatments design, the effect of…

  7. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Lai, Keke

    2011-01-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

  8. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    EPA Science Inventory

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...

  9. Stress regulation and cognitive control: evidence relating cortisol reactivity and neural responses to errors.

    PubMed

    Compton, Rebecca J; Hofheimer, Julia; Kazinka, Rebecca

    2013-03-01

    In this study, we tested the relationship between error-related signals of cognitive control and cortisol reactivity, investigating the hypothesis of common systems for cognitive and emotional self-regulation. Eighty-three participants completed a Stroop task while electroencephalography (EEG) was recorded. Three error-related indices were derived from the EEG: the error-related negativity (ERN), error positivity (Pe), and error-related alpha suppression (ERAS). Pre- and posttask salivary samples were assayed for cortisol, and cortisol change scores were correlated with the EEG variables. Better error-correct differentiation in the ERN predicted less cortisol increase during the task, whereas greater ERAS predicted greater cortisol increase during the task; the Pe was not correlated with cortisol changes. We concluded that an enhanced ERN, part of an adaptive cognitive control system, predicts successful stress regulation. In contrast, an enhanced ERAS response may reflect error-related arousal that is not adaptive. The results support the concept of overlapping systems for cognitive and emotional self-regulation. PMID:23055094

  10. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited, because they minimized…

  11. The error-related negativity (ERN) and psychopathology: Toward an Endophenotype

    PubMed Central

    Olvet, Doreen M.; Hajcak, Greg

    2008-01-01

    The ERN is a negative deflection in the event-related potential that peaks approximately 50 ms after the commission of an error. The ERN is thought to reflect early error-processing activity of the anterior cingulate cortex (ACC). First, we review current functional, neurobiological, and developmental data on the ERN. Next, the ERN is discussed in terms of three psychiatric disorders characterized by abnormal response monitoring: anxiety disorders, depression, and substance abuse. These data indicate that increased and decreased error-related brain activity is associated with the internalizing and externalizing dimensions of psychopathology, respectively. Recent data further suggest that the abnormal error processing indexed by the ERN reflects trait- but not state-related symptoms, especially those related to anxiety. Overall, these data point to the utility of the ERN in studying risk for psychiatric disorders, and are discussed in terms of the endophenotype construct. PMID:18694617

  12. Confidence-Accuracy Calibration in Absolute and Relative Face Recognition Judgments

    ERIC Educational Resources Information Center

    Weber, Nathan; Brewer, Neil

    2004-01-01

    Confidence-accuracy (CA) calibration was examined for absolute and relative face recognition judgments as well as for recognition judgments from groups of stimuli presented simultaneously or sequentially (i.e., simultaneous or sequential mini-lineups). When the effect of difficulty was controlled, absolute and relative judgments produced…

  14. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating the accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were…
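    The gamma passing rates discussed above come from the gamma index (Low et al. formulation); a minimal 1D sketch with global dose normalization follows. Clinical tools work in 2D/3D and interpolate between grid points, so this brute-force version only shows the core idea:

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta_mm=3.0, dd_frac=0.03):
    """1D global gamma index of an evaluated dose profile against a
    reference profile.  The dose criterion is a fraction of the
    reference maximum (global normalization); a point passes if its
    gamma value is <= 1."""
    dd_abs = dd_frac * ref_dose.max()
    gammas = []
    for x_r, d_r in zip(ref_pos, ref_dose):
        # Capital-Gamma over all evaluated points; gamma is its minimum.
        cap = np.sqrt(((eval_pos - x_r) / dta_mm) ** 2 +
                      ((eval_dose - d_r) / dd_abs) ** 2)
        gammas.append(cap.min())
    return np.array(gammas)

# Identical profiles pass everywhere (gamma == 0 at every point):
x = np.linspace(0.0, 50.0, 101)           # positions in mm
d = np.exp(-((x - 25.0) / 10.0) ** 2)     # a synthetic dose profile
g = gamma_1d(x, d, x, d)
print((g <= 1.0).mean())                  # passing rate -> 1.0
```

    The study's point is that a high passing rate from this metric at 3%/3 mm can coexist with clinically meaningful systematic errors, which is why tighter criteria (e.g., 2%/2 mm with local normalization) and other diagnostics were needed.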

  15. Evaluation of Relative Geometric Accuracy of Terrasar-X by Pixel Matching Methodology

    NASA Astrophysics Data System (ADS)

    Nonaka, T.; Asaka, T.; Iwashita, K.

    2016-06-01

    Recently, high-resolution commercial SAR satellites with resolutions of a few meters have been widely utilized for various applications, with disaster monitoring among the most common. Information about flooding and ground displacement was rapidly announced to the public after the 2011 Great East Japan Earthquake. One study reported the displacement in the Tohoku region by pixel matching using both pre- and post-event TerraSAR-X data, with a validated accuracy of about 30 cm at the GEONET reference points. In order to discuss the spatial distribution of the displacement, we need to evaluate the relative accuracy of the displacement in addition to the absolute accuracy. In previous studies, our team evaluated the absolute 2D geo-location accuracy of the TerraSAR-X ortho-rectified EEC product for both flat and mountainous areas. The purpose of the current study was therefore to evaluate the spatial and temporal relative geo-location accuracies of the product, treating the apparent displacement of a fixed point as the relative geo-location accuracy. First, using a TerraSAR-X StripMap dataset, a pixel matching method estimating displacement at the sub-pixel level was developed. Second, the validity of the method was confirmed by comparison with GEONET data; the accuracy of the displacement in the X and Y directions was in agreement with previous studies. Subsequently, the methodology was applied to 20 pairs of datasets for areas of Tokyo Ota-ku and Kawasaki-shi, and the displacement of each pair was evaluated. It was revealed that the time-series displacement rate had a seasonal trend that seemed to be related to atmospheric delay.
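    Sub-pixel displacement estimation of this kind is commonly done by locating a cross-correlation peak at integer lags and refining it by parabolic interpolation; a minimal 1D sketch follows. Actual SAR pixel matching operates on 2D image chips with normalization and masking, so this function is an illustrative simplification:

```python
import numpy as np

def subpixel_shift(ref, mov, max_shift=5):
    """Estimate the 1D shift of `mov` relative to `ref` to sub-pixel
    precision: integer-lag cross-correlation followed by a 3-point
    parabolic interpolation of the correlation peak."""
    lags = np.arange(-max_shift, max_shift + 1)
    corr = np.array([np.dot(np.roll(mov, -k), ref) for k in lags], float)
    i = int(np.argmax(corr))
    if 0 < i < len(lags) - 1:                  # refine with a parabola fit
        y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    else:
        frac = 0.0                             # peak at the search edge
    return lags[i] + frac

# A Gaussian feature shifted by exactly 3 samples is recovered as 3.0:
x = np.arange(64)
ref = np.exp(-((x - 32.0) / 4.0) ** 2)
print(round(subpixel_shift(ref, np.roll(ref, 3)), 3))   # -> 3.0
```

    With real imagery the recovered shifts are fractional, which is what allows displacement estimates well below the nominal pixel size.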

  16. Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder

    ERIC Educational Resources Information Center

    Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.

    2012-01-01

    Objective: The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes…

  17. SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION

    SciTech Connect

    Lee, Khee-Gan

    2012-07-10

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under- (over-) estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ −0.1, while the error is increased to σ_γ ≈ 0.2, compared with σ_γ ≈ 0.1 in the absence of continuum errors.

  18. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

    Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of Mesoscale testing cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results highlighting improvements made in resolving atmospheric dynamics in the vertical direction where many existing methods are deficient.

  19. Precision error in dual-photon absorptiometry related to source age

    SciTech Connect

    Ross, P.D.; Wasnich, R.D.; Vogel, J.M.

    1988-02-01

    An average, variable precision error of up to 6% related to source age was observed for dual-photon absorptiometry of the spine in a longitudinal study of bone mineral content involving 393 women. Application of a software correction for source decay compensated for only a portion of this error. The authors conclude that measurement of bone-loss rates using serial dual-photon bone mineral measurements must be interpreted with caution.

  20. The damaging effect of confirming feedback on the relation between eyewitness certainty and identification accuracy.

    PubMed

    Bradfield, Amy L; Wells, Gary L; Olson, Elizabeth A

    2002-02-01

    The authors investigated eyewitnesses' retrospective certainty (see G. L. Wells & A. L. Bradfield, 1999). The authors hypothesized that external influence from the lineup administrator would damage the certainty-accuracy relation by inflating the retrospective certainty of inaccurate eyewitnesses more than that of accurate eyewitnesses (N = 245). Two variables were manipulated: eyewitness accuracy (through the presence or absence of the culprit in the lineup) and feedback (confirming vs. control). Confirming feedback inflated retrospective certainty more for inaccurate eyewitnesses than for accurate eyewitnesses, significantly reducing the certainty-accuracy relation (from r = .58 in the control condition to r = .37 in the confirming feedback condition). Double-blind testing is recommended for lineups to prevent these external influences on eyewitnesses. PMID:11916205
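    A drop from r = .58 to r = .37 between independent groups is conventionally tested with Fisher's r-to-z transformation; a sketch is below. The per-condition sample sizes are placeholders (the abstract reports only the total N = 245, so an even split is assumed for illustration):

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-tailed z test for the difference between two independent
    correlations via Fisher's r-to-z transformation.  Returns the z
    statistic and its two-tailed p value under a normal approximation."""
    z1 = math.atanh(r1)                       # Fisher z of each correlation
    z2 = math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))    # 2 * (1 - Phi(|z|))
    return z, p

# Hypothetical even split of the study's N = 245 across conditions:
z, p = fisher_z_test(0.58, 122, 0.37, 123)
```

    With these assumed group sizes the difference comes out significant at the .05 level, consistent with the abstract's claim of a significantly reduced certainty-accuracy relation.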

  1. Crying tapir: the functionality of errors and accuracy in predator recognition in two neotropical high-canopy primates.

    PubMed

    Mourthé, Ítalo; Barnett, Adrian A

    2014-01-01

    Predation is often considered to be a prime driver in primate evolution, but, as predation is rarely observed in nature, little is known of primate antipredator responses. Time-limited primates should be highly discerning when responding to predators, since time spent in vigilance and avoidance behaviour may supplant other activities. We present data from two independent studies describing and quantifying the frequency, nature and duration of predator-linked behaviours in 2 high-canopy primates, Ateles belzebuth and Cacajao ouakary. We introduce the concept of 'pseudopredators' (harmless species whose appearance is sufficiently similar to that of predators to elicit antipredator responses) and predict that changes in behaviour should increase with risk posed by a perceived predator. We studied primate group encounters with non-primate vertebrates across 14 (Ateles) and 19 (Cacajao) months in 2 undisturbed Amazonian forests. Although preliminary, data on both primates revealed that they distinguished the potential predation capacities of other species, as predicted. They appeared to differentiate predators from non-predators and distinguished when potential predators were not an immediate threat, although they reacted erroneously to pseudopredators, on average in about 20% of the responses given toward other vertebrates. Such erring on the side of caution may be adaptive, since in predation a single error can be fatal to the prey. PMID:25791040

  2. Spatial and Temporal Characteristics of Error-Related Activity in the Human Brain

    PubMed Central

    Miezin, Francis M.; Nelson, Steven M.; Dubis, Joseph W.; Dosenbach, Nico U.F.; Schlaggar, Bradley L.; Petersen, Steven E.

    2015-01-01

    A number of studies have focused on the role of specific brain regions, such as the dorsal anterior cingulate cortex during trials on which participants make errors, whereas others have implicated a host of more widely distributed regions in the human brain. Previous work has proposed that there are multiple cognitive control networks, raising the question of whether error-related activity can be found in each of these networks. Thus, to examine error-related activity broadly, we conducted a meta-analysis consisting of 12 tasks that included both error and correct trials. These tasks varied by stimulus input (visual, auditory), response output (button press, speech), stimulus category (words, pictures), and task type (e.g., recognition memory, mental rotation). We identified 41 brain regions that showed a differential fMRI BOLD response to error and correct trials across a majority of tasks. These regions displayed three unique response profiles: (1) fast, (2) prolonged, and (3) a delayed response to errors, as well as a more canonical response to correct trials. These regions were found mostly in several control networks, each network predominantly displaying one response profile. The one exception to this “one network, one response profile” observation is the frontoparietal network, which showed prolonged response profiles (all in the right hemisphere), and fast profiles (all but one in the left hemisphere). We suggest that, in the place of a single localized error mechanism, these findings point to a large-scale set of error-related regions across multiple systems that likely subserve different functions. PMID:25568119

  3. Error-Related Brain Activity in Young Children: Associations with Parental Anxiety and Child Temperamental Negative Emotionality

    ERIC Educational Resources Information Center

    Torpey, Dana C.; Hajcak, Greg; Kim, Jiyon; Kujawa, Autumn J.; Dyson, Margaret W.; Olino, Thomas M.; Klein, Daniel N.

    2013-01-01

    Background: There is increasing interest in error-related brain activity in anxiety disorders. The error-related negativity (ERN) is a negative deflection in the event-related potential approximately 50 [milliseconds] after errors compared to correct responses. Recent studies suggest that the ERN may be a biomarker for anxiety, as it is positively…

  4. Senior High School Students' Errors on the Use of Relative Words

    ERIC Educational Resources Information Center

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  5. A cerebellar thalamic cortical circuit for error-related cognitive control

    PubMed Central

    Ide, Jaime S.; Li, Chiang-shan Ray

    2010-01-01

    Error detection and behavioral adjustment are core components of cognitive control. Numerous studies have focused on the anterior cingulate cortex (ACC) as a critical locus of this executive function. Our previous work showed greater activation in the dorsal ACC and subcortical structures during error detection, and activation in the ventrolateral prefrontal cortex (VLPFC) during post-error slowing (PES) in a stop signal task (SST). However, the extent of error-related cortical or subcortical activation across subjects was not correlated with VLPFC activity during PES. So then, what causes VLPFC activation during PES? To address this question, we employed Granger causality mapping (GCM) and identified regions that Granger caused VLPFC activation in 54 adults performing the SST during fMRI. These brain regions, including the supplementary motor area (SMA), cerebellum, a pontine region, and medial thalamus, represent potential targets responding to errors in a way that could influence VLPFC activation. In confirmation of this hypothesis, the error-related activity of these regions correlated with VLPFC activation during PES, with the cerebellum showing the strongest association. The finding that cerebellar activation Granger causes prefrontal activity during behavioral adjustment supports a cerebellar function in cognitive control. Furthermore, multivariate GCA described the “flow of information” across these brain regions. Through connectivity with the thalamus and SMA, the cerebellum mediates error and post-error processing in accord with known anatomical projections. Taken together, these new findings highlight the role of the cerebello-thalamo-cortical pathway in an executive function that has heretofore largely been ascribed to the anterior cingulate-prefrontal cortical circuit. PMID:20656038
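    Bivariate Granger causality of the kind invoked here reduces to comparing a restricted autoregressive model of one signal against a model augmented with lags of the other; a minimal OLS sketch follows. The study used Granger causality mapping on preprocessed fMRI time series, so this is only the statistical core, not their pipeline:

```python
import numpy as np

def granger_f(x, y, p=2):
    """F statistic for 'x Granger-causes y' with p lags: compare an
    AR(p) model of y (intercept + own lags) against one augmented with
    p lags of x.  A large F means x's past improves prediction of y."""
    n = len(y)
    Y = y[p:]
    ones = np.ones(n - p)
    ylags = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    xlags = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    X_r = np.column_stack([ones, ylags])             # restricted model
    X_f = np.column_stack([ones, ylags, xlags])      # full model
    rss_r = np.sum((Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]) ** 2)
    rss_f = np.sum((Y - X_f @ np.linalg.lstsq(X_f, Y, rcond=None)[0]) ** 2)
    dof = (n - p) - X_f.shape[1]
    return ((rss_r - rss_f) / p) / (rss_f / dof)

# Synthetic check: y is driven by lagged x, so F is large in that direction.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_f(x, y, p=2) > 10.0)   # -> True
```

    In the study's multivariate setting the same idea is applied across region pairs to map which error-related regions' activity predicts subsequent VLPFC activity.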

  6. Altered error-related brain activity in youth with major depression.

    PubMed

    Ladouceur, Cecile D; Slifka, John S; Dahl, Ronald E; Birmaher, Boris; Axelson, David A; Ryan, Neal D

    2012-07-01

    Depression is associated with impairments in cognitive control including action monitoring processes, which involve the detection and processing of erroneous responses in order to adjust behavior. Although numerous studies have reported altered error-related brain activity in depressed adults, relatively little is known about age-related changes in error-related brain activity in depressed youth. This study focuses on the error-related negativity (ERN), a negative deflection in the event-related potential (ERP) that is maximal approximately 50ms following errors. High-density ERPs were examined following responses on a flanker task in 24 youth diagnosed with MDD and 14 low-risk healthy controls (HC). Results indicate that compared to HC, MDD youth had significantly smaller ERN amplitudes and did not exhibit the normative increases in ERN amplitudes as a function of age. Also, ERN amplitudes were similar in depressed youth with and without comorbid anxiety. These results suggest that depressed youth exhibit different age-related changes in brain activity associated with action monitoring processes. Findings are discussed in terms of existing work on the neural correlates of action monitoring and depression and the need for longitudinal research studies investigating the development of neural systems underlying action monitoring in youth diagnosed with and at risk for depression. PMID:22669036

  7. Accuracy of Noncycloplegic Retinoscopy, Retinomax Autorefractor, and SureSight Vision Screener for Detecting Significant Refractive Errors

    PubMed Central

    Kulp, Marjean Taylor; Ying, Gui-shuang; Huang, Jiayan; Maguire, Maureen; Quinn, Graham; Ciner, Elise B.; Cyert, Lynn A.; Orel-Bixler, Deborah A.; Moore, Bruce D.

    2014-01-01

    Purpose. To evaluate, by receiver operating characteristic (ROC) analysis, the ability of noncycloplegic retinoscopy (NCR), Retinomax Autorefractor (Retinomax), and SureSight Vision Screener (SureSight) to detect significant refractive errors (RE) among preschoolers. Methods. Refraction results of eye care professionals using NCR, Retinomax, and SureSight (n = 2588) and of nurse and lay screeners using Retinomax and SureSight (n = 1452) were compared with masked cycloplegic retinoscopy results. Significant RE was defined as hyperopia greater than +3.25 diopters (D), myopia greater than 2.00 D, astigmatism greater than 1.50 D, and anisometropia greater than 1.00 D interocular difference in hyperopia, greater than 3.00 D interocular difference in myopia, or greater than 1.50 D interocular difference in astigmatism. The ability of each screening test to identify presence, type, and/or severity of significant RE was summarized by the area under the ROC curve (AUC) and calculated from weighted logistic regression models. Results. For detection of each type of significant RE, AUC of each test was high; AUC was better for detecting the most severe levels of RE than for all REs considered important to detect (AUC 0.97–1.00 vs. 0.92–0.93). The area under the curve of each screening test was high for myopia (AUC 0.97–0.99). Noncycloplegic retinoscopy and Retinomax performed better than SureSight for hyperopia (AUC 0.92–0.99 and 0.90–0.98 vs. 0.85–0.94, P ≤ 0.02), Retinomax performed better than NCR for astigmatism greater than 1.50 D (AUC 0.95 vs. 0.90, P = 0.01), and SureSight performed better than Retinomax for anisometropia (AUC 0.85–1.00 vs. 0.76–0.96, P ≤ 0.07). Performance was similar for nurse and lay screeners in detecting any significant RE (AUC 0.92–1.00 vs. 0.92–0.99). Conclusions. Each test had a very high discriminatory power for detecting children with any significant RE. PMID:24481262
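    The AUC summarizing each screening test equals the probability that a randomly chosen child with significant RE receives a more abnormal score than a randomly chosen child without; a minimal sketch via the Mann-Whitney statistic is below. The study's AUCs came from weighted logistic regression models, so this raw estimator is only for intuition:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    fraction of (positive, negative) pairs in which the positive case
    scores higher, with ties counting half."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Perfect separation gives AUC 1.0; complete overlap hovers near 0.5:
print(auc([3.2, 2.8, 4.1], [1.0, 1.5, 2.0]))   # -> 1.0
```

    An AUC of 0.92 or higher, as reported for every screening test here, means the instrument ranks affected above unaffected children in over 92% of such pairs.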

  8. Accuracy of velocities from repeated GPS surveys: relative positioning is concerned

    NASA Astrophysics Data System (ADS)

    Duman, Huseyin; Ugur Sanli, D.

    2016-04-01

    For more than a decade, researchers have studied the accuracy of GPS positioning solutions; more recently, reporting the accuracy of GPS velocities has been added to this. Researchers studying landslide motion, tectonic motion, uplift, sea level rise, and subsidence still report results from GPS experiments in which repeated GPS measurements from short sessions are used. This motivated other researchers to study the accuracy of GPS deformation rates/velocities from various repeated GPS surveys. In one such effort, velocity accuracy was derived from repeated static GPS surveys using short observation sessions and the Precise Point Positioning (PPP) mode of GPS software. Velocities from short GPS sessions were compared with velocities from 24 h sessions, and their accuracy was assessed using statistical hypothesis testing and by quantifying the accuracy of the least squares estimation models. The results reveal that 45-60 % of the horizontal and none of the vertical solutions comply with the results from 24 h solutions. We argue that this finding, obtained with PPP, should also apply when data from long GPS baselines are processed using fundamental relative point positioning. To test this idea we chose the two IGS stations ANKR and NICO and derived their velocities with reference stations held fixed on the stable Eurasian plate. The University of Bern's GNSS software BERNESE was used to produce the relative positioning solutions, and the results were compared with GIPSY/OASIS II PPP results. First impressions indicate that it is worth designing a global experiment and testing these ideas in detail.
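    A site velocity from a coordinate time series is just the slope of a least-squares line fit; a minimal sketch with the formal 1-sigma slope uncertainty is below. GPS noise is temporally correlated, so this white-noise formal error understates the true velocity uncertainty, which is part of why short-session velocities disagree with 24 h solutions:

```python
import numpy as np

def velocity_with_sigma(t_years, pos_mm):
    """Site velocity (mm/yr) from a coordinate time series by ordinary
    least squares, with the formal 1-sigma slope uncertainty under a
    white-noise assumption."""
    t = np.asarray(t_years, float)
    y = np.asarray(pos_mm, float)
    A = np.column_stack([np.ones_like(t), t])        # intercept + slope
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    s2 = resid @ resid / (len(t) - 2)                # residual variance
    cov = s2 * np.linalg.inv(A.T @ A)                # parameter covariance
    return coef[1], np.sqrt(cov[1, 1])

# A noiseless synthetic series moving at 3 mm/yr is recovered exactly:
t = np.linspace(0.0, 5.0, 60)
v, sig = velocity_with_sigma(t, 2.0 + 3.0 * t)
print(round(v, 6))   # -> 3.0
```

    Comparing velocities from short and 24 h sessions then reduces to testing whether the two slope estimates differ by more than their combined uncertainties.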

  9. Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits.

    PubMed

    Maurer, J Michael; Steele, Vaughn R; Cope, Lora M; Vincent, Gina M; Stephen, Julia M; Calhoun, Vince D; Kiehl, Kent A

    2016-06-01

    Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and comparable neurocognitive deficits as their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n=100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170

  11. Impact of Uncertainties and Errors in Converting NWS Radiosonde Hygristor Resistances to Relative Humidity

    NASA Technical Reports Server (NTRS)

    Westphal, Douglas L.; Russell, Philip B. (Technical Monitor)

    1994-01-01

    A set of 2,600 6-second National Weather Service soundings from NASA's FIRE-II Cirrus field experiment are used to illustrate previously known errors and new potential errors in the VIZ and SDD brand relative humidity (RH) sensors and the MicroART processing software. The entire spectrum of RH is potentially affected by at least one of these errors. (These errors occur before conversion to dew point temperature.) Corrections to the errors are discussed. Examples are given of the effect that these errors and biases may have on numerical weather prediction and radiative transfer. The figure shows the OLR calculated for the corrected and uncorrected soundings using an 18-band radiative transfer code. The OLR differences are sufficiently large to warrant consideration when validating line-by-line radiation calculations that use radiosonde data to specify the atmospheric state, or when validating satellite retrievals. In addition, a comparison of observations of RH during FIRE-II derived from GOES satellite, Raman lidar, MAPS analyses, NCAR CLASS sondes, and the NWS sondes reveals disagreement in the RH distribution and underlines our lack of understanding of the climatology of water vapor.

  13. Age-Related Differences in the Accuracy of Web Query-Based Predictions of Influenza-Like Illness

    PubMed Central

    Domnich, Alexander; Panatto, Donatella; Signori, Alessio; Lai, Piero Luigi; Gasparini, Roberto; Amicizia, Daniela

    2015-01-01

    Background Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Methods Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-square estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Results Five search terms showed high correlation coefficients of > .6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (.978 vs. .929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0–4 and 25–44-year age-groups, these did well and outperformed exponential smoothing predictions; in the 15–24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, peak timing predicted by the query-based models coincided with observed timing. Conclusions The accuracy of web query-based models in predicting ILI morbidity rates could differ among ages. Greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI. PMID:26011418
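    The pipeline this entry describes (correlate query volume with surveillance data, fit a linear model, quantify hold-out prediction accuracy) can be sketched on synthetic data. This is an illustration only: ordinary least squares stands in for the paper's generalized least-squares estimation, and the seasonal curve and query signal below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly data: a seasonal ILI morbidity curve peaking at week 45,
# and a query-volume signal that tracks it with noise.
weeks = np.arange(52)
ili = 5 + 4 * np.exp(-0.5 * ((weeks - 45) / 5.0) ** 2)
queries = 10 + 2.0 * ili + rng.normal(0, 0.5, weeks.size)

# Hold-out validation: fit a linear model on the first 40 weeks,
# then predict morbidity for the remaining 12 (which contain the peak).
train, test = slice(0, 40), slice(40, 52)
slope, intercept = np.polyfit(queries[train], ili[train], 1)
pred = slope * queries[test] + intercept

# Prediction accuracy as the correlation between predicted and observed ILI.
r = np.corrcoef(pred, ili[test])[0, 1]
print(round(r, 3))
```

    An age-class-specific analysis would simply repeat this fit per age group, which is where the abstract reports the accuracy diverging.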

  14. Investigation of technology needs for avoiding helicopter pilot error related accidents

    NASA Technical Reports Server (NTRS)

    Chais, R. I.; Simpson, W. E.

    1985-01-01

    Pilot error, which is cited as a cause or related factor in most rotorcraft accidents, was examined. Pilot-error-related accidents in helicopters were investigated to identify areas in which new technology could reduce or eliminate the underlying causes of these human errors. The aircraft accident data base at the U.S. Army Safety Center was used as the source of data on helicopter accidents. A randomly selected sample of 110 aircraft records was analyzed on a case-by-case basis to assess the nature of the problems that need to be resolved and the applicable technology implications. Six technology areas in which there appears to be a need for new or increased emphasis are identified.

  15. Error-related brain activation during a Go/NoGo response inhibition task.

    PubMed

    Menon, V; Adleman, N E; White, C D; Glover, G H; Reiss, A L

    2001-03-01

    Inhibitory control and performance monitoring are critical executive functions of the human brain. Lesion and imaging studies have shown that the inferior frontal cortex plays an important role in the inhibition of inappropriate responses. In contrast, the specific brain areas involved in error processing and their relation to those implicated in inhibitory control processes are unknown. In this study, we used a random effects model to investigate error-related brain activity associated with failure to inhibit response during a Go/NoGo task. Error-related brain activation was observed in the rostral aspect of the right anterior cingulate (BA 24/32) and adjoining medial prefrontal cortex, the left and right insular cortex and adjoining frontal operculum (BA 47) and left precuneus/posterior cingulate (BA 7/31/29). Brain activation related to response inhibition and competition was observed bilaterally in the dorsolateral prefrontal cortex (BA 9/46), pars triangularis region of the inferior frontal cortex (BA 45/47), premotor cortex (BA 6), inferior parietal lobule (BA 39), lingual gyrus and the caudate, as well as in the right dorsal anterior cingulate cortex (BA 24). These findings provide evidence for a distributed error processing system in the human brain that overlaps partially, but not completely, with brain regions involved in response inhibition and competition. In particular, the rostral anterior cingulate and posterior cingulate/precuneus as well as the left and right anterior insular cortex were activated only during error processing, but not during response competition, inhibition, selection, or execution. Our results also suggest that the brain regions involved in the error processing system overlap with brain areas implicated in the formulation and execution of articulatory plans. PMID:11170305

  16. Tracing Error-Related Knowledge in Interview Data: Negative Knowledge in Elder Care Nursing

    ERIC Educational Resources Information Center

    Gartmeier, Martin; Gruber, Hans; Heid, Helmut

    2010-01-01

    This paper empirically investigates elder care nurses' negative knowledge. This form of experiential knowledge is defined as the outcome of error-related learning processes, focused on how something is not, on what not to do in certain situations or on deficits in one's knowledge or skills. Besides this definition, we presume the existence of…

  17. Age-related Changes in Error Processing in Young Children: A School-based Investigation

    PubMed Central

    Grammer, Jennie K.; Carrasco, Melisa; Gehring, William J.; Morrison, Frederick J.

    2014-01-01

    Growth in executive functioning (EF) skills plays a role in children’s academic success, and the transition to elementary school is an important time for the development of these abilities. Despite this, evidence concerning the development of the ERP components linked to EF, including the error-related negativity (ERN) and the error positivity (Pe), over this period is inconclusive. Data were recorded in a school setting from 3- to 7-year-old children (N=96, mean age=5 years 11 months) as they performed a Go/No-Go task. Results revealed the presence of the ERN and Pe on error relative to correct trials at all age levels. Older children showed increased response inhibition as evidenced by faster, more accurate responses. Although developmental changes in the ERN were not identified, the Pe increased with age. In addition, girls made fewer mistakes and showed elevated Pe amplitudes relative to boys. Based on a representative school-based sample, findings indicate that the ERN is present in children as young as 3, and that development can be seen in the Pe between ages 3 and 7. Results varied as a function of gender, providing insight into the range of factors associated with developmental changes in the complex relations between behavioral and electrophysiological measures of error processing. PMID:24631799

  18. Social Errors in Four Cultures: Evidence about Universal Forms of Social Relations.

    ERIC Educational Resources Information Center

    Fiske, Alan Page

    1993-01-01

    To test the cross-cultural generality of relational-models theory, 4 studies with 70 adults examined social errors of substitution of persons for Bengali, Korean, Chinese, and Vai (Liberia and Sierra Leone) subjects. In all four cultures, people tend to substitute someone with whom they have the same basic relationship. (SLD)

  19. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
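    The five-point scheme and multi-grid error-control idea in this entry can be sketched with a two-grid comparison (the paper uses three grids; this simplified version, and all names in it, are illustrative):

```python
import numpy as np

def solve_poisson(n, f, iters=5000):
    """Solve -u_xx - u_yy = f on the unit square, u = 0 on the boundary,
    by Jacobi iteration on the five-point finite-difference stencil."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    rhs = f(X, Y)
    u = np.zeros((n + 2, n + 2))  # array includes the zero boundary
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:] + h * h * rhs)
    return u

# Manufactured solution u = sin(pi x) sin(pi y), for which f = 2 pi^2 u.
f = lambda X, Y: 2 * np.pi ** 2 * np.sin(np.pi * X) * np.sin(np.pi * Y)

coarse = solve_poisson(15, f)   # h = 1/16
fine = solve_poisson(31, f)     # h = 1/32; shares every other grid point

# Comparing the two resolutions at the shared points gives an absolute-error
# estimate for the coarse solution (a relative estimate would divide by |u|).
diff = np.max(np.abs(coarse[1:-1, 1:-1] - fine[2:-2:2, 2:-2:2]))
print(diff)
```

    The halved grid spacing cuts the discretization error by roughly a factor of four, so the difference between the two solutions is dominated by, and estimates, the coarse-grid error.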

  20. EEG-based decoding of error-related brain activity in a real-world driving task

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Chavarriaga, R.; Khaliliardali, Z.; Gheorghe, L.; Iturrate, I.; Millán, J. d. R.

    2015-12-01

    Objectives. Recent studies have started to explore the implementation of brain-computer interfaces (BCI) as part of driving assistant systems. The current study presents an EEG-based BCI that decodes error-related brain activity. Such information can be used, e.g., to predict a driver’s intended turning direction before reaching road intersections. Approach. We executed experiments in a car simulator (N = 22) and a real car (N = 8). While the subject was driving, a directional cue was shown before reaching an intersection, and we classified the presence or absence of an error-related potential in the EEG to infer whether the cued direction coincided with the subject’s intention. In this protocol, the directional cue can correspond to an estimation of the driving direction provided by a driving assistance system. We analyzed ERPs elicited during normal driving and evaluated the classification performance in both offline and online tests. Results. An average classification accuracy of 0.698 ± 0.065 was obtained in offline experiments in the car simulator, while tests in the real car yielded a performance of 0.682 ± 0.059. The results were significantly higher than chance level for all cases. Online experiments led to equivalent performances in both simulated and real car driving experiments. These results support the feasibility of decoding these signals to help estimate whether the driver’s intention coincides with the advice provided by the driving assistant in a real car. Significance. The study demonstrates a BCI system in real-world driving, extending the work from previous simulated studies. As far as we know, this is the first online study decoding a driver’s error-related brain activity in a real car. Given the encouraging results, the paradigm could be further improved by using more sophisticated machine learning approaches and possibly be combined with applications in intelligent vehicles.

  1. Increasing the saliency of behavior-consequence relations for children with autism who exhibit persistent errors.

    PubMed

    Fisher, Wayne W; Pawich, Tamara L; Dickes, Nitasha; Paden, Amber R; Toussaint, Karen

    2014-01-01

    Some children with autism spectrum disorders (ASD) display persistent errors that are not responsive to commonly used prompting or error-correction strategies; one possible reason for this is that the behavior-consequence relations are not readily discriminable (Davison & Nevin, 1999). In this study, we increased the discriminability of the behavior-consequence relations in conditional-discrimination acquisition tasks for 3 children with ASD using schedule manipulations in concert with a unique visual display designed to increase the saliency of the differences between consequences in effect for correct responding and for errors. A multiple baseline design across participants was used to show that correct responding increased for all participants, and, after 1 or more exposures to the intervention, correct responding persisted to varying degrees across participants when the differential reinforcement baseline was reintroduced to assess maintenance. These findings suggest that increasing the saliency of behavior-consequence relations may help to increase correct responding in children with ASD who exhibit persistent errors. PMID:25311794

  2. Error-Related Brain Activity in Extraverts: Evidence for Altered Response Monitoring in Social Context

    PubMed Central

    Fishman, Inna; Ng, Rowena

    2013-01-01

    While the personality trait of extraversion has been linked to enhanced reward sensitivity and its putative neural correlates, little is known about whether extraverts’ neural circuits are particularly sensitive to social rewards, given their preference for social engagement and social interactions. Using event-related potentials (ERPs), this study examined the relationship between the variation on the extraversion spectrum and a feedback-related ERP component (the error-related negativity or ERN) known to be sensitive to the value placed on errors and reward. Participants completed a forced-choice task, in which either rewarding or punitive feedback regarding their performance was provided, through either social (facial expressions) or non-social (verbal written) mode. The ERNs elicited by error trials in the social – but not in non-social – blocks were found to be associated with the extent of one’s extraversion. However, the directionality of the effect was in contrast with the original prediction: namely, extraverts exhibited smaller ERNs than introverts during social blocks, whereas all participants produced similar ERNs in the non-social, verbal feedback condition. This finding suggests that extraverts exhibit diminished engagement in response monitoring – or find errors to be less salient – in the context of social feedback, perhaps because they find social contexts more predictable and thus more pleasant and less anxiety provoking. PMID:23454520

  3. Error-related brain activity reveals self-centric motivation: culture matters.

    PubMed

    Kitayama, Shinobu; Park, Jiyoung

    2014-02-01

    To secure the interest of the personal self (vs. social others) is considered a fundamental human motive, but the nature of the motivation to secure the self-interest is not well understood. To address this issue, we assessed electrocortical responses of European Americans and Asians as they performed a flanker task while instructed to earn as many reward points as possible either for the self or for their same-sex friend. For European Americans, error-related negativity (ERN), an event-related-potential component contingent on error responses, was significantly greater in the self condition than in the friend condition. Moreover, post-error slowing, an index of cognitive control to reduce errors, was observed in the self condition but not in the friend condition. Neither of these self-centric effects was observed among Asians, consistent with prior cross-cultural behavioral evidence. Interdependent self-construal mediated the effect of culture on the ERN self-centric effect. Our findings provide the first evidence for a neural correlate of self-centric motivation, which becomes more salient outside of interdependent social relations. PMID:23398181

  4. Relative importance of the error sources in Wiener restoration of scintigrams.

    PubMed

    Penney, B C; Glick, S J; King, M A

    1990-01-01

    Through simulation studies, the relative importance of three error sources in Wiener filtering as applied to scintigrams is quantified. The importance of these error sources has been quantified using the percentage change in squared error (compared to that of an image restored using an ideal Wiener filter) which is caused by estimating one of three factors in the Wiener filter. Estimating the noise power spectrum using the total image count produced no appreciable change in the squared error (less than 1%). Estimating the power spectrum of the true image from that of the degraded image produced small to moderate increases in the squared error (4-139%). In scintigraphic imaging, the modulation transfer function (MTF) is dependent on source depth; hence, this study underscores the importance of using methods which reduce the depth dependence of the effective MTF prior to applying restoration filters. A novel method of estimating the power spectrum of the true image from that of the degraded image is also described and evaluated. Wiener restoration filters based on this spectral estimation method are found to be competitive with the image-dependent Metz restoration filter. PMID:18222751
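    A minimal 1-D sketch of the setup this entry studies, mimicking the first two error sources: the noise power spectrum is taken from the total image count (appropriate for Poisson noise under NumPy's FFT convention), and the true-image power spectrum is crudely estimated from the degraded image. All data are synthetic and the count profile is hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = np.arange(n)

# Synthetic "true" 1-D count profile: two Gaussian hot spots.
true = 2000 * np.exp(-0.5 * ((x - 96) / 6.0) ** 2) \
     + 1200 * np.exp(-0.5 * ((x - 160) / 10.0) ** 2)

# Gaussian system blur; the centered PSF is shifted so its peak sits at index 0.
psf = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
H = np.fft.fft(np.fft.ifftshift(psf / psf.sum()))  # system MTF

# Degraded image: blurred counts with Poisson noise, as in scintigraphy.
blurred = np.real(np.fft.ifft(np.fft.fft(true) * H))
data = rng.poisson(np.clip(blurred, 0, None)).astype(float)

# Error source 1: flat noise power spectrum from the total count
# (for Poisson noise and this DFT convention, E|noise_fft|^2 = total count).
Sn = data.sum()

# Error source 2: true-image power spectrum estimated from the degraded image.
Sf = np.maximum(np.abs(np.fft.fft(data)) ** 2 - Sn, 1e-9)

# Wiener restoration filter and restored image.
W = np.conj(H) * Sf / (np.abs(H) ** 2 * Sf + Sn)
restored = np.real(np.fft.ifft(np.fft.fft(data) * W))

err_data = np.mean((data - true) ** 2)      # squared error before restoration
err_rest = np.mean((restored - true) ** 2)  # squared error after restoration
print(err_rest < err_data)
```

    Because both spectra are estimated rather than known, the filter is suboptimal relative to the ideal Wiener filter, which is exactly the squared-error penalty the abstract quantifies.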

  5. Perigenual anterior cingulate event-related potential precedes stop signal errors

    PubMed Central

    Chang, Andrew; Chen, Chien-Chung; Li, Hsin-Hung; Li, Chiang-Shan R.

    2015-01-01

    Momentary lapses in attention disrupt goal-directed behavior. Attentional lapse has been associated with increased “default-mode” network (DMN) activity. In our previous fMRI study of a stop signal task (SST), greater activation of the perigenual anterior cingulate cortex (pgACC), an important node of the DMN, predicts stop signal errors. In event-related potential (ERP) studies, the amplitude of an error-preceding positivity (EPP) also predicts response error. However, it is not clear whether the EPP originates from DMN regions. Here, we combined high-density array EEG and an SST to examine response-locked ERPs of error-preceding trials in twenty young adult participants. The results showed an EPP that was greater in go trials preceding stop errors than in those preceding stop successes. Importantly, source modeling identified the origin of the EPP in the pgACC. By employing a bootstrapping procedure, we further confirmed that the pgACC, rather than the dorsal ACC, provides a better fit as the source of the EPP. Together, these results suggest that attentional lapse in association with EPP in the pgACC anticipates failure in response inhibition. PMID:25700955

  6. Age-related reduction of the confidence-accuracy relationship in episodic memory: effects of recollection quality and retrieval monitoring.

    PubMed

    Wong, Jessica T; Cramer, Stefanie J; Gallo, David A

    2012-12-01

    We investigated age-related reductions in episodic metamemory accuracy. Participants studied pictures and words in different colors and then took forced-choice recollection tests. These tests required recollection of the earlier presentation color, holding familiarity of the response options constant. Metamemory accuracy was assessed for each participant by comparing recollection test accuracy with corresponding confidence judgments. We found that recollection test accuracy was greater in younger than older adults and also for pictures than font color. Metamemory accuracy tracked each of these recollection differences, as well as individual differences in recollection test accuracy within each age group, suggesting that recollection ability affects metamemory accuracy. Critically, the age-related impairment in metamemory accuracy persisted even when the groups were matched on recollection test accuracy, suggesting that metamemory declines were not entirely due to differences in recollection frequency or quantity, but that differences in recollection quality and/or monitoring also played a role. We also found that age-related impairments in recollection and metamemory accuracy were equivalent for pictures and font colors. This result contrasted with previous false recognition findings, which predicted that older adults would be differentially impaired when monitoring memory for less distinctive memories. These and other results suggest that age-related reductions in metamemory accuracy are not entirely attributable to false recognition effects, but also depend heavily on deficient recollection and/or monitoring of specific details associated with studied stimuli. PMID:22449027

  7. Cortical delta activity reflects reward prediction error and related behavioral adjustments, but at different times.

    PubMed

    Cavanagh, James F

    2015-04-15

    Recent work has suggested that reward prediction errors elicit a positive voltage deflection in the scalp-recorded electroencephalogram (EEG); an event sometimes termed a reward positivity. However, a strong test of this proposed relationship remains to be defined. Other important questions remain unaddressed: such as the role of the reward positivity in predicting future behavioral adjustments that maximize reward. To answer these questions, a three-armed bandit task was used to investigate the role of positive prediction errors during trial-by-trial exploration and task-set based exploitation. The feedback-locked reward positivity was characterized by delta band activities, and these related EEG features scaled with the degree of a computationally derived positive prediction error. However, these phenomena were also dissociated: the computational model predicted exploitative action selection and related response time speeding whereas the feedback-locked EEG features did not. Compellingly, delta band dynamics time-locked to the subsequent bandit (the P3) successfully predicted these behaviors. These bandit-locked findings included an enhanced parietal to motor cortex delta phase lag that correlated with the degree of response time speeding, suggesting a mechanistic role for delta band activities in motivating action selection. This dissociation in feedback vs. bandit locked EEG signals is interpreted as a differentiation in hierarchically distinct types of prediction error, yielding novel predictions about these dissociable delta band phenomena during reinforcement learning and decision making. PMID:25676913

  8. Error processing and response inhibition in excessive computer game players: an event-related potential study.

    PubMed

    Littel, Marianne; van den Berg, Ivo; Luijten, Maartje; van Rooij, Antonius J; Keemink, Lianne; Franken, Ingmar H A

    2012-09-01

    Excessive computer gaming has recently been proposed as a possible pathological illness. However, research on this topic is still in its infancy and underlying neurobiological mechanisms have not yet been identified. The determination of underlying mechanisms of excessive gaming might be useful for the identification of those at risk, a better understanding of the behavior and the development of interventions. Excessive gaming has been often compared with pathological gambling and substance use disorder. Both disorders are characterized by high levels of impulsivity, which incorporates deficits in error processing and response inhibition. The present study aimed to investigate error processing and response inhibition in excessive gamers and controls using a Go/NoGo paradigm combined with event-related potential recordings. Results indicated that excessive gamers show reduced error-related negativity amplitudes in response to incorrect trials relative to correct trials, implying poor error processing in this population. Furthermore, excessive gamers display higher levels of self-reported impulsivity as well as more impulsive responding as reflected by less behavioral inhibition on the Go/NoGo task. The present study indicates that excessive gaming partly parallels impulse control and substance use disorders regarding impulsivity measured on the self-reported, behavioral and electrophysiological level. Although the present study does not allow drawing firm conclusions on causality, it might be that trait impulsivity, poor error processing and diminished behavioral response inhibition underlie the excessive gaming patterns observed in certain individuals. They might be less sensitive to negative consequences of gaming and therefore continue their behavior despite adverse consequences. PMID:22734609

  9. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
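    Step (3) can be illustrated with a deliberately simple error model: suppose the only volumetric errors are per-axis scale errors, and fit them by least squares to length measurements scattered through the work volume. The model, the numbers, and the linearization below are illustrative stand-ins, not the paper's actual machine model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ground truth: ppm-level scale errors on the machine's x, y, z axes.
true_scale = np.array([120e-6, -80e-6, 50e-6])

# Step 2: calibrated length measurements between random point pairs (in mm).
a = rng.uniform(0, 500, (40, 3))          # machine-reported endpoints
b = rng.uniform(0, 500, (40, 3))
delta = b - a
d_nom = np.linalg.norm(delta, axis=1)     # nominal (machine-indicated) distances
# True distances reflect the axis scale errors, plus 0.1 um measurement noise.
d_true = np.linalg.norm(delta * (1 + true_scale), axis=1) + rng.normal(0, 1e-4, 40)

# Step 3: linearize  d_true - d_nom ~= sum_k s_k * delta_k**2 / d_nom
# and solve the overdetermined system for the scale errors s_k.
A = delta ** 2 / d_nom[:, None]
s_fit, *_ = np.linalg.lstsq(A, d_true - d_nom, rcond=None)
print(np.round(s_fit * 1e6, 2))  # recovered scale errors, in ppm
```

    A real error map would add squareness, straightness, and angular terms to the model matrix, but the least-squares fitting step stays the same.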

  10. Automated measurement of centering errors and relative surface distances for the optimized assembly of micro-optics

    NASA Astrophysics Data System (ADS)

    Langehanenberg, Patrik; Dumitrescu, Eugen; Heinisch, Josef; Krey, Stefan; Ruprecht, Aiko K.

    2011-03-01

    For any kind of compound optical system, the precise geometric alignment of every single element according to the optical design is essential to obtain the desired imaging properties. In this contribution we present a measurement system for the determination of the complete set of geometric alignment parameters in assembled systems. The deviation of each center of curvature with respect to a reference axis is measured with an autocollimator system. These data are further processed in order to provide the shift and tilt of an individual lens or group of lenses with respect to a defined reference axis. Previously it was shown that such an instrument can measure the centering errors of up to 40 surfaces within a system under test with accuracies in the range of an arc second. In addition, the relative distances of the optical surfaces (center thicknesses of lens elements, air gaps in between) are optically determined in the same measurement system by means of low-coherence interferometry. Subsequently, the acquired results can be applied for the compensation of the detected geometric alignment errors before the assembly is finally bonded (e.g., glued). The presented applications mainly include measurements of miniaturized lens systems like mobile phone optics. However, any type of objective lens from endoscope imaging systems up to very complex objective lenses used in microlithography can be analyzed with the presented measurement system.

  11. RapidEye constellation relative radiometric accuracy measurement using lunar images

    NASA Astrophysics Data System (ADS)

    Steyn, Joe; Tyc, George; Beckett, Keith; Hashida, Yoshi

    2009-09-01

    The RapidEye constellation includes five identical satellites in Low Earth Orbit (LEO). Each satellite has a 5-band (blue, green, red, red-edge and near infrared (NIR)) multispectral imager at 6.5m GSD. A three-axis attitude control system allows pointing the imager of each satellite at the Moon during lunations. It is therefore possible to image the Moon from near-identical viewing geometry within a span of 80 minutes with each one of the imagers. Comparing the radiometrically corrected images obtained from each band and each satellite allows a near-instantaneous relative radiometric accuracy measurement and determination of relative gain changes between the five imagers. A more traditional terrestrial vicarious radiometric calibration program has also been completed by MDA on RapidEye. The two components of this program provide for spatial radiometric calibration ensuring that detector-to-detector response remains flat, while a temporal radiometric calibration approach has accumulated images of specific dry desert calibration sites. These images are used to measure the constellation relative radiometric response and make on-ground gain and offset adjustments in order to maintain the relative accuracy of the constellation within +/-2.5%. A quantitative comparison between the gain changes measured by the lunar method and the terrestrial temporal radiometric calibration method is performed and will be presented.

  12. Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research

    PubMed Central

    Bakker, Marjan; Wicherts, Jelte M.

    2014-01-01

    Background The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606

  13. Simulation of Systematic Errors in Phase-Referenced VLBI Astrometry

    NASA Astrophysics Data System (ADS)

    Pradel, N.; Charlot, P.; Lestrade, J.-F.

    2005-12-01

    The astrometric accuracy in the relative coordinates of two angularly close radio sources observed with the phase-referencing VLBI technique is limited by systematic errors, which include geometric errors and atmospheric errors. Based on simulations with the SPRINT software, we evaluate the impact of these errors on the estimated relative source coordinates for standard VLBA observations. Such evaluations are useful for estimating the actual accuracy of phase-referenced VLBI astrometry.

  14. Errare machinale est: the use of error-related potentials in brain-machine interfaces

    PubMed Central

    Chavarriaga, Ricardo; Sobolewski, Aleksander; Millán, José del R.

    2014-01-01

    The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices or in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches. PMID:25100937

  15. Software platform for managing the classification of error- related potentials of observers

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of actors who commit errors and of observers who watch the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. The trained classifier can then be used to classify any EP curve stored in the database.

  16. Field Independence/Dependence, Hemispheric Specialization, and Attitude in Relation to Pronunciation Accuracy in Spanish as a Foreign Language.

    ERIC Educational Resources Information Center

    Elliott, A. Raymond

    1995-01-01

    Sixty-six college students enrolled in an intermediate Spanish course were measured on 12 variables believed to be related to pronunciation accuracy. Variables that related most to pronunciation accuracy included individual concern for pronunciation, subject's degree of field independence, and subject's degree of right hemispheric specialization…

  17. Accuracy of a Digital Weight Scale Relative to the Nintendo Wii in Measuring Limb Load Asymmetry

    PubMed Central

    Kumar, NS Senthil; Omar, Baharudin; Joseph, Leonard H; Hamdan, Nor; Htwe, Ohnmar; Hamidun, Nursalbiyah

    2014-01-01

    [Purpose] The aim of the present study was to investigate the accuracy of a digital weight scale relative to the Wii in limb loading measurement during static standing. [Methods] This was a cross-sectional study conducted at a public university teaching hospital. The sample consisted of 24 participants (12 with osteoarthritis and 12 healthy) recruited through convenience sampling. Limb loading measurements were obtained using a digital weight scale and the Nintendo Wii in static standing with three trials under an eyes-open condition. The limb load asymmetry was computed as the symmetry index. [Results] The accuracy of measurement with the digital weight scale relative to the Nintendo Wii was analyzed using the receiver operating characteristic (ROC) curve and Kolmogorov-Smirnov test (K-S test). The area under the ROC curve was found to be 0.67. Logistic regression confirmed the validity of the digital weight scale relative to the Nintendo Wii. The D statistic from the K-S test was found to be 0.16, which confirmed that there was no significant difference in measurement between the equipment. [Conclusion] The digital weight scale is an accurate tool for measuring limb load asymmetry. The low price, easy availability, and maneuverability make it a good potential tool in clinical settings for measuring limb load asymmetry. PMID:25202181
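The symmetry-index computation and K-S comparison mentioned above can be sketched in a few lines. Note the symmetry-index formula (absolute load difference relative to the mean load) and the data values below are illustrative assumptions, not the study's actual definitions or measurements:

```python
import numpy as np
from scipy import stats

def symmetry_index(left_load, right_load):
    """Limb load symmetry index (%): one common definition,
    |L - R| relative to the mean of the two limb loads."""
    return abs(left_load - right_load) / ((left_load + right_load) / 2.0) * 100.0

# Hypothetical symmetry-index trials from the two devices
scale = np.array([4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 3.9, 4.4])
wii   = np.array([4.3, 5.0, 4.1, 5.8, 4.9, 5.4, 3.7, 4.2])

# Two-sample Kolmogorov-Smirnov test: D is the maximum distance
# between the empirical CDFs of the two devices' measurements
d_stat, p_value = stats.ks_2samp(scale, wii)
print(f"D = {d_stat:.2f}, p = {p_value:.3f}")
```

A small D (with a large p-value) is what supports the conclusion that the two devices' measurement distributions do not differ significantly.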

  18. Greater externalizing personality traits predict less error-related insula and anterior cingulate cortex activity in acutely abstinent cigarette smokers

    PubMed Central

    Carroll, Allison J.; Sutherland, Matthew T.; Salmeron, Betty Jo; Ross, Thomas J.; Stein, Elliot A.

    2014-01-01

    Attenuated activity in performance-monitoring brain regions following erroneous actions may contribute to the repetition of maladaptive behaviors such as continued drug use. Externalizing is a broad personality construct characterized by deficient impulse control, vulnerability to addiction, and reduced neurobiological indices of error processing. The insula and dorsal anterior cingulate cortex (dACC) are regions critically linked with error processing as well as the perpetuation of cigarette smoking. As such, we examined the interrelations between externalizing tendencies, erroneous task performance, and error-related insula and dACC activity in overnight-deprived smokers (n=24) and nonsmokers (n=20). Participants completed a self-report measure assessing externalizing tendencies (Externalizing Spectrum Inventory) and a speeded Flanker task during fMRI scanning. We observed that higher externalizing tendencies correlated with the occurrence of more performance errors among smokers but not nonsmokers. Suggesting a neurobiological contribution to such sub-optimal performance among smokers, higher externalizing also predicted less recruitment of the right insula and dACC following error commission. Critically, this error-related activity fully mediated the relationship between externalizing traits and error rates. That is, higher externalizing scores predicted less error-related right insula and dACC activity and, in turn, less error-related activity predicted more errors. Relating such regional activity with a clinically-relevant construct, less error-related right insula and dACC responses correlated with higher tobacco craving during abstinence. Given that inadequate error-related neuronal responses may contribute to continued drug use despite negative consequences, these results suggest that externalizing tendencies and/or compromised error processing among subsets of smokers may be relevant factors for smoking cessation success. PMID:24354662

  19. The Relative Effectiveness of Signaling Systems: Relying on External Items Reduces Signaling Accuracy while Leks Increase Accuracy

    PubMed Central

    Leighton, Gavin M.

    2014-01-01

    Multiple evolutionary phenomena require individual animals to assess conspecifics based on behaviors, morphology, or both. Both behavior and morphology can provide information about individuals and are often used as signals to convey information about quality, motivation, or energetic output. In certain cases, conspecific receivers of this information must rank these signaling individuals based on specific traits. The efficacy of information transfer associated within a signal is likely related to the type of trait used to signal, though few studies have investigated the relative effectiveness of contrasting signaling systems. I present a set of models that represent a large portion of signaling systems and compare them in terms of the ability of receivers to rank signalers accurately. Receivers more accurately assess signalers if the signalers use traits that do not require non-food resources; similarly, receivers more accurately ranked signalers if all the signalers could be observed simultaneously, similar to leks. Surprisingly, I also found that receivers are only slightly better at ranking signaler effort if the effort results in a cumulative structure. This series of findings suggests that receivers may attend to specific traits because the traits provide more information relative to others; and similarly, these results may explain the preponderance of morphological and behavioral display signals. PMID:24626221

  20. Accuracy of recall of information about a cancer-predisposing BRCA1/2 gene mutation among patients and relatives

    PubMed Central

    Jacobs, Chris; Dancyger, Caroline; Smith, Jonathan A; Michie, Susan

    2015-01-01

    This observational study aimed to (i) compare the accuracy of information recalled by patients and relatives following genetic counselling about a newly identified BRCA1/2 mutation, (ii) identify differences in accuracy of information about genetics and hereditary cancer and (iii) investigate whether accuracy among relatives improved when information was provided directly by genetics health professionals. Semistructured interviews following results from consultations with 10 breast/ovarian cancer patients and 22 relatives were audio-recorded and transcribed. Information provided by the genetics health professional was tracked through the families and coded for accuracy. Accuracy was analysed using the Wilcoxon Signed-Ranks test. Sources of information were tested using Spearman's rank-order correlation coefficient. Fifty-three percent of the information recalled by patients was accurate. Accuracy of recall among relatives was significantly lower than that among patients (P=0.017). Both groups recalled a lower proportion of information about hereditary cancer than about genetics (P=0.005). Relatives who learnt the information from the patient alone recalled significantly less accurate information than those informed directly by genetics health professionals (P=0.001). Following genetic counselling about a BRCA1/2 mutation, accuracy of recall was low among patients and relatives, particularly about hereditary cancer. Multiple sources of information, including direct contact with genetics health professionals, may improve the accuracy of information among relatives. PMID:24848747
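The Wilcoxon signed-rank comparison of patient and relative recall accuracy can be illustrated with SciPy. The scores below, and the pairing of each patient with a single (e.g., averaged) relative score, are hypothetical assumptions for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical paired recall-accuracy scores (proportion of
# information recalled correctly), one pair per family
patients  = np.array([0.60, 0.55, 0.48, 0.70, 0.52, 0.45, 0.58, 0.50, 0.62, 0.40])
relatives = np.array([0.42, 0.50, 0.31, 0.55, 0.38, 0.35, 0.45, 0.41, 0.46, 0.33])

# Wilcoxon signed-rank test on the paired differences
w_stat, p = stats.wilcoxon(patients, relatives)
print(f"W = {w_stat}, p = {p:.4f}")
```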

  1. Using Simulation to Address Hierarchy-Related Errors in Medical Practice

    PubMed Central

    Calhoun, Aaron William; Boone, Megan C; Porter, Melissa B; Miller, Karen H

    2014-01-01

    Objective: Hierarchy, the unavoidable authority gradients that exist within and between clinical disciplines, can lead to significant patient harm in high-risk situations if not mitigated. High-fidelity simulation is a powerful means of addressing this issue in a reproducible manner, but participant psychological safety must be assured. Our institution experienced a hierarchy-related medication error that we subsequently addressed using simulation. The purpose of this article is to discuss the implementation and outcome of these simulations. Methods: Script and simulation flowcharts were developed to replicate the case. Each session included the use of faculty misdirection to precipitate the error. Care was taken to assure psychological safety via carefully conducted briefing and debriefing periods. Case outcomes were assessed using the validated Team Performance During Simulated Crises Instrument. Gap analysis was used to quantify team self-insight. Session content was analyzed via video review. Results: Five sessions were conducted (3 in the pediatric intensive care unit and 2 in the Pediatric Emergency Department). The team was unsuccessful at addressing the error in 4 (80%) of 5 cases. Trends toward lower communication scores (3.4/5 vs 2.3/5), as well as poor team self-assessment of communicative ability, were noted in unsuccessful sessions. Learners had a positive impression of the case. Conclusions: Simulation is a useful means to replicate hierarchy error in an educational environment. This methodology was viewed positively by learner teams, suggesting that psychological safety was maintained. Teams that did not address the error successfully may have impaired self-assessment ability in the communication skill domain. PMID:24867545

  2. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  3. Valence-separated representation of reward prediction error in feedback-related negativity and positivity.

    PubMed

    Bai, Yu; Katahira, Kentaro; Ohira, Hideki

    2015-02-11

    Feedback-related negativity (FRN) is an event-related brain potential (ERP) component elicited by errors and negative outcomes. Previous studies proposed that FRN reflects the activity of a general error-processing system that incorporates reward prediction error (RPE). However, other studies reported inconsistent results on this issue: namely, that FRN reflects only the valence of feedback and that the magnitude of RPE is reflected by a different ERP component, the P300. The present study focused on the relationship between the FRN amplitude and RPE. ERPs were recorded during a reversal learning task performed by the participants, and a computational model was used to estimate trial-by-trial RPEs, which we correlated with the ERPs. The results indicated that FRN and P300 reflected the magnitude of RPE in negative outcomes and positive outcomes, respectively. In addition, the correlation between RPE and the P300 amplitude was stronger than the correlation between RPE and the FRN amplitude. These differences in the correlations between ERP components and RPE may explain the inconsistent results reported by previous studies; the asymmetry in the correlations might make it difficult to detect the effect of the RPE magnitude on the FRN and make it appear that the FRN only reflects the valence of feedback. PMID:25634316
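Trial-by-trial RPEs of the kind correlated with the ERPs above are commonly estimated with a Rescorla-Wagner-style value update. The learning rate, initial value, and simulated feedback below are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical binary feedback sequence (1 = reward, 0 = no reward)
outcomes = rng.integers(0, 2, size=200)

# Rescorla-Wagner update: each trial's reward prediction error
# is the outcome minus the current expected value
alpha_lr, v = 0.3, 0.5
rpes = np.empty(outcomes.size)
for t, r in enumerate(outcomes):
    rpes[t] = r - v          # RPE for this trial
    v += alpha_lr * rpes[t]  # update the expected value

# RPE is positive on rewarded trials and negative otherwise,
# so its sign separates the FRN- and P300-relevant outcomes
print(rpes[:5].round(2))
```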

  4. Assessing the relative accuracies of two screening tests in the presence of verification bias.

    PubMed

    Zhou, X H; Higgs, R E

    Epidemiological studies of dementia often use two-stage designs because of the relatively low prevalence of the disease and the high cost of ascertaining a diagnosis. The first stage of a two-stage design assesses a large sample with a screening instrument. Then, the subjects are grouped according to their performance on the screening instrument, such as poor, intermediate and good performers. The second stage involves a more extensive diagnostic procedure, such as a clinical assessment, for a particular subset of the study sample selected from each of these groups. However, not all selected subjects have the clinical diagnosis because some subjects may refuse and others are unable to be clinically assessed. Thus, some subjects screened do not have a clinical diagnosis. Furthermore, whether a subject has a clinical diagnosis depends not only on the screening test result but also on other factors, and the sampling fractions for the diagnosis are unknown and have to be estimated. One of the goals in these studies is to assess the relative accuracies of two screening tests. Any analysis using only verified cases may result in verification bias. In this paper, we propose the use of two bootstrap methods to construct confidence intervals for the difference in the accuracies of two screening tests in the presence of verification bias. We illustrate the application of the proposed methods to a simulated data set from a real two-stage study of dementia that has motivated this research. PMID:10844728
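A plain percentile bootstrap for the difference in accuracies of two tests can be sketched as follows. This is a simplified stand-in for the paper's bias-adjusted methods, and the data and accuracy levels are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical verified subjects: each row is one subject,
# columns = (test A correct?, test B correct?)
results = rng.random((200, 2)) < (0.85, 0.78)

def acc_diff(sample):
    """Difference in observed accuracies of the two tests."""
    return sample[:, 0].mean() - sample[:, 1].mean()

# Percentile bootstrap: resample subjects with replacement and
# take the 2.5th/97.5th percentiles of the resampled statistic
boots = np.array([
    acc_diff(results[rng.integers(0, len(results), len(results))])
    for _ in range(2000)
])
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"95% CI for accuracy difference: ({lo:.3f}, {hi:.3f})")
```

Correcting for verification bias would additionally require reweighting by the estimated sampling fractions for diagnosis, which this sketch omits.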

  5. Order of accuracy of QUICK and related convection-diffusion schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as in a single-point formulation.
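The nodal-point QUICK face interpolation with the 1/8 curvature factor described above can be written directly (a minimal sketch; node naming follows the usual upstream/central/downstream convention):

```python
def quick_face_value(phi_u, phi_c, phi_d):
    """QUICK face value for flow from node C toward node D on a
    uniform grid: the linear average of the two straddling nodes
    minus 1/8 of the upstream-weighted curvature term.
    phi_u, phi_c, phi_d: upstream, central, downstream node values."""
    return 0.5 * (phi_c + phi_d) - 0.125 * (phi_u - 2.0 * phi_c + phi_d)

# For a linear field the curvature term vanishes and QUICK
# reduces to the exact midpoint value
print(quick_face_value(1.0, 2.0, 3.0))  # 2.5
```

Equivalently, expanding the expression gives the familiar weights 6/8, 3/8, and -1/8 on the central, downstream, and upstream nodes; the scheme reproduces quadratic fields exactly at the face.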

  6. Errors can be related to pre-stimulus differences in ERP topography and their concomitant sources.

    PubMed

    Britz, Juliane; Michel, Christoph M

    2010-02-01

    Much of the variation in both neuronal and behavioral responses to stimuli can be explained by pre-stimulus fluctuations in brain activity. We hypothesized that errors, too, result from stochastic fluctuations in pre-stimulus activity, and we investigated the temporal dynamics of the scalp topography and the concomitant intracranial generators of stimulus- and response-locked high-density event-related potentials (ERPs) to errors and correct trials in a Stroop task. We found significant differences in ERP map topography and intracranial sources before the onset of the stimulus and after the initiation of the response, but not as a function of stimulus-induced conflict. Before the stimulus, topographic differences were accompanied by differential activity in lateral frontal, parietal and temporal areas known to be involved in voluntary reorientation of attention and cognitive control. Differential post-response activity propagated both medially and laterally on a rostral-caudal axis of a network typically involved in performance monitoring. Analysis of the statistical properties of error occurrences confirmed their stochasticity. PMID:19850140

  7. Abnormal error-related antisaccade activation in premanifest and early manifest Huntington disease

    PubMed Central

    Rupp, J.; Dzemidzic, M.; Blekher, T.; Bragulat, V.; West, J.; Jackson, J.; Hui, S.; Wojcieszek, J.; Saykin, A.J.; Kareken, D.; Foroud, T.

    2010-01-01

    Objective Individuals with the trinucleotide CAG expansion (CAG+) that causes Huntington disease (HD) have impaired performance on antisaccade (AS) tasks that require directing gaze in the mirror opposite direction of visual targets. This study aimed to identify the neural substrates underlying altered antisaccadic performance. Method Three groups of participants were recruited: 1) Imminent and early manifest HD (early HD, n=8); 2) premanifest (presymptomatic) CAG+ (preHD, n=10); and 3) CAG unexpanded (CAG−) controls (n=12). All participants completed a uniform study visit that included a neurological evaluation, neuropsychological battery, molecular testing, and functional magnetic resonance imaging during an AS task. The blood oxygenation level dependent (BOLD) response was obtained during saccade preparation and saccade execution for both correct and incorrect responses using regression analysis. Results Significant group differences in BOLD response were observed when comparing incorrect AS to correct AS execution. Specifically, as the percentage of incorrect AS increased, BOLD responses in the CAG− group decreased progressively in a well-documented reward detection network that includes the pre-supplementary motor area and dorsal anterior cingulate cortex. In contrast, AS errors in the preHD and early HD groups lacked this relationship with BOLD signal in the error detection network, and BOLD responses to AS errors were smaller in the two CAG+ groups as compared with the CAG− group. Conclusions These results are the first to suggest that abnormalities in an error-related response network may underlie early changes in AS eye movements in premanifest and early manifest HD. PMID:21401260

  8. Aberrant error processing in relation to symptom severity in obsessive–compulsive disorder: A multimodal neuroimaging study

    PubMed Central

    Agam, Yigal; Greenberg, Jennifer L.; Isom, Marlisa; Falkenstein, Martha J.; Jenike, Eric; Wilhelm, Sabine; Manoach, Dara S.

    2014-01-01

    Background Obsessive–compulsive disorder (OCD) is characterized by maladaptive repetitive behaviors that persist despite feedback. Using multimodal neuroimaging, we tested the hypothesis that this behavioral rigidity reflects impaired use of behavioral outcomes (here, errors) to adaptively adjust responses. We measured both neural responses to errors and adjustments in the subsequent trial to determine whether abnormalities correlate with symptom severity. Since error processing depends on communication between the anterior and the posterior cingulate cortex, we also examined the integrity of the cingulum bundle with diffusion tensor imaging. Methods Participants performed the same antisaccade task during functional MRI and electroencephalography sessions. We measured error-related activation of the anterior cingulate cortex (ACC) and the error-related negativity (ERN). We also examined post-error adjustments, indexed by changes in activation of the default network in trials surrounding errors. Results OCD patients showed intact error-related ACC activation and ERN, but abnormal adjustments in the post- vs. pre-error trial. Relative to controls, who responded to errors by deactivating the default network, OCD patients showed increased default network activation including in the rostral ACC (rACC). Greater rACC activation in the post-error trial correlated with more severe compulsions. Patients also showed increased fractional anisotropy (FA) in the white matter underlying rACC. Conclusions Impaired use of behavioral outcomes to adaptively adjust neural responses may contribute to symptoms in OCD. The rACC locus of abnormal adjustment and relations with symptoms suggests difficulty suppressing emotional responses to aversive, unexpected events (e.g., errors). Increased structural connectivity of this paralimbic default network region may contribute to this impairment. PMID:25057466

  9. Task-dependent signal variations in EEG error-related potentials for brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Iturrate, I.; Montesano, L.; Minguez, J.

    2013-04-01

    Objective. A major difficulty of brain-computer interface (BCI) technology is dealing with the noise of EEG and its signal variations. Previous works studied time-dependent non-stationarities for BCIs in which the user's mental task was independent of the device operation (e.g., the mental task was motor imagery and the operational task was a speller). However, there are some BCIs, such as those based on error-related potentials, where the mental and operational tasks are dependent (e.g., the mental task is to assess the device action and the operational task is the device action itself). The dependence between the mental task and the device operation could introduce a new source of signal variations when the operational task changes, which has not been studied yet. The aim of this study is to analyse task-dependent signal variations and their effect on EEG error-related potentials. Approach. The work analyses the EEG variations across the three design steps of a BCI: an electrophysiology study to characterize the existence of these variations, a feature distribution analysis, and a single-trial classification analysis to measure the impact on the final BCI performance. Results and significance. The results demonstrate that a change in the operational task produces variations in the potentials, even when only EEG activity originating in brain areas related to error processing is considered. Consequently, the features extracted from the signals vary, and a classifier trained on one operational task presents a significant loss of performance on other tasks, requiring calibration or adaptation for each new task. In addition, a new calibration for each of the studied tasks rapidly outperforms adaptive techniques designed in the literature to mitigate EEG time-dependent non-stationarities.

  10. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    SciTech Connect

    Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.

    2010-09-15

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
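The error-propagation idea behind DRGEP, in which a species' importance to a target is taken as the maximum over graph paths of the product of direct-interaction coefficients, can be sketched on a toy graph. The species names and coefficient values below are invented for illustration:

```python
# Hypothetical direct-interaction coefficients r[A][B]: how strongly
# removing species B perturbs the production rate of species A
r = {
    "fuel": {"R1": 0.9, "R2": 0.4},
    "R1":   {"R3": 0.7},
    "R2":   {"R3": 0.1},
    "R3":   {},
}

def overall_coefficients(graph, target):
    """DRGEP-style error propagation: the importance of each species
    to `target` is the maximum over paths of the product of edge
    coefficients along the path (a simple best-first relaxation)."""
    best = {target: 1.0}
    frontier = [target]
    while frontier:
        node = frontier.pop()
        for nbr, coef in graph.get(node, {}).items():
            cand = best[node] * coef
            if cand > best.get(nbr, 0.0):
                best[nbr] = cand
                frontier.append(nbr)
    return best

coeffs = overall_coefficients(r, "fuel")
threshold = 0.5
skeletal = {s for s, c in coeffs.items() if c >= threshold}
print(coeffs)    # R3 is reached via R1: 0.9 * 0.7 = 0.63
print(skeletal)  # species retained at this error threshold
```

In a real reduction the coefficients come from reaction rate data, the threshold is tied to a user-specified error limit, and (per the abstract) a sensitivity-analysis pass then removes the remaining unimportant species.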

  11. Neuroimaging measures of error-processing: Extracting reliable signals from event-related potentials and functional magnetic resonance imaging.

    PubMed

    Steele, Vaughn R; Anderson, Nathaniel E; Claus, Eric D; Bernat, Edward M; Rao, Vikram; Assaf, Michal; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A

    2016-05-15

    Error-related brain activity has become an increasingly important focus of cognitive neuroscience research utilizing both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Given the significant time and resources required to collect these data, it is important for researchers to plan their experiments such that stable estimates of error-related processes can be achieved efficiently. Reliability of error-related brain measures will vary as a function of the number of error trials and the number of participants included in the averages. Unfortunately, systematic investigations of the number of events and participants required to achieve stability in error-related processing are sparse, and none have addressed variability in sample size. Our goal here is to provide data compiled from a large sample of healthy participants (n=180) performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages. We examine ERP measures of error-related negativity (ERN/Ne) and error positivity (Pe), as well as event-related fMRI measures locked to False Alarms. We find that achieving stable estimates of ERP measures required four to six error trials and approximately 30 participants; fMRI measures required six to eight trials and approximately 40 participants. Fewer trials and participants were required for measures where additional data reduction techniques (i.e., principal component analysis and independent component analysis) were implemented. Ranges of reliability statistics for various sample sizes and numbers of trials are provided. We intend this to be a useful resource for those planning or evaluating ERP or fMRI investigations with tasks designed to measure error-processing. PMID:26908319
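The resampling procedure described above, iteratively drawing subsets of participants and trials and checking the spread of the group-mean estimate, can be sketched with simulated amplitudes in place of real ERP data (the amplitude distribution below is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ERN amplitudes (uV): 180 participants x 20 error trials
data = rng.normal(loc=-5.0, scale=3.0, size=(180, 20))

def group_mean_stability(data, n_subj, n_trials, n_iter=500):
    """Repeatedly resample subsets of participants and trials and
    return the spread (SD) of the group-mean amplitude across
    resamples; smaller spread means a more stable estimate."""
    means = np.empty(n_iter)
    for i in range(n_iter):
        subj = rng.choice(data.shape[0], n_subj, replace=False)
        trials = rng.choice(data.shape[1], n_trials, replace=False)
        means[i] = data[np.ix_(subj, trials)].mean()
    return means.std()

# Stability improves as trials and participants are added
print(group_mean_stability(data, 30, 6))
print(group_mean_stability(data, 60, 12))
```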

  12. Accuracy of relative positioning by interferometry with GPS Double-blind test results

    NASA Technical Reports Server (NTRS)

    Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.

    1983-01-01

    MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Couselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.

  13. Accuracy of interpolation techniques for the derivation of digital elevation models in relation to landform types and data density

    NASA Astrophysics Data System (ADS)

    Chaplot, Vincent; Darboux, Frédéric; Bourennane, Hocine; Leguédois, Sophie; Silvera, Norbert; Phachomphon, Konngkeo

    2006-07-01

    One of the most important scientific challenges of digital elevation modeling is the development of numerical representations of large areas with a high resolution. Although there have been many studies on the accuracy of interpolation techniques for the generation of digital elevation models (DEMs) in relation to landform types and data quantity or density, there is still a need to evaluate the performance of these techniques on natural landscapes of differing morphologies and over a large range of scales. To perform such an evaluation, we investigated a total of six sites, three in the mountainous region of northern Laos and three in the more gentle landscape of western France, with surface areas ranging from micro-plots to hillslopes and catchments. The techniques used for the interpolation of point height data, with density values from 4 to 10^9 points/km^2, include: inverse distance weighting (IDW), ordinary kriging (OK), universal kriging (UK), multiquadratic radial basis function (MRBF), and regularized spline with tension (RST). The study sites exhibited coefficients of variation (CV) of altitude between 12% and 78%, and isotropic to anisotropic spatial structures with strengths from weak (with a nugget/sill ratio of 0.8) to strong (0.01). Irrespective of the spatial scales or the variability and spatial structure of altitude, few differences existed between the interpolation methods if the sampling density was high, although MRBF performed slightly better. However, at lower sampling densities, kriging yielded the best estimations for landscapes with strong spatial structure, low CV and low anisotropy, while RST yielded the best estimations for landscapes with low CV and weak spatial structure. Under conditions of high CV, strong spatial structure and strong anisotropy, IDW performed slightly better than the other methods. The prediction errors in height estimation are discussed in relation to possible interactions with spatial scale, landform types, and data density.
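Of the interpolators compared above, IDW is the simplest to sketch. The sample points and the power parameter below are illustrative choices, not values from the study:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighting: each query point receives a
    weighted average of known heights, weights = 1 / distance**power."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    out = []
    for q in np.atleast_2d(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):            # query coincides with a sample
            out.append(z_known[d == 0][0])
            continue
        w = 1.0 / d ** power
        out.append(np.sum(w * z_known) / np.sum(w))
    return np.array(out)

# Four corner heights of a unit cell; the center is equidistant
# from all samples, so IDW reduces to the plain mean there
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
z = [100.0, 110.0, 120.0, 130.0]
print(idw(pts, z, [(0.5, 0.5)]))  # [115.]
```

Kriging and splines differ by replacing these purely distance-based weights with weights derived from the fitted spatial structure (variogram), which is why they win when that structure is strong.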

  14. Software Tool for Analysis of Breathing-Related Errors in Transthoracic Electrical Bioimpedance Spectroscopy Measurements

    NASA Astrophysics Data System (ADS)

    Abtahi, F.; Gyllensten, I. C.; Lindecrantz, K.; Seoane, F.

    2012-12-01

    During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally, the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested and could have a negative impact on, and limit the accuracy of, impedance monitoring systems. In order to successfully use frequency-sweep EBIS for monitoring time-variant systems, it is paramount to study the effect of the frequency-sweep delay on Cole-model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiration activity in frequency-sweep EBIS measurements of the human thorax and to analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, and especially in the impedance plane. Therefore, the impact on Cole-model analysis might differ depending on the method applied for Cole parameter estimation.
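    The sweep-delay effect described above can be sketched as follows: a Cole impedance whose low-frequency resistance is modulated by breathing, sampled by a sweep that takes a finite time, so each frequency point sees a different phase of the respiration cycle. All parameter values and function names here are illustrative assumptions, not the paper's tool:

```python
import numpy as np

def cole_impedance(f, r0, r_inf, fc, alpha):
    """Classic Cole model: Z = R_inf + (R0 - R_inf) / (1 + (j f/fc)**alpha)."""
    return r_inf + (r0 - r_inf) / (1.0 + (1j * f / fc) ** alpha)

def swept_measurement(freqs, sweep_time, resp_freq=0.25,
                      r0=400.0, dr0=10.0, r_inf=100.0, fc=50e3, alpha=0.8):
    # time at which each frequency point of the sweep is acquired
    t = np.linspace(0.0, sweep_time, len(freqs))
    # breathing modulates the low-frequency resistance R0
    r0_t = r0 + dr0 * np.sin(2 * np.pi * resp_freq * t)
    return cole_impedance(freqs, r0_t, r_inf, fc, alpha)

freqs = np.logspace(3, 6, 50)                              # 1 kHz .. 1 MHz
z_static = cole_impedance(freqs, 400.0, 100.0, 50e3, 0.8)  # instantaneous spectrum
z_swept = swept_measurement(freqs, sweep_time=1.0)         # 1 s sweep
deviation = np.abs(z_swept - z_static)                     # sweep-induced error
```

An instantaneous (zero-delay) sweep reproduces the static spectrum exactly; any finite sweep time mixes respiration into the spectrum, which is the deviation the tool is built to analyse.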

  15. System performance and performance enhancement relative to element position location errors for distributed linear antenna arrays

    NASA Astrophysics Data System (ADS)

    Adrian, Andrew

    For the most part, antenna phased arrays have traditionally been composed of antenna elements that are very carefully and precisely placed in periodic grid structures. Additionally, the relative positions of the elements to each other are typically mechanically fixed as well as possible. There is never an assumption that the relative positions of the elements are a function of time or of some random process. In fact, every array design is typically analyzed for the element position tolerances necessary to meet performance requirements such as directivity, beamwidth, sidelobe level, and beam scanning capability. Consider an antenna array that is composed of several radiating elements, but in which the position of each element is not rigidly and mechanically fixed as in a traditional array. This is not to say that the element placement structure is ignored or irrelevant, but each element is not always in its relative, desired location. Relative element positioning would be analogous to a flock of birds in flight or a swarm of insects: they tend to maintain a nearly fixed position within the group, but not always. In the antenna array analog, it would be desirable to maintain a fixed formation, but due to other random processes, it is not always possible to maintain perfect formation. This type of antenna array is referred to as a distributed antenna array. A distributed antenna array's inability to maintain perfect formation causes degradations in the array factor pattern of the array. Directivity, beamwidth, sidelobe level, and beam pointing error are all adversely affected by element relative position error. This impact is studied as a function of element relative position error for linear antenna arrays. The study is performed over several nominal array element spacings, from lambda to lambda, several sidelobe levels (20 to 50 dB), and across multiple array illumination tapers.
Knowing the variation in performance, work is also performed to utilize a minimum
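    The degradation mechanism above can be sketched by computing a linear array factor with and without Gaussian element-position jitter. The array size, jitter level, and uniform taper are assumptions for illustration, not the thesis's study parameters:

```python
import numpy as np

def array_factor_db(positions_wl, theta, weights=None):
    """Normalized array factor (dB) for elements at `positions_wl`
    (in wavelengths), observed at angles `theta` (radians from broadside)."""
    n = len(positions_wl)
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    # per-element phase toward each angle: k * x * sin(theta), k = 2*pi/lambda
    phase = 2j * np.pi * np.outer(np.sin(theta), positions_wl)
    af = np.abs(np.exp(phase) @ w)
    return 20 * np.log10(af / af.max())

rng = np.random.default_rng(0)
n = 32
nominal = 0.5 * np.arange(n)                  # half-wavelength spacing
jittered = nominal + rng.normal(0, 0.05, n)   # 0.05-wavelength position error
theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)

af_ideal = array_factor_db(nominal, theta)
af_jitter = array_factor_db(jittered, theta)
```

Comparing `af_jitter` with `af_ideal` exhibits the effects discussed above: the main beam is nearly unchanged, while sidelobe levels rise and nulls fill in as the position error grows.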

  16. Investigating the epidemiology of medication errors and error-related adverse drug events (ADEs) in primary care, ambulatory care and home settings: a systematic review protocol

    PubMed Central

    Assiri, Ghadah Asaad; Grant, Liz; Aljadhey, Hisham; Sheikh, Aziz

    2016-01-01

    Introduction There is a need to better understand the epidemiology of medication errors and error-related adverse events in community care contexts. Methods and analysis We will systematically search the following databases: Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Eastern Mediterranean Regional Office of the WHO (EMRO), MEDLINE, PsycINFO and Web of Science. In addition, we will search Google Scholar and contact an international panel of experts to search for unpublished and in-progress work. The searches will cover the time period January 1990–December 2015 and will yield data on the incidence or prevalence of and risk factors for medication errors and error-related adverse drug events in adults living in community settings (ie, primary care, ambulatory and home). Study quality will be assessed using the Critical Appraisal Skills Program quality assessment tool for cohort and case–control studies, and cross-sectional studies will be assessed using the Joanna Briggs Institute Critical Appraisal Checklist for Descriptive Studies. Meta-analyses will be undertaken using random-effects modelling using STATA (V.14) statistical software. Ethics and dissemination This protocol will be registered with PROSPERO, an international prospective register of systematic reviews, and the systematic review will be reported in the peer-reviewed literature using Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PMID:27580826

  17. Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity.

    PubMed

    Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J; Koole, Sander L

    2016-03-23

    The error-related negativity (ERN or Ne) is a negative event-related brain potential that peaks about 20–100 ms after people perform an incorrect response in choice reaction time tasks. Prior research has shown that the ERN may be enhanced by situational and dispositional factors that promote intrinsic motivation. Building on and extending this work, the authors hypothesized that simulated interpersonal touch may increase task engagement and thereby increase ERN amplitude. To test this notion, 20 participants performed a Go/No-Go task while holding a teddy bear or a same-sized cardboard box. As expected, the ERN was significantly larger when participants held a teddy bear rather than a cardboard box. This effect was most pronounced for people high (rather than low) in trait intrinsic motivation, who may depend more on intrinsically motivating task cues to maintain task engagement. These findings highlight the potential benefits of simulated interpersonal touch in stimulating attention to errors, especially among people who are intrinsically motivated. PMID:26876476

  18. Medication errors related to transdermal opioid patches: lessons from a regional incident reporting system

    PubMed Central

    2014-01-01

    Objective A few cases of adverse reactions linked to erroneous use of transdermal opioid patches have been reported in the literature. The aim of this study was to describe and characterize medication errors (MEs) associated with use of transdermal fentanyl and buprenorphine. Methods All events concerning transdermal opioid patches reported between 2004 and 2011 to a regional incident reporting system and assessed as MEs were scrutinized and characterized. MEs were defined as “a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient”. Results In the study 151 MEs were identified. The three most common error types were wrong administration time 67 (44%), wrong dose 34 (23%), and omission of dose 20 (13%). Of all MEs, 118 (78%) occurred in the administration stage of the medication process. Harm was reported in 26 (17%) of the included cases, of which 2 (1%) were regarded as serious harm (nausea/vomiting and respiratory depression). Pain was the most common adverse reaction reported. Conclusions Of the reported MEs related to transdermal fentanyl and buprenorphine, most occurred during administration. Improved routines to ascertain correct and timely administration and educational interventions to reduce MEs for these drugs are warranted. PMID:24912424

  19. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insights into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and the upper bounds of the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flows is given to illustrate uncertainty estimates in PSP measurements.

  20. Impacts of visuomotor sequence learning methods on speed and accuracy: Starting over from the beginning or from the point of error.

    PubMed

    Tanaka, Kanji; Watanabe, Katsumi

    2016-02-01

    The present study examined whether sequence learning led to more accurate and shorter performance time if people who are learning a sequence start over from the beginning when they make an error (i.e., practice the whole sequence) or only from the point of error (i.e., practice a part of the sequence). We used a visuomotor sequence learning paradigm with a trial-and-error procedure. In Experiment 1, we found fewer errors and shorter performance times for those who restarted their performance from the beginning of the sequence as compared to those who restarted from the point at which an error occurred, indicating better learning of spatial and motor representations of the sequence. This might be because the learned elements were repeated when the next performance started over from the beginning. In subsequent experiments, we increased the occasions for the repetition of learned elements by modulating the number of fresh start points in the sequence after errors. The results showed that fewer fresh start points were likely to lead to fewer errors and shorter performance time, indicating that the repetition of learned elements enabled participants to develop stronger spatial and motor representations of the sequence. Thus, one or two fresh start points in the sequence (i.e., starting over only from the beginning, or from either the beginning or the midpoint of the sequence after errors) is likely to lead to more accurate and faster performance. PMID:26829021

  1. The effect of errors in the assignment of the transmission functions on the accuracy of the thermal sounding of the atmosphere

    NASA Technical Reports Server (NTRS)

    Timofeyev, Y. M.

    1979-01-01

    In order to test the error of calculation in assumed values of the transmission function for Soviet and American radiometers sounding the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from variations of the assumptions from the standard basic model is calculated.

  2. A 2 x 2 Taxonomy of Multilevel Latent Contextual Models: Accuracy-Bias Trade-Offs in Full and Partial Error Correction Models

    ERIC Educational Resources Information Center

    Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich

    2011-01-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…

  3. Deficits in Error-Monitoring by College Students with Schizotypal Traits: An Event-Related Potential Study

    PubMed Central

    Kim, Seo-Hee; Jang, Kyoung-Mi; Kim, Myung-Sun

    2015-01-01

    The present study used event-related potentials (ERPs) to investigate deficits in error-monitoring by college students with schizotypal traits. Scores on the Schizotypal Personality Questionnaire (SPQ) were used to categorize the participants into schizotypal-trait (n = 17) and normal control (n = 20) groups. The error-monitoring abilities of the participants were evaluated using the Simon task, which consists of congruent (locations of stimulus and response are the same) and incongruent (locations of stimulus and response are different) conditions. The schizotypal-trait group committed more errors on the Simon task and exhibited smaller error-related negativity (ERN) amplitudes than did the control group. Additionally, ERN amplitude measured at FCz was negatively correlated with the error rate on the Simon task in the schizotypal-trait group but not in the control group. The two groups did not differ in terms of correct-related potentials (CRN), error positivity (Pe) and correct-related positivity (Pc) amplitudes. The present results indicate that individuals with schizotypal traits have deficits in error-monitoring and that reduced ERN amplitudes may represent a biological marker of schizophrenia. PMID:25826220

  4. Three-dimensional transient elastodynamic inversion using the modified error in constitutive relation

    NASA Astrophysics Data System (ADS)

    Bonnet, Marc; Aquino, Wilkins

    2014-10-01

    This work is concerned with large-scale three-dimensional inversion under transient elastodynamic conditions by means of the modified error in constitutive relation (MECR), an energy-based cost functional. A peculiarity of time-domain MECR formulations is that each evaluation involves the computation of two elastodynamic states (one forward, one backward) which moreover are coupled. This coupling creates a major computational bottleneck, making MECR-based inversion difficult for spatially 2D or 3D configurations. To overcome this obstacle, we propose an approach whose main ingredients are (a) setting the entire computational procedure in a consistent time-discrete framework that incorporates the chosen time-stepping algorithm, and (b) using an iterative SOR-like method for the resulting stationarity equations. The resulting MECR-based inversion algorithm is demonstrated on a 3D transient elastodynamic example involving over 500,000 unknown elastic moduli.

  5. Task motivation influences alpha suppression following errors.

    PubMed

    Compton, Rebecca J; Bissey, Bryn; Worby-Selim, Sharoda

    2014-07-01

    The goal of the present research is to examine the influence of motivation on a novel error-related neural marker, error-related alpha suppression (ERAS). Participants completed an attentionally demanding flanker task under conditions that emphasized either speed or accuracy or under conditions that manipulated the monetary value of errors. Conditions in which errors had greater motivational value produced greater ERAS, that is, greater alpha suppression following errors compared to correct trials. A second study found that a manipulation of task difficulty did not affect ERAS. Together, the results confirm that ERAS is both a robust phenomenon and one that is sensitive to motivational factors. PMID:24673621

  6. Differences among Job Positions Related to Communication Errors at Construction Sites

    NASA Astrophysics Data System (ADS)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communicatio n errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in perception of communication error risk . Questionnaires based on the previous study were a dministered to construction workers (n=811; 149 adminis trators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns wer e inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  7. Diagnostic Accuracy of Mucosal Biopsy versus Endoscopic Mucosal Resection in Barrett's Esophagus and Related Superficial Lesions.

    PubMed

    Elsadek, Hany M; Radwan, Mamdouh M

    2015-01-01

    Background. Endoscopic surveillance for early detection of dysplastic or neoplastic changes in patients with Barrett's esophagus (BE) depends usually on biopsy. The diagnostic and therapeutic role of endoscopic mucosal resection (EMR) in BE is rapidly growing. Objective. The aim of this study was to check the accuracy of biopsy for precise histopathologic diagnosis of dysplasia and neoplasia, compared to EMR in patients having BE and related superficial esophageal lesions. Methods. A total of 48 patients with previously diagnosed BE (36 men, 12 women, mean age 49.75 ± 13.3 years) underwent routine surveillance endoscopic examination. Biopsies were taken from superficial lesions, if present, and otherwise from BE segments. Then, EMR was performed within three weeks. Results. Biopsy based histopathologic diagnoses were nondysplastic BE (NDBE), 22 cases; low-grade dysplasia (LGD), 14 cases; high-grade dysplasia (HGD), 8 cases; intramucosal carcinoma (IMC), two cases; and invasive adenocarcinoma (IAC), two cases. EMR based diagnosis differed from biopsy based diagnosis (either upgrading or downgrading) in 20 cases (41.67%), (Kappa = 0.43, 95% CI: 0.170-0.69). Conclusions. Biopsy is not a satisfactory method for accurate diagnosis of dysplastic or neoplastic changes in BE patients with or without suspicious superficial lesions. EMR should therefore be the preferred diagnostic method in such patients. PMID:27347544
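    The agreement statistic reported above (Kappa = 0.43) is Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch of the computation; the confusion matrices in the test are invented for illustration, since the abstract reports only the summary value:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    m = np.asarray(confusion, dtype=float)
    total = m.sum()
    p_obs = np.trace(m) / total                                # raters agree
    p_exp = (m.sum(axis=0) * m.sum(axis=1)).sum() / total**2   # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)
```

A kappa of 0.43, as in the study, is conventionally read as only moderate agreement, which is the basis for preferring EMR over biopsy here.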

  8. Age-related differences in pointing accuracy in familiar and unfamiliar environments.

    PubMed

    Muffato, Veronica; Della Giustina, Martina; Meneghetti, Chiara; De Beni, Rossana

    2015-09-01

    This study aimed to investigate age-related differences in spatial mental representations of familiar and unfamiliar places. Nineteen young adults (aged 18-23) and 19 older adults (aged 60-74), all living in the same Italian town, completed a set of visuospatial measures and then pointed in the direction of familiar landmarks in their town and in the direction of landmarks in an unknown environment studied on a map. Results showed that older adults were less accurate in the visuospatial tasks and in pointing at landmarks in an unfamiliar environment, but performed as well as the young adults when pointing to familiar places. Pointing performance correlated with visuospatial tests accuracy in both familiar and unfamiliar environments, while only pointing in an unknown environment correlated with visuospatial working memory (VSWM). The spatial representation of well-known places seems to be well preserved in older adults (just as well as in young adults), while it declines for unfamiliar environments. Spatial abilities sustain the mental representations of both familiar and unfamiliar environments, while the support of VSWM resources is only needed for the latter. PMID:26224272

  9. Interpolation in waveform space: Enhancing the accuracy of gravitational waveform families using numerical relativity

    NASA Astrophysics Data System (ADS)

    Cannon, Kipp; Emberson, J. D.; Hanna, Chad; Keppel, Drew; Pfeiffer, Harald P.

    2013-02-01

    Matched filtering for the identification of compact object mergers in gravitational wave antenna data involves the comparison of the data stream to a bank of template gravitational waveforms. Typically the template bank is constructed from phenomenological waveform models, since these can be evaluated for an arbitrary choice of physical parameters. Recently it has been proposed that singular value decomposition (SVD) can be used to reduce the number of templates required for detection. As we show here, another benefit of SVD is its removal of biases from the phenomenological templates along with a corresponding improvement in their ability to represent waveform signals obtained from numerical relativity (NR) simulations. Using these ideas, we present a method that calibrates a reduced SVD basis of phenomenological waveforms against NR waveforms in order to construct a new waveform approximant with improved accuracy and faithfulness compared to the original phenomenological model. The new waveform family is given numerically through the interpolation of the projection coefficients of NR waveforms expanded onto the reduced basis and provides a generalized scheme for enhancing phenomenological models.
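    The SVD step described above can be sketched in miniature: a bank of templates is compressed to a reduced orthonormal basis, and any waveform is then represented by its projection coefficients onto that basis. The sinusoidal "templates" below are synthetic stand-ins, not real gravitational waveform models:

```python
import numpy as np

n_templates, n_samples = 40, 256
t = np.linspace(0, 1, n_samples)
# synthetic chirp-like templates with slowly varying frequency parameter
bank = np.array([np.sin(2 * np.pi * (20 + 0.5 * k) * t**2)
                 for k in range(n_templates)])

u, s, vt = np.linalg.svd(bank, full_matrices=False)
# keep enough singular vectors to capture 99.99% of the bank's power
frac = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(frac, 0.9999)) + 1
basis = vt[:rank]                       # reduced orthonormal basis

coeffs = bank @ basis.T                 # projection coefficients
reconstructed = coeffs @ basis          # back to waveform space
```

In the paper's scheme, NR waveforms are projected onto such a reduced basis and the coefficients are interpolated across physical parameters, yielding the new, more faithful approximant.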

  10. Post-glacial landforms dating by lichenometry in Iceland - the accuracy of relative results and conversely

    NASA Astrophysics Data System (ADS)

    Decaulne, Armelle

    2014-05-01

    Lichenometry studies have been carried out all over Iceland since 1970, using various techniques to solve a range of geomorphologic issues, from moraine dating and glacial advances, outwash timing, proglacial river incision, soil erosion, rock-glacier development, and climate variations, to debris-flow occurrence and extreme snow-avalanche frequency. Most users have sought to date proglacial landforms in two main areas: around the southern ice-caps of Vatnajökull and Myrdalsjökull, and in Tröllaskagi in northern Iceland. Based on the results of over thirty-five published studies, lichenometry is deemed to be a successful dating tool in Iceland, and seems to approach an absolute dating technique, at least over the last hundred years, under well-constrained environmental conditions at the local scale. With an increasing awareness of the methodological limitations of the technique, together with more sophisticated data treatments, predicted lichenometric 'ages' are supposedly gaining in robustness and precision. However, comparisons between regions, and even between studies in the same area, are hindered by the use of different measurement techniques and data processing. These issues are exacerbated in Iceland by rapid environmental changes across short distances and, more generally, by the common problem of lichen species mis-identification in the field, not to mention the age discrepancies relative to other dating tools, such as tephrochronology. Some authors claim lichenometry can support a precise reconstruction of landforms and geomorphic processes in Iceland, proposing yearly dating; others include error margins in their reconstructions; while some limit its use to identifying generations of landforms, refusing to interpret beyond the nature of the gathered data. Finally, can lichenometry be a relatively accurate dating technique, or rather an accurate relative dating tool, in Iceland?

  11. Type I Error Inflation in the Traditional By-Participant Analysis to Metamemory Accuracy: A Generalized Mixed-Effects Model Perspective

    ERIC Educational Resources Information Center

    Murayama, Kou; Sakaki, Michiko; Yan, Veronica X.; Smith, Garry M.

    2014-01-01

    In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are…

  12. Harsh Parenting and Fearfulness in Toddlerhood Interact to Predict Amplitudes of Preschool Error-Related Negativity

    PubMed Central

    Brooker, Rebecca J.; Buss, Kristin A.

    2014-01-01

    Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children’s risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with exaggerated error-related negativity (ERN), an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems. PMID:24721466

  13. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-14

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short'--prior-24-hour recall obtained in the afternoon and 'long'--previous-day recall obtained in the morning) with four prompts ('forward'--distant to recent, 'meal name'--breakfast, etc., 'open'--no instructions, and 'reverse'--recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure--report rate--and reporting-error-sensitive measures--correspondence rate and inflation ratio--were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio--but not report rate--showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID

  14. Motivation and semantic context affect brain error-monitoring activity: an event-related brain potentials study.

    PubMed

    Ganushchak, Lesya Y; Schiller, Niels O

    2008-01-01

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay more attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants' performance using a picture naming task in a semantic blocking paradigm. The semantic context of to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses. PMID:17920932

  15. Relation of probability of causation to relative risk and doubling dose: a methodologic error that has become a social problem.

    PubMed Central

    Greenland, S

    1999-01-01

    Epidemiologists, biostatisticians, and health physicists frequently serve as expert consultants to lawyers, courts, and administrators. One of the most common errors committed by experts is to equate, without qualification, the attributable fraction estimated from epidemiologic data to the probability of causation requested by courts and administrators. This error has become so pervasive that it has been incorporated into judicial precedents and legislation. This commentary provides a brief overview of the error and the context in which it arises. PMID:10432900
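    The equation at issue is the attributable fraction among the exposed, AF = (RR − 1) / RR, which courts often take, without the qualifications Greenland calls for, as "the probability of causation". A minimal sketch of the quantity actually being computed:

```python
def attributable_fraction(relative_risk):
    """Excess fraction among the exposed: AF = (RR - 1) / RR for RR > 1.
    Note this is NOT, in general, the probability of causation; equating
    the two without qualification is the error discussed above."""
    if relative_risk <= 1.0:
        return 0.0
    return (relative_risk - 1.0) / relative_risk
```

For example, a doubling dose (RR = 2) gives AF = 0.5, the source of the "more likely than not" threshold used in litigation; the commentary's point is that this arithmetic does not by itself deliver the causal probability the courts request.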

  16. Driving error and anxiety related to iPod mp3 player use in a simulated driving experience.

    PubMed

    Harvey, Ashley R; Carden, Randy L

    2009-08-01

    Driver distraction due to cellular phone usage has repeatedly been shown to increase the risk of vehicular accidents; however, the literature regarding the use of other personal electronic devices while driving is relatively sparse. It was hypothesized that the usage of an mp3 player would result in an increase in not only driving error while operating a driving simulator, but driver anxiety scores as well. It was also hypothesized that anxiety scores would be positively related to driving errors when using an mp3 player. 32 participants drove through a set course in a driving simulator twice, once with and once without an iPod mp3 player, with the order counterbalanced. Number of driving errors per course, such as leaving the road, impacts with stationary objects, loss of vehicular control, etc., and anxiety were significantly higher when an iPod was in use. Anxiety scores were unrelated to number of driving errors. PMID:19831096

  17. Theta and Alpha Band Modulations Reflect Error-Related Adjustments in the Auditory Condensation Task

    PubMed Central

    Novikov, Nikita A.; Bryzgalov, Dmitri V.; Chernyshev, Boris V.

    2015-01-01

    Error commission leads to adaptive adjustments in a number of brain networks that subserve goal-directed behavior, resulting in either enhanced stimulus processing or increased motor threshold depending on the nature of the errors committed. Here, we studied these adjustments by analyzing post-error modulations of alpha and theta band activity in the auditory version of the two-choice condensation task, which is highly demanding of sustained attention while involving no inhibition of prepotent responses. Errors were followed by increased frontal midline theta (FMT) activity, as well as by enhanced alpha band suppression in the parietal and the left central regions; parietal alpha suppression correlated with task performance, left central alpha suppression correlated with post-error slowing, and the FMT increase correlated with both behavioral measures. On post-error correct trials, left-central alpha band suppression started earlier before the response, and the response was followed by weaker FMT activity, as well as by enhanced alpha band suppression distributed over the entire scalp. These findings indicate that several separate neuronal networks are involved in post-error adjustments, including the midfrontal performance monitoring network, the parietal attentional network, and the sensorimotor network. Supposedly, activity within these networks is rapidly modulated after errors, resulting in optimization of their functional state on the subsequent trials, with corresponding changes in behavioral measures. PMID:26733266

  18. An Empirical Study of the Relative Error Magnitude in Three Measures of Change.

    ERIC Educational Resources Information Center

    Williams, Richard H.; And Others

    1984-01-01

    This paper describes the procedures and results of two studies designed to yield empirical comparisons of the error magnitude in three measures of change: the simple gain score, the residualized difference score, and the base-free measure (Tucker et al.). Residualized scores possessed smaller standard errors of measurement. (Author/BS)
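
    The difference between the gain score and the residualized difference score can be made concrete with a small numerical sketch. The following Python snippet is illustrative only (the data and variable names are invented, not from the study); it computes both measures for hypothetical pre/post test scores:

```python
import numpy as np

def change_scores(pre, post):
    """Two of the three change measures compared in the study:
    the simple gain score (post - pre) and the residualized
    difference score (post minus its linear prediction from pre)."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    gain = post - pre
    # Regress post on pre; the residuals are the residualized scores.
    slope, intercept = np.polyfit(pre, post, 1)
    residualized = post - (intercept + slope * pre)
    return gain, residualized

# Hypothetical pre- and post-test scores for five examinees.
pre = [10, 12, 14, 16, 18]
post = [13, 14, 18, 19, 22]
gain, resid = change_scores(pre, post)
```

    By construction, the residualized scores sum to zero: they measure change relative to what the pretest predicts, which is one reason they can carry a smaller standard error of measurement than raw gains.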

  19. Accuracy of MicroRNA Discovery Pipelines in Non-Model Organisms Using Closely Related Species Genomes

    PubMed Central

    Etebari, Kayvan; Asgari, Sassan

    2014-01-01

    Mapping small reads to a genome reference is an essential and common approach to identifying microRNAs (miRNAs) in an organism. Using the genomes of closely related species as proxy references can facilitate miRNA expression studies in non-model species whose genomes are not available. However, the level of error this introduces is largely unknown, as it depends on the evolutionary distance between the proxy reference and the species of interest. To evaluate the accuracy of miRNA discovery pipelines in non-model organisms, small RNA library data from the mosquito Aedes aegypti were mapped to three well-annotated insect genomes as proxy references using miRanalyzer under both strict and loose mapping criteria. In addition, another web-based miRNA discovery pipeline (DSAP) was used as a control for program performance. Using miRanalyzer, a reduction of more than 80% was observed in the number of mapped reads under the strict criterion when proxy genome references were used; however, only a 20% reduction was recorded for reads mapped to the known mature miRNA datasets of other species. Except for a few changes in ranking, the mapping criteria did not make any significant difference in the profile of the most abundant miRNAs in A. aegypti whether its original genome or a proxy genome was used as the reference. However, more variation was observed in the miRNA ranking profile when DSAP was used as the analysis tool. Overall, the results also suggested that using a proxy reference did not change the differential expression profiles of the most abundant miRNAs when infected and non-infected libraries were compared. However, use of a proxy reference recovered only about 67% of the original outcome for the most strongly up- or down-regulated miRNA profiles. Although using a closely related species' genome incurred some losses in the number of miRNAs detected, the most abundant miRNAs, along with their differential expression profiles, would be acceptable depending on the sensitivity required by each project. PMID:24404190

  20. Relative effects of demand and control on task-related cardiovascular reactivity, task perceptions, performance accuracy, and mood.

    PubMed

    Flynn, Niamh; James, Jack E

    2009-05-01

    The hypothesis that work control has beneficial effects on well-being is the basis of the widely applied, yet inconsistently supported, Job Demand Control (JDC) Model [Karasek, R.A., 1979. Job demands, job decision latitude and mental strain: Implications for job redesign. Adm. Sci. Q. 24, 285-308.; Karasek, R., Theorell, T., 1990. Healthy Work: Stress, Productivity, and the Reconstruction of Working Life. Basic Books, Oxford]. The model was tested in an experiment (N=60) using a cognitive stressor paradigm that sought to prevent confounding between demand and control. High-demand was found to be associated with deleterious effects on physiological, subjective, and performance outcomes. In contrast, few main effects were found for control. Evidence for the buffer interpretation of the JDC Model was limited to a significant demand-control interaction for performance accuracy, whereas substantial support was found for the strain interpretation of the model [van der Doef, M., Maes, S., 1998. The job demand-control(-support) model and physical health outcomes: A review of the strain and buffer hypotheses. Psychol. Health 13, 909-936., van der Doef, M., Maes, S., 1999. The Job Demand-Control(-Support) model and psychological well-being: A review of 20 years of empirical research. Work Stress 13, 87-114]. Manipulation checks revealed that objective control altered perceptions of control but not perceptions of demand. It is suggested that beneficial effects of work-related control are unlikely to occur in the absence of reductions in perceived demand. Thus, contrary to the propositions of Karasek and colleagues, demand and control do not appear to be independent factors. PMID:19118584

  1. Reduced Error-Related Activation in Two Anterior Cingulate Circuits Is Related to Impaired Performance in Schizophrenia

    ERIC Educational Resources Information Center

    Polli, Frida E.; Barton, Jason J. S.; Thakkar, Katharine N.; Greve, Douglas N.; Goff, Donald C.; Rauch, Scott L.; Manoach, Dara S.

    2008-01-01

    To perform well on any challenging task, it is necessary to evaluate your performance so that you can learn from errors. Recent theoretical and experimental work suggests that the neural sequelae of error commission in a dorsal anterior cingulate circuit index a type of contingency- or reinforcement-based learning, while activation in a rostral…

  2. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    PubMed Central

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy. PMID:23576835

  3. Data on simulated interpersonal touch, individual differences and the error-related negativity

    PubMed Central

    Tjew-A-Sin, Mandy; Tops, Mattie; Heslenfeld, Dirk J.; Koole, Sander L.

    2016-01-01

    The dataset includes data from the electroencephalogram study reported in our paper: ‘Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity’ (doi:10.1016/j.neulet.2016.01.044) (Tjew-A-Sin et al., 2016) [1]. The data was collected at the psychology laboratories at the Vrije Universiteit Amsterdam in 2012 among a Dutch-speaking student sample. The dataset consists of the measures described in the paper, as well as additional (exploratory) measures including the Five-Factor Personality Inventory, the Connectedness to Nature Scale, the Rosenberg Self-esteem Scale and a scale measuring life stress. The data can be used for replication purposes, meta-analyses, and exploratory analyses, as well as cross-cultural comparisons of touch and/or ERN effects. The authors also welcome collaborative research based on re-analyses of the data. The data described is available at a data repository called the DANS archive: http://persistent-identifier.nl/?identifier=urn:nbn:nl:ui:13-tzbk-gg. PMID:27158644

  5. Hysteresis and Related Error Mechanisms in the NIST Watt Balance Experiment

    PubMed Central

    Schwarz, Joshua P.; Liu, Ruimin; Newell, David B.; Steiner, Richard L.; Williams, Edwin R.; Smith, Douglas; Erdemir, Ali; Woodford, John

    2001-01-01

    The NIST watt balance experiment is being completely rebuilt after its 1998 determination of the Planck constant. That measurement yielded a result with an approximately 1×10−7 relative standard uncertainty. Because the goal of the new incarnation of the experiment is a ten-fold decrease in uncertainty, it has been necessary to reexamine many sources of systematic error. Hysteresis effects account for a substantial portion of the projected uncertainty budget. They arise from mechanical, magnetic, and thermal sources. The new experiment incorporates several improvements in the apparatus to address these issues, including stiffer components for transferring the mass standard on and off the balance, better servo control of the balance, better pivot materials, and the incorporation of erasing techniques into the mass transfer servo system. We have carried out a series of tests of hysteresis sources on a separate system, and apply their results to the watt apparatus. The studies presented here suggest that our improvements can be expected to reduce hysteresis signals by at least a factor of 10—perhaps as much as a factor of 50—over the 1998 experiment.

  6. Unified error model based spatial error compensation for four types of CNC machining center: Part II-unified model based spatial error compensation

    NASA Astrophysics Data System (ADS)

    Fan, Kaiguo; Yang, Jianguo; Yang, Liyan

    2014-12-01

    In this paper, a spatial error compensation method based on a unified error model is proposed for CNC machining centers. The spatial error distribution was analyzed in this research. The results show that the spatial error is related to each axis of a CNC machine tool; moreover, the spatial error distribution is non-linear and exhibits no obvious regularity. To improve modeling accuracy and efficiency, an automatic error modeling application was designed based on orthogonal polynomials. To realize the spatial error compensation, an error compensation controller based on a multi-thread parallel processing mode was designed. Using the spatial error compensation method, the machine tool's accuracy is greatly improved compared with the uncompensated case.
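
    The orthogonal-polynomial error modeling step can be sketched in a few lines. This is a minimal Python illustration (the axis-error curve and sample positions are synthetic assumptions, not data from the paper): fit a Chebyshev polynomial to measured axis error, then subtract the model to compensate.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic positioning error (mm) of one axis at sampled commanded
# positions (mm); a smooth non-linear curve standing in for measurements.
positions = np.linspace(0.0, 500.0, 26)
measured_error = 0.002 * positions - 1e-6 * positions**2

# Fit an orthogonal (Chebyshev) polynomial error model.
coeffs = C.chebfit(positions, measured_error, deg=3)

# Compensation: subtract the modeled error at each commanded position.
modeled = C.chebval(positions, coeffs)
residual = measured_error - modeled  # error left after compensation
```

    Because the synthetic curve here is quadratic, a degree-3 orthogonal fit removes it essentially exactly; with real measurements the residual reflects the unmodeled, irregular part of the spatial error.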

  7. How Does Speed and Accuracy in Reading Relate to Reading Comprehension in Arabic?

    ERIC Educational Resources Information Center

    Abu-Leil, Aula Khateeb; Share, David L.; Ibrahim, Raphiq

    2014-01-01

    The purpose of this study was to investigate the potential contribution of decoding efficiency to the development of reading comprehension among skilled adult native Arabic speakers. In addition, we tried to investigate the influence of Arabic vowels on reading accuracy, reading speed, and thereby reading comprehension. Seventy-five Arabic…

  8. Lexical and Child-Related Factors in Word Variability and Accuracy in Infants

    ERIC Educational Resources Information Center

    Macrae, Toby

    2013-01-01

    The present study investigated the effects of lexical age of acquisition (AoA), phonological complexity, age and expressive vocabulary on spoken word variability and accuracy in typically developing infants, aged 1;9-3;1. It was hypothesized that later-acquired words and those with more complex speech sounds would be produced more variably and…

  9. Research concerning the geophysical causes and measurement accuracies related to the irregularities in the rotation of the earth

    NASA Technical Reports Server (NTRS)

    Currie, D. G.

    1978-01-01

    The primary objective of this effort consisted of a detailed study of the history of the motion of the moon. Several analyses were developed which are related to the determination of the effect of various refractive phenomena on the accuracy of measurements made through the earth's atmosphere.

  10. 26 CFR 1.6664-1 - Accuracy-related and fraud penalties; definitions, effective date and special rules.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... to the Tax, Additional Amounts, and Assessable Penalties § 1.6664-1 Accuracy-related and fraud... provided in the last sentence of this paragraph (b)(2), § 1.6664-4 (as contained in 26 CFR part 1 revised... after December 8, 1994, § 1.6664-4 (as contained in 26 CFR part 1 revised April 1, 1995) is...

  11. 26 CFR 1.6662-7 - Omnibus Budget Reconciliation Act of 1993 changes to the accuracy-related penalty.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 13 2010-04-01 2010-04-01 false Omnibus Budget Reconciliation Act of 1993 changes to the accuracy-related penalty. 1.6662-7 Section 1.6662-7 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Additions to the Tax, Additional Amounts, and Assessable Penalties §...

  12. Maternal Accuracy and Behavior in Anticipating Children's Responses to Novelty: Relations to Fearful Temperament and Implications for Anxiety Development

    ERIC Educational Resources Information Center

    Kiel, Elizabeth J.; Buss, Kristin A.

    2010-01-01

    Previous research has suggested that mothers' behaviors may serve as a mechanism in the development from toddler fearful temperament to childhood anxiety. The current study examined the maternal characteristic of accuracy in predicting toddlers' distress reactions to novelty in relation to temperament, parenting, and anxiety development.…

  13. Children's Age-Related Speed--Accuracy Strategies in Intercepting Moving Targets in Two Dimensions

    ERIC Educational Resources Information Center

    Rothenberg-Cunningham, Alek; Newell, Karl M.

    2013-01-01

    Purpose: This study investigated the age-related speed--accuracy strategies of children, adolescents, and adults in performing a rapid striking task that allowed the self-selection of the interception position in a virtual, two-dimensional environment. Method: The moving target had curvilinear trajectories that were determined by combinations of…

  14. The impact of a brief mindfulness meditation intervention on cognitive control and error-related performance monitoring

    PubMed Central

    Larson, Michael J.; Steffen, Patrick R.; Primosch, Mark

    2013-01-01

    Meditation is associated with positive health behaviors and improved cognitive control. One mechanism for the relationship between meditation and cognitive control is changes in activity of the anterior cingulate cortex-mediated neural pathways. The error-related negativity (ERN) and error positivity (Pe) components of the scalp-recorded event-related potential (ERP) represent cingulate-mediated functions of performance monitoring that may be modulated by mindfulness meditation. We utilized a flanker task, an experimental design, and a brief mindfulness intervention in a sample of 55 healthy non-meditators (n = 28 randomly assigned to the mindfulness group and n = 27 randomly assigned to the control group) to examine autonomic nervous system functions as measured by blood pressure and indices of cognitive control as measured by response times, error rates, post-error slowing, and the ERN and Pe components of the ERP. Systolic blood pressure significantly differentiated groups following the mindfulness intervention and following the flanker task. There were non-significant differences between the mindfulness and control groups for response times, post-error slowing, and error rates on the flanker task. Amplitude and latency of the ERN did not differ between groups; however, amplitude of the Pe was significantly smaller in individuals in the mindfulness group than in the control group. Findings suggest that a brief mindfulness intervention is associated with reduced autonomic arousal and decreased amplitude of the Pe, an ERP associated with error awareness, attention, and motivational salience, but does not alter amplitude of the ERN or behavioral performance. Implications for brief mindfulness interventions and state vs. trait affect theories of the ERN are discussed. Future research examining graded levels of mindfulness and tracking error awareness will clarify the relationship between mindfulness and performance monitoring. PMID:23847491

  15. Assessment of Relative Accuracy of AHN-2 Laser Scanning Data Using Planar Features

    PubMed Central

    van der Sande, Corné; Soudarissanane, Sylvie; Khoshelham, Kourosh

    2010-01-01

    AHN-2 is the second part of the Actueel Hoogtebestand Nederland project, which concerns the acquisition of high-resolution altimetry data over the entire Netherlands using airborne laser scanning. The accuracy assessment of laser altimetry data usually relies on comparing corresponding tie elements, often points or lines, in the overlapping strips. This paper proposes a new approach to strip adjustment and accuracy assessment of AHN-2 data by using planar features. In the proposed approach a transformation is estimated between two overlapping strips by minimizing the distances between points in one strip and their corresponding planes in the other. The planes and the corresponding points are extracted in an automated segmentation process. The point-to-plane distances are used as observables in an estimation model, whereby the parameters of a transformation between the two strips and their associated quality measures are estimated. We demonstrate the performance of the method for the accuracy assessment of the AHN-2 dataset over Zeeland province of The Netherlands. The results show vertical offsets of up to 4 cm between the overlapping strips, and horizontal offsets ranging from 2 cm to 34 cm. PMID:22163650
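
    The point-to-plane estimation step can be sketched as a least-squares problem. The following Python snippet is a translation-only simplification (the paper estimates a fuller transformation between strips); planes are given as unit normals n and offsets d with n·x = d, and we solve for the shift t that minimizes the point-to-plane distances:

```python
import numpy as np

def estimate_translation(points, normals, ds):
    """Least-squares translation t minimizing the sum of squared
    point-to-plane distances n_i . (p_i + t) - d_i."""
    points = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float)
    ds = np.asarray(ds, dtype=float)
    # Normal equations: (sum n n^T) t = sum (d - n.p) n
    A = normals.T @ normals
    b = normals.T @ (ds - np.einsum('ij,ij->i', normals, points))
    return np.linalg.solve(A, b)

# Synthetic check: generate points lying on the planes once a known
# offset t_true is added, then recover that offset.
t_true = np.array([0.10, -0.05, 0.04])  # e.g. strip offsets in metres
normals = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [0.6, 0.8, 0.0]])
ds = np.array([1.0, 2.0, 3.0, 4.0])
points = np.array([(d - n @ t_true) / (n @ n) * n
                   for n, d in zip(normals, ds)])
t_est = estimate_translation(points, normals, ds)
```

    At least three planes with linearly independent normals are needed for the 3×3 system to be solvable, which is why purely flat terrain constrains the vertical offset well but the horizontal offsets poorly.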

  17. Impaired rapid error monitoring but intact error signaling following rostral anterior cingulate cortex lesions in humans

    PubMed Central

    Maier, Martin E.; Di Gregorio, Francesco; Muricchio, Teresa; Di Pellegrino, Giuseppe

    2015-01-01

    Detecting one’s own errors and appropriately correcting behavior are crucial for efficient goal-directed performance. A correlate of rapid evaluation of behavioral outcomes is the error-related negativity (Ne/ERN) which emerges at the time of the erroneous response over frontal brain areas. However, whether the error monitoring system’s ability to distinguish between errors and correct responses at this early time point is a necessary precondition for the subsequent emergence of error awareness remains unclear. The present study investigated this question using error-related brain activity and vocal error signaling responses in seven human patients with lesions in the rostral anterior cingulate cortex (rACC) and adjoining ventromedial prefrontal cortex, while they performed a flanker task. The difference between errors and correct responses was severely attenuated in these patients, indicating impaired rapid error monitoring, but they showed no impairment in error signaling. However, impaired rapid error monitoring coincided with a failure to increase response accuracy on trials following errors. These results demonstrate that the error monitoring system’s ability to distinguish between errors and correct responses at the time of the response is crucial for adaptive post-error adjustments, but not a necessary precondition for error awareness. PMID:26136674

  18. Coupling Modified Constitutive Relation Error, Model Reduction and Kalman Filtering Algorithms for Real-Time Parameters Identification

    NASA Astrophysics Data System (ADS)

    Marchand, Basile; Chamoin, Ludovic; Rey, Christian

    2015-11-01

    In this work we propose a new identification strategy based on the coupling between a probabilistic data assimilation method and a deterministic inverse problem approach using the modified Constitutive Relation Error energy functional. The idea is thus to offer efficient identification despite highly corrupted data for time-dependent systems. In order to perform real-time identification, the modified Constitutive Relation Error is here associated with a model reduction method based on Proper Generalized Decomposition. The proposed strategy is applied to two thermal problems with identification of time-dependent boundary conditions or material parameters.

  19. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    SciTech Connect

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.

    2015-04-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors, in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.

  20. Comparison of Diagnostic Accuracy of Clinical Measures of Breast Cancer–Related Lymphedema: Area Under the Curve

    PubMed Central

    Smoot, Betty J.; Wong, Josephine F.; Dodd, Marylin J.

    2013-01-01

    Objective To compare diagnostic accuracy of measures of breast cancer–related lymphedema (BCRL). Design Cross-sectional design comparing clinical measures with the criterion standard of previous diagnosis of BCRL. Setting University of California San Francisco Translational Science Clinical Research Center. Participants Women older than 18 years and more than 6 months posttreatment for breast cancer (n=141; 70 with BCRL, 71 without BCRL). Interventions Not applicable. Main Outcome Measures Sensitivity, specificity, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were used to evaluate accuracy. Results A total of 141 women were categorized as having (n=70) or not having (n=71) BCRL based on past diagnosis by a health care provider, which was used as the reference standard. Analyses of ROC curves for the continuous outcomes yielded AUCs of .68 to .88 (P<.001); of the physical measures, bioimpedance spectroscopy yielded the highest accuracy with an AUC of .88 (95% confidence interval, .80–.96) for women whose dominant arm was the affected arm. The lowest accuracy was found using the 2-cm diagnostic cutoff score to identify previously diagnosed BCRL (AUC, .54–.65). Conclusions Our findings support the use of bioimpedance spectroscopy in the assessment of existing BCRL. Refining diagnostic cutoff values may improve accuracy of diagnosis and warrant further investigation. PMID:21440706
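
    The AUC of a continuous measure can be computed directly as a rank statistic: it equals the probability that a randomly chosen affected case scores higher than a randomly chosen unaffected one (the Mann-Whitney interpretation). A minimal Python sketch, with invented scores rather than the study's data:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), counting ties as one half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical measurements for affected vs unaffected arms.
auc = auc_mann_whitney([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])  # 7/9 here
```

    An AUC of .5 means the measure discriminates no better than chance, which is why the 2-cm cutoff's AUC of .54–.65 indicates low accuracy.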

  1. The theoretical accuracy of Runge-Kutta time discretizations for the initial boundary value problem: A careful study of the boundary error

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun

    1993-01-01

    The conventional method of imposing time-dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant-coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle; (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third-order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.
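
    The notion of formal order can be checked empirically by halving the step size and comparing errors. The Python sketch below is illustrative only: it exercises classical RK4 on a linear ODE without boundaries, so it shows the nominal fourth-order convergence rather than reproducing the boundary-induced order reduction analyzed in the paper.

```python
import numpy as np

def rk4_solve(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n classical RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# y' = -y, y(0) = 1; the exact solution at t = 1 is exp(-1).
f = lambda t, y: -y
err_h = abs(rk4_solve(f, 1.0, 0.0, 1.0, 20) - np.exp(-1.0))
err_h2 = abs(rk4_solve(f, 1.0, 0.0, 1.0, 40) - np.exp(-1.0))
observed_order = np.log2(err_h / err_h2)  # close to 4 for RK4
```

    The same halving test applied to a semi-discretized initial boundary value problem with naively imposed time-dependent boundary data would reveal the drop to second-order global accuracy that the paper analyzes.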

  2. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings; on the other hand, inpatient errors are more severe than outpatient errors. PMID:26649954

  3. On the validity of 3D polymer gel dosimetry: III. MRI-related error sources.

    PubMed

    Vandecasteele, Jan; De Deene, Yves

    2013-01-01

    In MRI (PAGAT) polymer gel dosimetry, there exists some controversy on the validity of 3D dose verifications of clinical treatments. The relative contribution of important sources of uncertainty in MR scanning to the overall accuracy and precision of 3D MRI polymer gel dosimetry is quantified in this study. The performance in terms of signal-to-noise and imaging artefacts was evaluated on three different MR scanners (two 1.5 T and a 3 T scanner). These include: (1) B₀-field inhomogeneity, (2) B₁-field inhomogeneity, (3) dielectric effects (losses and standing waves) and (4) temperature inhomogeneity during scanning. B₀-field inhomogeneities that amount to maximum 5 ppm result in dose deviations of up to 4.3% and deformations of up to 5 pixels. Compensation methods are proposed. B₁-field inhomogeneities were found to induce R₂ variations in large anthropomorphic phantoms both at 1.5 and 3 T. At 1.5 T these effects are mainly caused by the coil geometry resulting in dose deviations of up to 25%. After the correction of the R₂ maps using a heuristic flip angle-R₂ relation, these dose deviations are reduced to 2.4%. At 3 T, the dielectric properties of the gel phantoms are shown to strongly influence B₁-field homogeneity, hence R₂ homogeneity, especially of large anthropomorphic phantoms. The low electrical conductivity of polymer gel dosimeters induces standing wave patterns resulting in dose deviations up to 50%. Increasing the conductivity of the gel by adding NaCl reduces the dose deviation to 25% after which the post-processing is successful in reducing the remaining inhomogeneities caused by the coil geometry to within 2.4%. The measurements are supported by computational modelling of the B₁-field. Finally, temperature fluctuations of 1 °C frequently encountered in clinical MRI scanners result in dose deviations up to 15%. It is illustrated that with adequate temperature stabilization, the dose uncertainty is reduced to within 2.58%. PMID

  5. Accuracy and sources of error of out-of field dose calculations by a commercial treatment planning system for intensity-modulated radiation therapy treatments.

    PubMed

    Huang, Jessie Y; Followill, David S; Wang, Xin A; Kry, Stephen F

    2013-01-01

    Although treatment planning systems are generally thought to have poor accuracy for out-of-field dose calculations, little work has been done to quantify this dose calculation inaccuracy for modern treatment techniques, such as intensity-modulated radiation therapy (IMRT), or to understand its sources. The aim of this work is to evaluate the accuracy of out-of-field dose calculations by a commercial treatment planning system (TPS), Pinnacle3 v.9.0, for IMRT treatment plans. Three IMRT plans were delivered to anthropomorphic phantoms, and out-of-field doses were measured using thermoluminescent detectors (TLDs). The TLD-measured dose was then compared to the TPS-calculated dose to quantify the accuracy of TPS calculations at various distances from the field edge and at out-of-field anatomical locations of interest (i.e., radiosensitive organs). The individual components of out-of-field dose (patient scatter, collimator scatter, and head leakage) were also calculated in Pinnacle and compared to Monte Carlo simulations for a 10 × 10 cm2 field. Our results show that the treatment planning system generally underestimated the out-of-field dose and that this underestimation worsened (accuracy decreased) with increasing distance from the field edge. For the three IMRT treatment plans investigated, the TPS underestimated the dose by an average of 50%. Our results also showed that collimator scatter was underestimated by the TPS near the treatment field, while all components of out-of-field dose were severely underestimated at greater distances from the field edge. This study highlights the limitations of commercial treatment planning systems in calculating out-of-field dose and provides data about the level of accuracy, or rather inaccuracy, that can be expected for modern IMRT treatments. Based on our results, use of the TPS-reported dose could lead to an underestimation of secondary cancer induction risk, as well as to poor clinical decision-making.

  6. The Relation between Content and Structure in Language Production: An Analysis of Speech Errors in Semantic Dementia

    ERIC Educational Resources Information Center

    Meteyard, Lotte; Patterson, Karalyn

    2009-01-01

    In order to explore the impact of a degraded semantic system on the structure of language production, we analysed transcripts from autobiographical memory interviews to identify naturally-occurring speech errors by eight patients with semantic dementia (SD) and eight age-matched normal speakers. Relative to controls, patients were significantly…

  7. ERN and the Placebo: A Misattribution Approach to Studying the Arousal Properties of the Error-Related Negativity

    ERIC Educational Resources Information Center

    Inzlicht, Michael; Al-Khindi, Timour

    2012-01-01

    Performance monitoring in the anterior cingulate cortex (ACC) has largely been viewed as a cognitive, computational process devoid of emotion. A growing body of research, however, suggests that performance is moderated by motivational engagement and that a signal generated by the ACC, the error-related negativity (ERN), may partially reflect a…

  8. A highly efficient error analysis program for the evaluation of spacecraft tests of general relativity with application to solar probes

    NASA Technical Reports Server (NTRS)

    Anderson, J. D.; Lau, E. K.; Georgevic, R. M.

    1973-01-01

    A computer program is described which can be used to study the feasibility of conducting relativity experiments on a wide range of hypothetical space missions, and a few applications are presented for solar probes which approach the Sun within 0.25 to 0.35 AU. It is assumed that radio ranging data are available from these spacecraft, and that accuracies on the order of 15 meters can be achieved. This is compatible with current accuracies of ranging to Mariner spacecraft. At this level of accuracy, the range data are sensitive to a number of effects, and for this reason it has been necessary to include a total of up to 23 parameters in the feasibility studies, even though there are only two parameters of real interest in the relativity experiments.

  9. Individual Differences in Working Memory Capacity Predict Action Monitoring and the Error-Related Negativity

    ERIC Educational Resources Information Center

    Miller, A. Eve; Watson, Jason M.; Strayer, David L.

    2012-01-01

    Neuroscience suggests that the anterior cingulate cortex (ACC) is responsible for conflict monitoring and the detection of errors in cognitive tasks, thereby contributing to the implementation of attentional control. Though individual differences in frontally mediated goal maintenance have clearly been shown to influence outward behavior in…

  10. Children's Perseverative Appearance-Reality Errors Are Related to Emerging Language Skills.

    ERIC Educational Resources Information Center

    Deak, Gedeon O.; Ray, Shanna D.; Brenneman, Kimberly

    2003-01-01

    Two experiments examined the communicative bases of preschoolers' object appearance-reality (AR) errors. Found that AR performance correlated positively with performance on a control test with the same discourse structure but nondeceptive stimuli, and on a naming test. Overall findings indicated that the discourse structure of AR tests elicits a…

  11. Can students evaluate their understanding of cause-and-effect relations? The effects of diagram completion on monitoring accuracy.

    PubMed

    van Loon, Mariëtte H; de Bruin, Anique B H; van Gog, Tamara; van Merriënboer, Jeroen J G; Dunlosky, John

    2014-09-01

    For effective self-regulated study of expository texts, it is crucial that learners can accurately monitor their understanding of cause-and-effect relations. This study aimed to improve adolescents' monitoring accuracy using a diagram completion task. Participants read six texts, predicted performance, selected texts for restudy, and were tested for comprehension. Three groups were compared, in which learners either completed causal diagrams immediately after reading, completed them after a delay, or received no-diagram control instructions. Accuracy of predictions of performance was highest for learning of causal relations following delayed diagram completion. Completing delayed diagrams focused learners specifically on their learning of causal relations, so this task did not improve monitoring of learning of factual information. When selecting texts for restudy, the participants followed their predictions of performance to the same degree, regardless of monitoring accuracy. Fine-grained analyses also showed that, when completing delayed diagrams, learners based judgments on diagnostic cues that indicated actual understanding of connections between events in the text. Most important, delayed diagram completion can improve adolescents' ability to monitor their learning of cause-and-effect relations. PMID:24977937

  12. Motoneuron axon pathfinding errors in zebrafish: differential effects related to concentration and timing of nicotine exposure.

    PubMed

    Menelaou, Evdokia; Paul, Latoya T; Perera, Surangi N; Svoboda, Kurt R

    2015-04-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15-30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors can occur independently of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, with the dorsally projecting SMN axons primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsally projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the earlier-born primary motoneurons (PMNs), we performed dual-labeling studies in which both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the level of nicotine and the developmental exposure window. PMID:25668718

  13. Plasma components affect accuracy of circulating cancer-related microRNA quantitation.

    PubMed

    Kim, Dong-Ja; Linnstaedt, Sarah; Palma, Jaime; Park, Joon Cheol; Ntrivalas, Evangelos; Kwak-Kim, Joanne Y H; Gilman-Sachs, Alice; Beaman, Kenneth; Hastings, Michelle L; Martin, Jeffrey N; Duelli, Dominik M

    2012-01-01

    Circulating microRNAs (miRNAs) have emerged as candidate biomarkers of various diseases and conditions including malignancy and pregnancy. This approach requires sensitive and accurate quantitation of miRNA concentrations in body fluids. Herein we report that enzyme-based miRNA quantitation, which is currently the mainstream approach for identifying differences in miRNA abundance among samples, is skewed by endogenous serum factors that co-purify with miRNAs and by anticoagulant agents used during collection. Of importance, different miRNAs were affected to varying extents among patient samples. By developing measures to overcome these interfering activities, we increased the accuracy and improved the sensitivity of miRNA detection up to 30-fold. Overall, the present study outlines key factors that prevent accurate miRNA quantitation in body fluids and provides approaches that enable faithful quantitation of miRNA abundance in body fluids. PMID:22154918

  14. Error-related brain activity in youth and young adults before and after treatment for generalized or social anxiety disorder.

    PubMed

    Kujawa, Autumn; Weinberg, Anna; Bunford, Nora; Fitzgerald, Kate D; Hanna, Gregory L; Monk, Christopher S; Kennedy, Amy E; Klumpp, Heide; Hajcak, Greg; Phan, K Luan

    2016-11-01

    Increased error monitoring, as measured by the error-related negativity (ERN), has been shown to persist after treatment for obsessive-compulsive disorder in youth and adults; however, no previous studies have examined the ERN following treatment for related anxiety disorders. We used a flanker task to elicit the ERN in 28 youth and young adults (8-26 years old) with primary diagnoses of generalized anxiety disorder (GAD) or social anxiety disorder (SAD) and 35 healthy controls. Patients were assessed before and after treatment with cognitive-behavioral therapy (CBT) or selective serotonin reuptake inhibitors (SSRIs), and healthy controls were assessed at a comparable interval. The ERN increased across assessments in the combined sample. Patients with SAD exhibited an enhanced ERN relative to healthy controls prior to and following treatment, even when analyses were limited to SAD patients who responded to treatment. Patients with GAD did not significantly differ from healthy controls at either assessment. Results provide preliminary evidence that enhanced error monitoring persists following treatment for SAD in youth and young adults, and support conceptualizations of increased error monitoring as a trait-like vulnerability that may contribute to risk for recurrence and impaired functioning later in life. Future work is needed to further evaluate the ERN in GAD across development, including whether an enhanced ERN develops in adulthood or is most apparent when worries focus on internal sources of threat. PMID:27495356

  15. An Error-Related Negativity Potential Investigation of Response Monitoring Function in Individuals with Internet Addiction Disorder

    PubMed Central

    Zhou, Zhenhe; Li, Cui; Zhu, Hongmei

    2013-01-01

    Internet addiction disorder (IAD) is an impulse-control disorder, or is at least related to impulse-control disorders. Deficits in executive functioning, including response monitoring, have been proposed as a hallmark feature of impulse-control disorders. The error-related negativity (ERN) reflects an individual’s ability to monitor behavior. Since IAD belongs to the compulsive-impulsive spectrum of disorders, it should, theoretically, present the response-monitoring functional deficits characteristic of disorders such as substance dependence, ADHD, or alcohol abuse when tested with an Eriksen flanker task. To date, no studies of response-monitoring functional deficits in IAD have been reported. The purpose of the present study was to examine whether IAD displays response-monitoring functional deficits in a modified Eriksen flanker task. Twenty-three subjects were recruited as the IAD group, and twenty-three healthy persons matched for age, gender, and education were recruited as the control group. All participants completed the modified Eriksen flanker task while event-related potentials were recorded. The IAD group made more total errors than controls (p < 0.01), and reaction times for error responses were shorter in the IAD group than in controls (p < 0.01). The mean ERN amplitudes for error responses at frontal and central electrode sites were reduced in the IAD group compared with the control group (all p < 0.01). These results reveal that IAD displays response-monitoring functional deficits and shares the ERN characteristics of compulsive-impulsive spectrum disorders. PMID:24093009

  16. Aerosol size distribution retrievals from sunphotometer measurements: Theoretical evaluation of errors due to circumsolar and related effects

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Gueymard, Christian A.

    2012-05-01

    The uncertainty in particle size distribution retrievals is analyzed theoretically and numerically when using aerosol optical depth (AOD) data affected by three distinct error-inducing effects. Specifically, circumsolar radiation (CS), optical mass (OM), and solar disk brightness distribution (BD) effects are taken into consideration here. Because of these effects, the theoretical AOD is affected by an error, ∂AOD, that consequently translates into errors in the determined (apparent) particle size distribution (PSD). Through comparison of the apparent and the true size distributions, the relative error, ∂PSD, is calculated here as a function of particle radius for various instrument fields of view (apertures) and solar zenith angles. It is shown that, in general, the CS effect overestimates the number of submicron-sized particles, and that the significance of this effect increases with the aperture. In the case of maritime aerosols, the CS effect may also lead to an underestimation of the number concentration of large micron-sized particles. The BD and OM effects become important, and possibly predominant, when AOD is low. Assuming large particles dominate in the atmosphere, the BD effect tends to underestimate the concentration of the smallest aerosol particles. In general, the PSD(apparent)/PSD(true) ratio is affected by the CS effect equally over all particle sizes. The relative errors in PSD are typically smaller than 40-60%, but can in exceptional cases exceed 100%, which means that the apparent PSD may then be twice as large as the true PSD. This extreme situation typically occurs with maritime aerosols under elevated humidity conditions. Recent instruments tend to be designed with smaller apertures than ever before, which lowers the CS-induced errors to an acceptable level in most cases.

  17. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    PubMed

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190
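    The signal-to-noise figures above follow directly from the ratio of step length to median positional error; a trivial check using the values reported in the abstract:

```python
def step_snr(step_length_m: float, median_error_m: float) -> float:
    """Signal-to-noise ratio: true step length over median GPS error."""
    return step_length_m / median_error_m

# Smallest ground-truth step (0.70 m) with a ~0.23 m median error -> ~3:1
print(round(step_snr(0.70, 0.23), 1))  # 3.0
# A 3 m step with a 0.25 m median error -> >10:1
print(round(step_snr(3.0, 0.25), 1))   # 12.0
```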

  19. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691

  20. Accuracy of q-space related parameters in MRI: simulations and phantom measurements.

    PubMed

    Lätt, Jimmy; Nilsson, Markus; Malmborg, Carin; Rosquist, Hannah; Wirestam, Ronnie; Ståhlberg, Freddy; Topgaard, Daniel; Brockstedt, Sara

    2007-11-01

    The accuracy of q-space measurements was evaluated on a 3.0-T clinical magnetic resonance imaging (MRI) scanner, as compared with a 4.7-T nuclear magnetic resonance (NMR) spectrometer. Measurements were performed using a stimulated-echo pulse sequence on n-decane as well as on polyethylene glycol (PEG) mixed with different concentrations of water, in order to obtain bi-exponential signal decay curves. The diffusion coefficients as well as the modelled diffusional kurtosis K(fit) were obtained from the signal decay curve, while the full-width at half-maximum (FWHM) and the diffusional kurtosis K were obtained from the displacement distribution. Simulations of restricted diffusion, under conditions similar to those obtainable with a clinical MRI scanner, were carried out assuming various degrees of violation of the short gradient pulse (SGP) condition and of the long diffusion time limit. The results indicated that an MRI system cannot be used for quantification of structural sizes below about 10 μm by means of the FWHM, since the parameter underestimates the confinements owing to violation of the SGP condition. However, the FWHM can still be used as an important contrast parameter. The obtained kurtosis values were lower than expected from theory, and the results showed that care must be taken when interpreting a kurtosis estimate deviating from zero. PMID:18041259
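    For free (Gaussian) diffusion, the FWHM of the displacement distribution obtained from a q-space experiment follows directly from the diffusion coefficient and the diffusion time; a minimal sketch of this standard relation (the numerical values are illustrative, not taken from the phantom measurements):

```python
import math

def free_diffusion_fwhm(D_m2_s: float, delta_s: float) -> float:
    """FWHM (m) of the Gaussian displacement distribution for free
    diffusion: variance = 2*D*delta, FWHM = 2*sqrt(2 ln 2)*sigma."""
    sigma = math.sqrt(2.0 * D_m2_s * delta_s)
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

# Free water (D ~ 2.0e-9 m^2/s) over a 100 ms diffusion time:
print(round(free_diffusion_fwhm(2.0e-9, 0.1) * 1e6, 1))  # 47.1 (micrometres)
```

    Restriction on the ~10 μm scale shows up as an FWHM smaller than this free-diffusion value, which is what the SGP-condition violation discussed above further underestimates.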

  1. Investigation of Reversal Errors in Reading in Normal and Poor Readers as Related to Critical Factors in Reading Materials. Final Report.

    ERIC Educational Resources Information Center

    Liberman, Isabelle Y.; Shankweiler, Donald

    Reversals in poor and normal second-grade readers were studied in relation to their whole phonological error pattern in reading real words and nonsense syllables. Error categories included sequence and orientation reversals, other consonants, vowels, and total error. Reversals occurred in quantity only in poor readers, with large individual…

  2. Mood, personality, and self-monitoring: negative affect and emotionality in relation to frontal lobe mechanisms of error monitoring.

    PubMed

    Luu, P; Collins, P; Tucker, D M

    2000-03-01

    A fundamental question in frontal lobe function is how motivational and emotional parameters of behavior apply to executive processes. Recent advances in mood and personality research and in the technology and methodology of brain research provide opportunities to address this question empirically. Using event-related potentials to track error monitoring in real time, the authors demonstrated that variability in the amplitude of the error-related negativity (ERN) depends on mood and personality variables. College students who were high on negative affect (NA) and negative emotionality (NEM) displayed larger ERN amplitudes early in the experiment than participants who were low on these dimensions. As the high-NA and -NEM participants disengaged from the task, the amplitude of the ERN decreased. These results reveal that affective distress and associated behavioral patterns are closely related to frontal lobe executive functions. PMID:10756486

  3. Evaluation of measurement errors of temperature and relative humidity from HOBO data logger under different conditions of exposure to solar radiation.

    PubMed

    da Cunha, Antonio Ribeiro

    2015-05-01

    This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained through the use of a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for obtaining such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a multi-plate plastic prototype); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate (gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained. PMID:25855203
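    The statistical indicators named above have standard definitions; a sketch of how they are commonly computed (the formulas are the textbook definitions, and the temperature values are invented for illustration, not the paper's data):

```python
import numpy as np

def index_of_agreement(pred, obs):
    """Willmott's index of agreement d (0 to 1; 1 = perfect match)."""
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    obar = obs.mean()
    denom = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - np.sum((pred - obs) ** 2) / denom

def mean_absolute_error(pred, obs):
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(obs, float))))

def max_absolute_error(pred, obs):
    return float(np.max(np.abs(np.asarray(pred, float) - np.asarray(obs, float))))

# Hypothetical example: a logger in full sunlight reading warm against
# a shielded reference (temperatures in degrees Celsius):
ref    = [20.1, 22.4, 25.0, 27.3, 24.8]
logger = [20.3, 23.1, 26.2, 28.9, 25.1]
print(round(index_of_agreement(logger, ref), 2))   # 0.97
print(round(mean_absolute_error(logger, ref), 2))  # 0.8
```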

  4. Unintentional Pharmaceutical-Related Medication Errors Caused by Laypersons Reported to the Toxicological Information Centre in the Czech Republic.

    PubMed

    Urban, Michal; Leššo, Roman; Pelclová, Daniela

    2016-07-01

    The purpose of the article was to study unintentional pharmaceutical-related poisonings committed by laypersons that were reported to the Toxicological Information Centre in the Czech Republic. Identifying the frequency, sources, reasons and consequences of medication errors committed by laypersons could help to reduce the overall rate of medication errors. Records of medication error enquiries from 2013 to 2014 were extracted from the electronic database, and the following variables were reviewed: drug class, dosage form, dose, age of the subject, cause of the error, time interval from ingestion to the call, symptoms, prognosis at the time of the call and first aid recommended. Of the calls, 1354 met the inclusion criteria. Among them, central nervous system-affecting drugs (23.6%), respiratory drugs (18.5%) and alimentary drugs (16.2%) were the most common drug classes involved in the medication errors. The highest proportion of the patients was in the youngest age subgroup, 0-5 years old (46%). The reasons for the medication errors included misinterpretation of the leaflet or a mistaken dose (53.6%), mixing up medications (19.2%), attempting to reduce pain with repeated doses (6.4%), erroneous routes of administration (2.2%), psychiatric/elderly patients (2.7%), others (9.0%) or unknown (6.9%). The high proportion of children among the patients may be due to the fact that children's dosages for many drugs vary by weight, and many medications come in a variety of concentrations. Most overdoses could be prevented by safer labelling, proper cap closure systems for liquid products and medication reconciliation by both physicians and pharmacists. PMID:26990237

  5. NASA hydrogen maser accuracy and stability in relation to world standards

    NASA Technical Reports Server (NTRS)

    Peters, H. E.; Percival, D. B.

    1973-01-01

    Frequency comparisons were made among five NASA hydrogen masers in 1969 and again in 1972 to a precision of one part in 10^13. Frequency comparisons were also made between these masers and the cesium-beam ensembles of several international standards laboratories. The hydrogen maser frequency stabilities relative to IAT were comparable to the frequency stabilities of individual time scales with respect to IAT. The relative frequency variations among the NASA masers, measured after the three-year interval, were 2 ± 2 parts in 10^13. Thus, time scales based on hydrogen masers would have excellent long-term stability and uniformity.

  6. Accuracy of radionuclide ventriculography for estimation of left ventricular volume changes and end-systolic pressure-volume relations

    SciTech Connect

    Kronenberg, M.W.; Parrish, M.D.; Jenkins, D.W. Jr.; Sandler, M.P.; Friesinger, G.C.

    1985-11-01

    Estimation of left ventricular end-systolic pressure-volume relations depends on the accurate measurement of small changes in ventricular volume. To study the accuracy of radionuclide ventriculography, paired radionuclide and contrast ventriculograms were obtained in seven dogs during a control period and when blood pressure was increased in increments of 30 mm Hg by phenylephrine infusion. The heart rate was held constant by atropine infusion. The correlation between radionuclide and contrast ventriculography was excellent. The systolic pressure-volume relations were linear for both radionuclide and contrast ventriculography. The mean slope for radionuclide ventriculography was lower than the mean slope for contrast ventriculography; however, the slopes correlated well. The radionuclide-contrast volume relation was compared using background subtraction, attenuation correction, both, or neither. By each method, radionuclide ventriculography was valid for measuring small changes in left ventricular volume and for defining end-systolic pressure-volume relations.

  7. Nonlinear Advection Algorithms Applied to Inter-related Tracers: Errors and Implications for Modeling Aerosol-Cloud Interactions

    SciTech Connect

    Ovtchinnikov, Mikhail; Easter, Richard C.

    2009-02-01

    Monotonicity constraints and gradient-preserving flux corrections employed by many advection algorithms used in atmospheric models make these algorithms non-linear. Consequently, any relations among model variables transported separately are not necessarily preserved in such models. These errors cannot be revealed by traditional algorithm testing based on advection of a single tracer. A new type of test is developed and conducted to evaluate the preservation of a sum of several number mixing ratios advected independently of each other, as is the case, for example, in models using a bin or sectional representation of the aerosol or cloud particle size distribution. The tests show that when three tracers are advected in a 1D uniform constant-velocity flow, local errors in the sum can be on the order of 10%. When cloud-like interactions are allowed among the tracers, errors in the total sum of the three mixing ratios can reach up to 30%. Several approaches to eliminate the error are suggested, all based on advecting the sum as a separate variable and then normalizing the mixing ratios of the individual tracers to match the total sum. A simple scalar normalization preserves the total number mixing ratio and the positive definiteness of the variables, but the monotonicity constraint for individual tracers is no longer maintained. More involved flux normalization procedures are developed for flux-based advection algorithms to maintain monotonicity for individual scalars and their sum.
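    The failure mode and the scalar-normalization fix can be reproduced with a toy 1D scheme; a minimal sketch, in which the minmod-limited upwind scheme, grid size, Courant number and initial data are all illustrative choices, not the configuration used in the study:

```python
import numpy as np

def advect_minmod(q, c):
    """One step of 1D periodic advection at Courant number c (0 < c <= 1),
    second-order upwind with a minmod slope limiter -- nonlinear in q."""
    qm, qp = np.roll(q, 1), np.roll(q, -1)
    same_sign = (qp - q) * (q - qm) > 0
    slope = np.where(same_sign,
                     np.sign(q - qm) * np.minimum(np.abs(qp - q), np.abs(q - qm)),
                     0.0)
    flux = q + 0.5 * (1.0 - c) * slope       # flux through each cell's right face
    return q - c * (flux - np.roll(flux, 1))

rng = np.random.default_rng(1)
n, c = 64, 0.4
tracers = [rng.uniform(0.1, 1.0, n) for _ in range(3)]
total = sum(tracers)

for _ in range(50):
    total = advect_minmod(total, c)          # advect the sum as its own variable
    tracers = [advect_minmod(q, c) for q in tracers]
    implied = sum(tracers)                   # drifts from `total`: limiter is nonlinear
    # scalar normalization: rescale each tracer so the sum matches `total`
    scale = np.where(implied > 0, total / implied, 1.0)
    tracers = [q * scale for q in tracers]

print(np.allclose(sum(tracers), total))      # True
```

    As the abstract notes, this simple scalar normalization keeps the total and positivity intact but can violate the monotonicity constraint for the individual tracers, which is what motivates the more involved flux normalization procedures.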

  8. Localising the auditory N1m with event-related beamformers: localisation accuracy following bilateral and unilateral stimulation

    PubMed Central

    Gascoyne, Lauren; Furlong, Paul L.; Hillebrand, Arjan; Worthen, Siân F.; Witton, Caroline

    2016-01-01

    The auditory evoked N1m-P2m response complex presents a challenging case for MEG source-modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation; and whole-head beamformer analysis compared to a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed equally well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in localisation of auditory evoked responses. PMID:27545435

  10. The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals

    PubMed Central

    Westbrook, Johanna I; Baysari, Melissa T; Li, Ling; Burke, Rosemary; Richardson, Katrina L; Day, Richard O

    2013-01-01

    Objectives To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS. System-related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated. Results 42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) as wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)). Conclusions System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk. PMID:23721982

  11. Approximating relational observables by absolute quantities: a quantum accuracy-size trade-off

    NASA Astrophysics Data System (ADS)

    Miyadera, Takayuki; Loveridge, Leon; Busch, Paul

    2016-05-01

    The notion that any physical quantity is defined and measured relative to a reference frame is traditionally not explicitly reflected in the theoretical description of physical experiments where, instead, the relevant observables are typically represented as ‘absolute’ quantities. However, the emergence of the resource theory of quantum reference frames as a new branch of quantum information science in recent years has highlighted the need to identify the physical conditions under which a quantum system can serve as a good reference. Here we investigate the conditions under which, in quantum theory, an account in terms of absolute quantities can provide a good approximation of relative quantities. We find that this requires the reference system to be large in a suitable sense.

  12. Pushing the relative mass accuracy limit of ISOLTRAP on exotic nuclei below 10 ppb

    NASA Astrophysics Data System (ADS)

    Blaum, K.; Beck, D.; Bollen, G.; Herfurth, F.; Kellerbauer, A.; Kluge, H.-J.; Moore, R. B.; Sauvan, E.; Scheidenberger, C.; Schwarz, S.; Schweikhard, L.

    2003-05-01

    The Penning trap mass spectrometer ISOLTRAP plays a leading role in mass spectrometry of short-lived nuclides. The recent installation of a radio-frequency quadrupole trap and a carbon cluster ion source allowed for the first time mass measurements on exotic nuclei with a relative uncertainty of δm/m ≈ 1×10^-8. The status of ISOLTRAP mass spectrometry and recent highlights are presented.

  13. Information from later lactations improves accuracy of genomic predictions of fertility-related disorders in Norwegian Red.

    PubMed

    Haugaard, Katrine; Svendsen, Morten; Heringstad, Bjørg

    2015-07-01

    Our aim was to investigate whether including information from later lactations improves accuracy of genomic breeding values for 4 fertility-related disorders: cystic ovaries, retained placenta, metritis, and silent heat. Data consisted of health records from 6,015,245 lactations from 2,480,976 Norwegian Red cows, recorded from 1979 to 2012. These were daughters of 3,675 artificial insemination bulls. The mean frequency of these disorders for cows in lactation 1 to 5 ranged from 0.6 to 2.4% for cystic ovaries, 1.0 to 1.5% for metritis, 1.9 to 4.1% for retained placenta, and 2.4 to 3.8% for silent heat. Genomic information was available for all sires, and the 312 youngest bulls were used for validation. After standard editing of a 25K/54K single nucleotide polymorphism data set that was imputed both ways, a total of 48,249 single nucleotide polymorphism loci were available for genomic predictions. Genomic breeding values were predicted using univariate genomic BLUP, for the first lactation only and for the first 5 lactations; multivariate genomic BLUP with 5 lactations for each disorder was also used for genomic predictions. Correlations between estimated breeding values for the 4 traits in 5 lactations and predicted genomic breeding values were compared. Accuracy ranged from 0.47 to 0.51 for cystic ovaries, 0.50 to 0.74 for retained placenta, 0.21 to 0.47 for metritis, and 0.22 to 0.60 for silent heat. Including later lactations in a multitrait genomic BLUP improved accuracy of genomic estimated breeding values for cystic ovaries, retained placenta, and silent heat, whereas for metritis no obvious advantage in accuracy was found. PMID:25912869

  14. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.

    PubMed

    Weng, Jian; Dong, Shanshan; He, Hongjian; Chen, Feiyan; Peng, Xiaogang

    2015-01-01

    Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors, or diseased populations, and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies of children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created by the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation that included the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group. PMID:26207985

  15. Evaluation of the Quantitative Accuracy of 3D Reconstruction of Edentulous Jaw Models with Jaw Relation Based on Reference Point System Alignment

    PubMed Central

    Li, Weiwei; Yuan, Fusong; Lv, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Objectives To apply contact measurement and reference point system (RPS) alignment techniques to establish a method for 3D reconstruction of edentulous jaw models with centric relation and to quantitatively evaluate its accuracy. Methods Upper and lower edentulous jaw models were clinically prepared, and 10 pairs of resin cylinders of the same size were adhered to the axial surfaces of the upper and lower models. The occlusal bases and the upper and lower jaw models were installed in the centric relation position. A Faro Edge 1.8m was used to directly obtain center points of the base surface of the cylinders (contact method). An Activity 880 dental scanner was used to obtain 3D data of the cylinders, and the center points were fitted (fitting method). Three pairs of center points were used to align the virtual model to centric relation. An observation coordinate system was interactively established. The straight-line distances in X (horizontal left/right), Y (horizontal anterior/posterior), and Z (vertical) between the remaining 7 pairs of center points derived from the contact method and the fitting method were measured and analyzed using a paired t-test. Results The differences of the straight-line distances of the remaining 7 pairs of center points between the two methods were X: 0.074 ± 0.107 mm, Y: 0.168 ± 0.176 mm, and Z: −0.003 ± 0.155 mm. The paired t-tests gave p > 0.05 for X and Z, and p < 0.05 for Y. Conclusion By using contact measurement and the reference point system alignment technique, highly accurate reconstruction of the vertical distance and centric relation of a digital edentulous jaw model can be achieved, which meets the design and manufacturing requirements of complete dentures. The error of the horizontal anterior/posterior jaw relation was relatively large. PMID:25659133
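
    The statistical comparison here reduces to a paired t-test on the per-pair distance differences. A standard-library sketch follows; the seven differences below are invented for illustration (chosen only to roughly match the reported Y-direction mean and SD), not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(diffs):
    # Paired t statistic for per-pair differences (contact minus fitting);
    # stdev is the sample (n-1) estimate, so t = mean / (s / sqrt(n)).
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical per-pair Y-axis differences (mm) for the 7 remaining point
# pairs, roughly reproducing the reported 0.168 ± 0.176 mm.
y_diffs = [0.35, 0.02, 0.21, 0.45, 0.05, 0.10, 0.00]
t_y = paired_t(y_diffs)
```

    With 6 degrees of freedom, |t| above the two-sided 5% critical value of about 2.447 corresponds to p < 0.05, consistent with the significant Y-direction result.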

  16. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  17. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments show a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost-prohibitive (independent, well-defined test points must be collected), but quantitative analysis of relative positional error is feasible.

  18. A high accuracy gyroscope readout test facility for the relativity gyroscope experiment

    NASA Technical Reports Server (NTRS)

    Cabrera, B.; Van Kann, F. J.

    1977-01-01

    An apparatus is under construction for ground-based testing of a gyroscope system to be used in a satellite test of general relativity. The immediate goal is a readout capable of measuring the direction of the gyroscope spin axis to an angular resolution of one arcsecond over a limited range. A combination of SQUID magnetometers and persistent current loops is used to measure the London moment of the spinning superconducting rotor, which is levitated electrostatically. To obtain a trapped-flux signal in the gyroscope sufficiently smaller than the London moment signal, the apparatus makes use of a new magnetic field shielding technique for obtaining large superconductor-shielded regions below 0.1 millionth of a gauss.

  19. Modulation of feedback-related negativity during trial-and-error exploration and encoding of behavioral shifts

    PubMed Central

    Sallet, Jérôme; Camille, Nathalie; Procyk, Emmanuel

    2013-01-01

    The feedback-related negativity (FRN) is a mid-frontal event-related potential (ERP) recorded in various cognitive tasks and associated with the onset of sensory feedback signaling decision outcome. Some properties of the FRN are still debated, notably its sensitivity to positive and negative reward prediction error (RPE)—i.e., the discrepancy between the expectation and the actual occurrence of a particular feedback—and its role in triggering the post-feedback adjustment. In the present study we tested whether the FRN is modulated by both positive and negative RPE. We also tested whether an instruction cue indicating the need for behavioral adjustment elicited the FRN. We asked 12 human subjects to perform a problem-solving task where they had to search by trial and error which of five visual targets, presented on a screen, was associated with a correct feedback. After exploration and discovery of the correct target, subjects could repeat their correct choice until the onset of a visual signal to change (SC) indicative of a new search. Analyses showed that the FRN was modulated by both negative and positive RPE. Finally, we found that the SC elicited an FRN-like potential on the frontal midline electrodes that was not modulated by the probability of that event. Collectively, these results suggest the FRN may reflect a mechanism that evaluates any event (outcome, instruction cue) signaling the need to engage adaptive actions. PMID:24294190

  20. Stability and accuracy of relative scale factor estimates for Superconducting Gravimeters

    NASA Astrophysics Data System (ADS)

    Wziontek, H.; Cordoba, B.; Crossley, D.; Wilmes, H.; Wolf, P.; Serna, J. M.; Warburton, R.

    2012-04-01

    Superconducting gravimeters (SG) are known to be the most sensitive and most stable gravimeters. However, reliably determining the scale factor calibration and its stability with the required precision of better than 0.1% is still an open issue. The relative comparison of temporal gravity variations due to the Earth's tides recorded with other calibrated gravimeters is one method to obtain the SG scale factor. Usually absolute gravimeters (AG) are used for such a comparison, and the stability of the scale factor can be deduced by repeated observations over a limited period, or by comparison with precise tidal models. In recent work it was shown that spring gravimeters may not be stable enough to transfer the calibration between SGs. A promising alternative is to transfer the scale factor with a well-calibrated, moveable SG. To assess the prospects of such an approach, the coherence of records from dual-sphere SGs and from two SGs operated side by side at the stations Bad Homburg and Wettzell (Germany) and other GGP sites is analysed. To determine and remove the instrumental drift, a reference time series from the combination with AG measurements is used. The reproducibility of the scale factor and the achievable precision are investigated for comparison periods of different lengths, and conclusions are drawn regarding the use of AG and the future application of the moveable iGrav™ SG.
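
    In essence, a relative calibration of this kind regresses the drift-corrected record of the instrument under test against a calibrated reference record of the same tidal signal. A minimal least-squares sketch on synthetic data follows (the amplitudes, noise levels, and scale factor are illustrative assumptions, not the actual SG/AG processing chain):

```python
import numpy as np

def relative_scale_factor(reference, candidate):
    # Least-squares fit reference ≈ k * candidate + offset, where `reference`
    # is the calibrated record (e.g. nm/s^2) and `candidate` is the raw
    # output (e.g. volts) of the instrument being calibrated. Instrumental
    # drift is assumed to have been removed already.
    A = np.column_stack([candidate, np.ones_like(candidate)])
    (k, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return k

# Synthetic side-by-side records: an M2-like tide sampled each minute for 3 days.
rng = np.random.default_rng(0)
t = np.arange(0.0, 3 * 86400, 60.0)
tide = 120.0 * np.sin(2 * np.pi * t / 44714.0)   # nm/s^2, M2 period ~12.42 h
true_k = 78.3                                     # hypothetical nm/s^2 per volt
candidate = tide / true_k + rng.normal(0.0, 0.002, t.size)
reference = tide + rng.normal(0.0, 0.5, t.size)
k_est = relative_scale_factor(reference, candidate)
rel_err = abs(k_est - true_k) / true_k
```

    With a few days of clean tidal signal, the relative error of the recovered scale factor falls well below the 0.1% target discussed in the abstract.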

  1. Modeling and calibration of pointing errors with alt-az telescope

    NASA Astrophysics Data System (ADS)

    Huang, Long; Ma, Wenli; Huang, Jinlong

    2016-08-01

    This paper presents a new model for improving the pointing accuracy of a telescope. The Denavit-Hartenberg (D-H) convention was used to perform an error analysis of the telescope's kinematics. A kinematic model was used to relate pointing errors to mechanical errors, and the parameters of the kinematic model were estimated with a statistical model fit using data from two large astronomical telescopes. The model illustrates the geometric errors caused by imprecision in the manufacturing and assembly processes and their effects on the pointing accuracy of the telescope: the kinematic model relates pointing error to axis position when certain geometric errors are assumed to be present in the telescope. In the parameter estimation step, a semi-parametric regression model was introduced to compensate for the remaining nonlinear errors. The experimental results indicate that the proposed semi-parametric regression model eliminates both geometric and nonlinear errors, and that the telescope's pointing accuracy improves significantly after this calibration.
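
    A semi-parametric pointing model of this kind can be sketched as a linear geometric fit plus a kernel smoother on its residuals. The geometric terms and synthetic data below are illustrative assumptions, not the telescopes' actual error model:

```python
import numpy as np

def fit_pointing_model(az, alt, err, bandwidth=0.2):
    # Parametric part: a few classical geometric terms (zero-point offset,
    # azimuth-axis tilt, a tan(alt)-like term), fit by linear least squares.
    X = np.column_stack([np.ones_like(az), np.sin(az), np.cos(az), np.tan(alt)])
    coef, *_ = np.linalg.lstsq(X, err, rcond=None)
    resid = err - X @ coef

    def predict(az_q, alt_q):
        Xq = np.column_stack([np.ones_like(az_q), np.sin(az_q),
                              np.cos(az_q), np.tan(alt_q)])
        # Non-parametric part: Nadaraya-Watson kernel smoothing of the
        # residuals over azimuth captures remaining nonlinear error.
        w = np.exp(-0.5 * ((az_q[:, None] - az[None, :]) / bandwidth) ** 2)
        return Xq @ coef + (w @ resid) / w.sum(axis=1)

    return predict

# Synthetic pointing errors: geometric terms plus a nonlinear ripple that the
# parametric basis cannot represent.
rng = np.random.default_rng(2)
az = np.sort(rng.uniform(0.0, 2.0 * np.pi, 600))
alt = rng.uniform(0.2, 1.2, az.size)
err = 2.0 + 0.5 * np.sin(az) + 0.3 * np.sin(3.0 * az)
predict = fit_pointing_model(az, alt, err)
az_q = np.linspace(1.0, 5.0, 200)
pred = predict(az_q, np.full_like(az_q, 0.7))
true_q = 2.0 + 0.5 * np.sin(az_q) + 0.3 * np.sin(3.0 * az_q)
rms = float(np.sqrt(np.mean((pred - true_q) ** 2)))
```

    The parametric terms alone would leave the 0.3-amplitude ripple uncorrected (an RMS residual of about 0.21); the residual smoother recovers most of it.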

  2. The Influence of Relatives on the Efficiency and Error Rate of Familial Searching

    PubMed Central

    Rohlfs, Rori V.; Murphy, Erin; Song, Yun S.; Slatkin, Montgomery

    2013-01-01

    We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For Y-haplotype sharing first-degree relatives, the Myers protocol has a high probability of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first-degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases. PMID:23967076

  3. Human subthalamic nucleus-medial frontal cortex theta phase coherence is involved in conflict and error related cortical monitoring.

    PubMed

    Zavala, Baltazar; Tan, Huiling; Ashkan, Keyoumars; Foltynie, Thomas; Limousin, Patricia; Zrinzo, Ludvic; Zaghloul, Kareem; Brown, Peter

    2016-08-15

    The medial prefrontal cortex (mPFC) is thought to control the shift from automatic to controlled action selection when conflict is present or when mistakes have been recently committed. Growing evidence suggests that this process involves frequency-specific communication in the theta (4-8 Hz) band between the mPFC and the subthalamic nucleus (STN), which is the main target of deep brain stimulation (DBS) for Parkinson's disease. Key to this hypothesis is the finding that DBS can lead to impulsivity by disrupting the correlation between higher mPFC oscillations and slower reaction times during conflict. In order to test whether theta band coherence between the mPFC and the STN underlies adjustments to conflict and to errors, we simultaneously recorded mPFC and STN electrophysiological activity while DBS patients performed an arrowed flanker task. These recordings revealed higher theta phase coherence between the two sites during the high conflict trials relative to the low conflict trials. These differences were observed soon after conflicting arrows were displayed, but before a response was executed. Furthermore, trials that occurred after an error was committed showed higher phase coherence relative to trials that followed a correct trial, suggesting that mPFC-STN connectivity may also play a role in error-related adjustments in behavior. Interestingly, the phase coherence we observed occurred before increases in theta power, implying that theta phase and power may influence behavior at separate times during cortical monitoring. Finally, we showed that pre-stimulus differences in STN theta power were related to the reaction time on a given trial, which may help adjust behavior based on the probability of observing conflict during a task. PMID:27181763

  4. Characterization and mitigation of relative edge placement errors (rEPE) in full-chip computational lithography

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Gupta, Rachit; Shang, Shumay; Liubich, Vlad; Word, James

    2015-10-01

    Edge placement error (EPE) was a term initially introduced to describe the difference between the predicted pattern contour edge and the design target. Strictly speaking this quantity is not directly measurable in the fab, and furthermore it is not ultimately the most important metric for chip yield. What is of vital importance is the relative EPE (rEPE) between different design layers and, in the era of multi-patterning, the different constituent mask sublayers for a single design layer. There has always been a strong emphasis on measurement and control of misalignment between design layers, and the progress in this realm has been remarkable, spurred in part, at least, by the proliferation of multi-patterning, which reduces the available overlay budget by introducing a coupling of alignment and CD errors for the target layer. In-line CD and overlay metrology specifications are typically established by starting with design rules and making certain assumptions about error distributions which might be encountered in manufacturing. Lot disposition criteria in photo metrology (rework or pass to etch) are set assuming worst-case assumptions for CD and overlay respectively. For example, poly-to-active overlay specs start with poly endcap design rules, make assumptions about active and poly lot-average and across-lot CDs, and incorporate general knowledge about poly line-end rounding to ensure that leakage current is maintained within specification. This worst-case guard banding does not consider specific chip designs, however, and as we have previously shown, full-chip simulation can elucidate the most critical "hot spots" for interlayer process variability, comprehending the two-layer CD and misalignment process window. It was shown that there can be differences in X versus Y misalignment process windows as well as positive versus negative directional misalignment process windows, and that such design-specific information might be leveraged for manufacturing disposition and

  5. Dynamics of combined initial-condition and model-related errors in a Quasi-Geostrophic prediction system

    NASA Astrophysics Data System (ADS)

    Perdigão, R. A. P.; Pires, C. A. L.; Vannitsem, S.

    2009-04-01

    Atmospheric prediction systems are known to suffer from fundamental uncertainties associated with their sensitivity to the initial conditions and with the inaccuracy in the model representation. A formulation for the error dynamics taking into account both these factors and intrinsic properties of the system has been developed by Nicolis, Perdigao and Vannitsem (2008, in press). In the present study, that formulation is generalized to systems of higher complexity. The extended approach admits systems with non-Euclidean metrics, multivariate perturbations, and correlated and anisotropic initial errors, including error sources stemming from the data assimilation process. As in the low-order case, the formulation admits small perturbations relative to the attractor of the underlying dynamics and respective parameters, and contemplates the short to intermediate time regime. The underlying system is assumed to be governed by non-linear evolution laws with continuous derivatives, where the variables representing the unperturbed and perturbed models span the same manifold defined by a phase space with the same topological dimension. As a core illustrative case, a three-level Quasi-Geostrophic system with triangular truncation T21 is considered. While some generic features are identified that agree with those seen in lower-order systems, further properties of physical relevance, stemming from the generalizations, are also unveiled.

  6. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI)

    PubMed Central

    Logan, Dustin M.; Hill, Kyle R.; Larson, Michael J.

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212

  7. Awareness of Memory Ability and Change: (In)Accuracy of Memory Self-Assessments in Relation to Performance

    PubMed Central

    Rickenbach, Elizabeth Hahn; Agrigoroaei, Stefan; Lachman, Margie E.

    2015-01-01

    Little is known about subjective assessments of memory abilities and decline among middle-aged adults or their association with objective memory performance in the general population. In this study we examined self-ratings of memory ability and change in relation to episodic memory performance in two national samples of middle-aged and older adults from the Midlife in the United States study (MIDUS II in 2005-06) and the Health and Retirement Study (HRS; every two years from 2002 to 2012). MIDUS (Study 1) participants (N=3,581) rated their memory compared to others their age and to themselves five years ago; HRS (Study 2) participants (N=14,821) rated their current memory and their memory compared to two years ago, with up to six occasions of longitudinal data over ten years. In both studies, episodic memory performance was the total number of words recalled in immediate and delayed conditions. When controlling for demographic and health correlates, self-ratings of memory abilities, but not subjective change, were related to performance. We examined accuracy by comparing subjective and objective memory ability and change. More than one third of the participants across the studies had self-assessments that were inaccurate relative to their actual level of performance and change, and accuracy differed as a function of demographic and health factors. Further understanding of self-awareness of memory abilities and change beginning in midlife may be useful for identifying early warning signs of decline, with implications regarding policies and practice for early detection and treatment of cognitive impairment. PMID:25821529

  8. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    SciTech Connect

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
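    The central trick, sampling fission source sites from a flatter density while carrying compensating statistical weights so the solution stays unbiased, can be illustrated with a toy 1-D sketch (hypothetical numbers and a deliberately simplified model, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "core": 10 regions with a strongly non-uniform true source density.
p_true = np.linspace(0.02, 0.18, 10)
p_true /= p_true.sum()

def tally(p_sample, n=200_000):
    """Sample source regions from p_sample, but weight each sample by
    p_true/p_sample so the estimated source distribution stays unbiased."""
    regions = rng.choice(10, size=n, p=p_sample)
    weights = p_true[regions] / p_sample[regions]
    return np.bincount(regions, weights=weights, minlength=10) / n

# Uniform site density: low-power regions receive as many histories as
# high-power ones, flattening the distribution of relative uncertainties.
uniform = np.full(10, 0.1)
```

    Sampling from `uniform` keeps the per-region sample counts equal, so relative tally uncertainty no longer blows up where `p_true` is small, while the weights keep the expected tallies equal to `p_true`.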

  9. Bottom-Up Mechanisms Are Involved in the Relation between Accuracy in Timing Tasks and Intelligence--Further Evidence Using Manipulations of State Motivation

    ERIC Educational Resources Information Center

    Ullen, Fredrik; Soderlund, Therese; Kaaria, Lenita; Madison, Guy

    2012-01-01

    Intelligence correlates with accuracy in various timing tasks. Such correlations could be due to both bottom-up mechanisms, e.g. neural properties that influence both temporal accuracy and cognitive processing, and differences in top-down control. We have investigated the timing-intelligence relation using a simple temporal motor task, isochronous…

  10. An Event-Related Potential Study on Changes of Violation and Error Responses during Morphosyntactic Learning

    ERIC Educational Resources Information Center

    Davidson, Douglas J.; Indefrey, Peter

    2009-01-01

    Based on recent findings showing electrophysiological changes in adult language learners after relatively short periods of training, we hypothesized that adult Dutch learners of German would show responses to German gender and adjective declension violations after brief instruction. Adjective declension in German differs from previously studied…

  11. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity.

    PubMed

    Wang, Cuicui; Vieito, João Paulo; Ma, Qingguo

    2015-01-01

    This investigation is among the first to analyze the neural basis of the investment process using money flow information from the financial market, with a simplified task in which volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing "to buy" or "not to buy," participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investors' brain activity and capture the error-related negativity (ERN) and feedback-related negativity (FRN) components. The ERN results suggested that there might be higher risk and more conflict when buying stocks with negative net money flow information than with positive net money flow information, and the inverse was true for the "not to buy" option. The FRN component evoked by the bad outcome of a decision was more negative than that evoked by the good outcome, reflecting the difference between the values of the actual and expected outcomes. This research furthers our understanding of how investors perceive money flow information in the financial market and of the neural cognitive effects at work in the investment process. PMID:26557139

  13. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    PubMed

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods that are less demanding from a technical standpoint, 2-DE remains compelling and has considerable potential for improvement. The overall variability that affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which has so far been largely neglected. In this short review, we focus on this topic and explain that post-experimental variability and sources of error can be further divided into those that are software-dependent and those that are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results and summarizing the advantages and drawbacks of each approach. PMID:21394601

  14. Increased Error-Related Brain Activity Distinguishes Generalized Anxiety Disorder With and Without Comorbid Major Depressive Disorder

    PubMed Central

    Weinberg, Anna; Klein, Daniel N.; Hajcak, Greg

    2013-01-01

    Generalized anxiety disorder (GAD) and major depressive disorder (MDD) are so frequently comorbid that some have suggested the two should be collapsed into a single overarching “distress” disorder. Yet there is also increasing evidence that the two categories are not redundant. Neurobehavioral markers that differentiate GAD and MDD would be helpful in ongoing efforts to refine classification schemes based on neurobiological measures. The error-related negativity (ERN) may be one such marker. The ERN is an event-related potential component that presents as a negative deflection approximately 50 ms after an erroneous response and reflects activity of the anterior cingulate cortex. There is evidence for an enhanced ERN in individuals with GAD, but the literature in MDD is mixed. The present study measured the ERN in 26 GAD, 23 comorbid GAD and MDD, and 36 control participants, all of whom were female and medication-free. Consistent with previous research, the GAD group was characterized by a larger ERN and a greater difference between error and correct trials than controls. No such enhancement was evident in the comorbid group, suggesting that comorbid depression may moderate the relationship between the ERN and anxiety. The present study further suggests that the ERN is a potentially useful neurobiological marker for future studies that consider the pathophysiology of multiple disorders in order to construct or refine neurobiologically based diagnostic phenotypes. PMID:22564180

  15. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  16. Contouring error compensation on a micro coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Fan, Kuang-Chao; Wang, Hung-Yu; Ye, Jyun-Kuan

    2011-12-01

    In recent years, three-dimensional measurement in nanotechnology research has received great attention worldwide. Given the demand for high accuracy, error compensation of the measuring machine is very important. In this study, a high-precision micro-CMM (coordinate measuring machine) has been developed, composed of a coplanar stage that reduces the Abbé error in the vertical direction, a linear diffraction grating interferometer (LDGI) as the position feedback sensor with nanometer resolution, and ultrasonic motors for position control. This paper presents the error compensation strategy, covering both "home accuracy" and "position accuracy" on each axis. For home error compensation, we utilize a commercial DVD pick-up head and its S-curve principle to accurately locate the origin of each axis. For positioning error compensation, the absolute positions relative to the home are calibrated by laser interferometer, and the error budget table is stored for feed-forward error compensation. Contouring error can thus be compensated when the compensation of both X and Y positioning errors is applied. Experiments show the contouring accuracy can be controlled to within 50 nm after compensation.

  17. Relative, label-free protein quantitation: spectral counting error statistics from nine replicate MudPIT samples.

    PubMed

    Cooper, Bret; Feng, Jian; Garrett, Wesley M

    2010-09-01

    Nine replicate samples of peptides from soybean leaves, each spiked with a different concentration of bovine apotransferrin peptides, were analyzed on a mass spectrometer using multidimensional protein identification technology (MudPIT). Proteins were detected from the peptide tandem mass spectra, and the numbers of spectra were statistically evaluated for variation between samples. The results corroborate prior knowledge that combining spectra from replicate samples increases the number of identifiable proteins and that a summed spectral count for a protein increases linearly with increasing molar amounts of protein. Furthermore, statistical analysis of spectral counts for proteins in two- and three-way comparisons between replicates and combined replicates revealed little significant variation arising from run-to-run differences or data-dependent instrument ion sampling that might falsely suggest differential protein accumulation. In these experiments, spectral counting was enabled by PANORAMICS, probability-based software that predicts proteins detected by sets of observed peptides. Three alternative approaches to counting spectra were also evaluated by comparison. As the counting thresholds were changed from weaker to more stringent, the accuracy of ratio determination also changed. These results suggest that thresholds for counting can be empirically set to improve relative quantitation. Altogether, the data confirm the accuracy and reliability of label-free spectral counting in the relative, quantitative analysis of proteins between samples. PMID:20541435

  18. Exposure Error Masks The Relationship Between Traffic-Related Air Pollution and Heart Rate Variability (HRV)

    PubMed Central

    Suh, Helen H.; Zanobetti, Antonella

    2010-01-01

    Objective We examined whether more precise exposure measures would better detect associations between traffic-related pollution, elemental carbon (EC) and nitrogen dioxide (NO2), and HRV. Methods Repeated 24-h personal and ambient PM2.5, EC, and NO2 were measured for 30 people living in Atlanta, GA. The association between HRV and either ambient concentrations or personal exposures was examined using linear mixed effects models. Results Ambient PM2.5, EC, and NO2 and personal PM2.5 were not associated with HRV. Personal EC and NO2 measured 24 h prior to HRV assessment were associated with decreased rMSSD, PNN50, and HF and with increased LF/HF. RMSSD decreased by 10.97% (95% CI: -18.00, -3.34) for an IQR change in personal EC (0.81 μg/m3). Conclusions Results indicate decreased vagal tone in response to traffic pollutants, which can best be detected with precise personal exposure measures. PMID:20595912

  19. Effects of Exposure Measurement Error in the Analysis of Health Effects from Traffic-Related Air Pollution

    PubMed Central

    Baxter, Lisa K.; Wright, Rosalind J.; Paciorek, Christopher J.; Laden, Francine; Suh, Helen H.; Levy, Jonathan I.

    2011-01-01

    In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to evaluate quantitatively these surrogates against measured pollutant concentrations to determine how their use affects the interpretation of epidemiological study results. In this study, we quantified the implications of using exposure models derived from validation studies, and other alternative surrogate models with varying amounts of measurement error, on epidemiological study findings. We compared previously developed multiple regression models characterizing residential indoor nitrogen dioxide (NO2), fine particulate matter (PM2.5), and elemental carbon (EC) concentrations to models with less explanatory power that may be applied in the absence of validation studies. We constructed a hypothetical epidemiological study, under a range of odds ratios, and determined the bias and uncertainty caused by the use of various exposure models predicting residential indoor exposure levels. Our simulations illustrated that exposure models with fairly modest R2 (0.3 to 0.4 for the previously developed multiple regression models for PM2.5 and NO2) yielded substantial improvements in epidemiological study performance, relative to the application of regression models created in the absence of validation studies or poorer-performing validation study models (e.g. EC). In many studies, models based on validation data may not be possible, so it may be necessary to use a surrogate model with more measurement error. This analysis provides a technique to quantify the implications of applying various exposure models with different degrees of measurement error in epidemiological research. PMID:19223939
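    The bias mechanism the study quantifies, regression toward the null when a noisy surrogate replaces true exposure, can be sketched in a simplified linear form (invented numbers; the paper itself works with logistic models and odds ratios, not this linear analogue):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

true_exposure = rng.normal(0.0, 1.0, n)
health = 2.0 * true_exposure + rng.normal(0.0, 1.0, n)

# Classical measurement error: the surrogate adds independent noise, which
# attenuates the estimated effect by the factor var(x) / (var(x) + var(u)).
surrogate = true_exposure + rng.normal(0.0, 1.0, n)

b_true = ols_slope(true_exposure, health)   # close to the true slope, 2.0
b_surr = ols_slope(surrogate, health)       # attenuated toward ~1.0
```

    With equal exposure and noise variances the attenuation factor is 1/2, so the surrogate-based estimate recovers only about half of the true effect, which is the masking phenomenon these validation studies are designed to correct for.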

  20. The Argos-CLS Kalman Filter: Error Structures and State-Space Modelling Relative to Fastloc GPS Data

    PubMed Central

    Lowther, Andrew D.; Lydersen, Christian; Fedak, Mike A.; Lovell, Phil; Kovacs, Kit M.

    2015-01-01

    Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species; however, the data are prone to varying amounts of spatial error, and the recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. Service Argos, the predominant platform for collecting satellite telemetry data on free-ranging animals, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and to generate a greater number of locations than its predecessor. We provide a comprehensive assessment of the performance of this new estimation process on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km, with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution. PMID:25905640

  2. Refractive Error and Risk of Early or Late Age-Related Macular Degeneration: A Systematic Review and Meta-Analysis

    PubMed Central

    Li, Ying; Wang, JiWen; Zhong, XiaoJing; Tian, Zhen; Wu, Peipei; Zhao, Wenbo; Jin, Chenjin

    2014-01-01

    Objective To summarize relevant evidence investigating the associations between refractive error and age-related macular degeneration (AMD). Design Systematic review and meta-analysis. Methods We searched Medline, Web of Science, and Cochrane databases as well as the reference lists of retrieved articles to identify studies that met the inclusion criteria. Extracted data were combined using a random-effects meta-analysis. Studies that were pertinent to our topic but did not meet the criteria for quantitative analysis were reported in a systematic review instead. Main outcome measures Pooled odds ratios (ORs) and 95% confidence intervals (CIs) for the associations between refractive error (hyperopia, myopia, per-diopter increase in spherical equivalent [SE] toward hyperopia, per-millimeter increase in axial length [AL]) and AMD (early and late, prevalent and incident). Results Fourteen studies comprising over 5800 patients were eligible. Significant associations were found between hyperopia, myopia, per-diopter increase in SE, per-millimeter increase in AL, and prevalent early AMD. The pooled ORs and 95% CIs were 1.13 (1.06–1.20), 0.75 (0.56–0.94), 1.10 (1.07–1.14), and 0.79 (0.73–0.85), respectively. The per-diopter increase in SE was also significantly associated with early AMD incidence (OR, 1.06; 95% CI, 1.02–1.10). However, no significant association was found between hyperopia or myopia and early AMD incidence. Furthermore, neither prevalent nor incident late AMD was associated with refractive error. Considerable heterogeneity was found among studies investigating the association between myopia and prevalent early AMD (P = 0.001, I2 = 72.2%). Geographic location might play a role; the heterogeneity became non-significant after stratifying these studies into Asian and non-Asian subgroups. Conclusion Refractive error is associated with early AMD but not with late AMD. More large-scale longitudinal studies are needed to further investigate such

  3. [Determination of relative error of pressure-broadening linewidth for the experimentally indistinguishable overlapped spectral lines with Voigt profile].

    PubMed

    Lin, Jie-Li; Huang, Yi-Qing; Lu, Hong

    2005-01-01

    The simulation and fitting of overlapped spectral lines with Voigt profiles are presented in this paper. The relative error ε of the fitted pressure-broadening linewidth when treating the overlapped lines as a single spectrum is discussed in detail. The relationship between this error, the distance between the two line centers Δν0, and the theoretical pressure-broadening linewidth ΔνL0 is analyzed. ε is found to be very large, and its dependence on Δν0 and ΔνL0 very complicated, when the pressure-broadening linewidth is considerably smaller than the Doppler linewidth ΔνD. When ΔνL0 is comparable to ΔνD, the relationship between ε and Δν0 is close to a smooth second-order polynomial curve; the slope of this curve is negative while ΔνL0 is smaller than ΔνD and positive when it is larger. Generally, ε decreases as the proportion of ΔνL0 in the whole spectral linewidth increases. These conclusions and the corresponding data are a significant reference for determining precise pressure-broadening coefficients from experimentally indistinguishable overlapped spectra, as well as for correcting the fitted pressure-broadening linewidth. PMID:15852837

  4. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both of these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table, sometimes called a classification error matrix. Usually the rows represent the interpretation and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors of commission, and the remaining elements of the columns represent errors of omission. For tests of hypotheses that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
The values in the cells of the tables might be the counts of correct classifications or the binomial proportions of these counts divided by the corresponding sample totals.
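
    The error-matrix bookkeeping described above can be computed directly; a small sketch with made-up counts (rows = interpretation, columns = verification):

```python
import numpy as np

# Classification error matrix: rows = interpretation, columns = verification.
cm = np.array([[48,  2,  0],
               [ 3, 40,  5],
               [ 1,  4, 47]])

overall_accuracy = np.trace(cm) / cm.sum()       # correct / total

# Errors of commission: off-diagonal share of each row (interpretation).
commission = 1.0 - np.diag(cm) / cm.sum(axis=1)
# Errors of omission: off-diagonal share of each column (verification).
omission = 1.0 - np.diag(cm) / cm.sum(axis=0)
```

    Dividing the diagonal counts by row or column totals gives exactly the binomial proportions the text refers to, which is why the same matrix supports both accuracy estimation and hypothesis tests across mapping variables.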

  5. Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials

    PubMed Central

    Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis

    2015-01-01

    This paper presents a new approach to self-calibrated BCI control for reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, using a robust likelihood function and an ad hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with 8 users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that it is possible to have usable BCI control from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration suggest that both the quality of the recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890

  7. [Longer working hours of pharmacists in the ward resulted in lower medication-related errors--survey of national university hospitals in Japan].

    PubMed

    Matsubara, Kazuo; Toyama, Akira; Satoh, Hiroshi; Suzuki, Hiroshi; Awaya, Toshio; Tasaki, Yoshikazu; Yasuoka, Toshiaki; Horiuchi, Ryuya

    2011-04-01

    It is obvious that pharmacists play a critical role as risk managers in the healthcare system, especially in medication treatment. To date, there has been no multicenter survey report describing the effectiveness of clinical pharmacists in preventing medication errors in hospital wards in Japan. Thus, we conducted a one-month survey (October 1-31, 2009) to elucidate the relationship between the number of errors and the working hours of pharmacists in the ward, and to verify whether assigning clinical pharmacists to the ward would prevent medication errors. Questionnaire items for the pharmacists at 42 national university hospitals and a medical institute included the total and respective numbers of medication-related errors, beds, and working hours of pharmacists in two internal medicine and two surgical departments in each hospital. Regardless of severity, errors were consecutively reported to the Medical Security and Safety Management Section of each hospital. The analysis revealed that longer working hours of pharmacists in the ward resulted in fewer medication-related errors; this was especially significant in the internal medicine wards (where a wider variety of drugs was used) compared with the surgical wards. However, the nurse assignment mode (nurse-to-inpatient ratio 1:7-10) did not influence error frequency. The results of this survey strongly indicate that assigning clinical pharmacists to the ward is essential for promoting medication safety and efficacy. PMID:21467804

  8. High Accuracy of Karplus Equations for Relating Three-Bond J Couplings to Protein Backbone Torsion Angles

    PubMed Central

    Li, Fang; Lee, Jung Ho; Grishaev, Alexander; Ying, Jinfa; Bax, Ad

    2015-01-01

    3JC′C′ and 3JHNHα couplings are related to the intervening backbone torsion angle ϕ by standard Karplus equations. Although these couplings are known to be affected by parameters other than ϕ, including H-bonding, valence angles and residue type, experimental results and quantum calculations indicate that the impact of these latter parameters is typically very small. The solution NMR structure of protein GB3, newly refined by using extensive sets of residual dipolar couplings (RDCs), yields 50–60% better Karplus equation agreement between ϕ angles and experimental 3JC′C′ and 3JHNHα values than does the high resolution X-ray structure. In intrinsically disordered proteins, 3JC′C′ and 3JHNHα couplings can be measured at even higher accuracy, and the impact of factors other than the intervening torsion angle on 3J will be smaller than in folded proteins, making these couplings exceptionally valuable reporters on the ensemble of ϕ angles sampled by each residue. PMID:25511552
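    A Karplus equation of the kind referred to here maps the torsion angle to a coupling constant; the sketch below uses illustrative coefficients of the magnitude reported in the Bax-group literature for 3JHNHα (A ≈ 7.1, B ≈ -1.4, C ≈ 1.6 Hz with θ = ϕ - 60°), which are assumptions for illustration, not the values fitted in this paper:

```python
import math

# Illustrative (assumed) Karplus coefficients for 3J(HN-Halpha), in Hz.
A, B, C = 7.09, -1.42, 1.55

def j_hnha(phi_deg):
    """3J(HN-Halpha) predicted from the backbone torsion angle phi (degrees)."""
    theta = math.radians(phi_deg - 60.0)
    return A * math.cos(theta) ** 2 + B * math.cos(theta) + C

# Helical phi (~ -60 deg) gives small couplings (~4 Hz); extended/beta-sheet
# phi (~ -120 deg) gives large couplings (~10 Hz).
```

    Inverting this curve is what makes measured 3J values such direct reporters on ϕ, and in disordered proteins an ensemble-averaged coupling constrains the distribution of ϕ angles sampled by each residue.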

  9. Relative accuracy testing of an X-ray fluorescence-based mercury monitor at coal-fired boilers.

    PubMed

    Hay, K James; Johnsen, Bruce E; Ginochio, Paul R; Cooper, John A

    2006-05-01

    The relative accuracy (RA) of a newly developed mercury continuous emissions monitor, based on X-ray fluorescence, was determined by comparing analysis results at coal-fired plants with two certified reference methods (American Society for Testing and Materials [ASTM] Method D6784-02 and U.S. Environment Protection Agency [EPA] Method 29). During the first determination, the monitor had an RA of 25% compared with ASTM Method D6784-02 (Ontario Hydro Method). However, the Ontario Hydro Method performed poorly, because the mercury concentrations were near the detection limit of the reference method. The mercury in this exhaust stream was primarily elemental. The second test was performed at a U.S. Army boiler against EPA Reference Method 29. Mercury and arsenic were spiked because of expected low mercury concentrations. The monitor had an RA of 16% for arsenic and 17% for mercury, meeting RA requirements of EPA Performance Specification 12a. The results suggest that the sampling stream contained significant percentages of both elemental and oxidized mercury. The monitor was successful at measuring total mercury in particulate and vapor forms. PMID:16739803

  10. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
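    The descriptive metrics named above are simple to compute; a minimal sketch with invented consumption figures, using the Theil U2 form that expresses model RMSE relative to a random-walk benchmark (U2 < 1 means the model beats the naive forecast):

```python
import numpy as np

def rmse(pred, actual):
    """Root-mean-square error of a forecast series."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

def theil_u2(pred, actual, benchmark):
    """Theil's U2: forecast RMSE relative to a benchmark forecast's RMSE."""
    return rmse(pred, actual) / rmse(benchmark, actual)

actual      = [100, 104, 103, 108, 110]   # observed consumption (made up)
model       = [101, 103, 105, 107, 111]   # econometric forecast
random_walk = [ 99, 100, 104, 103, 108]   # naive: previous period's value
```

    The "competitiveness of the benchmarks" noted in the abstract amounts to U2 values near or above 1, which is why the authors advise sanity-checking econometric forecasts against recent history.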

  11. Achieving Seventh-Order Amplitude Accuracy in Leapfrog Integrations

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2014-12-01

    The leapfrog time-stepping scheme is commonly used in general circulation models of the atmosphere and ocean. The Robert-Asselin filter is used in conjunction with it, to damp the computational mode. Although the leapfrog scheme makes no amplitude errors when integrating linear oscillations, the Robert-Asselin filter introduces first-order amplitude errors. The RAW filter, which was recently proposed as an improvement, eliminates the first-order amplitude errors and yields third-order amplitude accuracy. This development has been shown to significantly increase the skill of medium-range weather forecasts. However, it has not previously been shown how to further improve the accuracy by eliminating the third- and higher-order amplitude errors. This presentation will show that leapfrogging over a suitably weighted blend of the filtered and unfiltered tendencies eliminates the third-order amplitude errors and yields fifth-order amplitude accuracy. It will also show that the use of a more discriminating (1, -4, 6, -4, 1) filter instead of a (1, -2, 1) filter eliminates the fifth-order amplitude errors and yields seventh-order amplitude accuracy. Other related schemes are obtained by varying the values of the filter parameters, and it is found that several combinations offer an appealing compromise of stability and accuracy. The proposed new schemes are shown to yield substantial forecast improvements in a medium-complexity atmospheric general circulation model. They appear to be attractive alternatives to the filtered leapfrog schemes currently used in many weather and climate models.
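    The filtered leapfrog schemes discussed above can be illustrated on the linear oscillation dx/dt = iωx (a minimal sketch of the classical Robert-Asselin filter and the RAW variant, not the presenter's model code; parameter values are illustrative):

```python
import cmath

def leapfrog_amplitude(alpha, omega=1.0, dt=0.2, nu=0.2, nsteps=200):
    """Integrate dx/dt = i*omega*x with the leapfrog scheme and the
    Robert-Asselin-Williams (RAW) filter.  alpha = 1 recovers the
    classical Robert-Asselin filter (first-order amplitude error);
    alpha ~ 0.53 gives the RAW filter's higher-order amplitude accuracy.
    Returns the final amplitude; the exact solution keeps amplitude 1."""
    x_old = 1.0 + 0.0j                    # x(0)
    x_now = cmath.exp(1j * omega * dt)    # exact x(dt) to start the scheme
    for _ in range(nsteps):
        x_new = x_old + 2j * omega * dt * x_now          # leapfrog step
        d = 0.5 * nu * (x_old - 2.0 * x_now + x_new)     # filter displacement
        x_old = x_now + alpha * d           # filtered middle time level
        x_now = x_new + (alpha - 1.0) * d   # partially filtered new level
    return abs(x_now)
```

    Running both variants shows the point of the abstract: the Robert-Asselin case (alpha = 1) visibly damps the oscillation, while the RAW case stays much closer to unit amplitude.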

  12. Teaching Picture-to-Object Relations in Picture-Based Requesting by Children with Autism: A Comparison between Error Prevention and Error Correction Teaching Procedures

    ERIC Educational Resources Information Center

    Carr, D.; Felce, J.

    2008-01-01

    Background: Children who have a combination of language and developmental disabilities with autism often experience major difficulties in learning relations between objects and their graphic representations. Therefore, they would benefit from teaching procedures that minimize their difficulties in acquiring these relations. This study compared two…

  13. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  14. Assessment of targeting accuracy of a low-energy stereotactic radiosurgery treatment for age-related macular degeneration

    NASA Astrophysics Data System (ADS)

    Taddei, Phillip J.; Chell, Erik; Hansen, Steven; Gertner, Michael; Newhauser, Wayne D.

    2010-12-01

Age-related macular degeneration (AMD), a leading cause of blindness in the United States, is a neovascular disease that may be controlled with radiation therapy. Early patient outcomes of external beam radiotherapy, however, have been mixed. Recently, a novel multimodality treatment was developed, comprising external beam radiotherapy and concomitant treatment with a vascular endothelial growth factor inhibitor. The radiotherapy arm is performed by stereotactic radiosurgery, delivering a 16 Gy dose in the macula (clinical target volume, CTV) using three external low-energy x-ray fields while adequately sparing normal tissues. The purpose of our study was to test the sensitivity of the delivered CTV dose, and of the sparing of normal tissues, to all plausible variations in the position and gaze angle of the eye. Using Monte Carlo simulations of a 16 Gy treatment, we varied the gaze angle by ±5° in the polar and azimuthal directions, displaced the eye by ±1 mm along each orthogonal direction, and observed the union of the three fields on the posterior wall of spheres concentric with the eye that had diameters between 20 and 28 mm. In all cases, the dose in the CTV fluctuated <6%, the maximum dose in the sclera was <20 Gy, the dose in the optic disc, optic nerve, lens and cornea were <0.7 Gy and the three-field junction was adequately preserved. The results of this study provide strong evidence that for plausible variations in the position of the eye during treatment, whether from setup error or intrafraction motion, the prescribed dose will be delivered to the CTV and the dose in structures at risk will be kept far below tolerance doses.

  15. Compensation of Low-Frequency Errors in the TH-1 Satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin

    2016-06-01

Topographic mapping products at 1:50,000 scale can be produced by satellite photogrammetry without ground control points (GCPs), which requires highly accurate exterior orientation elements. Usually, the attitudes among the exterior orientation elements are obtained from the attitude determination system on the satellite. Theoretical analysis and practice show that the attitude determination system exhibits not only high-frequency errors, but also low-frequency errors related to the latitude of the satellite orbit and to time. The low-frequency errors degrade location accuracy without GCPs, especially horizontal accuracy. For the SPOT5 satellite, a latitudinal model was proposed to correct attitudes using data from approximately 20 calibration sites, and the location accuracy was improved. Low-frequency errors are also found in the Tian Hui 1 (TH-1) satellite. A method of compensating low-frequency errors is therefore proposed for TH-1 ground image processing, which can detect and compensate the low-frequency errors automatically without using GCPs. This paper deals with the low-frequency errors in TH-1 as follows: first, the low-frequency errors of the attitude determination system are analysed; second, compensation models are proposed within the bundle adjustment; finally, the approach is verified using TH-1 data. The test results show that the low-frequency errors of the attitude determination system can be compensated during bundle adjustment, which improves the location accuracy without GCPs and plays an important role in the consistency of global location accuracy.

  16. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

The general accuracy laws that govern the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft are presented. These entities involve transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
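    The kind of accuracy law discussed here follows from binomial counting statistics: if a configuration factor is estimated as the fraction of rays that hit the target, its relative standard error scales as sqrt((1 - p) / (N p)). A toy sketch (the geometry is replaced by a Bernoulli trial with known hit probability; names are illustrative, not from the paper):

```python
import math
import random

def view_factor_mc(n_rays, hit_probability, seed=1):
    """Monte Carlo estimate of a configuration (view) factor as the
    fraction of traced rays that hit the target, together with the
    binomial relative standard error sqrt((1-p)/(N*p)) that governs
    the accuracy of such ray-tracing estimates."""
    rng = random.Random(seed)
    hits = sum(rng.random() < hit_probability for _ in range(n_rays))
    p = hits / n_rays
    rel_err = math.sqrt((1.0 - p) / (n_rays * p)) if hits else float("inf")
    return p, rel_err
```

    The 1/sqrt(N) dependence is why halving the relative error of a Monte Carlo radiative estimate costs four times as many rays.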

  17. Error detection and response adjustment in youth with mild spastic cerebral palsy: an event-related brain potential study.

    PubMed

    Hakkarainen, Elina; Pirilä, Silja; Kaartinen, Jukka; van der Meere, Jaap J

    2013-06-01

    This study evaluated the brain activation state during error making in youth with mild spastic cerebral palsy and a peer control group while carrying out a stimulus recognition task. The key question was whether patients were detecting their own errors and subsequently improving their performance in a future trial. Findings indicated that error responses of the group with cerebral palsy were associated with weak motor preparation, as indexed by the amplitude of the late contingent negative variation. However, patients were detecting their errors as indexed by the amplitude of the response-locked negativity and thus improved their performance in a future trial. Findings suggest that the consequence of error making on future performance is intact in a sample of youth with mild spastic cerebral palsy. Because the study group is small, the present findings need replication using a larger sample. PMID:22899795

  18. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  19. Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun

    2016-07-01

The swing arm profilometer (SAP) has been playing a very important role in testing large aspheric optics. As one of the most significant error sources affecting test accuracy, misalignment error leads to low-order errors such as aspherical aberrations and coma, apart from power. In order to analyze the effect of misalignment errors, the relation between alignment parameters and test results of axisymmetric optics is presented. Analytical solutions of SAP system errors arising from tested-mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively, and misalignment tolerances are given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are demonstrated to verify the model; according to the error budget, we achieve the SAP test for low-order errors, except power, with an accuracy of 0.1 μm root mean square.

  20. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    PubMed

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this

  1. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
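    Time-to-detect estimates of this kind are commonly based on the approximation of Weatherhead et al. (1998), in which detection time grows with the noise variance and its autocorrelation. A sketch of that published formula (symbols follow that paper; this is background, not the abstract's own computation):

```python
def years_to_detect(trend_per_year, sigma_n, phi):
    """Approximate number of years of monthly data needed to detect a
    linear trend (probability ~0.90 at the 95% confidence level),
    following Weatherhead et al. (1998): n* = [3.3 * (sigma_n/|trend|)
    * sqrt((1+phi)/(1-phi))]**(2/3), where sigma_n is the standard
    deviation of the noise and phi its lag-1 autocorrelation."""
    return (3.3 * (sigma_n / abs(trend_per_year))
            * ((1.0 + phi) / (1.0 - phi)) ** 0.5) ** (2.0 / 3.0)
```

    The formula makes the abstract's point quantitative: detection time is driven up by natural variability (sigma_n) and autocorrelation (phi), which is why measurement frequency can matter more than instrument precision.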

  2. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  3. Precision standoff guidance antenna accuracy evaluation

    NASA Astrophysics Data System (ADS)

    Irons, F. H.; Landesberg, M. M.

    1981-02-01

    This report presents a summary of work done to determine the inherent angular accuracy achievable with the guidance and control precision standoff guidance antenna. The antenna is a critical element in the anti-jam single station guidance program since its characteristics can limit the intrinsic location guidance accuracy. It was important to determine the extent to which high ratio beamsplitting results could be achieved repeatedly and what issues were involved with calibrating the antenna. The antenna accuracy has been found to be on the order of 0.006 deg. through the use of a straightforward lookup table concept. This corresponds to a cross range error of 21 m at a range of 200 km. This figure includes both pointing errors and off-axis estimation errors. It was found that the antenna off-boresight calibration is adequately represented by a straight line for each position plus a lookup table for pointing errors relative to broadside. In the event recalibration is required, it was found that only 1% of the model would need to be corrected.
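    The conversion from angular accuracy to cross-range error quoted above is the small-angle relation error ≈ range × angle (in radians); a one-line sketch (function name is ours):

```python
import math

def cross_range_error_m(angle_deg, range_m):
    """Linear cross-range error produced at a given range by an angular
    pointing/estimation error, using the small-angle approximation
    error = range * angle_in_radians."""
    return range_m * math.radians(angle_deg)
```

    For 0.006 deg at 200 km this gives about 21 m, matching the figure in the abstract.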

  4. Gravitational model effects on ICBM accuracy

    NASA Astrophysics Data System (ADS)

    Ford, C. T.

    This paper describes methods used to assess the contribution of ICBM gravitational model errors to targeting accuracy. The evolution of gravitational model complexity, in both format and data base development, is summarized. Error analysis methods associated with six identified error sources are presented: geodetic coordinate errors; spherical harmonic potential function errors of commission and omission; and surface gravity anomaly errors of reduction, representation, and omission.

  5. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  6. Individual differences in reward prediction error: contrasting relations between feedback-related negativity and trait measures of reward sensitivity, impulsivity and extraversion

    PubMed Central

    Cooper, Andrew J.; Duke, Éilish; Pickering, Alan D.; Smillie, Luke D.

    2014-01-01

    Medial-frontal negativity occurring ∼200–300 ms post-stimulus in response to motivationally salient stimuli, usually referred to as feedback-related negativity (FRN), appears to be at least partly modulated by dopaminergic-based reward prediction error (RPE) signaling. Previous research (e.g., Smillie et al., 2011) has shown that higher scores on a putatively dopaminergic-based personality trait, extraversion, were associated with a more pronounced difference wave contrasting unpredicted non-reward and unpredicted reward trials on an associative learning task. In the current study, we sought to extend this research by comparing how trait measures of reward sensitivity, impulsivity and extraversion related to the FRN using the same associative learning task. A sample of healthy adults (N = 38) completed a battery of personality questionnaires, before completing the associative learning task while EEG was recorded. As expected, FRN was most negative following unpredicted non-reward. A difference wave contrasting unpredicted non-reward and unpredicted reward trials was calculated. Extraversion, but not measures of impulsivity, had a significant association with this difference wave. Further, the difference wave was significantly related to a measure of anticipatory pleasure, but not consummatory pleasure. These findings provide support for the existing evidence suggesting that variation in dopaminergic functioning in brain “reward” pathways may partially underpin associations between the FRN and trait measures of extraversion and anticipatory pleasure. PMID:24808845

  7. Effects of the ephemeris error on effective pointing for a spaceborne SAR

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1989-01-01

Both Magellan SAR data acquisition and image processing require knowledge of the ephemeris (spacecraft position and velocity) and of the radar pointing direction. Error in the knowledge of the radar pointing direction results in a loss of SNR in the image product. An error in the ephemeris data has a similar effect. To facilitate SNR performance analysis, an effective pointing error is defined to characterize the effect of the ephemeris error. A systematic approach to relate the ephemeris error to the effective pointing errors is described. The result of this analysis led to a formal accuracy requirement levied on the Magellan navigation system.

  8. Digital reader vs print media: the role of digital technology in reading accuracy in age-related macular degeneration

    PubMed Central

    Gill, K; Mao, A; Powell, A M; Sheidow, T

    2013-01-01

Purpose To compare patient satisfaction, reading accuracy, and reading speed between digital e-readers (Sony eReader, Apple iPad) and standard paper/print media for patients with stable wet age-related macular degeneration (AMD). Methods Patients recruited for the study were patients with stable wet AMD, in one or both eyes, who would benefit from a low-vision aid. The selected text sizes by patients reflected the spectrum of low vision in regard to their macular disease. Stability of macular degeneration was assessed on a clinical examination with stable visual acuity. Patients recruited for the study were assessed for reading speeds on both digital readers and standard paper text. Standardized and validated texts for reading speeds were used. Font sizes in the study reflected a spectrum from newsprint to large print books. Patients started with the smallest print size they could read on the standardized paper text. They then used digital readers to read the same size standardized text. Reading speed was calculated as words per minute by the formula (correctly read words/reading time (s)·60). The visual analog scale was completed by patients after reading each passage. These included their assessment on 'ease of use' and 'clarity of print' for each device and the print paper. Results A total of 27 patients were used in the study. Patients consistently read faster (P<0.0003) on the Apple iPad with larger text sizes (size 24 or greater) when compared with paper, and also on the paper compared with the Sony eReader (P<0.03) in all text group sizes. Patients chose the iPad to have the best clarity and the print paper as the easiest to use. Conclusions This study has demonstrated that digital devices may have a use in visual rehabilitation for low-vision patients. Devices that have larger display screens and offer high contrast ratios will benefit AMD patients who require larger texts to read. PMID:23492860
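    The reading-speed formula stated in the Methods section is straightforward to express directly:

```python
def reading_speed_wpm(correct_words, reading_time_s):
    """Reading speed in words per minute, per the study's formula:
    (correctly read words / reading time in seconds) * 60."""
    return correct_words / reading_time_s * 60.0
```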

  9. Correcting for bias in relative risk estimates due to exposure measurement error: a case study of occupational exposure to antineoplastics in pharmacists.

    PubMed Central

    Spiegelman, D; Valanis, B

    1998-01-01

    OBJECTIVES: This paper describes 2 statistical methods designed to correct for bias from exposure measurement error in point and interval estimates of relative risk. METHODS: The first method takes the usual point and interval estimates of the log relative risk obtained from logistic regression and corrects them for nondifferential measurement error using an exposure measurement error model estimated from validation data. The second, likelihood-based method fits an arbitrary measurement error model suitable for the data at hand and then derives the model for the outcome of interest. RESULTS: Data from Valanis and colleagues' study of the health effects of antineoplastics exposure among hospital pharmacists were used to estimate the prevalence ratio of fever in the previous 3 months from this exposure. For an interdecile increase in weekly number of drugs mixed, the prevalence ratio, adjusted for confounding, changed from 1.06 to 1.17 (95% confidence interval [CI] = 1.04, 1.26) after correction for exposure measurement error. CONCLUSIONS: Exposure measurement error is often an important source of bias in public health research. Methods are available to correct such biases. PMID:9518972
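    The first correction method described, rescaling a naive log relative risk by an attenuation factor estimated from validation data (in the spirit of regression calibration), can be sketched as below. This is a simplified stand-in, not the authors' exact estimator, and the attenuation value in the test is back-solved to reproduce the abstract's 1.06 → 1.17 change, purely for illustration:

```python
import math

def correct_log_rr(beta_naive, se_naive, attenuation):
    """Correct a log relative risk (and its standard error) for
    nondifferential exposure measurement error by dividing both by the
    attenuation factor estimated from a validation study.  Correcting
    inflates both the point estimate and its uncertainty."""
    return beta_naive / attenuation, se_naive / attenuation
```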

  10. Classification Accuracy of MMPI-2 Validity Scales in the Detection of Pain-Related Malingering: A Known-Groups Study

    ERIC Educational Resources Information Center

    Bianchini, Kevin J.; Etherton, Joseph L.; Greve, Kevin W.; Heinly, Matthew T.; Meyers, John E.

    2008-01-01

    The purpose of this study was to determine the accuracy of "Minnesota Multiphasic Personality Inventory" 2nd edition (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) validity indicators in the detection of malingering in clinical patients with chronic pain using a hybrid clinical-known groups/simulator design. The sample consisted…

  11. The Episodic Engram Transformed: Time Reduces Retrieval-Related Brain Activity but Correlates It with Memory Accuracy

    ERIC Educational Resources Information Center

    Furman, Orit; Mendelsohn, Avi; Dudai, Yadin

    2012-01-01

    We took snapshots of human brain activity with fMRI during retrieval of realistic episodic memory over several months. Three groups of participants were scanned during a memory test either hours, weeks, or months after viewing a documentary movie. High recognition accuracy after hours decreased after weeks and remained at similar levels after…

  12. Single-plane versus three-plane methods for relative range error evaluation of medium-range 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David K.; Cournoyer, Luc; Beraldin, J.-Angelo

    2015-05-01

    Within the context of the ASTM E57 working group WK12373, we compare the two methods that had been initially proposed for calculating the relative range error of medium-range (2 m to 150 m) optical non-contact 3D imaging systems: the first is based on a single plane (single-plane assembly) and the second on an assembly of three mutually non-orthogonal planes (three-plane assembly). Both methods are evaluated for their utility in generating a metric to quantify the relative range error of medium-range optical non-contact 3D imaging systems. We conclude that the three-plane assembly is comparable to the single-plane assembly with regard to quantification of relative range error while eliminating the requirement to isolate the edges of the target plate face.

  13. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  14. The relative and absolute timing accuracy of the EPIC-pn camera on XMM-Newton, from X-ray pulsations of the Crab and other pulsars

    NASA Astrophysics Data System (ADS)

    Martin-Carrillo, A.; Kirsch, M. G. F.; Caballero, I.; Freyberg, M. J.; Ibarra, A.; Kendziorra, E.; Lammers, U.; Mukerjee, K.; Schönherr, G.; Stuhlinger, M.; Saxton, R. D.; Staubert, R.; Suchy, S.; Wellbrock, A.; Webb, N.; Guainazzi, M.

    2012-09-01

Aims: Reliable timing calibration is essential for the accurate comparison of XMM-Newton light curves with those from other observatories, to ultimately use them to derive precise physical quantities. The XMM-Newton timing calibration is based on pulsar analysis. However, because pulsars show both timing noise and glitches, it is essential to monitor these calibration sources regularly. To this end, the XMM-Newton observatory performs observations twice a year of the Crab pulsar to monitor the absolute timing accuracy of the EPIC-pn camera in the fast timing and burst modes. We present the results of this monitoring campaign, comparing XMM-Newton data from the Crab pulsar (PSR B0531+21) with radio measurements. In addition, we use five pulsars (PSR J0537-69, PSR B0540-69, PSR B0833-45, PSR B1509-58, and PSR B1055-52) with periods ranging from 16 ms to 197 ms to verify the relative timing accuracy. Methods: We analysed 38 XMM-Newton observations (0.2-12.0 keV) of the Crab taken over the first ten years of the mission and 13 observations from the five complementary pulsars. All data were processed with SAS, the XMM-Newton Scientific Analysis Software, version 9.0. Epoch-folding techniques coupled with χ² tests were used to derive relative timing accuracies. The absolute timing accuracy was determined using the Crab data and comparing the time shift between the main X-ray and radio peaks in the phase-folded light curves. Results: The relative timing accuracy of XMM-Newton is found to be better than 10⁻⁸. The strongest X-ray pulse peak precedes the corresponding radio peak by 306 ± 9 μs, which agrees with other high-energy observatories such as Chandra, INTEGRAL and RXTE. The derived absolute timing accuracy from our analysis is ± 48 μs.
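    The epoch-folding χ² technique named in the Methods can be sketched as follows (an illustrative minimal version, not the SAS implementation):

```python
import numpy as np

def epoch_fold_chi2(event_times, period, nbins=20):
    """Fold photon arrival times modulo a trial period, bin the resulting
    phases, and compute a chi-square statistic against a flat (unpulsed)
    profile.  Large values indicate significant pulsation at that period;
    for an unpulsed source the statistic follows chi-square with
    nbins - 1 degrees of freedom."""
    phases = np.mod(np.asarray(event_times, dtype=float), period) / period
    counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    expected = counts.sum() / nbins
    return float(np.sum((counts - expected) ** 2 / expected))
```

    Scanning this statistic over a grid of trial periods around the known pulse period is the standard way to refine a pulsar period from X-ray event data.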

  15. Effect of ephemeris errors on the accuracy of the computation of the tangent point altitude of a solar scanning ray as measured by the SAGE 1 and 2 instruments

    NASA Technical Reports Server (NTRS)

    Buglia, James J.

    1989-01-01

An analysis was made of the error in the minimum altitude of a geometric ray from an orbiting spacecraft to the Sun. The sunrise and sunset errors are highly correlated and are opposite in sign. With the ephemeris generated for the SAGE 1 instrument data reduction, these errors can be as large as 200 to 350 meters (1 sigma) after 7 days of orbit propagation. The bulk of this error results from errors in the position of the orbiting spacecraft rather than errors in computing the position of the Sun. These errors, in turn, stem from discontinuities in the ephemeris tapes introduced by the orbit determination process: data taken from the end of the definitive ephemeris tape are used to generate the predicted data for the time interval covered by the next arc of the orbit determination process, and the predicted data are then updated by using the tracking data. The growth of these errors is very nearly linear, with a slight nonlinearity caused by the beta angle. An approximate analytic method is given, which predicts the magnitude of the errors and their growth in time with reasonable fidelity.

  16. Accuracy assessment in the Large Area Crop Inventory Experiment

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Pitts, D. E.; Feiveson, A. H.; Badhwar, G.; Ferguson, M.; Hsu, E.; Potter, J.; Chhikara, R.; Rader, M.; Ahlers, C.

    1979-01-01

    The Accuracy Assessment System (AAS) of the Large Area Crop Inventory Experiment (LACIE) was responsible for determining the accuracy and reliability of LACIE estimates of wheat production, area, and yield, made at regular intervals throughout the crop season, and for investigating the various LACIE error sources, quantifying these errors, and relating them to their causes. Some results of using the AAS during the three years of LACIE are reviewed. As the program culminated, AAS was able to meet not only the goal of obtaining accurate statistical estimates of sampling and classification accuracy but also the goal of evaluating component labeling errors. Furthermore, the ground-truth data processing matured from collecting data for one crop (small grains) to collecting, quality-checking, and archiving data for all crops in a LACIE small segment.

  17. Scaling prediction errors to reward variability benefits error-driven learning in humans

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease “adapters'” accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. PMID:26180123
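    The scaling behaviour the study describes can be sketched with a delta rule in which the prediction error is rescaled by the reward distribution's standard deviation. This is a minimal illustration under our own assumptions (learning rate, update form); it is not the authors' fitted reinforcement learning model.

    ```python
    def learn_mean(rewards, sd, alpha=0.5, scale=True):
        """Delta-rule estimate of a distribution's mean reward.
        With scale=True the prediction error is expressed in units of the
        standard deviation, mimicking an 'adapter': the effective step size
        shrinks as reward variability grows, stabilising the mean estimate."""
        v = rewards[0]
        for r in rewards[1:]:
            delta = r - v          # raw reward prediction error
            if scale:
                delta /= sd        # rescale the error to SD units
            v += alpha * delta
        return v
    ```

    For a wide distribution (large `sd`), the scaled learner takes smaller absolute steps, so individual outlier rewards perturb the running estimate of the mean less; exaggerated scaling (dividing by much more than the true SD) would slow convergence, consistent with the impaired performance the study reports.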

  18. Relating Regime Structure to Probability Distribution and Preferred Structure of Small Errors in a Large Atmospheric GCM

    NASA Astrophysics Data System (ADS)

    Straus, D. M.

    2007-12-01

    The probability distribution (pdf) of errors is followed in identical twin studies using the COLA T63 AGCM, integrated with observed SST for 15 recent winters. Thirty integrations per winter are available with initial errors that are extremely small. The evolution of the pdf is tested for multi-modality, and the results are interpreted in terms of clusters/regimes found in: (a) the set of 15x30 integrations mentioned, and (b) a larger ensemble of 55x15 integrations made with the same GCM using the same SSTs. The mapping of pdf evolution and clusters is also carried out for each winter separately, using the clusters found in the 55-member ensemble for the same winter alone. This technique yields information on the change in regimes caused by different boundary forcing (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). Analysis of the growing errors in terms of baroclinic and barotropic components allows for interpretation of the corresponding instabilities.

  19. Characteristics of patients making serious inhaler errors with a dry powder inhaler and association with asthma-related events in a primary care setting

    PubMed Central

    Westerik, Janine A. M.; Carter, Victoria; Chrystyn, Henry; Burden, Anne; Thompson, Samantha L.; Ryan, Dermot; Gruffydd-Jones, Kevin; Haughney, John; Roche, Nicolas; Lavorini, Federico; Papi, Alberto; Infantino, Antonio; Roman-Rodriguez, Miguel; Bosnic-Anticevich, Sinthia; Lisspers, Karin; Ställberg, Björn; Henrichsen, Svein Høegh; van der Molen, Thys; Hutton, Catherine; Price, David B.

    2016-01-01

    Abstract Objective: Correct inhaler technique is central to effective delivery of asthma therapy. The study aim was to identify factors associated with serious inhaler technique errors and their prevalence among primary care patients with asthma using the Diskus dry powder inhaler (DPI). Methods: This was a historical, multinational, cross-sectional study (2011–2013) using the iHARP database, an international initiative that includes patient- and healthcare provider-reported questionnaires from eight countries. Patients with asthma were observed for serious inhaler errors by trained healthcare providers as predefined by the iHARP steering committee. Multivariable logistic regression, stepwise reduced, was used to identify clinical characteristics and asthma-related outcomes associated with ≥1 serious errors. Results: Of 3681 patients with asthma, 623 (17%) were using a Diskus (mean [SD] age, 51 [14]; 61% women). A total of 341 (55%) patients made ≥1 serious errors. The most common errors were the failure to exhale before inhalation, insufficient breath-hold at the end of inhalation, and inhalation that was not forceful from the start. Factors significantly associated with ≥1 serious errors included asthma-related hospitalization the previous year (odds ratio [OR] 2.07; 95% confidence interval [CI], 1.26–3.40); obesity (OR 1.75; 1.17–2.63); poor asthma control the previous 4 weeks (OR 1.57; 1.04–2.36); female sex (OR 1.51; 1.08–2.10); and no inhaler technique review during the previous year (OR 1.45; 1.04–2.02). Conclusions: Patients with evidence of poor asthma control should be targeted for a review of their inhaler technique even when using a device thought to have a low error rate. PMID:26810934

  20. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.

  1. Negotiation Moves and Recasts in Relation to Error Types and Learner Repair in the Foreign Language Classroom.

    ERIC Educational Resources Information Center

    Morris, Frank A.

    2002-01-01

    Assessed the provision and use of implicit negative feedback in the interactional context of adult beginning learners of Spanish working in dyads in the foreign language classroom. Relationships among error types, feedback types, and immediate learner repair were also examined. Findings indicate learners did not provide explicit negative feedback…

  2. Early Career Teachers' Ability to Focus on Typical Students Errors in Relation to the Complexity of a Mathematical Topic

    ERIC Educational Resources Information Center

    Pankow, Lena; Kaiser, Gabriele; Busse, Andreas; König, Johannes; Blömeke, Sigrid; Hoth, Jessica; Döhrmann, Martina

    2016-01-01

    The paper presents results from a computer-based assessment in which 171 early career mathematics teachers from Germany were asked to anticipate typical student errors on a given mathematical topic and identify them under time constraints. Fast and accurate perception and knowledge-based judgments are widely accepted characteristics of teacher…

  3. Achieving seventh-order amplitude accuracy in leapfrog integrations

    NASA Astrophysics Data System (ADS)

    Williams, Paul

    2015-04-01

    The leapfrog time-stepping scheme is commonly used in general circulation models of weather and climate. The Robert-Asselin filter is used in conjunction with it, to damp the computational mode. Although the leapfrog scheme makes no amplitude errors when integrating linear oscillations, the Robert-Asselin filter introduces first-order amplitude errors. The RAW filter, which was recently proposed as an improvement, eliminates the first-order amplitude errors and yields third-order amplitude accuracy. This development has been shown to significantly increase the skill of medium-range weather forecasts. However, it has not previously been shown how to further improve the accuracy by eliminating the third- and higher-order amplitude errors. This presentation will show that leapfrogging over a suitably weighted blend of the filtered and unfiltered tendencies eliminates the third-order amplitude errors and yields fifth-order amplitude accuracy. It will also show that the use of a more discriminating (1,-4,6,-4,1) filter instead of a (1,-2,1) filter eliminates the fifth-order amplitude errors and yields seventh-order amplitude accuracy. Other related schemes are obtained by varying the values of the filter parameters, and it is found that several combinations offer an appealing compromise of stability and accuracy. The proposed new schemes are shown to yield substantial forecast improvements in a medium-complexity atmospheric general circulation model. They appear to be attractive alternatives to the filtered leapfrog schemes currently used in many weather and climate models. Reference Williams PD (2013) Achieving seventh-order amplitude accuracy in leapfrog integrations. Monthly Weather Review 141(9), pp 3037-3051. DOI: 10.1175/MWR-D-12-00303.1
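    The filtered-leapfrog behaviour described above can be sketched for the linear oscillation dx/dt = iωx. This is an illustrative implementation under typical parameter choices (ν, α, step count are ours, not taken from the presentation):

    ```python
    import cmath

    def leapfrog_filtered(omega, dt, n_steps, nu=0.2, alpha=0.53):
        """Leapfrog integration of dx/dt = i*omega*x with the RAW filter.
        alpha=1.0 recovers the classic Robert-Asselin filter; alpha just
        above 0.5 largely cancels its first-order amplitude damping."""
        x_old = 1.0 + 0.0j                        # x at level n-1
        x_now = cmath.exp(1j * omega * dt)        # exact first step
        for _ in range(n_steps - 1):
            x_new = x_old + 2.0 * dt * 1j * omega * x_now   # leapfrog step
            d = 0.5 * nu * (x_old - 2.0 * x_now + x_new)    # filter displacement
            x_old = x_now + alpha * d             # filtered value at level n
            x_new += (alpha - 1.0) * d            # RAW shares the correction
            x_now = x_new
        return abs(x_now)                         # exact solution has amplitude 1

    print(leapfrog_filtered(1.0, 0.1, 500, alpha=0.53))  # near 1: little spurious damping
    print(leapfrog_filtered(1.0, 0.1, 500, alpha=1.0))   # visibly damped by the RA filter
    ```

    Setting alpha exactly to 0.5 shares the filter displacement equally between the two time levels; values slightly above 0.5 are the usual compromise between amplitude accuracy and stability.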

  4. Relative accuracy of grid references derived from postcode and address in UK epidemiological studies of overhead power lines.

    PubMed

    Swanson, J; Vincent, T J; Bunch, K J

    2014-12-01

    In the UK, the location of an address, necessary for calculating the distance to overhead power lines in epidemiological studies, is available from different sources. We assess the accuracy of each. The grid reference specific to each address, provided by the Ordnance Survey product Address-Point, is generally accurate to a few metres, which will usually be sufficient for calculating magnetic fields from the power lines. The grid reference derived from the postcode rather than the individual address is generally accurate to tens of metres, and may be acceptable for assessing effects that vary in the general proximity of the power line, but is probably not acceptable for assessing magnetic-field effects. PMID:25325707

  5. Relating indices of knowledge structure coherence and accuracy to skill-based performance: Is there utility in using a combination of indices?

    PubMed

    Schuelke, Matthew J; Day, Eric Anthony; McEntire, Lauren E; Boatman, Jazmine Espejo; Wang, Xiaoqian; Kowollik, Vanessa; Boatman, Paul R

    2009-07-01

    The authors examined the relative criterion-related validity of knowledge structure coherence and two accuracy-based indices (closeness and correlation) as well as the utility of using a combination of knowledge structure indices in the prediction of skill acquisition and transfer. Findings from an aggregation of 5 independent samples (N = 958) whose participants underwent training on a complex computer simulation indicated that coherence and the accuracy-based indices yielded comparable zero-order predictive validities. Support for the incremental validity of using a combination of indices was mixed; the most, albeit small, gain came in pairing coherence and closeness when predicting transfer. After controlling for baseline skill, general mental ability, and declarative knowledge, only coherence explained a statistically significant amount of unique variance in transfer. Overall, the results suggested that the different indices largely overlap in their representation of knowledge organization, but that coherence better reflects adaptable aspects of knowledge organization important to skill transfer. PMID:19594246

  6. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
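    The propagation-of-error reasoning can be sketched as follows; the numbers below are hypothetical illustrations, not the Skylab values.

    ```python
    def net_balance_sd(term_sds, covariances=()):
        """1-sigma error of a net balance formed as a sum of measured terms:
        Var(B) = sum(sd_i**2) + 2 * sum(cov_ij), with covariance signs included
        in the supplied values."""
        variance = sum(sd * sd for sd in term_sds) + 2.0 * sum(covariances)
        return variance ** 0.5

    # Hypothetical daily 1-sigma errors (kg): the first term (body-mass change)
    # dominates the total, as the Skylab analysis found for its own data
    print(net_balance_sd([0.30, 0.05, 0.04, 0.03]))
    ```

    Because variances add in quadrature, a dominant term swamps the others: here the three small terms together raise the total error only slightly above the largest single term.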

  7. Refractive errors induced by displacement of intraocular lenses within the pseudophakic eye.

    PubMed

    Atchison, D A

    1989-03-01

    Simple methods were developed to estimate refractive errors when intraocular lenses are not fitted optimally within pseudophakic eyes. The accuracy of these methods was determined by comparing results obtained with them to results obtained by raytracing through a model eye. Accuracy was good for longitudinal displacement and tilting, and reasonable for transverse displacement. Refractive errors are related linearly to the magnitude of the longitudinal displacement, and are related to the square of the magnitude of tilt or transverse displacement. The refractive error upon transverse displacement is quadratically dependent upon lens shape. PMID:2717142

  8. A SEASAT SASS simulation experiment to quantify the errors related to a + or - 3 hour intermittent assimilation technique

    NASA Technical Reports Server (NTRS)

    Sylvester, W. B.

    1984-01-01

    A series of SEASAT repeat orbits over a sequence of best low center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the ± 3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error suggests that by utilizing the ± 3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect of blending two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.

  9. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  10. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  11. Navigation Accuracy Guidelines for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2004-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
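    The link between a semi-major axis difference and along-track drift can be made concrete with first-order two-body relations, in the preliminary-design spirit of the guidelines. The numerical example is ours, not from the paper.

    ```python
    import math

    MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3/s^2

    def along_track_drift_per_day(a_km, delta_a_km):
        """Along-track drift rate (km/day) between two formation members whose
        semi-major axes differ by delta_a_km (first-order two-body estimate,
        no perturbations)."""
        n = math.sqrt(MU_EARTH / a_km ** 3)   # mean motion, rad/s
        dn = 1.5 * n / a_km * delta_a_km      # differential mean motion magnitude
        return dn * a_km * 86400.0            # rad/s -> km of arc per day

    # e.g. a 10 m semi-major-axis difference in low Earth orbit (a ~ 6778 km)
    print(along_track_drift_per_day(6778.0, 0.010))  # roughly 1.5 km/day
    ```

    Inverting the same relation answers the converse question the paper poses: a tolerated drift rate fixes the semi-major axis accuracy, and hence the relative navigation accuracy, required.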

  12. Female Genital Mutilation in Sierra Leone: Forms, Reliability of Reported Status, and Accuracy of Related Demographic and Health Survey Questions

    PubMed Central

    Grant, Donald S.; Berggren, Vanja

    2013-01-01

    Objective. To determine forms of female genital mutilation (FGM), assess consistency between self-reported and observed FGM status, and assess the accuracy of Demographic and Health Surveys (DHS) FGM questions in Sierra Leone. Methods. This cross-sectional study, conducted between October 2010 and April 2012, enrolled 558 females aged 12–47 from eleven antenatal clinics in northeast Sierra Leone. Data on demography, FGM status, and self-reported anatomical descriptions were collected. Genital inspection confirmed the occurrence and extent of cutting. Results. All participants reported FGM status; 4 refused genital inspection. Using the WHO classification of FGM, 31.7% had type Ib; 64.1% type IIb; and 4.2% type IIc. There was a high level of agreement between reported and observed FGM prevalence (81.2% and 81.4%, resp.). There was no correlation between DHS FGM responses and anatomic extent of cutting, as 2.7% reported pricking; 87.1% flesh removal; and 1.1% that genitalia was sewn closed. Conclusion. Types I and II are the main forms of FGM, with labia majora alterations in almost 5% of cases. Self-reports on FGM status could serve as a proxy measurement for FGM prevalence but not for FGM type. The DHS FGM questions are inaccurate for determining cutting extent. PMID:24204384

  13. Female genital mutilation in sierra leone: forms, reliability of reported status, and accuracy of related demographic and health survey questions.

    PubMed

    Bjälkander, Owolabi; Grant, Donald S; Berggren, Vanja; Bathija, Heli; Almroth, Lars

    2013-01-01

    Objective. To determine forms of female genital mutilation (FGM), assess consistency between self-reported and observed FGM status, and assess the accuracy of Demographic and Health Surveys (DHS) FGM questions in Sierra Leone. Methods. This cross-sectional study, conducted between October 2010 and April 2012, enrolled 558 females aged 12-47 from eleven antenatal clinics in northeast Sierra Leone. Data on demography, FGM status, and self-reported anatomical descriptions were collected. Genital inspection confirmed the occurrence and extent of cutting. Results. All participants reported FGM status; 4 refused genital inspection. Using the WHO classification of FGM, 31.7% had type Ib; 64.1% type IIb; and 4.2% type IIc. There was a high level of agreement between reported and observed FGM prevalence (81.2% and 81.4%, resp.). There was no correlation between DHS FGM responses and anatomic extent of cutting, as 2.7% reported pricking; 87.1% flesh removal; and 1.1% that genitalia was sewn closed. Conclusion. Types I and II are the main forms of FGM, with labia majora alterations in almost 5% of cases. Self-reports on FGM status could serve as a proxy measurement for FGM prevalence but not for FGM type. The DHS FGM questions are inaccurate for determining cutting extent. PMID:24204384

  14. Calibration for the errors resulted from aberration in long focal length measurement

    NASA Astrophysics Data System (ADS)

    Yao, Jiang; Luo, Jia; He, Fan; Bai, Jian; Wang, Kaiwei; Hou, Xiyun; Hou, Changlun

    2014-09-01

    In this paper, a high-accuracy method for calibrating the errors caused by aberration in long focal length measurement is presented. Generally, the Gaussian equation is used for the calculation without considering the errors caused by aberration. However, these errors are the key factor limiting accuracy in the measurement system of a large-aperture, long-focal-length lens. We introduce an effective way to calibrate the errors, with detailed analysis of the long focal length measurement based on divergent light and Talbot interferometry. Aberration errors are simulated in Zemax. We then achieve auto-correction with Visual C++ software, and the experimental results reveal that the relative accuracy is better than 0.01%. By comparing the corrected values with experimental results obtained in knife-edge testing, the proposed method is shown to be highly effective and reliable.

  15. High accuracy radiation efficiency measurement techniques

    NASA Technical Reports Server (NTRS)

    Kozakoff, D. J.; Schuchardt, J. M.

    1981-01-01

    The relatively large antenna subarrays (tens of meters) to be used in the Solar Power Satellite, and the desire to accurately quantify antenna performance, dictate the requirement for specialized measurement techniques. The error contributors associated with both far-field and near-field antenna measurement concepts were quantified. As a result, instrumentation configurations with measurement accuracy potential were identified. In every case, advances in the state of the art of associated electronics were found to be required. Relative cost trade-offs between a candidate far-field elevated antenna range and near-field facility were also performed.

  16. SU-E-J-19: Accuracy of Dual-Energy CT-Derived Relative Electron Density for Proton Therapy Dose Calculation

    SciTech Connect

    Mullins, J; Duan, X; Kruse, J; Herman, M; Bues, M

    2014-06-01

    Purpose: To determine the suitability of dual-energy CT (DECT) to calculate relative electron density (RED) of tissues for accurate proton therapy dose calculation. Methods: DECT images of RED tissue surrogates were acquired at 80 and 140 kVp. Samples (RED=0.19−2.41) were imaged in a water-equivalent phantom in a variety of configurations. REDs were calculated using the DECT numbers and inputs of the high and low energy spectral weightings. DECT-derived RED was compared between geometric configurations and for variations in the spectral inputs to assess the sensitivity of RED accuracy versus expected values. Results: RED accuracy was dependent on accurate spectral input influenced by phantom thickness and radius from the phantom center. Material samples located at the center of the phantom generally showed the best agreement to reference RED values, but only when attenuation of the surrounding phantom thickness was accounted for in the calculation spectra. Calculated RED changed by up to 10% for some materials when the sample was located at an 11 cm radius from the phantom center. Calculated REDs under the best conditions still differed from reference values by up to 5% in bone and 14% in lung. Conclusion: DECT has previously been used to differentiate tissue types based on RED and Z for binary tissue-type segmentation. To improve upon the current standard of empirical conversion of CT number to RED for treatment planning dose calculation, DECT methods must be able to calculate RED to better than 3% accuracy throughout the image. The DECT method is sensitive to the accuracy of spectral inputs used for calculation, as well as to spatial position in the anatomy. Effort to address adjustments to the spectral calculation inputs based on position and phantom attenuation will be required before DECT-determined RED can achieve a consistent level of accuracy for application in dose calculation.

  17. Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

    NASA Astrophysics Data System (ADS)

    Styers, Diane M.; Moskal, L. Monika; Richardson, Jeffrey J.; Halabisky, Meghan A.

    2014-01-01

    Object-based image analysis (OBIA) is becoming an increasingly common method for producing land use/land cover (LULC) classifications in urban areas. In order to produce the most accurate LULC map, LiDAR data and postclassification procedures are often employed, but their relative contributions to accuracy are unclear. We examined the contribution of LiDAR data and postclassification procedures to increase classification accuracies over using imagery alone and assessed sources of error along an ecologically complex urban-to-rural gradient in Olympia, Washington. Overall classification accuracy and user's and producer's accuracies for individual classes were evaluated. The addition of LiDAR data to the OBIA classification resulted in an 8.34% increase in overall accuracy, while manual postclassification to the imagery+LiDAR classification improved accuracy only an additional 1%. Sources of error in this classification were largely due to edge effects, from which multiple different types of errors result.

  18. Flow measurement by cardiovascular magnetic resonance: a multi-centre multi-vendor study of background phase offset errors that can compromise the accuracy of derived regurgitant or shunt flow measurements

    PubMed Central

    2010-01-01

    Aims Cardiovascular magnetic resonance (CMR) allows non-invasive phase contrast measurements of flow through planes transecting large vessels. However, some clinically valuable applications are highly sensitive to errors caused by small offsets of measured velocities if these are not adequately corrected, for example by the use of static tissue or static phantom correction of the offset error. We studied the severity of uncorrected velocity offset errors across sites and CMR systems. Methods and Results In a multi-centre, multi-vendor study, breath-hold through-plane retrospectively ECG-gated phase contrast acquisitions, as are used clinically for aortic and pulmonary flow measurement, were applied to static gelatin phantoms in twelve 1.5 T CMR systems, using a velocity encoding range of 150 cm/s. No post-processing corrections of offsets were implemented. The greatest uncorrected velocity offset, taken as an average over a 'great vessel' region (30 mm diameter) located up to 70 mm in-plane distance from the magnet isocenter, ranged from 0.4 cm/s to 4.9 cm/s. It averaged 2.7 cm/s over all the planes and systems. By theoretical calculation, a velocity offset error of 0.6 cm/s (representing just 0.4% of a 150 cm/s velocity encoding range) is barely acceptable, potentially causing about 5% miscalculation of cardiac output and up to 10% error in shunt measurement. Conclusion In the absence of hardware or software upgrades able to reduce phase offset errors, all the systems tested appeared to require post-acquisition correction to achieve consistently reliable breath-hold measurements of flow. The effectiveness of offset correction software will still need testing with respect to clinical flow acquisitions. PMID:20074359
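    The sensitivity claim above (a velocity offset of about 0.6 cm/s causing roughly 5% cardiac-output error) can be reproduced arithmetically. The vessel diameter and cardiac output below are typical illustrative values, not data from the study.

    ```python
    import math

    def flow_offset_error(offset_cm_s, vessel_diameter_mm, true_flow_l_min):
        """Fractional flow error caused by a uniform velocity offset over the
        vessel cross-section (spurious flow = offset * area)."""
        area_cm2 = math.pi * (vessel_diameter_mm / 10.0 / 2.0) ** 2
        spurious_flow_l_min = offset_cm_s * area_cm2 * 60.0 / 1000.0  # cm^3/s -> L/min
        return spurious_flow_l_min / true_flow_l_min

    # 0.6 cm/s offset over a 30 mm aorta against a 5 L/min cardiac output
    print(flow_offset_error(0.6, 30.0, 5.0))  # about 0.05, i.e. ~5% error
    ```

    The same arithmetic shows why shunt and regurgitant fractions, which difference two such flows, are roughly twice as sensitive to the offset.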

  19. TU-C-BRE-07: Quantifying the Clinical Impact of VMAT Delivery Errors Relative to Prior Patients’ Plans and Adjusted for Anatomical Differences

    SciTech Connect

    Stanhope, C; Wu, Q; Yuan, L; Liu, J; Hood, R; Yin, F; Adamson, J

    2014-06-15

    -arc VMAT plans for low-risk prostate are relatively insensitive to many potential delivery errors.

  20. The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey

    NASA Astrophysics Data System (ADS)

    Ji, X.; Niu, X.

    2014-04-01

    With the widespread national survey of geographic conditions, object-based data has become the most common data organization pattern in land cover research. Assessing the accuracy of object-based land cover data bears on many stages of data production, such as the efficiency of in-house production and the quality of the final land cover data. There is therefore considerable demand for accuracy assessment of object-based classification maps. Traditional approaches to accuracy assessment in surveying and mapping were not designed for land cover data, so the accuracy assessment methods of imagery classification must be employed. However, traditional pixel-based accuracy assessment methods are inadequate for this purpose. Our improved measures are based on the error matrix but use objects as sample units, because pixel sample units are not suitable for assessing the accuracy of an object-based classification result. Compared to pixel samples, object samples are no longer uniform in size. To make the indexes derived from the error matrix reliable, we use the areas of the object samples as weights when establishing the error matrix of the object-based image classification map. We compare two error matrices, one built from the number of object samples and one from the sum of their areas. The error matrix weighted by object area proves to be an intuitive, useful technique for reflecting the actual accuracy of an object-based imagery classification result.
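    The area-weighted error matrix described above can be sketched as follows; the class names and areas are invented for illustration.

    ```python
    def area_weighted_error_matrix(samples, classes):
        """Build an error matrix (rows = mapped class, columns = reference
        class) where each object sample contributes its area rather than a
        unit count. samples: iterable of (mapped, reference, area)."""
        idx = {c: i for i, c in enumerate(classes)}
        m = [[0.0] * len(classes) for _ in classes]
        for mapped, ref, area in samples:
            m[idx[mapped]][idx[ref]] += area
        return m

    def overall_accuracy(matrix):
        """Diagonal (correctly classified) area as a fraction of total area."""
        total = sum(sum(row) for row in matrix)
        correct = sum(matrix[i][i] for i in range(len(matrix)))
        return correct / total

    samples = [("forest", "forest", 120.0), ("water", "water", 40.0),
               ("forest", "water", 10.0), ("urban", "urban", 30.0)]
    m = area_weighted_error_matrix(samples, ["forest", "water", "urban"])
    print(overall_accuracy(m))  # 190/200 = 0.95
    ```

    A count-based matrix would score the same samples 3/4 = 0.75, illustrating how weighting by area changes the accuracy estimate when object sizes differ.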

  1. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, the errors are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. 
Due to the proprietary nature of the projects we are
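A heavily simplified, one-dimensional sketch of the distance-based fitting idea (all numbers hypothetical): model the positioning error as a pure scale error e(x) = a·x, "measure" distances from a base at the origin to several commanded positions, and recover a by least squares. The actual method fits a full kinematic error model to Laser Ball Bar distances with a nonlinear solver.

```python
# Toy 1D version of fitting a machine error model to distance measurements.
# Here the model d_i = x_i * (1 + a) is linear in the unknown scale error a,
# so least squares has a closed form; real volumetric mapping is nonlinear.
commanded = [100.0, 200.0, 300.0, 400.0]           # commanded positions, mm
true_scale_error = 5e-5                             # 50 ppm, made up
measured = [x * (1.0 + true_scale_error) for x in commanded]

# residual of the distance model: d_i - x_i = a * x_i  ->  solve for a
num = sum((d - x) * x for d, x in zip(measured, commanded))
den = sum(x * x for x in commanded)
a_fit = num / den
print(abs(a_fit - true_scale_error) < 1e-12)  # recovered to machine precision
```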

  2. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  3. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  4. On-ward participation of a hospital pharmacist in a Dutch intensive care unit reduces prescribing errors and related patient harm: an intervention study

    PubMed Central

    2010-01-01

    Introduction Patients admitted to an intensive care unit (ICU) are at high risk for prescribing errors and related adverse drug events (ADEs). An effective intervention to decrease this risk, based on studies conducted mainly in North America, is on-ward participation of a clinical pharmacist in an ICU team. As the Dutch Healthcare System is organized differently and the on-ward role of hospital pharmacists in Dutch ICU teams is not well established, we conducted an intervention study to investigate whether participation of a hospital pharmacist can also be an effective approach in reducing prescribing errors and related patient harm (preventable ADEs) in this specific setting. Methods A prospective study compared a baseline period with an intervention period. During the intervention period, an ICU hospital pharmacist reviewed medication orders for patients admitted to the ICU, noted issues related to prescribing, formulated recommendations and discussed those during patient review meetings with the attending ICU physicians. Prescribing issues were scored as prescribing errors when consensus was reached between the ICU hospital pharmacist and ICU physicians. Results During the 8.5-month study period, medication orders for 1,173 patients were reviewed. The ICU hospital pharmacist made a total of 659 recommendations. During the intervention period, the rate of consensus between the ICU hospital pharmacist and ICU physicians was 74%. The incidence of prescribing errors during the intervention period was significantly lower than during the baseline period: 62.5 per 1,000 monitored patient-days versus 190.5 per 1,000 monitored patient-days, respectively (P < 0.001). Preventable ADEs (patient harm, National Coordinating Council for Medication Error Reporting and Prevention severity categories E and F) were reduced from 4.0 per 1,000 monitored patient-days during the baseline period to 1.0 per 1,000 monitored patient-days during the intervention period (P = 0.25). Per
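The incidence rates quoted above are events per 1,000 monitored patient-days; the arithmetic can be sketched as follows (the event and patient-day counts are made up, chosen only to reproduce the reported rates):

```python
# Incidence-rate arithmetic for the prescribing-error study above:
# errors per 1,000 monitored patient-days, and the relative reduction
# between the baseline and intervention periods.
def rate_per_1000(events, patient_days):
    return events * 1000.0 / patient_days

baseline = rate_per_1000(381, 2000)      # hypothetical counts -> 190.5
intervention = rate_per_1000(125, 2000)  # hypothetical counts -> 62.5
reduction = (baseline - intervention) / baseline
print(baseline, intervention, round(reduction, 3))
```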

  5. Moderation of the Relationship Between Reward Expectancy and Prediction Error-Related Ventral Striatal Reactivity by Anhedonia in Unmedicated Major Depressive Disorder: Findings From the EMBARC Study

    PubMed Central

    Greenberg, Tsafrir; Chase, Henry W.; Almeida, Jorge R.; Stiffler, Richelle; Zevallos, Carlos R.; Aslam, Haris A.; Deckersbach, Thilo; Weyandt, Sarah; Cooper, Crystal; Toups, Marisa; Carmody, Thomas; Kurian, Benji; Peltier, Scott; Adams, Phillip; McInnis, Melvin G.; Oquendo, Maria A.; McGrath, Patrick J.; Fava, Maurizio; Weissman, Myrna; Parsey, Ramin; Trivedi, Madhukar H.; Phillips, Mary L.

    2016-01-01

    Objective Anhedonia, disrupted reward processing, is a core symptom of major depressive disorder. Recent findings demonstrate altered reward-related ventral striatal reactivity in depressed individuals, but the extent to which this is specific to anhedonia remains poorly understood. The authors examined the effect of anhedonia on reward expectancy (expected outcome value) and prediction error-(discrepancy between expected and actual outcome) related ventral striatal reactivity, as well as the relationship between these measures. Method A total of 148 unmedicated individuals with major depressive disorder and 31 healthy comparison individuals recruited for the multisite EMBARC (Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care) study underwent functional MRI during a well-validated reward task. Region of interest and whole-brain data were examined in the first- (N=78) and second- (N=70) recruited cohorts, as well as the total sample, of depressed individuals, and in healthy individuals. Results Healthy, but not depressed, individuals showed a significant inverse relationship between reward expectancy and prediction error-related right ventral striatal reactivity. Across all participants, and in depressed individuals only, greater anhedonia severity was associated with a reduced reward expectancy-prediction error inverse relationship, even after controlling for other symptoms. Conclusions The normal reward expectancy and prediction error-related ventral striatal reactivity inverse relationship concords with conditioning models, predicting a shift in ventral striatal responding from reward outcomes to reward cues. This study shows, for the first time, an absence of this relationship in two cohorts of unmedicated depressed individuals and a moderation of this relationship by anhedonia, suggesting reduced reward-contingency learning with greater anhedonia. 
These findings help elucidate neural mechanisms of anhedonia, as a step toward

  6. Automated correction of genome sequence errors

    PubMed Central

    Gajer, Pawel; Schatz, Michael; Salzberg, Steven L.

    2004-01-01

    By using information from an assembly of a genome, a new program called AutoEditor significantly improves base calling accuracy over that achieved by previous algorithms. This in turn improves the overall accuracy of genome sequences and facilitates the use of these sequences for polymorphism discovery. We describe the algorithm and its application in a large set of recent genome sequencing projects. The number of erroneous base calls in these projects was reduced by 80%. In an analysis of over one million corrections, we found that AutoEditor made just one error per 8828 corrections. By substantially increasing the accuracy of base calling, AutoEditor can dramatically accelerate the process of finishing genomes, which involves closing all gaps and ensuring minimum quality standards for the final sequence. It also greatly improves our ability to discover single nucleotide polymorphisms (SNPs) between closely related strains and isolates of the same species. PMID:14744981

  7. Intracortical Posterior Cingulate Myelin Content Relates to Error Processing: Results from T1- and T2-Weighted MRI Myelin Mapping and Electrophysiology in Healthy Adults.

    PubMed

    Grydeland, Håkon; Westlye, Lars T; Walhovd, Kristine B; Fjell, Anders M

    2016-06-01

    Myelin content of the cerebral cortex likely impacts cognitive functioning, but this notion has scarcely been investigated in vivo in humans. Here we tested for a relationship between intracortical myelin and a direct measure of neural activity in the form of the electrophysiological response error-related negativity (ERN). Using magnetic resonance imaging, myelin mapping was performed in 81 healthy adults aged 40-60 years by means of a T1- and T2-weighted (T1w/T2w) signal intensity ratio approach. Error trials on a version of the Eriksen flanker task triggered the ERN, a negative deflection of the event-related potential reflecting performance monitoring. Compelling evidence from neuroimaging, lesion, and source localization studies indicates that the ERN stems from the cingulate cortex. Vertex-wise analyses across the cingulate demonstrated that increased amplitude of the ERN was related to higher levels of intracortical myelin in the left posterior cingulate cortex. The association was independent of general ability level and subjacent white matter myelin. The results fit the notion that degree of myelin within the posterior cingulate cortex as measured by T1w/T2w signal intensity plays a role in error processing and cognitive control through the relationship with neural activity as measured by ERN amplitude, potentially by facilitating local neural synchronization. PMID:25840423

  8. Quality and accuracy of publicly accessible cancer-related physical activity information on the Internet: a cross-sectional assessment.

    PubMed

    Buote, R D; Malone, S D; Bélanger, L J; McGowan, E L

    2016-09-01

    In this study, we assessed the quality of publicly available cancer-related physical activity (PA) information appearing on reputable sites from Canada and other English-speaking countries. A cross-sectional Internet search was conducted on select countries (Canada, USA, Australia, New Zealand, UK) using Google to generate top 50 results per country for the keywords "'physical activity' AND 'cancer'". Top results were assessed for quality of PA information based on a coding frame. Additional searches were performed for Canadian-based sites to produce an exhaustive list. Results found that many sites offered cancer-related PA information (94.5%), but rarely defined PA (25.2%). Top 50 results from each country did not differ on any indicator examined. The exhaustive list of Canadian sites found that many sites gave information about PA for survivorship (78.3%) and prevention (70.0%), but rarely defined (6.7%) or referenced PA guidelines (28.3%). Cancer-related PA information is plentiful on the Internet but the quality needs improvement. Sites should do more than mention PA; they should provide definitions, examples and guidelines. With improvements, these websites would enable healthcare providers to effectively educate their patients about PA, and serve as a valuable resource to the general public who may be seeking cancer-related PA information. PMID:27283004

  9. Medial Prefrontal Functional Connectivity--Relation to Memory Self-Appraisal Accuracy in Older Adults with and without Memory Disorders

    ERIC Educational Resources Information Center

    Ries, Michele L.; McLaren, Donald G.; Bendlin, Barbara B.; Xu, Guofan; Rowley, Howard A.; Birn, Rasmus; Kastman, Erik K.; Sager, Mark A.; Asthana, Sanjay; Johnson, Sterling C.

    2012-01-01

    It is tentatively estimated that 25% of people with early Alzheimer's disease (AD) show impaired awareness of disease-related changes in their own cognition. Research examining both normative self-awareness and altered awareness resulting from brain disease or injury points to the central role of the medial prefrontal cortex (MPFC) in generating…

  10. Prospective Relations among Fearful Temperament, Protective Parenting, and Social Withdrawal: The Role of Maternal Accuracy in a Moderated Mediation Framework

    ERIC Educational Resources Information Center

    Kiel, Elizabeth J.; Buss, Kristin A.

    2011-01-01

    Early social withdrawal and protective parenting predict a host of negative outcomes, warranting examination of their development. Mothers' accurate anticipation of their toddlers' fearfulness may facilitate transactional relations between toddler fearful temperament and protective parenting, leading to these outcomes. Currently, we followed 93…

  11. Determination of GPS orbits to submeter accuracy

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Lichten, S. M.; Katsigris, E. C.

    1988-01-01

Orbits for satellites of the Global Positioning System (GPS) were determined with submeter accuracy. Tests used to assess orbital accuracy include orbit comparisons from independent data sets, orbit prediction, ground baseline determination, and formal errors. One satellite tracked 8 hours each day shows rms error below 1 m even when predicted more than 3 days outside of a 1-week data arc. Differential tracking of the GPS satellites in high Earth orbit provides a powerful relative positioning capability, even when a relatively small continental U.S. fiducial tracking network is used with less than one-third of the full GPS constellation. To demonstrate this capability, baselines of up to 2000 km in North America were also determined with the GPS orbits. The 2000 km baselines show rms daily repeatability of 0.3 to 2 parts in 10 to the 8th power and agree with very long baseline interferometry (VLBI) solutions at the level of 1.5 parts in 10 to the 8th power. This GPS demonstration provides an opportunity to test different techniques for high-accuracy orbit determination for high Earth orbiters. The best GPS orbit strategies included data arcs of at least 1 week, process noise models for tropospheric fluctuations, estimation of GPS solar pressure coefficients, and combined processing of GPS carrier phase and pseudorange data. For data arcs of 2 weeks, constrained process noise models for GPS dynamic parameters significantly improved the results.
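The quoted baseline repeatability can be made concrete with a quick unit conversion: 0.3 to 2 parts in 10^8 of a 2000 km baseline corresponds to millimetre-to-centimetre-level scatter.

```python
# Convert "parts in 10^8" of a 2000 km baseline into millimetres,
# using the repeatability range quoted in the abstract above.
baseline_km = 2000.0
baseline_mm = baseline_km * 1e6          # 1 km = 1e6 mm

for parts_in_1e8 in (0.3, 2.0):
    repeatability_mm = baseline_mm * parts_in_1e8 * 1e-8
    print(round(repeatability_mm, 1))    # 6.0 mm and 40.0 mm
```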

  12. Maternal Expectations for Toddlers’ Reactions to Novelty: Relations of Maternal Internalizing Symptoms and Parenting Dimensions to Expectations and Accuracy of Expectations

    PubMed Central

    Kiel, Elizabeth J.; Buss, Kristin A.

    2010-01-01

    SYNOPSIS Objective Although maternal internalizing symptoms and parenting dimensions have been linked to reports and perceptions of children’s behavior, it remains relatively unknown whether these characteristics relate to expectations or the accuracy of expectations for toddlers’ responses to novel situations. Design A community sample of 117 mother-toddler dyads participated in a laboratory visit and questionnaire completion. At the laboratory, mothers were interviewed about their expectations for their toddlers’ behaviors in a variety of novel tasks; toddlers then participated in these activities, and trained coders scored their behaviors. Mothers completed questionnaires assessing demographics, depressive and worry symptoms, and parenting dimensions. Results Mothers who reported more worry expected their toddlers to display more fearful behavior during the laboratory tasks, but worry did not moderate how accurately maternal expectations predicted toddlers’ observed behavior. When also reporting a low level of authoritative-responsive parenting, maternal depressive symptoms moderated the association between maternal expectations and observed toddler behavior, such that, as depressive symptoms increased, maternal expectations related less strongly to toddler behavior. Conclusions When mothers were asked about their expectations for their toddlers’ behavior in the same novel situations from which experimenters observe this behavior, symptoms and parenting had minimal effect on the accuracy of mothers’ expectations. When in the context of low authoritative-responsive parenting, however, depressive symptoms related to less accurate predictions of their toddlers’ fearful behavior. PMID:21037974

  13. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly in the biomechanics community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions, including 2-minute motion trials (2MT) and 12-minute multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and over time can be attributed in part to the dynamic estimation error, but also, and foremost, to the ability of AHRS units to locate the same inertial frame. Conclusions Mean accuracies obtained under the gimbal table's sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their

  14. Pre-Departure Clearance (PDC): An Analysis of Aviation Safety Reporting System Reports Concerning PDC Related Errors

    NASA Technical Reports Server (NTRS)

    Montalyo, Michael L.; Lebacqz, J. Victor (Technical Monitor)

    1994-01-01

Airlines operating in the United States are required to operate under instrument flight rules (IFR). Typically, a clearance is issued via voice transmission from clearance delivery at the departing airport. In 1990, the Federal Aviation Administration (FAA) began deployment of the Pre-Departure Clearance (PDC) system at 30 U.S. airports. The PDC system utilizes aeronautical datalink and the Aircraft Communication and Reporting System (ACARS) to transmit departure clearances directly to the pilot. An objective of the PDC system is to provide an immediate reduction in voice congestion over the clearance delivery frequency. Participating airports report that this objective has been met. However, preliminary analysis of 42 Aviation Safety Reporting System (ASRS) reports has revealed problems in PDC procedures and formatting which have caused errors in the proper execution of the clearance. It must be acknowledged that this technology, along with other advancements on the flightdeck, is adding more responsibility to the crew and increasing the opportunity for error. The present study uses these findings as a basis for further coding and analysis of an additional 82 reports obtained from an ASRS database search. These reports indicate that clearances are often amended or exceptions are added in order to accommodate local ATC facilities. However, the onboard ACARS is limited in its ability to emphasize or highlight these changes, which has resulted in altitude and heading deviations along with increases in ATC workload. Furthermore, few participating airports require any type of PDC receipt confirmation. In fact, 35% of all ASRS reports dealing with PDCs involve failure to acquire the PDC at all. Consequently, this study examines pilots' suggestions contained in ASRS reports in order to develop recommendations to airlines and ATC facilities to help reduce the number of incidents that occur.

  15. Accuracy of Self-Reported Screening Mammography Use: Examining Recall among Female Relatives from the Ontario Site of the Breast Cancer Family Registry

    PubMed Central

    Walker, Meghan J.; Chiarelli, Anna M.; Glendon, Gord; Ritvo, Paul; Andrulis, Irene L.; Knight, Julia A.

    2013-01-01

    Evidence of the accuracy of self-reported mammography use among women with familial breast cancer risk is limited. This study examined the accuracy of self-reported screening mammography dates in a cohort of 1,114 female relatives of breast cancer cases, aged 26 to 73 from the Ontario site of the Breast Cancer Family Registry. Self-reported dates were compared to dates abstracted from imaging reports. Associations between inaccurate recall and subject characteristics were assessed using multinomial regression. Almost all women (95.2% at baseline, 98.5% at year 1, 99.8% at year 2) accurately reported their mammogram use within the previous 12 months. Women at low familial risk (OR = 1.77, 95% CI: 1.00–3.13), who reported 1 or fewer annual visits to a health professional (OR = 1.97, 95% CI: 1.15, 3.39), exhibited a lower perceived breast cancer risk (OR = 1.90, 95% CI: 1.15, 3.15), and reported a mammogram date more than 12 months previous (OR = 5.22, 95% CI: 3.10, 8.80), were significantly more likely to inaccurately recall their mammogram date. Women with varying levels of familial risk are accurate reporters of their mammogram use. These results present the first evidence of self-reported mammography recall accuracy among women with varying levels of familial risk. PMID:23984098

  16. The slider motion error analysis by positive solution method in parallel mechanism

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoqing; Zhang, Lisong; Zhu, Liang; Yang, Wenguo; Hu, Penghao

    2016-01-01

The motion error of the slider plays a key role in the performance of a 3-PUU parallel coordinate measuring machine (CMM) and influences the CMM accuracy, which has attracted much expert attention worldwide. Generally, the analysis method is based on the view of spatial 6-DOF motion. Here, a new analysis method is provided. First, the structural relation of the slider and guideway is abstracted as a 4-bar parallel mechanism, so the slider can be considered a moving platform in a parallel kinematic mechanism (PKM), and its motion error analysis is transferred to the position analysis of the moving platform in the PKM. Then, after establishing the positive and negative solutions, existing theory and techniques for PKMs can be applied to analyze the slider's straightness motion error and angular motion error simultaneously. Third, experiments with an autocollimator are carried out to capture the original error data of the guideway itself; the data are described as a straightness-error function by fitting a curvilinear equation. Finally, the straightness errors of the two guideways are treated as variations of rod length in the parallel mechanism, and the slider's straightness error and angular error are obtained by substituting the data into the established model. The calculated result is generally consistent with the experimental result. The idea will be beneficial for accuracy calibration and error correction of 3-PUU CMMs and also provides a new way to analyze the kinematic error of guideways in precision machine tools and precision instruments.
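The final step, treating guideway straightness errors as rod-length variations, can be illustrated with a drastically simplified planar sketch (the geometry and error values below are hypothetical): two supports a fixed span apart, whose height errors yield the slider's lateral and angular error.

```python
# Simplified 2D sketch of the idea above: treat the straightness errors of
# the two guideways as variations in the lengths of two parallel "rods"
# supporting the slider, then recover the slider's lateral error (mean of
# the two) and angular error (difference over the span).
import math

span_mm = 100.0                    # distance between guideway contacts, made up
guideway_err_mm = (0.004, 0.010)   # hypothetical straightness errors here

lateral_error_mm = sum(guideway_err_mm) / 2.0
angular_error_rad = math.atan((guideway_err_mm[1] - guideway_err_mm[0]) / span_mm)
print(round(lateral_error_mm, 4), round(angular_error_rad * 1e6, 1))  # mm, urad
```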

  17. An Implicit Measure of Associations with Mental Illness versus Physical Illness: Response Latency Decomposition and Stimuli Differential Functioning in Relation to IAT Order of Associative Conditions and Accuracy

    PubMed Central

    Mannarini, Stefania; Boffo, Marilisa

    2014-01-01

    The present study aimed at the definition of a latent measurement dimension underlying an implicit measure of automatic associations between the concept of mental illness and the psychosocial and biogenetic causal explanatory attributes. To this end, an Implicit Association Test (IAT) assessing the association between the Mental Illness and Physical Illness target categories to the Psychological and Biologic attribute categories, representative of the causal explanation domains, was developed. The IAT presented 22 stimuli (words and pictures) to be categorized into the four categories. After 360 university students completed the IAT, a Many-Facet Rasch Measurement (MFRM) modelling approach was applied. The model specified a person latency parameter and a stimulus latency parameter. Two additional parameters were introduced to denote the order of presentation of the task associative conditions and the general response accuracy. Beyond the overall definition of the latent measurement dimension, the MFRM was also applied to disentangle the effect of the task block order and the general response accuracy on the stimuli response latency. Further, the MFRM allowed detecting any differential functioning of each stimulus in relation to both block ordering and accuracy. The results evidenced: a) the existence of a latency measurement dimension underlying the Mental Illness versus Physical Illness - Implicit Association Test; b) significant effects of block order and accuracy on the overall latency; c) a differential functioning of specific stimuli. The results of the present study can contribute to a better understanding of the functioning of an implicit measure of semantic associations with mental illness and give a first blueprint for the examination of relevant issues in the development of an IAT. PMID:25000406

  18. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

Purpose: To quantify the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue-equivalent inserts of different sizes. Insert materials were selected to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons in the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1% for most examined tissues at beam energies below 300 MeV and imaging doses around 1 mGy. RSP map accuracy within 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  19. Abnormal error processing in depressive states: a translational examination in humans and rats.

    PubMed

    Beard, C; Donahue, R J; Dillon, D G; Van't Veer, A; Webber, C; Lee, J; Barrick, E; Hsu, K J; Foti, D; Carroll, F I; Carlezon, W A; Björgvinsson, T; Pizzagalli, D A

    2015-01-01

    Depression has been associated with poor performance following errors, but the clinical implications, response to treatment and neurobiological mechanisms of this post-error behavioral adjustment abnormality remain unclear. To fill this gap in knowledge, we tested depressed patients in a partial hospital setting before and after treatment (cognitive behavior therapy combined with medication) using a flanker task. To evaluate the translational relevance of this metric in rodents, we performed a secondary analysis on existing data from rats tested in the 5-choice serial reaction time task after treatment with corticotropin-releasing factor (CRF), a stress peptide that produces depressive-like signs in rodent models relevant to depression. In addition, to examine the effect of treatment on post-error behavior in rodents, we examined a second cohort of rodents treated with JDTic, a kappa-opioid receptor antagonist that produces antidepressant-like effects in laboratory animals. In depressed patients, baseline post-error accuracy was lower than post-correct accuracy, and, as expected, post-error accuracy improved with treatment. Moreover, baseline post-error accuracy predicted attentional control and rumination (but not depressive symptoms) after treatment. In rats, CRF significantly degraded post-error accuracy, but not post-correct accuracy, and this effect was attenuated by JDTic. Our findings demonstrate deficits in post-error accuracy in depressed patients, as well as a rodent model relevant to depression. These deficits respond to intervention in both species. Although post-error behavior predicted treatment-related changes in attentional control and rumination, a relationship to depressive symptoms remains to be demonstrated. PMID:25966364
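The post-error versus post-correct accuracy measure used in this study can be computed directly from a trial sequence, as in the following sketch (the trial data are made up): accuracy is tallied separately for trials that immediately follow an error and for trials that follow a correct response.

```python
# Illustrative computation (hypothetical trial data) of post-error vs.
# post-correct accuracy: compare accuracy on trials immediately following
# an error with accuracy on trials following a correct response.
trials = [1, 1, 0, 0, 1, 1, 1, 0, 0, 1]   # 1 = correct, 0 = error

post_error = [cur for prev, cur in zip(trials, trials[1:]) if prev == 0]
post_correct = [cur for prev, cur in zip(trials, trials[1:]) if prev == 1]

acc_post_error = sum(post_error) / len(post_error)
acc_post_correct = sum(post_correct) / len(post_correct)
print(acc_post_error, round(acc_post_correct, 3))  # 0.5 0.6
```

A post-error accuracy below post-correct accuracy, as in this toy sequence, is the deficit pattern the study reports for depressed patients at baseline.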

  20. Abnormal error processing in depressive states: a translational examination in humans and rats

    PubMed Central

    Beard, C; Donahue, R J; Dillon, D G; Van't Veer, A; Webber, C; Lee, J; Barrick, E; Hsu, K J; Foti, D; Carroll, F I; Carlezon Jr, W A; Björgvinsson, T; Pizzagalli, D A

    2015-01-01

    Depression has been associated with poor performance following errors, but the clinical implications, response to treatment and neurobiological mechanisms of this post-error behavioral adjustment abnormality remain unclear. To fill this gap in knowledge, we tested depressed patients in a partial hospital setting before and after treatment (cognitive behavior therapy combined with medication) using a flanker task. To evaluate the translational relevance of this metric in rodents, we performed a secondary analysis on existing data from rats tested in the 5-choice serial reaction time task after treatment with corticotropin-releasing factor (CRF), a stress peptide that produces depressive-like signs in rodent models relevant to depression. In addition, to examine the effect of treatment on post-error behavior in rodents, we examined a second cohort of rodents treated with JDTic, a kappa-opioid receptor antagonist that produces antidepressant-like effects in laboratory animals. In depressed patients, baseline post-error accuracy was lower than post-correct accuracy, and, as expected, post-error accuracy improved with treatment. Moreover, baseline post-error accuracy predicted attentional control and rumination (but not depressive symptoms) after treatment. In rats, CRF significantly degraded post-error accuracy, but not post-correct accuracy, and this effect was attenuated by JDTic. Our findings demonstrate deficits in post-error accuracy in depressed patients, as well as a rodent model relevant to depression. These deficits respond to intervention in both species. Although post-error behavior predicted treatment-related changes in attentional control and rumination, a relationship to depressive symptoms remains to be demonstrated. PMID:25966364

  1. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection

    NASA Astrophysics Data System (ADS)

    Buschang, Rebecca Ellen

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e., experimental condition). A pretest of content knowledge was administered, and then the participants in both conditions watched two hours of online videos about natural selection and attended different types of professional development sessions lasting four hours. The dependent variable measured teacher knowledge and skill related to evaluating student errors and understanding of natural selection. Significant differences between conditions in favor of the experimental condition were found on participant identification of critical elements of student understanding of natural selection and content knowledge related to natural selection. Results suggest that short-term professional development sessions focused on evaluating student errors and understanding can be effective at focusing a participant's evaluation of student work on particularly important elements of student understanding. Results have implications for understanding the types of knowledge necessary to effectively evaluate student work and for the design of professional development.

  2. A model for calculating the errors of 2D bulk analysis relative to the true 3D bulk composition of an object, with application to chondrules

    NASA Astrophysics Data System (ADS)

    Hezel, Dominik C.

    2007-09-01

    Certain problems in Geosciences require knowledge of the chemical bulk composition of objects such as, for example, minerals or lithic clasts. This 3D bulk chemical composition (bcc) is often difficult to obtain, but if the object is prepared as a thin or thick polished section, a 2D bcc can easily be determined using, for example, an electron microprobe. The 2D bcc contains an unknown error relative to the true 3D bcc. Here I present a computer program that calculates this error, which is represented as the standard deviation of the 2D bcc relative to the real 3D bcc. A requirement for such calculations is an approximate structure of the 3D object. In petrological applications, the known fabrics of rocks facilitate modeling. The size of the standard deviation depends on (1) the modal abundance of the phases, (2) the element concentration differences between phases and (3) the distribution of the phases, i.e., the homogeneity/heterogeneity of the object considered. A newly introduced parameter τ is used as a measure of this homogeneity/heterogeneity. Accessory phases, which do not necessarily appear in 2D thin sections, are a second source of error, in particular if they contain high concentrations of specific elements. An abundance of only 1 vol% of an accessory phase may raise the 3D bcc of an element by up to a factor of ~8. The code can be queried as to whether a broad beam, point, line or area analysis technique is best for obtaining 2D bcc. No general conclusion can be drawn, as the errors of these techniques depend on the specific structure of the object considered. As an example, chondrules (rapidly solidified melt droplets of chondritic meteorites) are used. It is demonstrated that 2D bcc may be used to reveal trends in the chemistry of 3D objects.
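
    The standard-deviation estimate described in this abstract can be illustrated with a small Monte Carlo experiment. The sketch below is not Hezel's program; it assumes a hypothetical two-phase object on a voxel grid, with made-up phase concentrations and modal abundance, and compares every 2D section's bulk composition against the true 3D value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-phase object: a matrix plus a minor phase, each with
# a made-up concentration of one element (wt%). Purely illustrative.
N = 64                                   # voxels per edge of the cubic object
conc = {"matrix": 1.0, "minor": 25.0}    # element concentration per phase
minor_fraction = 0.10                    # target modal abundance of minor phase

# Random (heterogeneous) phase distribution on the voxel grid.
is_minor = rng.random((N, N, N)) < minor_fraction
conc_3d = np.where(is_minor, conc["minor"], conc["matrix"])

true_bcc = conc_3d.mean()                # true 3D bulk composition

# Bulk composition of every possible 2D section (slices along z).
slice_bcc = conc_3d.mean(axis=(0, 1))    # one value per section

# Error of a 2D analysis relative to the true 3D value, as in the paper:
rel_err = (slice_bcc - true_bcc) / true_bcc
print(f"true 3D bcc: {true_bcc:.3f}")
print(f"std. dev. of 2D bcc relative to 3D: {rel_err.std():.3%}")
```

    Raising the concentration contrast between the phases, or lowering the modal abundance of the minor phase, widens the spread of the 2D values, in line with the dependencies the abstract lists.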

  3. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  4. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    SciTech Connect

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  5. Virtual and Actual: Relative Accuracy of On-Site and Web-based Instruments in Auditing the Environment for Physical Activity

    PubMed Central

    Ben-Joseph, Eran; Lee, Jae Seung; Cromley, Ellen K.; Laden, Francine; Troped, Philip J.

    2015-01-01

    Objectives To assess the relative accuracy and usefulness of web tools in evaluating and measuring street-scale built environment characteristics. Methods A well-known audit tool was used to evaluate 84 street segments at the urban edge of metropolitan Boston, Massachusetts, using on-site visits and three web-based tools. The assessments were compared to evaluate their relative accuracy and usefulness. Results Web-based audits, based on Google Maps, Google Street View, and MS Visual Oblique, tend to strongly agree with on-site audits on land-use and transportation characteristics (e.g., types of buildings, commercial destinations, and streets). However, the two approaches to conducting audits (web versus on-site) tend to agree only weakly on fine-grain, temporal, and qualitative environmental elements. Among the web tools used, auditors rated MS Visual Oblique as the most valuable. Yet Street View tends to be rated as the most useful in measuring fine-grain features, such as levelness and condition of sidewalks. Conclusion While web-based tools do not offer a perfect substitute for on-site audits, they allow for preliminary audits to be performed accurately from remote locations, potentially saving time and cost and increasing the effectiveness of subsequent on-site visits. PMID:23247423

  6. Accuracy of cloud liquid water path from ground-based microwave radiometry 2. Sensor accuracy and synergy

    NASA Astrophysics Data System (ADS)

    Crewell, Susanne; Löhnert, Ulrich

    2003-06-01

    The influence of microwave radiometer accuracy on retrieved cloud liquid water path (LWP) was investigated. Sensor accuracy was assumed to be the sum of the relative (i.e., Gaussian noise) and the absolute accuracies of brightness temperatures. When statistical algorithms are developed the assumed noise should be as close as possible to the real measurements in order to avoid artifacts in the retrieved LWP distribution. Typical offset errors of 1 K in brightness temperatures can produce mean LWP errors of more than 30 g m-2 for a two-channel radiometer retrieval, although positively correlated brightness temperature offsets in both channels reduce this error to 16 g m-2. Large improvements in LWP retrieval accuracy of about 50% can be achieved by adding a 90-GHz channel to the two-channel retrieval. The inclusion of additional measurements, like cloud base height from a lidar ceilometer and cloud base temperature from an infrared radiometer, is invaluable in detecting cloud free scenes allowing an indirect evaluation of LWP accuracy in clear sky cases. This method was used to evaluate LWP retrieval algorithms based on different gas absorption models. Using two months of measurements, the Liebe 93 model provided the best results when the 90-GHz channel was incorporated into the standard two-channel retrievals.
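
    The offset-error behavior described above can be reproduced with a toy linear retrieval. The coefficients below are illustrative assumptions, not the retrieval coefficients from this study; the point is only that a brightness-temperature offset propagates linearly into LWP, and that positively correlated offsets partially cancel when the channel coefficients have opposite signs:

```python
# Hypothetical linear two-channel retrieval, LWP = a0 + a1*Tb1 + a2*Tb2.
# The coefficients are invented for illustration (g m-2 per K).
a1, a2 = 80.0, -45.0

def lwp_error(dTb1, dTb2):
    """LWP bias produced by brightness-temperature offsets (K)."""
    return a1 * dTb1 + a2 * dTb2

# A 1 K offset in a single channel maps directly into a large LWP bias:
print(f"single-channel offset: {lwp_error(1.0, 0.0):+.0f} g m-2")
# Positively correlated offsets in both channels partially cancel,
# mirroring the reduction from ~30 to ~16 g m-2 reported above:
print(f"correlated offsets:    {lwp_error(1.0, 1.0):+.0f} g m-2")
```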

  7. The hidden KPI: registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually. PMID:21923052

  8. Investigation of Error Patterns in Geographical Databases

    NASA Technical Reports Server (NTRS)

    Dryer, David; Jacobs, Derya A.; Karayaz, Gamze; Gronbech, Chris; Jones, Denise R. (Technical Monitor)

    2002-01-01

    The objective of the research conducted in this project is to develop a methodology to investigate the accuracy of Airport Safety Modeling Data (ASMD) using statistical, visualization, and Artificial Neural Network (ANN) techniques. Such a methodology can contribute to answering the following research questions: Over a representative sampling of ASMD databases, can statistical error analysis techniques be accurately learned and replicated by ANN modeling techniques? This representative ASMD sample should include numerous airports and a variety of terrain characterizations. Is it possible to identify and automate the recognition of patterns of error related to geographical features? Do such patterns of error relate to specific geographical features, such as elevation or terrain slope? Is it possible to combine the errors in small regions into an error prediction for a larger region? What are the data density reduction implications of this work? ASMD may be used as the source of terrain data for a synthetic visual system to be used in the cockpit of aircraft when visual reference to ground features is not possible during conditions of marginal weather or reduced visibility. In this research, United States Geological Survey (USGS) digital elevation model (DEM) data has been selected as the benchmark. Artificial Neural Networks (ANNs) have been used and tested as alternate methods in place of the statistical methods in similar problems. They often perform better in pattern recognition, prediction, classification, and categorization problems. Many studies show that when the data is complex and noisy, the accuracy of ANN models is generally higher than that of comparable traditional methods.

  9. Combination of TOPEX/POSEIDON Data with a Hydrographic Inversion for Determination of the Oceanic General Circulation and its Relation to Geoid Accuracy

    NASA Technical Reports Server (NTRS)

    Ganachaud, Alexandre; Wunsch, Carl; Kim, Myung-Chan; Tapley, Byron

    1997-01-01

    A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical much better, future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.

  10. Accuracy of wind measurements using an airborne Doppler lidar

    NASA Technical Reports Server (NTRS)

    Carroll, J. J.

    1986-01-01

    Simulated wind fields and lidar data are used to evaluate two sources of airborne wind measurement error. The system is sensitive to ground speed and track angle errors, with accuracy required of the angle to within 0.2 degrees and of the speed to within 1 knot, if the recovered wind field is to be within five percent of the correct direction and 10 percent of the correct speed. It is found that errors in recovered wind speed and direction are dependent on wind direction relative to the flight path. Recovery of accurate wind fields in the presence of nonsimultaneous sampling errors requires that the lidar data be displaced to account for advection, so that the intersections are defined by air parcels rather than by fixed points in space.

  11. Enhanced error-related brain activity in children predicts the onset of anxiety disorders between the ages of 6 and 9

    PubMed Central

    Meyer, Alexandria; Proudfit, Greg Hajcak; Torpey-Newman, Dana C.; Kujawa, Autumn; Klein, Daniel N.

    2015-01-01

    Considering that anxiety disorders frequently begin before adulthood and often result in chronic impairment, it is important to characterize the developmental pathways leading to the onset of clinical anxiety. Identifying neural biomarkers that can predict the onset of anxiety in childhood may increase our understanding of the etiopathogenesis of anxiety, as well as inform intervention and prevention strategies. An event-related potential (ERP), the error-related negativity (ERN) has been proposed as a biomarker of risk for anxiety and has previously been associated with concurrent anxiety in both adults and children. However, no previous study has examined whether the ERN can predict the onset of anxiety disorders. In the current study, ERPs were recorded while 236 healthy children, approximately 6 years of age, performed a Go/No-Go task to measure the ERN. Three years later, children and parents came back to the lab and completed diagnostic interviews regarding anxiety disorder status. Results indicated that enhanced error-related brain activity at age 6 predicted the onset of new anxiety disorders by age 9, even when controlling for baseline anxiety symptoms and maternal history of anxiety. Considering the potential utility of identifying early biomarkers of risk, this is a novel and important extension of previous work. PMID:25643204

  12. Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Minnett, P. J.

    2014-12-01

    Sea Surface Temperature (SST) measured from satellites has played a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered the application that imposes the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap will act as another sampling error source. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled by using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
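
    The sampling error studied here reduces, for any one grid cell or region, to a difference of means: the mean of the cloud-free (observed) pixels minus the mean of the full reference field. A minimal sketch, assuming a synthetic SST field and an invented latitude-dependent cloud mask rather than the MODIS/AATSR masks used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth" SST field (deg C) with a smooth meridional gradient.
lat = np.linspace(-80, 80, 160)
sst = np.tile(28.0 - 0.3 * np.abs(lat)[:, None], (1, 360))

# Invented persistent cloud mask that preferentially hides high latitudes,
# loosely mimicking stratus occurrence (purely illustrative).
p_clear = np.clip(1.0 - np.abs(lat) / 100.0, 0.2, 1.0)[:, None]
clear = rng.random(sst.shape) < p_clear

# Sampling error of the (here: global) mean: mean of the observed
# clear-sky pixels minus the mean of the full reference field.
sampling_error = sst[clear].mean() - sst.mean()
print(f"sampling error: {sampling_error:+.2f} deg C")
```

    Because the mask hides the cold high latitudes, the clear-sky mean is biased warm, which is the mechanism behind the large high-latitude errors reported above.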

  13. A Probabilistic Model for Students' Errors and Misconceptions on the Structure of Matter in Relation to Three Cognitive Variables

    ERIC Educational Resources Information Center

    Tsitsipis, Georgios; Stamovlasis, Dimitrios; Papageorgiou, George

    2012-01-01

    In this study, the effect of 3 cognitive variables such as logical thinking, field dependence/field independence, and convergent/divergent thinking on some specific students' answers related to the particulate nature of matter was investigated by means of probabilistic models. Besides recording and tabulating the students' responses, a combination…

  14. Reading skill and neural processing accuracy improvement after a 3-hour intervention in preschoolers with difficulties in reading-related skills.

    PubMed

    Lovio, Riikka; Halttunen, Anu; Lyytinen, Heikki; Näätänen, Risto; Kujala, Teija

    2012-04-11

    This study aimed at determining whether an intervention game developed for strengthening phonological awareness has a remediating effect on reading skills and central auditory processing in 6-year-old preschool children with difficulties in reading-related skills. After only 3 hours of training, these children made greater progress in reading-related skills than did their matched controls, who did mathematical exercises in a comparable training format. Furthermore, the results suggest that this brief intervention might be beneficial in modulating the neural basis of phonetic discrimination, as an enhanced speech-elicited mismatch negativity (MMN) was seen in the intervention group, indicating improved cortical discrimination accuracy. Moreover, the amplitude increase of the vowel-elicited MMN significantly correlated with the improvement in some of the reading-skill-related test scores. The results, albeit obtained with a relatively small sample, are encouraging, suggesting that reading-related skills can be improved even by a very short intervention and that the training effects are reflected in brain activity. However, studies with larger samples and different subgroups of children are needed to confirm the present results and to determine how children with different dyslexia subtypes benefit from the intervention. PMID:22364735

  15. Predicting Hospitalization for Heat-Related Illness at the Census-Tract Level: Accuracy of a Generic Heat Vulnerability Index in Phoenix, Arizona (USA)

    PubMed Central

    Gober, Patricia

    2015-01-01

    Background Vulnerability mapping based on vulnerability indices is a pragmatic approach for highlighting the areas in a city where people are at the greatest risk of harm from heat, but the manner in which vulnerability is conceptualized influences the results. Objectives We tested a generic national heat-vulnerability index, based on a 10-variable indicator framework, using data on heat-related hospitalizations in Phoenix, Arizona. We also identified potential local risk factors not included in the generic indicators. Methods To evaluate the accuracy of the generic index in a city-specific context, we used factor scores, derived from a factor analysis using census tract–level characteristics, as independent variables, and heat hospitalizations (with census tracts categorized as zero-, moderate-, or high-incidence) as dependent variables in a multinomial logistic regression model. We also compared the geographical differences between a vulnerability map derived from the generic index and one derived from actual heat-related hospitalizations at the census-tract scale. Results We found that the national-indicator framework correctly classified just over half (54%) of census tracts in Phoenix. Compared with all census tracts, high-vulnerability tracts that were misclassified by the index as zero-vulnerability tracts had higher average income and higher proportions of residents with a duration of residency < 5 years. Conclusion The generic indicators of vulnerability are useful, but they are sensitive to scale, measurement, and context. Decision makers need to consider the characteristics of their cities to determine how closely vulnerability maps based on generic indicators reflect actual risk of harm. Citation Chuang WC, Gober P. 2015. Predicting hospitalization for heat-related illness at the census-tract level: accuracy of a generic heat vulnerability index in Phoenix, Arizona (USA). Environ Health Perspect 123:606–612; http://dx.doi.org/10.1289/ehp.1307868

  16. The Quantitative Relationship Between ISO 15197 Accuracy Criteria and Mean Absolute Relative Difference (MARD) in the Evaluation of Analytical Performance of Self-Monitoring of Blood Glucose (SMBG) Systems.

    PubMed

    Pardo, Scott; Simmons, David A

    2016-09-01

    The relationship between International Organization for Standardization (ISO) accuracy criteria and mean absolute relative difference (MARD), 2 methods for assessing the accuracy of blood glucose meters, is complex. While lower MARD values are generally better than higher MARD values, it is not possible to define a particular MARD value that ensures a blood glucose meter will satisfy the ISO accuracy criteria. The MARD value that ensures passing the ISO accuracy test can be described only as a probabilistic range. In this work, a Bayesian model is presented to represent the relationship between ISO accuracy criteria and MARD. Under the assumptions made in this work, there is nearly a 100% chance of satisfying ISO 15197:2013 accuracy requirements if the MARD value is between 3.25% and 5.25%. PMID:27118729
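
    For readers unfamiliar with the two metrics, a brief sketch may help. The MARD formula is the standard mean of absolute relative differences; the ISO 15197:2013 criterion encoded below (results within ±15 mg/dL of reference below 100 mg/dL, otherwise within ±15%, for at least 95% of readings) reflects the published standard, while the sample readings are invented:

```python
import numpy as np

def mard(meter, ref):
    """Mean absolute relative difference, in percent."""
    meter, ref = np.asarray(meter, float), np.asarray(ref, float)
    return 100.0 * np.mean(np.abs(meter - ref) / ref)

def iso_15197_2013_pass(meter, ref):
    """Fraction of readings meeting the ISO 15197:2013 accuracy
    criterion: within +/-15 mg/dL of reference below 100 mg/dL,
    otherwise within +/-15%. The standard requires >= 95% compliance."""
    meter, ref = np.asarray(meter, float), np.asarray(ref, float)
    limit = np.where(ref < 100.0, 15.0, 0.15 * ref)
    return np.mean(np.abs(meter - ref) <= limit)

# Invented reference and meter glucose values (mg/dL):
ref = np.array([70.0, 90.0, 120.0, 180.0, 250.0])
meter = np.array([76.0, 86.0, 131.0, 171.0, 262.0])
print(f"MARD: {mard(meter, ref):.1f}%")
print(f"ISO 15197:2013 pass rate: {iso_15197_2013_pass(meter, ref):.0%}")
```

    As the abstract notes, the two metrics are related only probabilistically: a given MARD does not by itself guarantee a given ISO pass rate, since MARD averages over errors while the ISO criterion counts per-reading exceedances.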

  17. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  18. Thematic accuracy of the NLCD 2001 land cover for the conterminous United States

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Fry, J.A.; Smith, J.H.; Homer, C.G.

    2010-01-01

    The land-cover thematic accuracy of NLCD 2001 was assessed from a probability-sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for the NLCD 1992 were 58% and 80%. Forest and cropland were two classes showing substantial improvements in accuracy in NLCD 2001 relative to NLCD 1992. NLCD 2001 forest and cropland user's accuracies were 87% and 82%, respectively, compared to 80% and 43% for NLCD 1992. Accuracy results are reported for 10 geographic regions of the United States, with regional overall accuracies ranging from 68% to 86% for Level II and from 79% to 91% at Level I. Geographic variation in class-specific accuracy was strongly associated with the phenomenon that regionally more abundant land-cover classes had higher accuracy. Accuracy estimates based on several definitions of agreement are reported to provide an indication of the potential impact of reference data error on accuracy. Drawing on our experience from two NLCD national accuracy assessments, we discuss the use of designs incorporating auxiliary data to more seamlessly quantify reference data quality as a means to further advance thematic map accuracy assessment.
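
    Overall, user's, and producer's accuracies of the kind reported here all derive from a confusion matrix of mapped versus reference class labels. A minimal sketch with an invented three-class matrix (not the NLCD assessment data):

```python
import numpy as np

# Toy confusion matrix (rows = mapped class, columns = reference class);
# the counts are illustrative, not the NLCD 2001 assessment results.
classes = ["forest", "cropland", "urban"]
cm = np.array([
    [880,  60,  60],   # mapped forest
    [ 50, 800, 150],   # mapped cropland
    [ 40,  90, 870],   # mapped urban
])

overall = np.trace(cm) / cm.sum()         # fraction of pixels mapped correctly
users = np.diag(cm) / cm.sum(axis=1)      # per mapped class (commission error)
producers = np.diag(cm) / cm.sum(axis=0)  # per reference class (omission error)

print(f"overall accuracy: {overall:.1%}")
for c, u, p in zip(classes, users, producers):
    print(f"{c:8s} user's {u:.1%}  producer's {p:.1%}")
```

    In a probability-sample assessment like the one above, each cell would additionally be weighted by its inclusion probability before computing these ratios; the unweighted version shown here conveys the definitions.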

  19. Accuracy issues in the finite difference time domain simulation of photomask scattering

    NASA Astrophysics Data System (ADS)

    Pistor, Thomas V.

    2001-09-01

    As the use of electromagnetic simulation in lithography increases, accuracy issues are uncovered and must be addressed. A proper understanding of these issues can allow the lithographer to avoid pitfalls in electromagnetic simulation and to know what can and cannot be accurately simulated. This paper addresses the important accuracy issues related to the simulation of photomask scattering using the Finite Difference Time Domain (FDTD) method. Errors related to discretization and periodic boundary conditions are discussed. Discretization-related issues arise when derivatives are replaced by finite differences and when integrals are replaced by summations. These approximations can lead to mask features that do not have exact dimensions. The effects of discretization error on phase wells and thin films are shown. The reflectivity of certain thin film layers is seen to be very sensitive to the layer thickness. Simulation experiments and theory are used to determine how fine a discretization is necessary and various discretization schemes that help minimize error are presented. Boundary-condition-related errors arise from the use of periodic boundary conditions when simulating isolated mask features. The effects of periodic boundary conditions are assessed through the use of simulation experiments. All errors are associated with an ever-present trade-off between accuracy and computational resources. However, choosing the cell size wisely can, in many cases, minimize error without significantly increasing computation resource requirements.
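
    The discretization error discussed above has a simple 1D analogue: the second-order central difference that FDTD uses for spatial derivatives. The sketch below checks it against the exact derivative of a plane wave; it is an illustration of grid-resolution error, not a full FDTD simulation:

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength   # wavenumber of the plane wave cos(k*x)
x = 0.3

errors = []
for cells_per_wavelength in (10, 20, 40):
    dx = wavelength / cells_per_wavelength
    exact = -k * np.sin(k * x)                             # d/dx of cos(k*x)
    approx = (np.cos(k * (x + dx)) - np.cos(k * (x - dx))) / (2 * dx)
    errors.append(abs(approx - exact) / abs(exact))
    print(f"{cells_per_wavelength:3d} cells/wavelength: "
          f"relative error {errors[-1]:.2e}")
# Halving the cell size cuts the error by roughly 4x (second-order scheme),
# which is the trade-off between accuracy and grid resolution noted above.
```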

  20. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy) and to provide a value of the precision, by means of a confidence interval of the specific measurement. PMID:27044032
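
    One textbook form of the ellipse method estimates the angle of incidence from the aspect ratio of the elliptical defect via sin(theta) = width/length. The sketch below implements that classic relation; the specific method variants compared in this study may differ in their details:

```python
import math

def ellipse_method_angle(width_mm, length_mm):
    """Estimate the angle of incidence (degrees, measured from the target
    surface) from an elliptical bullet defect, using the textbook relation
    sin(theta) = width / length. A generic form of the ellipse method, not
    the specific variants evaluated by Mattijssen & Kerkhoff."""
    if not 0 < width_mm <= length_mm:
        raise ValueError("need 0 < width <= length")
    return math.degrees(math.asin(width_mm / length_mm))

# A circular defect implies a perpendicular shot (90 degrees from surface);
# an elongated defect implies a shallower angle of incidence.
print(ellipse_method_angle(9.0, 9.0))
print(ellipse_method_angle(9.0, 18.0))
```

    Because the derivative of arcsin blows up as width approaches length, small measurement errors in a near-circular defect translate into large angle errors near perpendicular incidence, one reason accuracy varies with the true angle of incidence.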

  1. Elimination of 'ghost'-effect-related systematic error in metrology of X-ray optics with a long trace profiler

    SciTech Connect

    Yashchuk, Valeriy V.; Irick, Steve C.; MacDowell, Alastair A.

    2005-04-28

    A data acquisition technique and relevant program for suppression of one of the systematic effects of a second generation long trace profiler (LTP), namely the "ghost" effect, is described. The "ghost" effect arises when there is an unavoidable cross-contamination of the LTP sample and reference signals into one another, leading to a systematic perturbation in the recorded interference patterns and, therefore, a systematic variation of the measured slope trace. Perturbations of about 1-2 µrad have been observed with a cylindrically shaped X-ray mirror. Even stronger "ghost" effects show up in an LTP measurement with a mirror having a toroidal surface figure. The developed technique employs separate measurement of the "ghost"-effect-related interference patterns in the sample and the reference arms, and then subtraction of the "ghost" patterns from the sample and the reference interference patterns. The procedure preserves the advantage of simultaneously measuring the sample and reference signals. The effectiveness of the technique is illustrated with LTP metrology of a variety of X-ray mirrors.

  2. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    Error sources from the Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU), which together form a complex navigation system, were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  3. Error prediction for probes guided by means of fixtures

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, J. Michael

    2012-02-01

    Probe guides are surgical fixtures that are rigidly attached to bone anchors in order to place a probe at a target with high accuracy (RMS error < 1 mm). Applications include needle biopsy, the placement of electrodes for deep-brain stimulation (DBS), spine surgery, and cochlear implant surgery. Targeting is based on pre-operative images, but targeting errors can arise from three sources: (1) anchor localization error, (2) guide fabrication error, and (3) external forces and torques. A well-established theory exists for the statistical prediction of target registration error (TRE) when targeting is accomplished by means of tracked probes, but no such TRE theory is available for fixtured probe guides. This paper provides that theory and shows that all three error sources can be accommodated in a remarkably simple extension of existing theory. Both the guide and the bone with attached anchors are modeled as objects with rigid sections and elastic sections, the latter of which are described by stiffness matrices. By relating minimization of elastic energy for guide attachment to minimization of fiducial registration error for point registration, it is shown that the expression for targeting error for the guide is identical to that for weighted rigid point registration if the weighting matrices are properly derived from stiffness matrices and the covariance matrices for fiducial localization are augmented with offsets in the anchor positions. An example of the application of the theory is provided for ear surgery.
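
    The point-registration machinery that TRE theory builds on can be illustrated with the classical (unweighted) SVD-based rigid registration and its fiducial registration error (FRE). This is a sketch of the standard method only, not the paper's weighted extension with stiffness-derived weighting matrices; all data below are synthetic.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (rotation R, translation t) mapping
    point set src onto dst, via the standard SVD (Kabsch) solution."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

def fre_rms(src, dst, R, t):
    """Root-mean-square fiducial registration error after applying (R, t)."""
    residual = (np.asarray(src) @ R.T + t) - np.asarray(dst)
    return np.sqrt((residual ** 2).sum(axis=1).mean())

# Toy example: fiducials displaced by a known rotation and translation.
rng = np.random.default_rng(0)
fiducials = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
measured = fiducials @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_register(fiducials, measured)
fre = fre_rms(fiducials, measured, R, t)   # ~0 for noise-free data
```

    With localization noise added to `measured`, the FRE becomes nonzero and the statistical TRE predictions discussed in the abstract come into play.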

  4. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and an iterative correction method provide linearization of the transfer functions of the measuring sensor and the signal conditioning converter, which contribute the principal additive and relative measurement errors. Several measuring systems have been implemented in order to estimate the practical performance of the proposed methods. In particular, a measuring system for the analysis of C-V and G-V characteristics has been designed and constructed, and tested during process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application of the proposed methods, their utility, and their performance. PMID:22303177

  5. Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories

    NASA Technical Reports Server (NTRS)

    Green, S.; Grace, M.; Williams, D.

    1999-01-01

    The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace, under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs).

  6. Surfing the Net for medical information about psychological trauma: An empirical study of the quality and accuracy of trauma-related websites

    PubMed Central

    BREMNER, J. DOUGLAS; QUINN, JOHN; QUINN, WILLIAM; VELEDAR, EMIR

    2011-01-01

    Psychological trauma is a major public-health problem, and trauma victims frequently turn to the Internet for medical information related to trauma. The Internet has many advantages for trauma victims, including low cost, privacy, ease of access, and reduced direct social interactions. However, there are no regulations on what is posted on the Internet, or by whom, and little is known about the quality of information currently available related to the topic of psychological trauma. The purpose of this study was to evaluate the quality of Internet sites related to the topic of psychological trauma. The top 20 hits for searches on Google, AllTheWeb, and Yahoo were tabulated, using search words of ‘psychological trauma’, ‘stress’, ‘PTSD’, and ‘trauma’. From these searches, a list of 94 unique unsponsored hits that represented accessible websites was generated. Fourteen sites were unrelated or only peripherally related, and eight were related but were not comprehensively evaluated because they represented brochures, online book sales, etc. Seventy-two websites underwent evaluation of the content, design, disclosure, ease of use, and other factors based on published guidelines for medical information sites. Forty-two per cent of sites had inaccurate information, 82% did not provide a source of their information, and 41% did not use a mental-health professional in the development of the content. Ratings of content (e.g. accuracy, reliability, etc.) were 4 (2 SD) on a scale of 1-10, with 10 being the best. There were similar ratings for the other variables assessed. These findings suggest that although abundant, websites providing information about psychological trauma are often not useful, and can sometimes provide inaccurate and potentially harmful information to consumers of medical information. PMID:16954059

  7. Analysis of deformable image registration accuracy using computational modeling.

    PubMed

    Zhong, Hualiang; Kim, Jinkoo; Chetty, Indrin J

    2010-03-01

    selection for optimal accuracy is closely related to the intensity gradients of the underlying images. Also, the result that the DIR algorithms produce much lower errors in heterogeneous lung regions relative to homogeneous (low intensity gradient) regions, suggests that feature-based evaluation of deformable image registration accuracy must be viewed cautiously. PMID:20384233

  8. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    selection for optimal accuracy is closely related to the intensity gradients of the underlying images. Also, the result that the DIR algorithms produce much lower errors in heterogeneous lung regions relative to homogeneous (low intensity gradient) regions, suggests that feature-based evaluation of deformable image registration accuracy must be viewed cautiously.

  9. Insulin use: preventable errors.

    PubMed

    2014-01-01

    Insulin is vital for patients with type 1 diabetes and useful for certain patients with type 2 diabetes. The serious consequences of insulin-related medication errors are overdose, resulting in severe hypoglycaemia, causing seizures, coma and even death; or underdose, resulting in hyperglycaemia and sometimes ketoacidosis. Errors associated with the preparation and administration of insulin are often reported, both outside and inside the hospital setting. These errors are preventable. By analysing reports from organisations devoted to medication error prevention and from poison control centres, as well as a few studies and detailed case reports of medication errors, various types of error associated with insulin use have been identified, especially in the hospital setting. Generally, patients know more about the practicalities of their insulin treatment than healthcare professionals with intermittent involvement. Medication errors involving insulin can occur at each step of the medication-use process: prescribing, data entry, preparation, dispensing and administration. When prescribing insulin, wrong-dose errors have been caused by the use of abbreviations, especially "U" instead of the word "units" (often resulting in a 10-fold overdose because the "U" is read as a zero), or by failing to write the drug's name correctly or in full. In electronic prescribing, the sheer number of insulin products is a source of confusion and, ultimately, wrong-dose errors, and often overdose. Prescribing, dispensing or administration software is rarely compatible with insulin prescriptions in which the dose is adjusted on the basis of the patient's subsequent capillary blood glucose readings, and can therefore generate errors. When preparing and dispensing insulin, a tuberculin syringe is sometimes used instead of an insulin syringe, leading to overdose. Other errors arise from confusion created by similar packaging, between different insulin products or between insulin and other

  10. Systematic Characterization of High Mass Accuracy Influence on False Discovery and Probability Scoring in Peptide Mass Fingerprinting

    PubMed Central

    Dodds, Eric D.; Clowers, Brian H.; Hagerman, Paul J.; Lebrilla, Carlito B.

    2009-01-01

    While the bearing of mass measurement error upon protein identification is sometimes underestimated, uncertainty in observed peptide masses unavoidably translates to ambiguity in subsequent protein identifications. While ongoing instrumental advances continue to make high accuracy mass spectrometry (MS) increasingly accessible, many proteomics experiments are still conducted with rather large mass error tolerances. Additionally, the ranking schemes of most protein identification algorithms do not include a meaningful incorporation of mass measurement error. This report provides a critical evaluation of mass error tolerance as it pertains to false positive peptide and protein associations resulting from peptide mass fingerprint (PMF) database searching. High accuracy, high resolution PMFs of several model proteins were obtained using matrix-assisted laser desorption/ionization Fourier transform ion cyclotron resonance mass spectrometry (MALDI-FTICR-MS). Varying levels of mass accuracy were simulated by systematically modulating the mass error tolerance of the PMF query and monitoring the effect on figures of merit indicating the PMF quality. Importantly, the benefits of decreased mass error tolerance are not manifest in Mowse scores when operating at tolerances in the low parts per million range, but become apparent with the consideration of additional metrics that are often overlooked. Furthermore, the outcomes of these experiments support the concept that false discovery is closely tied to mass measurement error in PMF analysis. Clear establishment of this relation demonstrates the need for mass error aware protein identification routines and argues for a more prominent contribution of high accuracy mass measurement to proteomic science. PMID:17980142
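
    The role of the mass error tolerance in a PMF database query can be illustrated with a simple parts-per-million filter. The observed peak and candidate monoisotopic masses below are invented for illustration.

```python
import numpy as np

def ppm_error(observed_mz, theoretical_mz):
    """Mass measurement error in parts per million (ppm)."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

def within_tolerance(observed_mz, candidate_masses, tol_ppm):
    """Return the candidate masses that match an observed peak within the
    given ppm tolerance, as when querying a peptide mass fingerprint database."""
    candidates = np.asarray(candidate_masses, float)
    return candidates[np.abs(ppm_error(observed_mz, candidates)) <= tol_ppm]

# One observed peptide peak and three candidate masses (invented values):
observed = 1296.6853
candidates = [1296.6848, 1296.6990, 1296.7600]
matches_5ppm = within_tolerance(observed, candidates, tol_ppm=5.0)
matches_50ppm = within_tolerance(observed, candidates, tol_ppm=50.0)
```

    Tightening the tolerance from 50 ppm to 5 ppm eliminates the spurious candidates here, which is the mechanism by which high mass accuracy suppresses false discovery in PMF searching.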

  11. Study of coordinate measuring machines synthetic dynamic error under different positions and speeds based on dual linear returns

    NASA Astrophysics Data System (ADS)

    Ma, Xiushui; Fei, Yetai; Wang, Hongtao; Ying, Zhongyang; Li, Guang

    2006-11-01

    Modern manufacturing places increasingly high demands on the speed and accuracy of Coordinate Measuring Machines (CMMs), and measuring speed has become one of the key factors in evaluating CMM performance. In high speed measuring, dynamic error has a greater influence on accuracy. This paper tests the dynamic error of a CMM's measuring system at different measuring positions and speeds using a dual frequency laser interferometer. Based on the measured data, a model of the synthetic dynamic error is established using the dual linear regression method. Compared with the measured data, the relative error of the model is between 15% and 20%, and the regression equation is significant at the α=0.01 level according to an F-test. Based on this model of synthetic dynamic errors at different measuring positions and speeds, the dynamic error of the CMM measuring system is corrected and reduced.
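
    A two-predictor least-squares fit of the kind described (dynamic error as a function of probe position and measuring speed) can be sketched with synthetic data. The coefficients, units, and values below are invented for illustration, not the paper's measurements.

```python
import numpy as np

# Synthetic measurements: dynamic error (µm) at several probe positions (mm)
# and measuring speeds (mm/s). Values are invented for illustration.
position = np.array([100, 200, 300, 400, 100, 200, 300, 400], dtype=float)
speed    = np.array([ 10,  10,  10,  10,  50,  50,  50,  50], dtype=float)
error    = 0.002 * position + 0.04 * speed + 0.5   # underlying linear model

# Dual (two-predictor) linear regression via least squares:
#   error = b0 + b1 * position + b2 * speed
X = np.column_stack([np.ones_like(position), position, speed])
coef, *_ = np.linalg.lstsq(X, error, rcond=None)
predicted = X @ coef

# Relative modelling error, analogous to the 15-20% figure reported above
# (near zero here only because the toy data are exactly linear).
relative_error = np.abs(predicted - error) / np.abs(error)
```

    With real, noisy measurements the residuals would be nonzero, and the fitted equation would be checked with an F-test as the abstract describes.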

  12. Variations on a theme: songbirds, variability, and sensorimotor error correction

    PubMed Central

    Kuebrich, Benjamin; Sober, Samuel

    2014-01-01

    Songbirds provide a powerful animal model for investigating how the brain uses sensory feedback to correct behavioral errors. Here, we review a recent study in which we used online manipulations of auditory feedback to quantify the relationship between sensory error size, motor variability, and vocal plasticity. We found that although inducing small auditory errors evoked relatively large compensatory changes in behavior, as error size increased the magnitude of error correction declined. Furthermore, when we induced large errors such that auditory signals no longer overlapped with the baseline distribution of feedback, the magnitude of error correction approached zero. This pattern suggests a simple and robust strategy for the brain to maintain the accuracy of learned behaviors by evaluating sensory signals relative to the previously experienced distribution of feedback. Drawing from recent studies of auditory neurophysiology and song discrimination, we then speculate as to the mechanistic underpinnings of the results obtained in our behavioral experiments. Finally, we review how our own and other studies exploit the strengths of the songbird system, both in the specific context of vocal systems and more generally as a model of the neural control of complex behavior. PMID:25305664

  13. Variations on a theme: Songbirds, variability, and sensorimotor error correction.

    PubMed

    Kuebrich, B D; Sober, S J

    2015-06-18

    Songbirds provide a powerful animal model for investigating how the brain uses sensory feedback to correct behavioral errors. Here, we review a recent study in which we used online manipulations of auditory feedback to quantify the relationship between sensory error size, motor variability, and vocal plasticity. We found that although inducing small auditory errors evoked relatively large compensatory changes in behavior, as error size increased the magnitude of error correction declined. Furthermore, when we induced large errors such that auditory signals no longer overlapped with the baseline distribution of feedback, the magnitude of error correction approached zero. This pattern suggests a simple and robust strategy for the brain to maintain the accuracy of learned behaviors by evaluating sensory signals relative to the previously experienced distribution of feedback. Drawing from recent studies of auditory neurophysiology and song discrimination, we then speculate as to the mechanistic underpinnings of the results obtained in our behavioral experiments. Finally, we review how our own and other studies exploit the strengths of the songbird system, both in the specific context of vocal systems and more generally as a model of the neural control of complex behavior. PMID:25305664

  14. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
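
    The adjoint error representation underlying such estimates can be written, in generic dual-weighted-residual notation (the paper's exact formulation may differ), as:

```latex
% For a functional J of the exact solution u, approximated from u_h,
% the error is expressed through the adjoint solution \psi and the
% residual R(u_h) of the forward problem:
J(u) - J(u_h) \;=\; \big(\psi,\, R(u_h)\big)
             \;\approx\; \big(\psi_h,\, R(u_h)\big),
% where \psi_h is a computable approximation of \psi; the computable
% estimate approaches the exact error as \psi_h \to \psi, which is why
% the error can be computed to arbitrary accuracy given a sufficiently
% well resolved adjoint approximation.
```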

  15. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limits of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios of these factors are evaluated according to the normal operating conditions of power grid measurement. Based on this evaluation and on simulation, the phase angle and frequency errors caused by each factor are calculated and discussed.
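
    One instrument-side error source, the time-synchronization error, maps to phase-angle error in a simple closed form, Δθ = 2πf·Δt. A minimal sketch (the function name is ours):

```python
import math

def phase_error_degrees(timing_error_s, grid_frequency_hz=60.0):
    """Phase-angle error caused by a timing (synchronization) error:
    delta_theta = 2 * pi * f * delta_t, converted to degrees."""
    return math.degrees(2 * math.pi * grid_frequency_hz * timing_error_s)

# A 1-microsecond GPS timing error on a 60 Hz grid gives roughly 0.02 degrees:
err = phase_error_degrees(1e-6)
```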

  16. ERP evidence of adaptive changes in error processing and attentional control during rhythm synchronization learning.

    PubMed

    Padrão, Gonçalo; Penhune, Virginia; de Diego-Balaguer, Ruth; Marco-Pallares, Josep; Rodriguez-Fornells, Antoni

    2014-10-15

    The ability to detect and use information from errors is essential during the acquisition of new skills. There is now a wealth of evidence about the brain mechanisms involved in error processing. However, the extent to which those mechanisms are engaged during the acquisition of new motor skills remains elusive. Here we examined rhythm synchronization learning across 12 blocks of practice in musically naïve individuals and tracked changes in ERP signals associated with error-monitoring and error-awareness across distinct learning stages. Synchronization performance improved with practice, and performance improvements were accompanied by dynamic changes in ERP components related to error-monitoring and error-awareness. Early in learning, when performance was poor and the internal representations of the rhythms were weaker, we observed a larger error-related negativity (ERN) following errors compared to later learning. The larger ERN during early learning likely results from greater conflict between competing motor responses, leading to greater engagement of medial-frontal conflict monitoring processes and attentional control. Later in learning, when performance had improved, we observed a smaller ERN accompanied by an enhancement of a centroparietal positive component resembling the P3. This centroparietal positive component was predictive of participants' performance accuracy, suggesting a relation between error saliency, error awareness and the consolidation of internal templates of the practiced rhythms. Moreover, we showed that during rhythm learning errors led to larger auditory evoked responses related to attention orientation which were triggered automatically and which were independent of the learning stage. The present study provides crucial new information about how the electrophysiological signatures related to error-monitoring and error-awareness change during the acquisition of new skills, extending previous work on error processing and cognitive

  17. Quality assurance of MLC leaf position accuracy and relative dose effect at the MLC abutment region using an electronic portal imaging device

    PubMed Central

    Sumida, Iori; Yamaguchi, Hajime; Kizaki, Hisao; Koizumi, Masahiko; Ogata, Toshiyuki; Takahashi, Yutaka; Yoshioka, Yasuo

    2012-01-01

    We investigated whether an electronic portal imaging device (EPID)-based method provides effective and accurate relative dose measurement at abutting leaves, in relation to positional errors of the multi-leaf collimator (MLC). A Siemens ONCOR machine was used. For the garden fence test, a rectangular field (0.2 × 20 cm) was sequentially irradiated 11 times at 2-cm intervals, and deviations from planned leaf positions were calculated. For the nongap test, relative doses at the MLC abutment region were evaluated by sequential irradiation of a rectangular field (2 × 20 cm) 10 times with an MLC separation of 2 cm and no leaf gap. The integral signal in a region of interest was evaluated at position A (between leaves) and position B (adjacent to A). The pixel value at position B was used as background and the pixel ratio (A/B × 100) was calculated. Both tests were performed at four gantry angles (0, 90, 180 and 270°) four times over 1 month. For the nongap test, the difference in pixel ratio between the first and last period was calculated. The average deviations from planned positions in the garden fence test were within 0.5 mm at all gantry angles, and at gantry angles of 90 and 270° they tended to decrease gradually over the month. In the nongap test, the pixel ratio tended to increase gradually in all leaves, corresponding to a decrease in relative dose at the abutment regions. This phenomenon was affected both by gravity arising from the gantry angle and by the hardware-associated contraction of field size with this type of machine. PMID:22843372
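
    The pixel-ratio metric (A/B × 100) can be sketched on a toy EPID image; the ROI positions and pixel values below are invented, not taken from the study.

```python
import numpy as np

def pixel_ratio(epid_image, roi_a, roi_b):
    """Relative dose indicator at an MLC abutment: mean pixel value in the
    ROI between abutting leaves (A) over a neighbouring background ROI (B),
    expressed as a percentage (A / B * 100)."""
    a = epid_image[roi_a].mean()
    b = epid_image[roi_b].mean()
    return a / b * 100.0

# Toy EPID image: a uniform field with a slightly hot abutment line.
img = np.full((100, 100), 1000.0)
img[:, 50] = 1100.0   # abutment region between leaves
ratio = pixel_ratio(img,
                    (slice(None), slice(50, 51)),   # ROI A: abutment column
                    (slice(None), slice(55, 56)))   # ROI B: background column
```

    Tracking this ratio over repeated sessions, as in the study, reveals gradual drifts in relative dose at the abutment region.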

  18. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438

  19. High Accuracy of Common HIV-Related Oral Disease Diagnoses by Non-Oral Health Specialists in the AIDS Clinical Trial Group

    PubMed Central

    Shiboski, Caroline H.; Chen, Huichao; Secours, Rode; Lee, Anthony; Webster-Cyriaque, Jennifer; Ghannoum, Mahmoud; Evans, Scott; Bernard, Daphné; Reznik, David; Dittmer, Dirk P.; Hosey, Lara; Sévère, Patrice; Aberg, Judith A.

    2015-01-01

    Objective Many studies include oral HIV-related endpoints that may be diagnosed by non-oral-health specialists (non-OHS) like nurses or physicians. Our objective was to assess the accuracy of clinical diagnoses of HIV-related oral lesions made by non-OHS compared to diagnoses made by OHS. Methods A5254, a cross-sectional study conducted by the Oral HIV/AIDS Research Alliance within the AIDS Clinical Trial Group, enrolled HIV-1-infected adult participants from six clinical trial units (CTU) in the US (San Francisco, New York, Chapel Hill, Cleveland, Atlanta) and Haiti. CTU examiners (non-OHS) received standardized training on how to perform an oral examination and make clinical diagnoses of specific oral disease endpoints. Diagnoses by calibrated non-OHS were compared to those made by calibrated OHS, and sensitivity and specificity computed. Results Among 324 participants, the majority were black (73%) and men (66%), and the median CD4+ cell count was 138 cells/mm3. The overall frequency of oral mucosal disease diagnosed by OHS was 43% in US sites, and 90% in Haiti. Oral candidiasis (OC) was detected in 153 (47%) by OHS, with erythematous candidiasis (EC) the most common type (39%) followed by pseudomembranous candidiasis (PC; 26%). The highest prevalence of OC (79%) was among participants in Haiti, and among those with CD4+ cell count ≤ 200 cells/mm3 and HIV-1 RNA > 1000 copies/mL (71%). The sensitivity and specificity of OC diagnoses by non-OHS were 90% and 92% (for EC: 81% and 94%; PC: 82% and 95%). Sensitivity and specificity were also high for KS (87% and 94%, respectively), but sensitivity was < 60% for HL and oral warts in all sites combined. The Candida culture confirmation of OC clinical diagnoses (as defined by ≥ 1 colony forming unit per mL of oral/throat rinse) was ≥ 93% for both PC and EC. Conclusion Trained non-OHS showed high accuracy of clinical diagnoses of OC in comparison with OHS, suggesting their usefulness in studies in resource-poor settings
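
    The sensitivity and specificity figures above come from a 2×2 comparison against the specialist (OHS) diagnoses taken as the reference standard. A minimal sketch, with hypothetical counts chosen to be consistent with the reported ~90%/92% values for oral candidiasis (the exact counts are not given in the abstract):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity of a non-specialist's diagnoses against
    the specialist's diagnoses taken as the reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: 153 participants with OC, 171 without (324 total).
sens, spec = sensitivity_specificity(tp=138, fn=15, tn=157, fp=14)
```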

  20. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
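
    The local-surface-smoothness metric (residual scatter about a least-squares plane fit to a nominally planar patch) might be computed as follows; the patch geometry and noise level are invented for illustration.

```python
import numpy as np

def surface_smoothness(points):
    """Local surface smoothness of a nominally planar patch: RMS of the
    residuals from a least-squares plane z = a*x + b*y + c."""
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coef
    return np.sqrt(np.mean(residuals ** 2))

# Synthetic planar patch with ~2 cm of vertical noise (coordinates in metres).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(500, 2))
z = 0.05 * xy[:, 0] - 0.02 * xy[:, 1] + 3.0 + rng.normal(0, 0.02, 500)
rms = surface_smoothness(np.column_stack([xy, z]))   # close to the 0.02 m noise
```

    In the paper's setting, a lower RMS on planar surfaces indicates better navigation and calibration of the UAV lidar point cloud.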

  1. Reliability and accuracy of resident evaluations of surgical faculty.

    PubMed

    Risucci, D A; Lutsky, L; Rosati, R J; Tortolani, A J

    1992-09-01

    This study examines the reliability and accuracy of ratings by general surgery residents of surgical faculty. Twenty-three of 33 residents anonymously and voluntarily evaluated 62 surgeons in June, 1988; 24 of 28 residents evaluated 64 surgeons in June, 1989. Each resident rated each surgeon on a 5-point scale for each of 10 areas of performance: technical ability, basic science knowledge, clinical knowledge, judgment, peer relations, patient relations, reliability, industry, personal appearance, and reaction to pressure. Reliability analyses evaluated internal consistency and interrater correlation. Accuracy analyses evaluated halo error, leniency/severity, central tendency, and range restriction. Ratings had high internal consistency (coefficient alpha = 0.97). Interrater correlations were moderately high (average Pearson correlation = 0.63 among raters). Ratings were generally accurate, with halo error most prevalent and some evidence of leniency. Ratings by chief residents had the least halo. Results were generally replicable across the two academic years. We conclude that anonymous ratings of surgical faculty by groups of residents can provide a reliable and accurate evaluation method, ratings by chief residents are most accurate, and halo error may pose the greatest threat to accuracy, pointing to the need for greater definition of evaluation items and scale points. PMID:10121283
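
    The internal-consistency measure reported (coefficient alpha) is computed from the variance decomposition of a raters-by-items ratings matrix. A minimal sketch with hypothetical ratings (the real data are not reproduced in the abstract):

```python
import numpy as np

def cronbach_alpha(ratings):
    """Coefficient alpha for a raters-by-items ratings matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    r = np.asarray(ratings, float)
    k = r.shape[1]                          # number of rating items
    item_var = r.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = r.sum(axis=1).var(ddof=1)   # variance of each rater's total
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 5-point ratings of one surgeon by six residents on four items.
ratings = np.array([[4, 4, 5, 4],
                    [3, 3, 4, 3],
                    [5, 5, 5, 5],
                    [2, 3, 3, 2],
                    [4, 4, 4, 4],
                    [3, 4, 4, 3]])
alpha = cronbach_alpha(ratings)   # high, since the items track one another
```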

  2. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.

  3. SU-E-T-599: The Variation of Hounsfield Unit and Relative Electron Density Determination as a Function of KVp and Its Effect On Dose Calculation Accuracy

    SciTech Connect

    Ohl, A; Boer, S De

    2014-06-01

    Purpose: To investigate the differences in relative electron density for different energy (kVp) settings and the effect that these differences have on dose calculations. Methods: A Nuclear Associates 76-430 Mini CT QC Phantom with materials of known relative electron densities was imaged by one multi-slice (16) and one single-slice computed tomography (CT) scanner. The Hounsfield unit (HU) was recorded for each material at energies ranging from 80 to 140 kVp, and a representative relative electron density (RED) curve was created. A 5 cm thick inhomogeneity was created in the treatment planning system (TPS) image at a depth of 5 cm. The inhomogeneity was assigned the HU of various materials for each kVp calibration curve. The dose was then calculated with the analytical anisotropic algorithm (AAA) at points within and below the inhomogeneity and compared using the 80 kVp beam as a baseline. Results: The differences as a function of kVp were largest for the aluminum and bone materials, at 580 and 547 HU; the smallest differences, 0.6 and 3.0 HU, were observed for the air and lung inhomogeneities. The corresponding dose calculations for the different RED values assigned to the 5 cm thick slab revealed the largest differences inside the aluminum and bone inhomogeneities, of 2.2 to 6.4% and 4.3 to 7.0%, respectively. The dose differences beyond these two inhomogeneities were between 0.4 and 1.6% for aluminum and 1.9 to 2.2% for bone. For materials with lower HU the calculated dose differences were less than 1.0%. Conclusion: For high-CT-number materials, dose differences in the phantom calculation as high as 7.0% are significant. This result may indicate that implementing energy-specific RED curves can increase dose calculation accuracy.
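A kVp-specific HU-to-RED calibration curve of the kind described above is typically applied by piecewise-linear interpolation between measured calibration points. A generic sketch with placeholder points (not the study's measured values):

```python
import bisect

# Illustrative HU-to-RED calibration points for one kVp setting
# (placeholder values, not the study's measurements).
calibration = [(-1000, 0.001), (-700, 0.29), (0, 1.0), (900, 1.45), (3000, 2.5)]

def hu_to_red(hu, curve):
    """Piecewise-linear interpolation of relative electron density from HU."""
    hus = [h for h, _ in curve]
    if hu <= hus[0]:
        return curve[0][1]
    if hu >= hus[-1]:
        return curve[-1][1]
    i = bisect.bisect_right(hus, hu)
    (h0, r0), (h1, r1) = curve[i - 1], curve[i]
    return r0 + (r1 - r0) * (hu - h0) / (h1 - h0)
```

The study's point is that `calibration` itself shifts with kVp, so using one curve for all scan energies biases the RED, and hence the dose, for high-HU materials.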

  4. Spaceborne lidar measurement accuracy - Simulation of aerosol, cloud, molecular density, and temperature retrievals

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Morley, B. M.; Browell, E. V.

    1982-01-01

    In connection with studies concerning the use of an orbiting optical radar (lidar) to conduct aerosol and cloud measurements, attention has been given to the accuracy with which lidar return signals could be measured. However, signal-measurement error is not the only source of error which can affect the accuracy of the derived information. Other error sources are the assumed molecular-density and atmospheric-transmission profiles, and the lidar calibration factor (which relates signal to backscatter coefficient). The objective of the present investigation is to account for the effects of all these error sources for several realistic combinations of lidar parameters, model atmospheres, and background lighting conditions. In addition, a procedure is developed and tested for measuring density and temperature profiles with the lidar, and for using the lidar-derived density profiles to improve aerosol retrievals.

  5. Stitching accuracy measurement system for EB direct writing and electron-beam projection lithography (EPL)

    NASA Astrophysics Data System (ADS)

    Tamura, Takao; Ema, Takahiro; Nozue, Hiroshi; Sugahara, Tamoya; Sugano, Akio; Nitta, Jun

    2001-08-01

    We have developed a stitching accuracy measurement system for electron beam (EB) direct writing and electron beam projection lithography (EPL). The system calculates the stitching error between two EB shots from SEM images: it extracts a representative edge line of each pattern from the graphical format files (BMP, JPEG, etc.) of the SEM images and calculates the distance between the edge lines as the stitching error. To obtain higher stitching accuracy in EB direct writing or EPL machines, the system can also analyze how the magnitude and direction of the stitching error vary with the field size or field position of these machines. We successfully measured a stitching error of about 2.0 nm in 0.1 micrometer L/S resist patterns on a bare-Si substrate and obtained a measurement repeatability of 1.2 nm (3σ). The system took 2.5 sec to measure one stitching region.

  6. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids, including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  7. Sensitivity of grass and alfalfa reference evapotranspiration to weather station sensor accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1991 to 2008 from an autom...

  8. Help prevent hospital errors

    MedlinePlus

  9. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.

    PubMed

    Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter

    2016-01-01

    We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses. PMID:27167518

  11. The relative accuracy of standard estimators for macrofaunal abundance and species richness derived from selected intertidal transect designs used to sample exposed sandy beaches

    NASA Astrophysics Data System (ADS)

    Schoeman, D. S.; Wheeler, M.; Wait, M.

    2003-10-01

    In order to ensure that patterns detected in field samples reflect real ecological processes rather than methodological idiosyncrasies, it is important that researchers attempt to understand the consequences of the sampling and analytical designs that they select. This is especially true for sandy beach ecology, which has lagged somewhat behind ecological studies of other intertidal habitats. This paper investigates the performance of routine estimators of macrofaunal abundance and species richness, which are variables that have been widely used to infer predictable patterns of biodiversity across a gradient of beach types. To do this, a total of six shore-normal strip transects were sampled on three exposed, oceanic sandy beaches in the Eastern Cape, South Africa. These transects comprised contiguous quadrats arranged linearly between the spring high and low water marks. Using simple Monte Carlo simulation techniques, data collected from the strip transects were used to assess the accuracy of parameter estimates from different sampling strategies relative to their true values (macrofaunal abundance ranged 595-1369 individuals transect⁻¹; species richness ranged 12-21 species transect⁻¹). Results indicated that estimates from the various transect methods performed in a similar manner both within beaches and among beaches. Estimates for macrofaunal abundance tended to be negatively biased, especially at levels of sampling effort most commonly reported in the literature, and accuracy decreased with decreasing sampling effort. By the same token, estimates for species richness were always negatively biased and were also characterised by low precision. Furthermore, triplicate transects comprising a sampled area in the region of 4 m² (as has been previously recommended) are expected to miss more than 30% of the species that occur on the transect. 
Surprisingly, for both macrofaunal abundance and species richness, estimates based on data from transects sampling quadrats
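The Monte Carlo assessment described above can be mimicked on synthetic data: repeatedly subsample quadrats from a transect whose true totals are known, and measure the estimator's bias. A sketch under invented community assumptions (the species counts and occupancy rates below are hypothetical, not the study's data):

```python
import random

random.seed(1)

# Synthetic transect: 50 contiguous quadrats, each holding a set of species
# (species 0-2 common, 3-11 rarer); purely illustrative.
transect = [
    {s for s in range(12) if random.random() < (0.9 if s < 3 else 0.1)}
    for _ in range(50)
]
true_richness = len(set().union(*transect))

def estimated_richness(n_quadrats, trials=500):
    """Mean species richness observed in random subsamples of n quadrats."""
    totals = []
    for _ in range(trials):
        sample = random.sample(transect, n_quadrats)
        totals.append(len(set().union(*sample)))
    return sum(totals) / trials

# Sparse sampling underestimates richness (negative bias), as in the study.
bias_sparse = estimated_richness(5) - true_richness
bias_dense = estimated_richness(40) - true_richness
```

Because a subsample can never contain species absent from it, richness estimates are negatively biased by construction, and the bias grows as sampling effort shrinks.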

  12. Analysis of Solar Two Heliostat Tracking Error Sources

    SciTech Connect

    Jones, S.A.; Stone, K.W.

    1999-01-28

    This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.

  13. Correction method for the error of diamond tool's radius in ultra-precision cutting

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensating for the error of the diamond tool's cutting edge is a bottleneck that hinders the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditional compensation was performed according to measurements from a profile meter, which required long measurement times and caused low processing efficiency. This article puts forward a new compensation method in which the error of the diamond tool's cutting edge is corrected according to measurements from a digital interferometer. First, the theoretical calculation underlying the compensation method is derived in detail. Then, the effect of compensation is simulated by computer. Finally, a φ50 mm workpiece underwent diamond turning and the new correction turning on a Nanotech 250. The tested surface achieved high shape accuracy (PV 0.137λ and RMS 0.011λ), confirming that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  14. Measurement error revisited

    NASA Astrophysics Data System (ADS)

    Henderson, Robert K.

    1999-12-01

    It is widely accepted in the electronics industry that measurement gauge error variation should be no larger than 10% of the related specification window. In a previous paper, 'What Amount of Measurement Error is Too Much?', the author used a framework from the process industries to evaluate the impact of measurement error variation in terms of both customer and supplier risk (i.e., Non-conformance and Yield Loss). Application of this framework in its simplest form suggested that in many circumstances the 10% criterion might be more stringent than is reasonably necessary. This paper reviews the framework and results of the earlier work, then examines some of the possible extensions to this framework suggested in that paper, including variance component models and sampling plans applicable in the photomask and semiconductor businesses. The potential impact of imperfect process control practices will be examined as well.

  15. Gender Influences on Brain Responses to Errors and Post-Error Adjustments

    PubMed Central

    Fischer, Adrian G.; Danielmeier, Claudia; Villringer, Arno; Klein, Tilmann A.; Ullsperger, Markus

    2016-01-01

    Sexual dimorphisms have been observed in many species, including humans, and extend to the prevalence and presentation of important mental disorders associated with performance monitoring malfunctions. However, precisely which underlying differences between genders contribute to the alterations observed in psychiatric diseases is unknown. Here, we compare behavioural and neural correlates of cognitive control functions in 438 female and 436 male participants performing a flanker task while EEG was recorded. We found that males showed stronger performance-monitoring-related EEG amplitude modulations which were employed to predict subjects’ genders with ~72% accuracy. Females showed more post-error slowing, but both samples did not differ in regard to response-conflict processing and coupling between the error-related negativity (ERN) and consecutive behavioural slowing. Furthermore, we found that the ERN predicted consecutive behavioural slowing within subjects, whereas its overall amplitude did not correlate with post-error slowing across participants. These findings elucidate specific gender differences in essential neurocognitive functions with implications for clinical studies. They highlight that within- and between-subject associations for brain potentials cannot be interpreted in the same way. Specifically, despite higher general amplitudes in males, it appears that the dynamics of coupling between ERN and post-error slowing between men and women is comparable. PMID:27075509

  16. Study of geopotential error models used in orbit determination error analysis

    NASA Technical Reports Server (NTRS)

    Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

    1991-01-01

    The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. 
Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft: the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
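The uncorrelated error modeling approach can be sketched generically. The abstract does not specify the combination rule, so the root-sum-square aggregation below is an assumption, as are all the numbers:

```python
import math

# Orbit-error contributions (meters) from individual spherical-harmonic
# coefficient uncertainties; values are invented for illustration.
coefficient_effects = [12.0, 8.0, 5.0, 3.0, 2.0, 1.0]

# Treating the coefficient errors as uncorrelated sources, aggregate
# their effects in root-sum-square fashion.
aggregate_error = math.sqrt(sum(e * e for e in coefficient_effects))
```

By contrast, the lumped approach works from a single calibrated covariance or from the weighted difference between two geopotential models, rather than from per-coefficient effects.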

  17. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions - Effect of Velocity

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2013-01-01

    Background Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurements to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings Evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p<0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use. 
While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to

  18. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R.E., Jr.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean velocity to acoustic-path velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of 1 meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error into the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than 1%, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions about equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
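The quoted sensitivities follow from the AVM mean-velocity equation. Assuming the conventional 45° acoustic-path angle (the abstract does not state the angle), an angle error Δθ propagates as tan θ · Δθ:

```python
import math

# Mean-velocity equation for an acoustic velocity meter:
#   V = L / (2 * cos(theta)) * (1/t_up - 1/t_down)
# so dV/V = tan(theta) * d_theta for a path-angle error, and dV/V = dL/L
# for a path-length error. Assumes the conventional 45-degree path angle.
theta = math.radians(45)
angle_error = math.radians(1)

relative_error_angle = math.tan(theta) * angle_error  # ~0.0175, i.e. about 2%
relative_error_length = 1.0 / 100.0                   # 1 m in 100 m -> 1%
```

This reproduces the abstract's rules of thumb: roughly 2% per degree of angle error and 1% per 1% of path-length error.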

  19. Stereotyping ricochet: complex effects of racial distinctiveness on identification accuracy.

    PubMed

    Kleider, H M; Goldinger, S D

    2001-12-01

    Studies show that distinctive (e.g., attractive) people are better remembered than typical people (B. L. Cutler & S. D. Penrod, 1995). We investigated the effect of a Black person's presence on recognition accuracy for surrounding White individuals. Regarding eyewitness accuracy for an event, we expected more errors for White targets accompanied by Black confederates (experimental condition) than by White confederates (control). A staged accident was witnessed by participants, followed by a lineup. In 3 experiments, identification accuracy decreased in the experimental conditions, relative to control. Further data suggested that attention focused on the Black confederate reduced memory for the other confederates at the event. This pattern did not generalize to a condition substituting garish hair color for race, suggesting that racial distinctiveness, rather than general physical distinctiveness, contributed to the prior results. PMID:11771637

  20. Feasibility and accuracy of relative electron density determined by virtual monochromatic CT value subtraction at two different energies using the gemstone spectral imaging

    PubMed Central

    2013-01-01

    Background Recent work by Saito (2012) has demonstrated a simple conversion from energy-subtracted computed tomography (CT) values (ΔHU) obtained using dual-energy CT to relative electron density (RED) via a single linear relationship. The purpose of this study was to investigate the feasibility of this method to obtain RED from virtual monochromatic CT images obtained by the gemstone spectral imaging (GSI) mode with fast-kVp switching. Methods A tissue characterization phantom with 13 inserts made of different materials was scanned using the GSI mode on a Discovery CT750 HD. Four sets of virtual monochromatic CT images (60, 77, 100 and 140 keV) were obtained from a single GSI acquisition. When we define ΔHU in terms of the weighting factor for the subtraction, α, as ΔHU ≡ (1 + α)H - αL (where H and L represent the CT values for the high and low energies, respectively), the relationship between ΔHU and RED is approximated by the linear function a × ΔHU/1000 + b (with a and b close to unity). We evaluated the agreement between the determined and nominal RED. We also investigated reproducibility over short and long time periods. Results For the 13 insert materials, the RED determined by monochromatic CT images agreed with the nominal values to within 1.1%, and the coefficient of determination for this calculation formula was greater than 0.999. The observed reproducibility (1 standard deviation) of the calculation error was within 0.5% for all materials. Conclusions These findings indicate that virtual monochromatic CT scans at two different energies using the GSI mode can provide an accurate method for estimating RED. PMID:23570343
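Saito's conversion can be written out directly. The α, a, and b values below are placeholders; in practice they are fitted to calibration-phantom data:

```python
# Saito-style conversion: Delta_HU = (1 + alpha)*H - alpha*L, then
# RED = a * Delta_HU / 1000 + b. The alpha, a, b values below are
# placeholders; in practice they are fitted to a calibration phantom.
ALPHA = 0.5      # weighting factor for the energy subtraction (hypothetical)
A, B = 1.0, 1.0  # linear coefficients, both close to unity per Saito (2012)

def relative_electron_density(hu_high, hu_low, alpha=ALPHA, a=A, b=B):
    delta_hu = (1 + alpha) * hu_high - alpha * hu_low
    return a * delta_hu / 1000.0 + b

# Water-like voxel: both CT values near 0 HU, so RED comes out near 1.0
red_water = relative_electron_density(0.0, 0.0)
```

The subtraction cancels the material-dependent photoelectric contribution, which is why a single linear relation to RED holds across tissue types.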

  1. Academic Freedom, Promotion, Reappointment, Tenure and the Administrative Use of Student Evaluation of Faculty (SEF): (Part III) Analysis and Implications of Views from the Court in Relation to Accuracy and Psychometric Validity.

    ERIC Educational Resources Information Center

    Haskell, Robert E.

    1997-01-01

    Reviews legal rulings related to student evaluation of faculty (SEF), their implications, and assumptions with regard to accuracy and psychometric validity when SEF is integral to the denial of academic freedom, tenure, promotion, and reappointment. The legal principles of Disparate Treatment and Disparate Impact are considered in relation to SEF.…

  2. How the brain prevents a second error in a perceptual decision-making task

    PubMed Central

    Perri, Rinaldo Livio; Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco

    2016-01-01

    In cognitive tasks, error commission is usually followed by performance characterized by post-error slowing (PES) and post-error improvement of accuracy (PIA). Three theoretical accounts have been proposed for these post-error adjustments: the cognitive, the inhibitory, and the orienting account. The aim of the present ERP study was to investigate the neural processes associated with preventing a second error. To this aim, we focused on the preparatory brain activities in a large sample of subjects performing a Go/No-go task. The main results were the enhancement of the prefrontal negativity (pN) component (especially over the right hemisphere) and the reduction of the Bereitschaftspotential (BP) (especially over the left hemisphere) in the post-error trials. The ERP data suggested increased top-down and inhibitory control, reflected in the reduced excitability of the premotor areas during preparation of the trials following error commission. The results were discussed in light of the three theoretical accounts of the post-error adjustments. Additional control analyses supported the view that the adjustment-oriented components (the post-error pN and BP) are separate from the error-related potentials (Ne and Pe), even if all these activities represent a cascade of processes triggered by error commission. PMID:27534593

  4. Development and evaluation of a Kalman-filter algorithm for terminal area navigation using sensors of moderate accuracy

    NASA Technical Reports Server (NTRS)

    Kanning, G.; Cicolani, L. S.; Schmidt, S. F.

    1983-01-01

    Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
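The filtering idea can be illustrated with a minimal one-dimensional Kalman filter: a generic sketch of the predict/update cycle, not the paper's terminal-area implementation (the noise variances and data below are invented):

```python
def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: random-walk state, noisy position fixes.

    q: process-noise variance, r: measurement-noise variance.
    Generic sketch; not the flight-tested terminal-area algorithm.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q               # predict: uncertainty grows with time
        k = p / (p + r)         # Kalman gain weighs prediction vs measurement
        x = x + k * (z - x)     # update with the measurement residual
        p = (1 - k) * p         # posterior uncertainty shrinks
        estimates.append(x)
    return estimates

# Noisy observations of a constant position at 10.0; a large initial
# variance p0 lets the filter converge quickly from a poor first guess.
est = kalman_1d([10.3, 9.8, 10.1, 9.9, 10.2, 10.0], p0=100.0)
```

The real-time approximations the abstract mentions typically trim exactly this cycle, for example by precomputing gains or factoring the covariance update, trading a negligible loss of accuracy for bounded computation time.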

  5. Accuracy of prediction of infarct-related arrhythmic circuits from image-based models reconstructed from low and high resolution MRI.

    PubMed

    Deng, Dongdong; Arevalo, Hermenegild; Pashakhanloo, Farhad; Prakosa, Adityo; Ashikaga, Hiroshi; McVeigh, Elliot; Halperin, Henry; Trayanova, Natalia

    2015-01-01

    Identification of optimal ablation sites in hearts with infarct-related ventricular tachycardia (VT) remains difficult to achieve with the current catheter-based mapping techniques. Limitations arise from the ambiguities in determining the reentrant pathways location(s). The goal of this study was to develop experimentally validated, individualized computer models of infarcted swine hearts, reconstructed from high-resolution ex-vivo MRI, and to examine the accuracy of the reentrant circuit location prediction when models of the same hearts are instead reconstructed from low clinical-resolution MRI scans. To achieve this goal, we utilized retrospective data obtained from four pigs ~10 weeks post infarction that underwent VT induction via programmed stimulation and epicardial activation mapping via a multielectrode epicardial sock. After the experiment, high-resolution ex-vivo MRI with late gadolinium enhancement was acquired. The Hi-res images were downsampled into two lower resolutions (Med-res and Low-res) in order to replicate image quality obtainable in the clinic. The images were segmented and models were reconstructed from the three image stacks for each pig heart. VT induction similar to what was performed in the experiment was simulated. Results of the reconstructions showed that the geometry of the ventricles including the infarct could be accurately obtained from Med-res and Low-res images. Simulation results demonstrated that induced VTs in the Med-res and Low-res models were located close to those in Hi-res models. Importantly, all models, regardless of image resolution, accurately predicted the VT morphology and circuit location induced in the experiment. These results demonstrate that MRI-based computer models of hearts with ischemic cardiomyopathy could provide a unique opportunity to predict and analyze VT resulting from specific infarct architecture, and thus may assist in clinical decisions to identify and ablate the reentrant circuit(s). 

  6. Relationships among balance, visual search, and lacrosse-shot accuracy.

    PubMed

    Marsh, Darrin W; Richard, Leon A; Verre, Arlene B; Myers, Jay

    2010-06-01

    The purpose of this study was to examine variables that may contribute to shot accuracy in women's college lacrosse. A convenience sample of 15 healthy women's National Collegiate Athletic Association Division III college lacrosse players aged 18-23 years (mean±SD, 20.27±1.67) participated in the study. Four experimental variables were examined: balance, visual search, hand-grip strength, and shoulder joint position sense. Balance was measured by the Biodex Stability System (BSS), and visual search was measured by the Trail-Making Test Part A (TMTA) and Trail-Making Test Part B (TMTB). Hand-grip strength was measured by a standard hand dynamometer, and shoulder joint position sense was measured using a modified inclinometer. All measures were taken in an indoor setting. These experimental variables were then compared with lacrosse-shot error, which was measured indoors using a high-speed video camera recorder and a specialized L-shaped apparatus. A Stalker radar gun measured lacrosse-shot velocity. The mean lacrosse-shot error was 15.17 cm with a mean lacrosse-shot velocity of 17.14 m·s⁻¹ (38.35 mph). Lower scores on the BSS level 8 eyes-open (BSS L8 E/O) test and TMTB were related to less lacrosse-shot error (r=0.760, p=0.011 and r=0.519, p=0.048, respectively). Relations were not significant between lacrosse-shot error and grip strength (r=0.191, p=0.496), lacrosse-shot error and BSS level 8 eyes closed (BSS L8 E/C) (r=0.501, p=0.102), lacrosse-shot error and BSS level 4 eyes open (BSS L4 E/O) (r=0.313, p=0.378), lacrosse-shot error and BSS level 4 eyes closed (BSS L4 E/C) (r=-0.029, p=0.936), lacrosse-shot error and shoulder joint position sense (r=-0.509, p=0.055), or between lacrosse-shot error and TMTA (r=0.375, p=0.168). The results reveal that greater levels of shot accuracy may be related to greater levels of visual search and balance ability in women's college lacrosse athletes. PMID:20508452

  7. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  8. Design and accuracy analysis of a metamorphic CNC flame cutting machine for ship manufacturing

    NASA Astrophysics Data System (ADS)

    Hu, Shenghai; Zhang, Manhui; Zhang, Baoping; Chen, Xi; Yu, Wei

    2016-05-01

    The current research on processing large fabrication holes on complex spatially curved surfaces mainly focuses on the design of CNC flame cutting machines for ship hulls in ship manufacturing. However, the existing machines cannot meet the requirement of continuous cutting with variable pass conditions because of their fixed configuration, and cannot realize high-precision processing because the underlying accuracy theory has not been studied adequately. This paper deals with the structure design and accuracy prediction technology of novel machine tools aimed at solving the problem of continuous and high-precision cutting. The required variable-trajectory and variable-pose kinematic characteristics of the non-contact cutting tool are determined, and a metamorphic CNC flame cutting machine designed through the metamorphic principle is presented. To analyze the kinematic accuracy of the machine, models of joint clearances, manufacturing tolerances and errors in the input variables, as well as error models considering their combined effects, are derived based on screw theory after establishing ideal kinematic models. Numerical simulations, a processing experiment and a trajectory tracking experiment are conducted for an eccentric hole with bevels on a cylindrical surface. The results of the cutting pass contour and the kinematic error interval, in which the position error ranges from −0.975 mm to +0.628 mm and the orientation error from −0.01 rad to +0.01 rad, indicate that the developed machine can complete the cutting process continuously and effectively, and that the established kinematic error models are effective even though the interval is relatively large. The work also shows the matching property between the metamorphic principle and variable working tasks, and the mapping correlation between the original design parameters and the kinematic errors of the machines. This research develops a metamorphic CNC flame cutting machine and establishes kinematic error models for accuracy analysis of machine tools.

  9. Collective animal decisions: preference conflict and decision accuracy

    PubMed Central

    Conradt, Larissa

    2013-01-01

    Social animals frequently share decisions that involve uncertainty and conflict. It has been suggested that conflict can enhance decision accuracy. In order to judge the practical relevance of such a suggestion, it is necessary to explore how general such findings are. Using a model, I examine whether conflicts between animals in a group with respect to preferences for avoiding false positives versus avoiding false negatives could, in principle, enhance the accuracy of collective decisions. I found that decision accuracy nearly always peaked when there was maximum conflict in groups in which individuals had different preferences. However, groups with no preferences were usually even more accurate. Furthermore, a relatively slight skew towards more animals with a preference for avoiding false negatives decreased the rate of expected false negatives versus false positives considerably (and vice versa), while resulting in only a small loss of decision accuracy. I conclude that in ecological situations in which decision accuracy is crucial for fitness and survival, animals cannot 'afford' preferences with respect to avoiding false positives versus false negatives. When decision accuracy is less crucial, animals might have such preferences. A slight skew in the number of animals with different preferences will result in the group avoiding the type of error that the majority of group members prefers to avoid. The model also indicated that knowing the average success rate ('base rate') of a decision option can be very misleading, and that animals should ignore such base rates unless further information is available. PMID:24516716
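
    The preference framing above can be read in signal-detection terms: an individual that dreads false negatives uses a low decision threshold, one that dreads false positives a high one, and the group decides by majority vote. The following toy simulation is a hypothetical sketch in that spirit (thresholds, noise level and group size are all illustrative, not taken from the paper's model):

```python
import random

random.seed(1)

def group_decision(thresholds, state, noise=1.0, trials=2000):
    """Fraction of correct majority votes over many trials. Each animal
    observes the true state (0 or 1) plus Gaussian noise and votes
    'positive' when its observation exceeds its personal threshold:
    low thresholds avoid false negatives, high ones avoid false positives."""
    correct = 0
    for _ in range(trials):
        votes = sum(1 for t in thresholds if random.gauss(state, noise) > t)
        decision = 1 if 2 * votes > len(thresholds) else 0
        correct += (decision == state)
    return correct / trials

n = 11
no_preference = [0.5] * n                               # thresholds midway
max_conflict = [0.1] * (n // 2) + [0.9] * (n - n // 2)  # opposing preferences

for name, th in [("no preference", no_preference), ("max conflict", max_conflict)]:
    acc = 0.5 * (group_decision(th, 1) + group_decision(th, 0))
    print(name, round(acc, 3))
```

    Under these illustrative settings the unbiased group scores highest, echoing the abstract's finding that groups with no preferences are usually the most accurate.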

  10. Non-Abelian quantum error correction

    NASA Astrophysics Data System (ADS)

    Feng, Weibo

    A quantum computer is a proposed device which would be capable of initializing, coherently manipulating, and measuring quantum states with sufficient accuracy to carry out new kinds of computations. In the standard scenario, a quantum computer is built out of quantum bits, or qubits, two-level quantum systems which replace the ordinary classical bits of a classical computer. Quantum computation is then carried out by applying quantum gates, the quantum equivalent of Boolean logic gates, to these qubits. The most fundamental barrier to building a quantum computer is the inevitable errors which occur when carrying out quantum gates and the loss of quantum coherence of the qubits due to their coupling to the environment (decoherence). Remarkably, it has been shown that in a quantum computer such errors and decoherence can be actively fought using what is known as quantum error correction. A closely related proposal for fighting errors and decoherence in a quantum computer is to build the computer out of so-called topologically ordered states of matter. These are states of matter which allow for the storage and manipulation of quantum states with a built in protection from error and decoherence. The excitations of these states are non-Abelian anyons, particle-like excitations which satisfy non-Abelian statistics, meaning that when two excitations are interchanged the result is not the usual +1 and -1 associated with identical Bosons or Fermions, but rather a unitary operation which acts on a multidimensional Hilbert space. It is therefore possible to envision computing with these anyons by braiding their world-lines in 2+1-dimensional spacetime. In this Dissertation we present explicit procedures for a scheme which lives at the intersection of these two approaches. In this scheme we envision a functioning ``conventional" quantum computer consisting of an array of qubits and the ability to carry out quantum gates on these qubits. We then give explicit quantum circuits

  11. Navigation Accuracy Guidelines for Orbital Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2003-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver we, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the member s semi-major axes are undesirable for a satellite formation, since it will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
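
    The drift relationship underlying such guidelines follows from two-body mechanics: a semi-major axis difference Δa produces a mean-motion difference and hence an along-track drift of roughly 3πΔa per orbit. A minimal sketch (the 7000 km orbit and 10 m error are illustrative numbers, not values from the paper):

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def along_track_drift_per_orbit(delta_a):
    """First-order along-track drift per revolution caused by a
    semi-major axis difference delta_a (two-body dynamics)."""
    return 3.0 * math.pi * delta_a

def drift_rate(a, delta_a):
    """Along-track drift rate (m/s) for a formation about semi-major axis a (m)."""
    n = math.sqrt(MU / a**3)            # mean motion, rad/s
    delta_n = -1.5 * (n / a) * delta_a  # mean-motion difference
    return a * delta_n                  # relative along-track velocity

# Example: a 10 m semi-major-axis difference in a 7000 km orbit
print(along_track_drift_per_orbit(10.0))  # ~94 m of drift per orbit
```

    The negative sign of `drift_rate` reflects that the higher satellite orbits more slowly and falls behind.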

  12. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

    Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is quantified. We show that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302

  13. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations that typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, −0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ −0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit
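
    The "overlap" and "mismatch" figures quoted above are normalized inner products between waveforms. The sketch below illustrates only the bare definition under a flat (white) noise weighting; real analyses maximize over time and phase shifts and weight by the detector noise spectrum, and the sine waveforms here are stand-ins, not physical templates:

```python
import numpy as np

def overlap(h1, h2):
    """Normalized inner product of two time-domain waveforms under a flat
    noise weighting; mismatch = 1 - overlap."""
    inner = lambda a, b: float(np.sum(a * b))
    return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
h_model = np.sin(2 * np.pi * 50.0 * t)        # stand-in "model" waveform
h_nr = np.sin(2 * np.pi * 50.0 * t + 0.1)     # same signal with a small phasing error

print(f"mismatch: {1.0 - overlap(h_model, h_nr):.4f}")  # ~0.005 for 0.1 rad of dephasing
```

    Even a constant 0.1 rad phasing error already costs about half a percent of overlap, which is why slowly accumulating dephasing during a 36-88 orbit inspiral matters.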

  14. Accuracy of microbial growth predictions with square root and polynomial models.

    PubMed

    Delignette-Muller, M L; Rosso, L; Flandrois, J P

    1995-10-01

    The results of growth predictions using square root and polynomial models published in 14 papers were studied. Errors in quantities of practical interest, such as the lag time, the generation time, or the time required to reach a given increase in the number of cells, are analyzed. The distribution of these errors was examined with the perspective of the practical use of predictive models in the food industry. Highly unsafe predictions and significant average errors were observed in some cases. A good knowledge of predictive models' accuracy seems essential for their efficient and safe use, for example to predict the shelf life of a product. Yet, authors generally gave no pragmatic information on such things as the average relative error or the range of errors on predicted variables. Problems of robustness of models when tested in different conditions were noticed, which corroborates the necessity of a systematic validation of models on new data. PMID:8579985
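
    Square-root models of the kind evaluated here are typically of the Ratkowsky form, sqrt(μmax) = b·(T − Tmin), and the practical quantities follow directly from the predicted growth rate. A minimal sketch with hypothetical parameter values (not taken from any of the 14 papers):

```python
import math

def sqrt_model_mu(T, b, T_min):
    """Ratkowsky square-root model: sqrt(mu_max) = b * (T - T_min).
    Returns the predicted maximum specific growth rate (1/h)."""
    return (b * (T - T_min)) ** 2

def generation_time(mu_max):
    """Generation (doubling) time in hours from the specific growth rate."""
    return math.log(2) / mu_max

def relative_error(predicted, observed):
    """Signed relative error of a prediction. Over-predicted growth rates
    (under-predicted generation times) are the unsafe direction when the
    model is used for shelf-life estimates."""
    return (predicted - observed) / observed

# Illustrative (hypothetical) parameters:
mu = sqrt_model_mu(T=20.0, b=0.04, T_min=5.0)   # 1/h
print(round(mu, 3), round(generation_time(mu), 2))
```

    Reporting the distribution of `relative_error` over a validation set, rather than a single fit statistic, is exactly the kind of pragmatic information the abstract says is usually missing.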

  15. Assessing the accuracy of prediction algorithms for classification: an overview.

    PubMed

    Baldi, P; Brunak, S; Chauvin, Y; Andersen, C A; Nielsen, H

    2000-05-01

    We provide a unified overview of methods that currently are widely used to assess the accuracy of prediction algorithms, from raw percentages, quadratic error measures and other distances, and correlation coefficients, to information-theoretic measures such as relative entropy and mutual information. We briefly discuss the advantages and disadvantages of each approach. For classification tasks, we derive new learning algorithms for the design of prediction systems by directly optimising the correlation coefficient. We observe and prove several results relating the sensitivity and specificity of optimal systems. While the principles are general, we illustrate their applicability on specific problems such as protein secondary structure and signal peptide prediction. PMID:10871264
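
    The correlation coefficient for binary classification referred to here is the Matthews correlation coefficient, computable from a confusion matrix alongside the sensitivity and specificity it relates. A minimal sketch (the toy counts are made up for illustration):

```python
import math

def matthews_cc(tp, fp, tn, fn):
    """Matthews correlation coefficient from a binary confusion matrix.
    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    0 corresponds to random guessing."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# A toy confusion matrix: raw percentage accuracy can flatter an
# imbalanced task, while the MCC accounts for all four cells.
tp, fp, tn, fn = 90, 30, 870, 10
print(round(matthews_cc(tp, fp, tn, fn), 3))
```

    Unlike raw accuracy, the MCC penalizes a classifier that buys sensitivity with false positives, which is why it is a natural objective to optimize directly.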

  16. Scout trajectory error propagation computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1982-01-01

    Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
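
    The core of the approach described, forming a covariance matrix from observed flight-error samples and propagating it to orbit parameters, can be sketched as follows. Everything numeric here is hypothetical (stand-in error samples and a made-up Jacobian); STEP would obtain the true sensitivities from the trajectory dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the ~50 observed sets of burnout errors
# (columns: altitude error [m], velocity error [m/s], flight-path angle [rad]).
flight_errors = rng.normal(0.0, [150.0, 2.0, 0.001], size=(50, 3))

# Sample covariance matrix of the burnout errors (rows are observations).
P_burnout = np.cov(flight_errors, rowvar=False)

# Propagate through a linearized burnout-to-orbit mapping x_orbit = A @ x_burnout,
# where A is an assumed Jacobian of orbit parameters w.r.t. burnout state.
A = np.array([[1.0, 120.0, 8.0e5],
              [0.5,  80.0, 4.0e5]])
P_orbit = A @ P_burnout @ A.T   # covariance of, e.g., (apogee, perigee) errors

print(P_orbit.shape)
```

    The diagonal of `P_orbit` gives the variances used to quote orbit-parameter accuracies; the off-diagonal terms capture how, for instance, apogee and perigee errors co-vary.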

  17. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  18. TU-F-17A-08: The Relative Accuracy of 4D Dose Accumulation for Lung Radiotherapy Using Rigid Dose Projection Versus Dose Recalculation On Every Breathing Phase

    SciTech Connect

    Lamb, J; Lee, C; Tee, S; Lee, P; Iwamoto, K; Low, D; Valdes, G; Robinson, C

    2014-06-15

    Purpose: To investigate the accuracy of 4D dose accumulation using projection of dose calculated on the end-exhalation, mid-ventilation, or average intensity breathing phase CT scan, versus dose accumulation performed using full Monte Carlo dose recalculation on every breathing phase. Methods: Radiotherapy plans were analyzed for 10 patients with stage I-II lung cancer planned using 4D-CT. SBRT plans were optimized using the dose calculated by a commercially-available Monte Carlo algorithm on the end-exhalation 4D-CT phase. 4D dose accumulations using deformable registration were performed with a commercially available tool that projected the planned dose onto every breathing phase without recalculation, as well as with a Monte Carlo recalculation of the dose on all breathing phases. The 3D planned dose (3D-EX), the 3D dose calculated on the average intensity image (3D-AVE), and the 4D accumulations of the dose calculated on the end-exhalation phase CT (4D-PR-EX), the mid-ventilation phase CT (4D-PR-MID), and the average intensity image (4D-PR-AVE), respectively, were compared against the accumulation of the Monte Carlo dose recalculated on every phase. Plan evaluation metrics relating to target volumes and critical structures relevant for lung SBRT were analyzed. Results: Plan evaluation metrics tabulated using 4D-PR-EX, 4D-PR-MID, and 4D-PR-AVE differed from those tabulated using Monte Carlo recalculation on every phase by an average of 0.14±0.70 Gy, −0.11±0.51 Gy, and 0.00±0.62 Gy, respectively. Deviations of between 8 and 13 Gy were observed between the 4D-MC calculations and both 3D methods for the proximal bronchial trees of 3 patients. Conclusions: 4D dose accumulation using projection without re-calculation may be sufficiently accurate compared to 4D dose accumulated from Monte Carlo recalculation on every phase, depending on institutional protocols. Use of 4D dose accumulation should be considered when evaluating normal tissue complication

  19. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published, and that they are not strictly comparable due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance, but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
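
    The sensitivity to unreported weights is easy to demonstrate: the standard dynamic-programming edit distance gives different error counts for the same OCR output depending on the substitution cost, one of the implementation details the abstract notes is often omitted. A self-contained sketch:

```python
def edit_distance(ref, ocr, sub_cost=1):
    """Dynamic-programming (Levenshtein) distance between a reference text
    and the OCR output. Insertions and deletions cost 1; the substitution
    cost is a parameter, since published error counts differ partly because
    this weight is often left unspecified."""
    m, n = len(ref), len(ocr)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == ocr[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[m][n]

ref, ocr = "minimum", "rnjnimum"  # 'm'->'rn' and 'i'->'j': classic OCR confusions
print(edit_distance(ref, ocr, sub_cost=1))  # → 3: substitution counts as one error
print(edit_distance(ref, ocr, sub_cost=2))  # → 5: substitution counts as delete+insert
```

    The same page of OCR output thus yields a 3-error or a 5-error count depending on an undisclosed weight, which is precisely why the published accuracy figures are not strictly comparable.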

  20. Relative effects of whole-word and phonetic-prompt error correction on the acquisition and maintenance of sight words by students with developmental disabilities.

    PubMed Central

    Barbetta, P M; Heward, W L; Bradley, D M

    1993-01-01

    We used an alternating treatments design to compare the effects of two procedures for correcting student errors during sight word drills. Each of the 5 participating students with developmental disabilities was provided daily one-to-one instruction on individualized sets of 14 unknown words. Each week's new set of unknown words was divided randomly into two groups of equal size. Student errors during instruction were immediately followed by whole-word error correction (the teacher stated the complete word and the student repeated it) for one group of words and by phonetic-prompt error correction (the teacher provided phonetic prompts) for the other group of words. During instruction, all 5 students read correctly a higher percentage of whole-word corrected words than phonetic-prompt corrected words. Data from same-day tests (immediately following instruction) and next-day tests showed the students learned more words taught with whole-word error correction than they learned with phonetic-prompt error correction. PMID:8473263

  1. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
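
    The "weighted sum of covariances" structure can be sketched for the Manning form of the slope-area formula, where the weights are simply the exponents of each factor. The relative standard errors and the correlation below are hypothetical illustrations, not values from the paper:

```python
import numpy as np

# Manning slope-area formula: Q = (1/n) * A * R**(2/3) * S**(1/2)  (SI units).
# Because Q is a product of powers, first-order (Taylor-series) propagation of
# relative errors gives  var(dQ/Q) ≈ p @ C @ p,  where p holds the exponents
# and C is the covariance matrix of the relative errors in (n, A, R, S).

p = np.array([-1.0, 1.0, 2.0 / 3.0, 0.5])   # exponents of n, A, R, S

# Hypothetical relative standard errors: 15% on Manning's n, 5% on area and
# hydraulic radius, 20% on the energy slope (driven by water-surface fall).
sigma = np.array([0.15, 0.05, 0.05, 0.20])
corr = np.eye(4)
corr[2, 3] = corr[3, 2] = 0.3               # assumed R-S error correlation
C = np.outer(sigma, sigma) * corr

rel_var_Q = p @ C @ p
print(f"relative standard error of Q: {np.sqrt(rel_var_Q):.1%}")  # ~19.5%
```

    The exponent vector plays the role of the channel-dependent weights: a 20% slope error enters at half weight, while an error in Manning's n passes through at full weight.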

  2. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes that are very high elevation for a tracking station which tracks very close to the time of altimeter track, such as a high elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  3. L2 Spelling Errors in Italian Children with Dyslexia.

    PubMed

    Palladino, Paola; Cismondo, Dhebora; Ferrari, Marcella; Ballagamba, Isabella; Cornoldi, Cesare

    2016-05-01

    The present study aimed to investigate L2 spelling skills in Italian children by administering an English word dictation task to 13 children with dyslexia (CD), 13 control children (comparable in age, gender, schooling and IQ) and a group of 10 children with an English learning difficulty but no L1 learning disorder. Patterns of difficulty were examined for accuracy and type of errors in spelling dictated short and long words (i.e. two- and three-syllable words). Notably, CD were poor in spelling English words. Furthermore, their errors were mainly related to the phonological representation of words, as they made more 'phonologically' implausible errors than controls. In addition, CD errors were more frequent for short than long words. Conversely, the three groups did not differ in the number of plausible ('non-phonological') errors, that is, words that were incorrectly written but whose reading could correspond to the dictated word via either Italian or English rules. Error analysis also showed syllable position differences in the spelling patterns of CD, children with an English learning difficulty, and control children. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26892314

  4. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensors, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to realize datum unification and high-precision attitude output. Finally, we realize low frequency error model construction and optimal estimation of the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model in this paper can well describe the law of low frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.

  5. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  6. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

    Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  7. Improving Automatic English Writing Assessment Using Regression Trees and Error-Weighting

    NASA Astrophysics Data System (ADS)

    Lee, Kong-Joo; Kim, Jee-Eun

    The proposed automated scoring system for English writing tests provides an assessment result, including a score and diagnostic feedback, to test-takers without human effort. The system analyzes an input sentence and detects errors related to spelling, syntax and content similarity. The scoring model has adopted one of the statistical approaches, a regression tree. A scoring model in general calculates a score based on the count and the types of automatically detected errors. Accordingly, a system with higher accuracy in detecting errors raises the accuracy in scoring a test. The accuracy of the system, however, cannot be fully guaranteed for several reasons, such as parsing failure, incompleteness of knowledge bases, and the ambiguous nature of natural language. In this paper, we introduce an error-weighting technique, which is similar to the term-weighting widely used in information retrieval. The error-weighting technique is applied to judge the reliability of the errors detected by the system. The score calculated with this technique is proven to be more accurate than the score without it.
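
    The error-weighting idea can be sketched in the spirit of inverse-document-frequency term weighting. The scheme below is a hypothetical illustration of that analogy, not the authors' actual formula, and all counts and constants are made up:

```python
import math

def error_weights(detections):
    """Weight each automatically detected error type by its estimated
    reliability, analogous to IDF term-weighting in information retrieval:
    error types the detector flags very frequently (often spuriously,
    e.g. because of parsing failures) receive lower weights.
    `detections` maps error type -> number of essays it was flagged in."""
    total = sum(detections.values())
    return {etype: math.log(total / count)
            for etype, count in detections.items()}

def weighted_error_score(found, weights, base=100.0, penalty=5.0):
    """Score an essay from its detected errors, discounting each error by
    the reliability weight of its type (hypothetical scoring scheme)."""
    return base - penalty * sum(weights.get(e, 1.0) for e in found)

flags = {"spelling": 40, "agreement": 25, "content": 10}
w = error_weights(flags)
print(weighted_error_score(["spelling", "content"], w))
```

    Under this sketch a rarely flagged (hence more trustworthy) error type such as "content" costs the test-taker more points than a frequently flagged one such as "spelling".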

  8. Digital phase-locked-loop speed sensor for accuracy improvement in analog speed controls. [feedback control and integrated circuits

    NASA Technical Reports Server (NTRS)

    Birchenough, A. G.

    1975-01-01

    A digital speed control that can be combined with a proportional analog controller is described. The stability and transient response of the analog controller were retained and combined with the long-term accuracy of a crystal-controlled integral controller. A relatively simple circuit was developed by using phase-locked-loop techniques and total error storage. The integral digital controller will maintain speed control accuracy equal to that of the crystal reference oscillator.

  9. Effects on rotation-angle conversion error from the pitch error in the rotor of a magnetoelectric converter

    SciTech Connect

    Ivanov, B.N.; Dubenets, A.L.

    1995-08-01

    The paper discusses the pitch error of the tooth rotor in a magnetoelectric angle converter and the error of measurement when the stator has one, two, or four one-tooth readout heads. The main reasons for error in converting the angle are deviations in the shape and size of the tooth elements in the rotor and stator, errors in their mutual disposition during displacement, and lack of coincidence between the geometrical axis of the input shaft and the axis of the rotor. The effects of the last factor have been considered, and suitable recommendations were made. Error tests were reported on tooth converters in which the rotor and stator had a large number of teeth arranged around the entire circle, while the parameters of the teeth and the gaps between them did not exceed the recommended limits of deviation. These converters provide high conversion accuracy, since they employ circular averaging of the error due to deviations of the rotor teeth from their calculated positions relative to the stator. However, such devices are complicated to make and often are not justified by the metrological and cost requirements. In most cases it is desirable to make the rotor as one of the tooth rings in a machine or instrument, while the stator consists of one, two, or several (usually four) one-tooth magnetoelectric readout heads.

  10. Partially supervised P300 speller adaptation for eventual stimulus timing optimization: target confidence is superior to error-related potential score as an uncertain label

    NASA Astrophysics Data System (ADS)

    Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom

    2016-04-01

    Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA

  11. Unified Analysis for Antenna Pointing and Structural Errors. Part 1. Review

    NASA Technical Reports Server (NTRS)

    Abichandani, K.

    1983-01-01

    A necessary step in the design of a high accuracy microwave antenna system is to establish the signal error budget due to structural, pointing, and environmental parameters. A unified approach in performing error budget analysis as applicable to ground-based microwave antennas of different size and operating frequency is discussed. Major error sources contributing to the resultant deviation in antenna boresighting in pointing and tracking modes and the derivation of the governing equations are presented. Two computer programs (SAMCON and EBAP) were developed in-house, including the antenna servo-control program, as valuable tools in the error budget determination. A list of possible errors giving their relative contributions and levels is presented.

  12. Project of Neutron Beta-Decay A-Asymmetry Measurement With Relative Accuracy of (1–2)×10−3

    PubMed Central

    Serebrov, A.; Rudnev, Yu.; Murashkin, A.; Zherebtsov, O.; Kharitonov, A.; Korolev, V.; Morozov, T.; Fomin, A.; Pusenkov, V.; Schebetov, A.; Varlamov, V.

    2005-01-01

    We are going to use a polarized cold neutron beam and an axial magnetic field in the shape of a bottle formed by a superconducting magnetic system. Such a configuration of magnetic fields allows us to extract the decay electrons inside a well-defined solid angle with high accuracy. An electrostatic cylinder with a potential of 25 kV defines the detected region of neutron decays. The protons, which come from this region will be accelerated and registered by a proton detector. The use of coincidences between electron and proton signals will allow us to considerably suppress the background. The final accuracy of the A-asymmetry will be determined by the uncertainty of the neutron beam polarization measurement which is at the level of (1–2) × 10−3, as shown in previous studies. PMID:27308154

  13. An error analysis of the recovery capability of the relative sea-surface profile over the Puerto Rican trench from multi-station and ship tracking of GEOS-2

    NASA Technical Reports Server (NTRS)

    Stanley, H. R.; Martin, C. F.; Roy, N. A.; Vetter, J. R.

    1971-01-01

    Error analyses were performed to examine the height error in a relative sea-surface profile as determined by a combination of land-based multistation C-band radars and optical lasers and one ship-based radar tracking the GEOS 2 satellite. It was shown that two relative profiles can be obtained: one using available south-to-north passes of the satellite and one using available north-to-south type passes. An analysis of multi-station tracking capability determined that only Antigua and Grand Turk radars are required to provide satisfactory orbits for south-to-north type satellite passes, while a combination of Merritt Island, Bermuda, and Wallops radars provide secondary orbits for north-to-south passes. Analysis of ship tracking capabilities shows that high elevation single pass range-only solutions are necessary to give only moderate sensitivity to systematic error effects.

  14. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, Vicki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  15. Manson's triple error.

    PubMed

    Delaporte, F.

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  16. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  17. Temperature variable optimization for precision machine tool thermal error compensation on optimal threshold

    NASA Astrophysics Data System (ADS)

    Zhang, Ting; Ye, Wenhua; Liang, Ruijun; Lou, Peihuang; Yang, Xiaolan

    2013-01-01

    Machine tool thermal error is an important reason for poor machining accuracy, and thermal error compensation is a primary technology for accuracy control. To build a thermal error model, temperature variables need to be divided into several groups on an appropriate threshold. Currently, the group threshold value is mainly determined by the researcher's experience; few studies focus on the group threshold in temperature variable grouping. Since the threshold is important in error compensation, this paper aims to find an optimal threshold to realize temperature variable optimization in thermal error modeling. Firstly, the correlation coefficient is used to express the membership grade of the temperature variables, and the theory of fuzzy transitive closure is applied to obtain their relational matrix. Concepts of compact degree and separable degree are introduced, and an evaluation model of temperature variable clustering is built. The optimal threshold and the best temperature variable clustering can be obtained by setting the maximum value of the evaluation model as the objective. Finally, correlation coefficients between temperature variables and thermal error are calculated in order to find the optimum temperature variables for thermal error modeling. An experiment was conducted on a precise horizontal machining center, in which three displacement sensors measured spindle thermal error and twenty-nine temperature sensors detected the machining center temperature. The experimental result shows that the new method of temperature variable optimization on an optimal threshold worked out a best threshold value interval and chose seven temperature variables from the twenty-nine temperature measuring points. The model residual in the z direction is within 3 μm. The proposed variable optimization method has a simple computing process and good modeling accuracy, which makes it well suited for thermal error compensation.
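    The grouping step described above (transitive closure over a correlation matrix on a threshold, then picking from each group the variable most correlated with the thermal error) can be sketched as follows. The correlation values and the 0.9 threshold are hypothetical, and the paper's evaluation model for choosing the optimal threshold is not reproduced here.

    ```python
    def group_by_threshold(corr, threshold):
        # Transitive-closure grouping: variables i and j fall in the same
        # group if a chain of pairwise correlations >= threshold links them.
        n = len(corr)
        groups, seen = [], set()
        for i in range(n):
            if i in seen:
                continue
            stack, comp = [i], set()
            while stack:
                k = stack.pop()
                if k in comp:
                    continue
                comp.add(k)
                stack.extend(j for j in range(n)
                             if abs(corr[k][j]) >= threshold and j not in comp)
            seen |= comp
            groups.append(sorted(comp))
        return groups

    def pick_representatives(groups, corr_with_error):
        # From each group keep the variable most correlated with thermal error.
        return [max(g, key=lambda i: abs(corr_with_error[i])) for g in groups]

    # Hypothetical correlation matrix for four temperature sensors: sensors
    # 0/1 and 2/3 form strongly correlated pairs.
    corr = [[1.00, 0.95, 0.20, 0.10],
            [0.95, 1.00, 0.15, 0.10],
            [0.20, 0.15, 1.00, 0.92],
            [0.10, 0.10, 0.92, 1.00]]
    corr_err = [0.80, 0.75, 0.60, 0.65]   # correlation with thermal error
    groups = group_by_threshold(corr, 0.90)
    reps = pick_representatives(groups, corr_err)
    ```

    With these numbers the four sensors collapse into two groups, and one representative per group is kept for the thermal error model.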

  18. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  19. An acoustical assessment of pitch-matching accuracy in relation to speech frequency, speech frequency range, age and gender in preschool children

    NASA Astrophysics Data System (ADS)

    Trollinger, Valerie L.

    This study investigated the relationship between acoustical measurement of singing accuracy in relationship to speech fundamental frequency, speech fundamental frequency range, age and gender in preschool-aged children. Seventy subjects from Southeastern Pennsylvania; the San Francisco Bay Area, California; and Terre Haute, Indiana, participated in the study. Speech frequency was measured by having the subjects participate in spontaneous and guided speech activities with the researcher, with 18 diverse samples extracted from each subject's recording for acoustical analysis for fundamental frequency in Hz with the CSpeech computer program. The fundamental frequencies were averaged together to derive a mean speech frequency score for each subject. Speech range was calculated by subtracting the lowest fundamental frequency produced from the highest fundamental frequency produced, resulting in a speech range measured in increments of Hz. Singing accuracy was measured by having the subjects each echo-sing six randomized patterns using the pitches Middle C, D, E, F♯, G and A (440), using the solfege syllables of Do and Re, which were recorded by a 5-year-old female model. For each subject, 18 samples of singing were recorded. All samples were analyzed by the CSpeech for fundamental frequency. For each subject, deviation scores in Hz were derived by calculating the difference between what the model sang in Hz and what the subject sang in response in Hz. Individual scores for each child consisted of an overall mean total deviation frequency, mean frequency deviations for each pattern, and mean frequency deviation for each pitch. Pearson correlations, MANOVA and ANOVA analyses, Multiple Regressions and Discriminant Analysis revealed the following findings: (1) moderate but significant (p < .001) relationships emerged between mean speech frequency and the ability to sing the pitches E, F♯, G and A in the study; (2) mean speech frequency also emerged as the strongest
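    The deviation scoring described above (difference in Hz between the model's pitch and the child's response, averaged per pitch and overall) can be sketched as follows. The target frequencies are standard equal-tempered values for the pitches named in the abstract; the response samples are hypothetical.

    ```python
    # Equal-tempered target pitches (Hz): Middle C, D, E, F-sharp, G, A440.
    TARGETS = {"C4": 261.63, "D4": 293.66, "E4": 329.63,
               "Fs4": 369.99, "G4": 392.00, "A4": 440.00}

    def deviation_scores(responses):
        # responses: {pitch_name: [sung fundamental frequencies in Hz]}
        # Returns the mean absolute deviation per pitch and the overall mean.
        per_pitch = {}
        for name, freqs in responses.items():
            devs = [abs(f - TARGETS[name]) for f in freqs]
            per_pitch[name] = sum(devs) / len(devs)
        overall = sum(per_pitch.values()) / len(per_pitch)
        return per_pitch, overall

    # Hypothetical subject: two attempts at C4, one at A4.
    per_pitch, overall = deviation_scores({"C4": [261.63, 270.0],
                                           "A4": [430.0]})
    ```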

  20. Critical needs of fringe-order accuracies in two-color holographic interferometry

    NASA Technical Reports Server (NTRS)

    Vikram, C. S.; Witherow, W. K.

    1992-01-01

    Requirements for the fringe order accuracy in two-color holographic interferometry are discussed with reference to crystal growth. A simple test cell (rectangular parallelepiped) containing a fluid is considered. The temperature and concentration variations are related to the fringe orders from the two interference patterns, and the uncertainties in the fringe orders are related to errors in the temperature and concentration determination. The formulation developed here is applied to the particular case of an aqueous solution of triglycerine sulfate as an example.

  1. Error analysis and data reduction for interferometric surface measurements

    NASA Astrophysics Data System (ADS)

    Zhou, Ping

    High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.

  2. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    After committing an error, participants tend to perform more slowly. This phenomenon is called post-error slowing (PES). Although previous studies have explored the PES effect in the context of observed errors, the issue as to whether the slowing effect generalizes across tasksets remains unclear. Further, the generation mechanisms of PES following observed errors must be examined. To address the above issues, we employed an observation-execution task in three experiments. During each trial, participants were required to mentally observe the outcomes of their partners in the observation task and then to perform their own key-press according to the mapping rules in the execution task. In Experiment 1, the same tasksets were utilized in the observation task and the execution task, and three error rate conditions (20%, 50% and 80%) were established in the observation task. The results revealed that the PES effect after observed errors was obtained in all three error rate conditions, replicating and extending previous studies. In Experiment 2, distinct stimuli and response rules were utilized in the observation task and the execution task. The result pattern was the same as that in Experiment 1, suggesting that the PES effect after observed errors was a generic adjustment process. In Experiment 3, the response deadline was shortened in the execution task to rule out the ceiling effect, and two error rate conditions (50% and 80%) were established in the observation task. The PES effect after observed errors was still obtained in the 50% and 80% error rate conditions. However, the accuracy in the post-observed error trials was comparable to that in the post-observed correct trials, suggesting that the slowing effect and improved accuracy did not rely on the same underlying mechanism. Current findings indicate that the occurrence of PES after observed errors is not dependent on the probability of observed errors, consistent with the assumption of cognitive control account

  3. Evaluation of factors influencing accuracy of principal procedure coding based on ICD-9-CM: an Iranian study.

    PubMed

    Farzandipour, Mehrdad; Sheikhtaheri, Abbas

    2009-01-01

    To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. "Recodes" were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible effective factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647

  4. Evaluation of Factors Influencing Accuracy of Principal Procedure Coding Based on ICD-9-CM: An Iranian Study

    PubMed Central

    Farzandipour, Mehrdad; Sheikhtaheri, Abbas

    2009-01-01

    To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible effective factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
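    The odds ratio and 95 percent confidence interval used in this study can be sketched as follows, using the standard log-OR normal approximation. The 2×2 counts below are illustrative, not the study's data.

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        # 2x2 table: a, b = accurately / inaccurately coded with the factor
        # present (e.g. readable record); c, d = same with the factor absent.
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Illustrative counts: 90/110 readable records coded accurately versus
    # 60/100 hard-to-read records.
    or_, lo, hi = odds_ratio_ci(90, 20, 60, 40)
    ```

    An interval that excludes 1 would indicate a statistically significant association between the factor and coding accuracy.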

  5. GOMOS data characterization and error estimation

    NASA Astrophysics Data System (ADS)

    Tamminen, J.; Kyrölä, E.; Sofieva, V. F.; Laine, M.; Bertaux, J.-L.; Hauchecorne, A.; Dalaudier, F.; Fussen, D.; Vanhellemont, F.; Fanton-D'Andon, O.; Barrot, G.; Mangin, A.; Guirlet, M.; Blanot, L.; Fehr, T.; Saavedra de Miguel, L.; Fraisse, R.

    2010-03-01

    The Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument uses the stellar occultation technique for monitoring ozone and other trace gases in the stratosphere and mesosphere. The self-calibrating measurement principle of GOMOS, together with a relatively simple data retrieval in which only minimal use of a priori data is required, provides excellent possibilities for long-term monitoring of atmospheric composition. GOMOS uses about 180 of the brightest stars as its light source. Depending on the individual spectral characteristics of the stars, the signal-to-noise ratio of GOMOS changes from star to star, resulting also in varying accuracy of the retrieved profiles. We present an overview of the GOMOS data characterization and error estimation, including modeling errors, for ozone, NO2, NO3 and aerosol profiles. The retrieval error (precision) of the night-time measurements in the stratosphere is typically 0.5-4% for ozone, about 10-20% for NO2, 20-40% for NO3 and 2-50% for aerosols. Mesospheric O3, up to 100 km, can be measured with 2-10% precision. The main sources of the modeling error are the incompletely corrected atmospheric turbulence causing scintillation, inaccurate aerosol modeling, and uncertainties in the cross sections of the trace gases and in the atmospheric temperature. The sampling resolution of GOMOS varies depending on the measurement geometry. In the data inversion a Tikhonov-type regularization with a pre-defined target resolution requirement is applied, leading to 2-3 km resolution for ozone and 4 km resolution for other trace gases.

  6. GOMOS data characterisation and error estimation

    NASA Astrophysics Data System (ADS)

    Tamminen, J.; Kyrölä, E.; Sofieva, V. F.; Laine, M.; Bertaux, J.-L.; Hauchecorne, A.; Dalaudier, F.; Fussen, D.; Vanhellemont, F.; Fanton-D'Andon, O.; Barrot, G.; Mangin, A.; Guirlet, M.; Blanot, L.; Fehr, T.; Saavedra de Miguel, L.; Fraisse, R.

    2010-10-01

    The Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument uses stellar occultation technique for monitoring ozone, other trace gases and aerosols in the stratosphere and mesosphere. The self-calibrating measurement principle of GOMOS together with a relatively simple data retrieval where only minimal use of a priori data is required provides excellent possibilities for long-term monitoring of atmospheric composition. GOMOS uses about 180 of the brightest stars as its light source. Depending on the individual spectral characteristics of the stars, the signal-to-noise ratio of GOMOS varies from star to star, resulting also in varying accuracy of retrieved profiles. We present here an overview of the GOMOS data characterisation and error estimation, including modeling errors, for O3, NO2, NO3, and aerosol profiles. The retrieval error (precision) of night-time measurements in the stratosphere is typically 0.5-4% for ozone, about 10-20% for NO2, 20-40% for NO3 and 2-50% for aerosols. Mesospheric O3, up to 100 km, can be measured with 2-10% precision. The main sources of the modeling error are incompletely corrected scintillation, inaccurate aerosol modeling, uncertainties in cross sections of trace gases and in atmospheric temperature. The sampling resolution of GOMOS varies depending on the measurement geometry. In the data inversion a Tikhonov-type regularization with pre-defined target resolution requirement is applied leading to 2-3 km vertical resolution for ozone and 4 km resolution for other trace gases and aerosols.
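    A Tikhonov-type regularization of the kind mentioned in the GOMOS inversion can be sketched as follows, assuming a simple second-difference smoothing operator and a toy identity forward model; the actual GOMOS inversion and its target-resolution formulation are considerably more involved.

    ```python
    import numpy as np

    def tikhonov_solve(K, y, alpha):
        # Solve min ||K x - y||^2 + alpha ||L x||^2, where L is the
        # second-difference operator penalising rough (oscillatory) profiles.
        n = K.shape[1]
        L = np.zeros((n - 2, n))
        for i in range(n - 2):
            L[i, i:i + 3] = [1.0, -2.0, 1.0]
        A = K.T @ K + alpha * (L.T @ L)
        return np.linalg.solve(A, K.T @ y)

    # Toy inversion: smooth true profile, identity forward model, noisy data.
    rng = np.random.default_rng(1)
    n = 50
    x_true = np.exp(-((np.arange(n) - 25) / 8.0) ** 2)
    K = np.eye(n)
    y = x_true + 0.1 * rng.normal(size=n)
    x_reg = tikhonov_solve(K, y, alpha=10.0)
    x_raw = np.linalg.solve(K.T @ K, K.T @ y)   # unregularised least squares
    ```

    The smoothing penalty suppresses the high-frequency noise at the cost of a small bias on the smooth truth, which is the trade-off behind the pre-defined target resolution.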

  7. New analytical algorithm for overlay accuracy

    NASA Astrophysics Data System (ADS)

    Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo

    2012-03-01

    The extension of optical lithography to 2X nm and beyond is often challenged by overlay control. With a reduced overlay measurement error budget in the sub-nm range, conventional Total Measurement Uncertainty (TMU) data is no longer sufficient, and there is no sufficient criterion for overlay accuracy. In recent years, numerous authors have reported new methods of assessing the accuracy of overlay metrology: through focus and through color. Still, quantifying uncertainty in overlay measurement remains the most difficult work in overlay metrology. According to the ITRS roadmap, the total overlay budget gets tighter with each device node as design rules shrink. Conventionally, the total overlay budget is defined as the square root of the sum of squares of the following contributions: the scanner overlay performance, wafer process, metrology, and mask registration. All components have been supported by sufficiently performing tools at each device node, with new scanners, new metrology tools, and new mask e-beam writers. In particular, the scanner overlay performance decreased drastically from 9 nm at the 8x node to 2.5 nm at the 3x node, and appears to have reached its limit after the 3x node. The wafer process overlay therefore became a more important contribution to the total wafer overlay; in fact, it decreased by 3 nm between the DRAM 8x node and the DRAM 3x node. In this paper, the authors suggest an analytical algorithm for overlay accuracy, and a concept of a non-destructive method is proposed. For on-product layers, we discovered overlay inaccuracy and used the new technique to find the source of the overlay error. Furthermore
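    The root-sum-of-squares budget definition quoted above can be sketched as follows. Apart from the 2.5 nm scanner figure taken from the text, the contribution values are assumptions for illustration.

    ```python
    import math

    def rss_budget(contributions):
        # Total overlay budget as the root of the sum of squares of the
        # (assumed independent) contributions.
        return math.sqrt(sum(v * v for v in contributions.values()))

    # Illustrative contributions in nm; only the scanner value is from the text.
    budget = rss_budget({"scanner": 2.5, "process": 3.0,
                         "metrology": 1.0, "mask": 1.5})
    ```

    Because the contributions combine in quadrature, the total is dominated by the largest terms and is always below the linear sum.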

  8. Uncertainty in 2D hydrodynamic models from errors in roughness parameterization based on aerial images

    NASA Astrophysics Data System (ADS)

    Straatsma, Menno; Huthoff, Fredrik

    2011-01-01

    In The Netherlands, 2D hydrodynamic simulations are used to evaluate the effect of potential safety measures against river floods. In the investigated scenarios, the floodplains are completely inundated, thus requiring realistic representations of the hydraulic roughness of floodplain vegetation. The current study aims at providing better insight into the uncertainty of flood water levels due to uncertain floodplain roughness parameterization. The study focuses on three key elements in the uncertainty of floodplain roughness: (1) classification error of the landcover map, (2) within-class variation of vegetation structural characteristics, and (3) mapping scale. To assess the effect of the first error source, new realizations of ecotope maps were made based on the current floodplain ecotope map and an error matrix of the classification. For the second error source, field measurements of vegetation structure were used to obtain uncertainty ranges for each vegetation structural type. The scale error was investigated by reassigning roughness codes on a smaller spatial scale. It is shown that a classification accuracy of 69% leads to an uncertainty range of predicted water levels on the order of decimeters. The other error sources are less relevant. The quantification of the uncertainty in water levels can help to make better decisions on suitable flood protection measures. Moreover, the relation between uncertain floodplain roughness and the error bands in water levels may serve as a guideline for the desired accuracy of floodplain characteristics in hydrodynamic models.
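    The first error source, generating new map realizations from a classification error matrix, can be sketched as follows. The landcover classes and confusion probabilities are hypothetical, chosen only to land roughly in the 69%-accuracy regime mentioned above.

    ```python
    import random

    # Hypothetical confusion probabilities P(true class | mapped class),
    # as would be derived from a classification error matrix.
    CONFUSION = {
        "grass":  {"grass": 0.75, "shrub": 0.20, "forest": 0.05},
        "shrub":  {"grass": 0.15, "shrub": 0.65, "forest": 0.20},
        "forest": {"grass": 0.05, "shrub": 0.25, "forest": 0.70},
    }

    def realization(mapped, rng):
        # Draw one plausible "true" landcover map by resampling every cell
        # from the confusion probabilities of its mapped class.
        out = []
        for cls in mapped:
            probs = CONFUSION[cls]
            out.append(rng.choices(list(probs), weights=list(probs.values()))[0])
        return out

    # A toy 100-cell mapped floodplain and an ensemble of 100 realizations;
    # each realization would then feed a hydrodynamic run with its own
    # roughness parameterization.
    rng = random.Random(42)
    mapped_map = ["grass"] * 50 + ["forest"] * 50
    maps = [realization(mapped_map, rng) for _ in range(100)]
    ```

    The spread of water levels across the ensemble then quantifies the uncertainty attributable to the classification error.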

  9. Developmental Aspects of Error and High-Conflict-Related Brain Activity in Pediatric Obsessive-Compulsive Disorder: A FMRI Study with a Flanker Task before and after CBT

    ERIC Educational Resources Information Center

    Huyser, Chaim; Veltman, Dick J.; Wolters, Lidewij H.; de Haan, Else; Boer, Frits

    2011-01-01

    Background: Heightened error and conflict monitoring are considered central mechanisms in obsessive-compulsive disorder (OCD) and are associated with anterior cingulate cortex (ACC) function. Pediatric obsessive-compulsive patients provide an opportunity to investigate the development of this area and its associations with psychopathology.…

  10. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection

    ERIC Educational Resources Information Center

    Buschang, Rebecca Ellen

    2012-01-01

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e.,…

  11. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection. CRESST Report 822

    ERIC Educational Resources Information Center

    Buschang, Rebecca E.

    2012-01-01

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e.,…

  12. Spatial variability in sensitivity of reference crop ET to accuracy of climate data in the Texas High Plains

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...
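A one-at-a-time perturbation analysis of the kind described can be sketched as follows. The `et0()` function below is a simplified Hargreaves-type stand-in, not the ASCE-EWRI Standardized Reference ET Equation, and the input values and assumed sensor errors are hypothetical.

```python
def et0(tmax, tmin, rs, u2):
    """Toy reference-ET stand-in: grows with temperature, radiation, and wind."""
    return 0.0135 * rs * ((tmax + tmin) / 2 + 17.8) + 0.05 * u2

base = dict(tmax=32.0, tmin=18.0, rs=24.0, u2=3.5)    # hypothetical daily inputs
errors = dict(tmax=0.5, tmin=0.5, rs=1.2, u2=0.3)     # assumed measurement errors

base_et = et0(**base)
for name, err in errors.items():
    perturbed = dict(base, **{name: base[name] + err})
    delta = et0(**perturbed) - base_et
    print(f"{name}: {100 * delta / base_et:+.2f}%")
```

Repeating this at each weather station would reveal the spatial variability in sensitivity that the study examines.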

  13. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
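The PDD algorithm itself partitions the tridiagonal system across processors; as a minimal serial stand-in, the sketch below solves a symmetric Toeplitz tridiagonal system with the classic Thomas algorithm and measures the relative error the abstract's accuracy analysis bounds. The coefficients are hypothetical but diagonally dominant.

```python
import numpy as np

def thomas_toeplitz(a, b, d):
    """Solve T x = d where T has constant diagonal b and off-diagonals a."""
    n = len(d)
    c_prime = np.empty(n)
    d_prime = np.empty(n)
    c_prime[0] = a / b
    d_prime[0] = d[0] / b
    for i in range(1, n):
        denom = b - a * c_prime[i - 1]
        c_prime[i] = a / denom
        d_prime[i] = (d[i] - a * d_prime[i - 1]) / denom
    x = np.empty(n)
    x[-1] = d_prime[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d_prime[i] - c_prime[i] * x[i + 1]
    return x

n = 100
a, b = -1.0, 4.0  # diagonally dominant: |b| > 2|a|
x_true = np.ones(n)
T = (np.diag(np.full(n, b))
     + np.diag(np.full(n - 1, a), 1)
     + np.diag(np.full(n - 1, a), -1))
x = thomas_toeplitz(a, b, T @ x_true)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

PDD gains its scalability by truncating the small off-diagonal couplings between partitions, which is where the relative-error bound analyzed in the paper comes in.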

  14. Comparative evaluation of ultrasound scanner accuracy in distance measurement

    NASA Astrophysics Data System (ADS)

    Branca, F. P.; Sciuto, S. A.; Scorza, A.

    2012-10-01

    The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images. Both return the relative error e between distances measured by 14 brand-new ultrasound medical scanners and the nominal distances among nylon wires embedded in a reference test object. The first method is based on a least-squares estimation, while the second applies the mean value of the same distance evaluated at different locations in the ultrasound image (same-distance method). Results for both methods are presented and explained.
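The two evaluation approaches described can be sketched as below; all distances (mm) are hypothetical, not data from the 14 scanners.

```python
import numpy as np

# Hypothetical measured vs. nominal wire distances (mm) from one scanner
nominal = np.array([10.0, 20.0, 30.0, 40.0])
measured = np.array([10.2, 20.1, 30.5, 40.3])

# Method 1: least-squares scale factor; relative error e = slope - 1
slope = np.dot(measured, nominal) / np.dot(nominal, nominal)
e_ls = slope - 1.0

# Method 2 (same-distance method): average repeated measurements of one
# nominal distance taken at different image locations, then compute e
repeats = np.array([20.1, 19.9, 20.3])
e_mean = repeats.mean() / 20.0 - 1.0

print(e_ls, e_mean)
```

The least-squares fit pools all wire spacings into a single scale error, while the same-distance method isolates the error at one spacing by averaging over image locations.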

  15. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.
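Applying an empirical mean-bias correction of the kind described amounts to dividing the measured RH by one plus the bias fraction estimated for the sounding's conditions. The bias table below is a hypothetical illustration, not the correction derived in the paper, which varies with height, RH, and solar altitude.

```python
# Hypothetical mean bias as a fraction of the measured RH value
bias_fraction = {"night": 0.03, "day": 0.04}

def correct_rh(rh_measured, period):
    """Remove the mean bias: corrected RH = measured / (1 + bias fraction)."""
    return rh_measured / (1.0 + bias_fraction[period])

print(round(correct_rh(50.0, "day"), 2))
```

As the abstract notes, such a daytime correction should not be applied to cloudy daytime soundings, since clouds alter the solar radiation error in an uncharacterized way.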

  16. Measurement errors induced by axis tilt of biplates in dual-rotating compensator Mueller matrix ellipsometers

    NASA Astrophysics Data System (ADS)

    Gu, Honggang; Zhang, Chuanwei; Jiang, Hao; Chen, Xiuguo; Li, Weiqi; Liu, Shiyuan

    2015-06-01

    Dual-rotating compensator Mueller matrix ellipsometer (DRC-MME) has been designed and applied as a powerful tool for the characterization of thin films and nanostructures. The compensators are indispensable optical components and their performances affect the precision and accuracy of DRC-MME significantly. Biplates made of birefringent crystals are commonly used compensators in the DRC-MME, and their optical axes invariably have tilt errors due to imperfect fabrication and improper installation in practice. The axis tilt error between the rotation axis and the light beam will lead to a continuous vibration in the retardance of the rotating biplate, which further results in significant measurement errors in the Mueller matrix. In this paper, we propose a simple but valid formula for the retardance calculation under arbitrary tilt angle and azimuth angle to analyze the axis tilt errors in biplates. We further study the relations between the measurement errors in the Mueller matrix and the biplate axis tilt through simulations and experiments. We find that the axis tilt errors mainly affect the cross-talk from linear polarization to circular polarization and vice versa. In addition, the measurement errors in the Mueller matrix increase at an accelerating rate with the axis tilt errors in biplates, and the optimal retardance for reducing these errors is about 80°. This work is expected to provide guidance for the selection, installation, and commissioning of the biplate compensator in DRC-MME design.

  17. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  18. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is determining how many temperature sensors are required and where to locate them. This research develops a method to determine the number and location of temperature measurements.
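The linear model described, deflection as a linear combination of discrete temperature readings, can be fit by least squares as sketched below. The temperatures, sensitivities, and noise level are synthetic, chosen only to illustrate the fitting step.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 samples from 3 hypothetical temperature sensors (degC)
T = rng.normal(25.0, 2.0, size=(50, 3))
true_coeffs = np.array([0.8, -0.3, 0.5])  # hypothetical um/degC sensitivities

# Synthetic measured deflection (um) with sensor noise
deflection = T @ true_coeffs + rng.normal(0, 0.05, size=50)

# Fit the linear compensation model by least squares
coeffs, *_ = np.linalg.lstsq(T, deflection, rcond=None)

# The fitted model predicts thermal error, which can then be
# subtracted from the commanded tool position
residual = deflection - T @ coeffs
print(np.abs(residual).max())
```

Sensor placement matters here because redundant or poorly placed sensors make the columns of `T` nearly collinear, degrading the fit, which motivates the paper's method for choosing the number and location of measurements.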

  19. Global accuracy estimates of point and mean undulation differences obtained from gravity disturbances, gravity anomalies and potential coefficients

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1979-01-01

    Through the method of truncation functions, the oceanic geoid undulation is divided into two constituents: an inner zone contribution expressed as an integral of surface gravity disturbances over a spherical cap; and an outer zone contribution derived from a finite set of potential harmonic coefficients. Global, average error estimates are formulated for undulation differences, thereby providing accuracies for a relative geoid. The error analysis focuses on the outer zone contribution for which the potential coefficient errors are modeled. The method of computing undulations based on gravity disturbance data for the inner zone is compared to the similar, conventional method which presupposes gravity anomaly data within this zone.

  20. Accuracy of References in Five Entomology Journals.

    ERIC Educational Resources Information Center

    Kristof, Cynthia

    In this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…