Sample records for errors results show

  1. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed that eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and the calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase-shift error and the zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase-shift error and the zeroth order effect when the phase-shift error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that, compared with the conventional method using 9-frame interferograms, the calibration method achieves nearly the same restoration accuracy with only 5-frame interferograms.
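
    As an illustrative aside (not the calibration method of this record), the phase retrieval that such calibrations feed into is commonly the standard N-step phase-shifting formula. The sketch below assumes an idealized fringe model and equally spaced, error-free phase steps:

    ```python
    import numpy as np

    # Illustrative sketch only: standard N-step phase-shifting phase retrieval
    # (NOT the paper's calibration method; fringe model and steps are assumed).
    def retrieve_phase(frames, steps):
        """frames: list of 2-D intensity arrays I_k; steps: known phase shifts delta_k (rad)."""
        frames = np.asarray(frames, dtype=float)
        steps = np.asarray(steps, dtype=float)
        num = np.tensordot(np.sin(steps), frames, axes=1)   # sum_k I_k * sin(delta_k)
        den = np.tensordot(np.cos(steps), frames, axes=1)   # sum_k I_k * cos(delta_k)
        return np.arctan2(-num, den)                        # wrapped phase estimate

    # Synthetic 5-frame example with an assumed true phase map
    x = np.linspace(0, 2 * np.pi, 128)
    true_phase = 0.8 * np.outer(np.sin(x), np.cos(x))
    steps = 2 * np.pi * np.arange(5) / 5
    frames = [1.0 + 0.7 * np.cos(true_phase + d) for d in steps]
    residual = np.angle(np.exp(1j * (retrieve_phase(frames, steps) - true_phase)))
    print(f"max |phase error| = {np.abs(residual).max():.2e} rad")   # ~0 for ideal data
    ```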

  2. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of monitoring results. Based on a gross-error detection method, quasi-accurate detection (QUAD), a method for the automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented, and the method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods complement each other when the proportion is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method corrects phase unwrapping errors successfully in practical applications.

  3. Spatially coupled low-density parity-check error correction for holographic data storage

    NASA Astrophysics Data System (ADS)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    A spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage, and its superiority was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. The SC-LDPC code is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and error rates of over 10⁻¹ can be corrected. Based on these simulation results, the error-correction code was applied to actual holographic data storage test equipment. The results showed that an error rate of 8 × 10⁻² could be corrected; the code works effectively and shows good error correctability.

  4. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts. Approximately 40 percent of all failures occurred in bursts and involved multiple machines, which indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs. 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.

  5. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VAXcluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VAXcluster decreases to 0.5 in 100 days for the 3-out-of-7 model, which is well over 100 times that for the 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
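
    Both VAXcluster records above report reward rates for k-out-of-7 configurations. As a loose illustration of the underlying idea only (not the authors' reward models, which were estimated from measured error and failure data), a k-out-of-n system with independent, identically reliable machines has the binomial tail-sum reliability sketched below; the per-machine availability is a made-up number:

    ```python
    from math import comb

    def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
        """Probability that at least k of n independent machines (each up with
        probability p) are up. Illustrative only; the papers use reward models
        estimated from measured data, not this simple formula."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.95  # hypothetical per-machine availability
    for k in (7, 5, 3):
        print(f"{k}-out-of-7: {k_out_of_n_reliability(k, 7, p):.4f}")
    ```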

  6. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
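
    As a minimal illustration of the classical-versus-Berkson distinction discussed above (a simple linear regression with made-up parameters, not the authors' linear mixed model with autocorrelated, individual-specific errors), classical error in the regressor attenuates the slope while Berkson error largely does not:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta, sd_x, sd_u = 100_000, 1.0, 1.0, 0.7      # assumed sample size and error scale

    # Classical error: observed exposure = truth + noise -> slope is attenuated.
    x_true = rng.normal(0, sd_x, n)
    y = beta * x_true + rng.normal(size=n)
    x_classical = x_true + rng.normal(0, sd_u, n)

    # Berkson error: truth = assigned value + noise -> slope stays (nearly) unbiased.
    x_assigned = rng.normal(0, sd_x, n)
    y_berkson = beta * (x_assigned + rng.normal(0, sd_u, n)) + rng.normal(size=n)

    slope = lambda x, y: np.polyfit(x, y, 1)[0]
    print("classical error:", round(slope(x_classical, y), 3))        # ~ beta*sd_x^2/(sd_x^2+sd_u^2) ~ 0.67
    print("Berkson error:  ", round(slope(x_assigned, y_berkson), 3))  # ~ beta ~ 1.0
    ```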

  7. Estimation of open water evaporation using land-based meteorological data

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Zhao, Yong

    2017-10-01

    Water surface evaporation is an important process in the hydrologic and energy cycles, and its accurate simulation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors, the paper first adopted the Dalton model to simulate water surface evaporation; the results showed that simulation precision was poor in the peak-value zone. To improve the simulation precision, a modified Dalton model that accounts for relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
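
    The record does not reproduce the model equations. For orientation only, a generic Dalton-type bulk formula has the form E = f(u)(e_s - e_a); the sketch below uses the Tetens saturation-vapour-pressure formula and placeholder wind-function coefficients, not the paper's calibrated, humidity-modified model:

    ```python
    import math

    def saturation_vapor_pressure_kpa(t_celsius: float) -> float:
        """Tetens formula for saturation vapour pressure (kPa)."""
        return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

    def dalton_evaporation_mm_per_day(t_water: float, t_air: float, rh: float,
                                      wind_ms: float, a: float = 2.0, b: float = 1.4) -> float:
        """Generic Dalton-type estimate E = f(u) * (e_s - e_a) in mm/day.
        a, b are placeholder wind-function coefficients (not from the paper)."""
        e_s = saturation_vapor_pressure_kpa(t_water)        # at water-surface temperature
        e_a = rh * saturation_vapor_pressure_kpa(t_air)     # actual vapour pressure of the air
        return (a + b * wind_ms) * (e_s - e_a)

    print(round(dalton_evaporation_mm_per_day(t_water=22.0, t_air=25.0, rh=0.55, wind_ms=2.0), 2))
    ```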

  8. [Allocation of attentional resource and monitoring processes under rapid serial visual presentation].

    PubMed

    Nishiura, K

    1998-08-01

    With the use of rapid serial visual presentation (RSVP), the present study investigated the cause of target intrusion errors and the functioning of monitoring processes. Eighteen students participated in Experiment 1, and 24 in Experiment 2. In Experiment 1, different target intrusion errors were found depending on the kind of letters (romaji, hiragana, and kanji). In Experiment 2, stimulus set size and context information were manipulated in an attempt to explore the cause of post-target intrusion errors. Results showed that as stimulus set size increased, post-target intrusion errors also increased, but contextual information did not affect the errors. Results concerning mean report probability indicated that increased allocation of attentional resource to the response-defining dimension was the cause of the errors. In addition, results concerning confidence ratings showed that monitoring of temporal and contextual information was extremely accurate, but that this was not so for stimulus information. These results suggest that attentional resource is different from monitoring resource.

  9. Symmetry boost of the fidelity of Shor factoring

    NASA Astrophysics Data System (ADS)

    Nam, Y. S.; Blümel, R.

    2018-05-01

    In Shor's algorithm, quantum subroutines occur with the structure F U F⁻¹, where F is a unitary transform and U performs a quantum computation. Examples are quantum adders and subunits of quantum modulo adders. In this paper we show, both analytically and numerically, that if, in analogy to spin echoes, F and F⁻¹ can be implemented symmetrically when executing Shor's algorithm on actual, imperfect quantum hardware, such that F and F⁻¹ have the same hardware errors, a symmetry boost in the fidelity of the combined F U F⁻¹ quantum operation results when compared to the case in which the errors in F and F⁻¹ are independently random. Running the complete gate-by-gate implemented Shor algorithm, we show that the symmetry-induced fidelity boost can be as large as a factor of 4. While most of our analytical and numerical results concern the case of over- and under-rotation of controlled rotation gates, in the numerically accessible case of Shor's algorithm with a small number of qubits, we show explicitly that the symmetry boost is robust with respect to more general types of errors. While, expectedly, additional error types reduce the symmetry boost, we show explicitly, by implementing general off-diagonal SU(N) errors (N = 2, 4, 8), that the boost factor scales like a Lorentzian in δ/σ, where σ and δ are the error strengths of the diagonal over- and under-rotation errors and the off-diagonal SU(N) errors, respectively. The Lorentzian shape also shows that, while the boost factor may become small with increasing δ, it declines slowly (essentially like a power law) and is never completely erased. We also investigate the effect of diagonal nonunitary errors, which, in analogy to unitary errors, reduce but never erase the symmetry boost. Going beyond the case of small quantum processors, we present analytical scaling results that show that the symmetry boost persists in the practically interesting case of a large number of qubits. We illustrate this result explicitly for the case of Shor factoring of the semiprime RSA-1024, where, analytically, focusing on over- and under-rotation errors, we obtain a boost factor of about 10. In addition, we provide a proof of the fidelity product formula, including its range of applicability.
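
    As a rough, single-qubit toy (assumed gates, angles, and error spread; nothing like the authors' full gate-by-gate Shor simulation), the spin-echo-like advantage of giving F and F⁻¹ identical over-rotation errors, versus independent ones, can be reproduced numerically:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def rx(theta):  # single-qubit rotation about x
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -1j * s], [-1j * s, c]])

    def rz(theta):  # single-qubit rotation about z
        return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

    theta_f, theta_u, sigma = 1.1, 0.7, 0.05          # assumed angles and over-rotation spread
    ideal = rx(theta_f) @ rz(theta_u) @ rx(-theta_f)  # toy stand-in for F U F^-1

    def fidelity(symmetric: bool) -> float:
        d1 = rng.normal(0, sigma)
        d2 = d1 if symmetric else rng.normal(0, sigma)   # same vs independent errors in F and F^-1
        noisy = rx(theta_f + d1) @ rz(theta_u) @ rx(-(theta_f + d2))
        return abs(np.trace(ideal.conj().T @ noisy) / 2) ** 2   # process-fidelity-style overlap

    trials = 20_000
    inf_sym = 1 - np.mean([fidelity(True) for _ in range(trials)])
    inf_ind = 1 - np.mean([fidelity(False) for _ in range(trials)])
    print(f"mean infidelity, symmetric errors  : {inf_sym:.2e}")
    print(f"mean infidelity, independent errors: {inf_ind:.2e}")
    print(f"infidelity ratio (toy 'boost')     : {inf_ind / inf_sym:.1f}")
    ```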

  10. Examination of efficacious, efficient, and socially valid error-correction procedures to teach sight words and prepositions to children with autism spectrum disorder.

    PubMed

    Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob

    2016-09-01

    Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.

  11. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  12. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose-volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose-volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry-angle-independent multi-leaf collimator (MLC) error, Type 2: gantry-angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV as a function of error magnitude showed good agreement between the TPS calculation and the MFX measurement, within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log-file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
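
    For context only, gamma analysis combines a dose-difference criterion with a distance-to-agreement criterion. The sketch below is a bare-bones 1-D global gamma calculation on made-up dose profiles; it is not the MFX, Delta4PT, or any clinical implementation, and the 3%/3 mm criteria are simply the conventional example values:

    ```python
    import numpy as np

    def gamma_passing_rate_1d(dose_ref, dose_eval, positions, dd=0.03, dta=3.0):
        """1-D global gamma analysis: dd = dose-difference criterion (fraction of the
        maximum reference dose), dta = distance-to-agreement criterion (same unit as
        positions). Brute-force illustration only, not a clinical implementation."""
        dose_ref = np.asarray(dose_ref, float)
        dose_eval = np.asarray(dose_eval, float)
        positions = np.asarray(positions, float)
        norm = dd * dose_ref.max()
        gammas = []
        for xr, dr in zip(positions, dose_ref):
            dist = (positions - xr) / dta
            diff = (dose_eval - dr) / norm
            gammas.append(np.sqrt(dist**2 + diff**2).min())
        return 100.0 * np.mean(np.array(gammas) <= 1.0)

    # Hypothetical profiles: "measured" dose is the reference shifted by 1 mm and scaled by 1%.
    x = np.arange(0, 100, 1.0)                      # positions in mm
    ref = np.exp(-((x - 50) / 15) ** 2)             # made-up reference profile
    ev = 1.01 * np.exp(-((x - 51) / 15) ** 2)       # made-up evaluated profile
    print(f"gamma passing rate: {gamma_passing_rate_1d(ref, ev, x):.1f}%")
    ```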

  13. Competition between learned reward and error outcome predictions in anterior cingulate cortex.

    PubMed

    Alexander, William H; Brown, Joshua W

    2010-02-15

    The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.

  14. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  16. How allele frequency and study design affect association test statistics with misrepresentation errors.

    PubMed

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central χ² distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct p-values.
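
    The explicit formulae for the scaling factor and non-centrality are in the paper, not in this record. Purely as an illustration of the mechanism, once some scaling factor c and non-centrality λ are given (the values below are placeholders), the inflated false-positive rate of a nominal 1-df test follows directly from the scaled non-central χ² distribution:

    ```python
    from scipy.stats import chi2, ncx2

    alpha, df = 0.05, 1
    crit = chi2.ppf(1 - alpha, df)     # nominal chi-square critical value (~3.84)
    print(f"nominal type-I error: {alpha}")

    # Hypothetical scaling factors and non-centrality parameters (NOT the paper's formulae),
    # e.g. as they might grow with sample size under genotyping error:
    for c, lam in [(1.05, 0.5), (1.10, 2.0)]:
        # If the statistic is c * ncx2(df, lam), then P(reject) = P(ncx2 > crit / c).
        fpr = ncx2.sf(crit / c, df, lam)
        print(f"scale={c:.2f}, ncp={lam:.1f} -> actual type-I error ≈ {fpr:.3f}")
    ```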

  17. Reconstruction of regional mean temperature for East Asia since 1900s and its uncertainties

    NASA Astrophysics Data System (ADS)

    Hua, W.

    2017-12-01

    Regional average surface air temperature (SAT) is one of the key variables used to investigate climate change. Unfortunately, because of the limited observations over East Asia, there are gaps in the observational data sampling for regional mean SAT analysis, which is important for estimating past climate change. In this study, the regional average temperature of East Asia since the 1900s is calculated by an Empirical Orthogonal Function (EOF)-based optimal interpolation method that takes the data errors into account. The results show that our estimate is more precise and robust than the result from a simple average, which provides a better way to reconstruct past climate. In addition to the reconstructed regional average SAT anomaly time series, we also estimated the uncertainties of the reconstruction. The root mean square error (RMSE) results show that the error decreases over time and is not sufficiently large to alter the conclusions on the persistent warming in East Asia during the twenty-first century. Moreover, a test of the influence of data error on the reconstruction clearly shows the sensitivity of the reconstruction to the size of the data error.

  18. Learning a visuomotor rotation: simultaneous visual and proprioceptive information is crucial for visuomotor remapping.

    PubMed

    Shabbott, Britne A; Sainburg, Robert L

    2010-05-01

    Visuomotor adaptation is mediated by errors between intended and sensory-detected arm positions. However, it is not clear whether visual-based errors that are shown during the course of motion lead to qualitatively different or more efficient adaptation than errors shown after movement. For instance, continuous visual feedback mediates online error corrections, which may facilitate or inhibit the adaptation process. We addressed this question by manipulating the timing of visual error information and task instructions during a visuomotor adaptation task. Subjects were exposed to a visuomotor rotation, during which they received continuous visual feedback (CF) of hand position with instructions to correct or not correct online errors, or knowledge-of-results (KR), provided as a static hand-path at the end of each trial. Our results showed that all groups improved performance with practice, and that online error corrections were inconsequential to the adaptation process. However, in contrast to the CF groups, the KR group showed relatively small reductions in mean error with practice, increased inter-trial variability during rotation exposure, and more limited generalization across target distances and workspace. Further, although the KR group showed improved performance with practice, after-effects were minimal when the rotation was removed. These findings suggest that simultaneous visual and proprioceptive information is critical in altering neural representations of visuomotor maps, although delayed error information may elicit compensatory strategies to offset perturbations.

  19. Dissociable effects of surprising rewards on learning and memory.

    PubMed

    Rouhani, Nina; Norman, Kenneth A; Niv, Yael

    2018-03-19

    Reward-prediction errors track the extent to which rewards deviate from expectations, and aid in learning. How do such errors in prediction interact with memory for the rewarding episode? Existing findings point to both cooperative and competitive interactions between learning and memory mechanisms. Here, we investigated whether learning about rewards in a high-risk context, with frequent, large prediction errors, would give rise to higher fidelity memory traces for rewarding events than learning in a low-risk context. Experiment 1 showed that recognition was better for items associated with larger absolute prediction errors during reward learning. Larger prediction errors also led to higher rates of learning about rewards. Interestingly we did not find a relationship between learning rate for reward and recognition-memory accuracy for items, suggesting that these two effects of prediction errors were caused by separate underlying mechanisms. In Experiment 2, we replicated these results with a longer task that posed stronger memory demands and allowed for more learning. We also showed improved source and sequence memory for items within the high-risk context. In Experiment 3, we controlled for the difficulty of reward learning in the risk environments, again replicating the previous results. Moreover, this control revealed that the high-risk context enhanced item-recognition memory beyond the effect of prediction errors. In summary, our results show that prediction errors boost both episodic item memory and incremental reward learning, but the two effects are likely mediated by distinct underlying systems. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Exploring the Effectiveness of a Measurement Error Tutorial in Helping Teachers Understand Score Report Results

    ERIC Educational Resources Information Center

    Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret

    2016-01-01

    The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…

  1. Errors Analysis of Students in Mathematics Department to Learn Plane Geometry

    NASA Astrophysics Data System (ADS)

    Mirna, M.

    2018-04-01

    This article describes the results of qualitative descriptive research that reveals the locations, types, and causes of student errors in answering plane geometry problems at the problem-solving level. Answers from 59 students on three test items showed errors ranging from understanding the concepts and principles of geometry itself to applying them in problem solving. The types of errors consist of concept errors, principle errors, and operational errors. The results of reflection with four subjects reveal the causes of the errors: 1) student learning motivation is very low, 2) in their high school learning experience, geometry was seen as unimportant, 3) the students have very little experience using their own reasoning to solve problems, and 4) the students' reasoning ability is still very low.

  2. Study on the calibration and optimization of double theodolites baseline

    NASA Astrophysics Data System (ADS)

    Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao

    2018-01-01

    Because the baseline of a double-theodolite measurement system serves as the benchmark for the scale of the measurement system and affects its accuracy, this paper puts forward a method for calibrating and optimizing the double-theodolite baseline. The double theodolites are used to measure a reference ruler of known length, and the baseline is then obtained by inverting the baseline formula. Based on the law of error propagation, the analysis shows that the baseline error function is an important index of the accuracy of the system, and that the position and posture of the reference ruler have an impact on the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error, that the influence of posture is not uniform, and that the baseline error is smallest when the reference ruler is placed at x = 500 mm and y = 1000 mm in the measurement space. The experimental results are consistent with the theoretical analysis in the measurement space. This study of the placement of the reference ruler has reference value for improving the accuracy of double-theodolite measurement systems.

  3. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
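
    As a generic illustration of the distinction this record draws (a made-up first-order model and data, not MIT's top-oil temperature model), an equation-error least-squares fit regresses each sample on the previous measured sample and is biased by measurement noise, whereas an output-error fit simulates the whole response from the model and the input and minimizes the simulated-versus-measured residual:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)

    # Hypothetical first-order thermal model T[k+1] = a*T[k] + b*u[k] (NOT the MIT top-oil model)
    a_true, b_true, n = 0.95, 0.8, 500
    u = rng.normal(0, 1, n)                        # made-up input (e.g. load) sequence
    T = np.zeros(n)
    for k in range(n - 1):
        T[k + 1] = a_true * T[k] + b_true * u[k]
    T_meas = T + rng.normal(0, 1.5, n)             # noisy "measured" temperatures

    # Equation-error (least-squares) fit: regress T_meas[k+1] on measured T_meas[k] and u[k]
    A = np.column_stack([T_meas[:-1], u[:-1]])
    a_ls, b_ls = np.linalg.lstsq(A, T_meas[1:], rcond=None)[0]

    # Output-error fit: simulate the model and match the whole measured trajectory
    def simulate(params):
        a, b = params
        x = np.zeros(n)
        for k in range(n - 1):
            x[k + 1] = a * x[k] + b * u[k]
        return x

    oe = least_squares(lambda p: simulate(p) - T_meas, x0=[0.9, 1.0],
                       bounds=([0.0, -10.0], [0.999, 10.0]))
    print("equation error:", round(a_ls, 3), round(b_ls, 3))        # 'a' biased toward zero by noise
    print("output error:  ", round(oe.x[0], 3), round(oe.x[1], 3))  # close to (0.95, 0.8)
    ```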

  4. Transient fault behavior in a microprocessor: A case study

    NASA Technical Reports Server (NTRS)

    Duba, Patrick

    1989-01-01

    An experimental analysis is described which studies the susceptibility of a microprocessor-based jet engine controller to upsets caused by current and voltage transients. A design automation environment which allows the run-time injection of transients and the tracing of their impact from the device level to the pin level is described. The resulting error data are categorized by the charge levels of the injected transients, by location, and by their potential to cause logic upsets, latched errors, and pin errors. The results show a 3 picocoulomb threshold, below which the transients have little impact. A transient in the arithmetic and logic unit is most likely to result in logic upsets and pin errors (i.e., to impact the external environment). Transients in the countdown unit are potentially serious since they can result in latched errors, thus causing latent faults. Suggestions to protect the processor against these errors, by incorporating internal error detection and transient suppression techniques, are also made.

  5. Is there any electrophysiological evidence for subliminal error processing?

    PubMed

    Shalgi, Shani; Deouell, Leon Y

    2013-08-29

    The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness.

  6. Decay of motor memories in the absence of error

    PubMed Central

    Vaswani, Pavan A.; Shadmehr, Reza

    2013-01-01

    When motor commands are accompanied by an unexpected outcome, the resulting error induces changes in subsequent commands. However, when errors are artificially eliminated, changes in motor commands are not sustained, but show decay. Why does the adaptation-induced change in motor output decay in the absence of error? A prominent idea is that decay reflects the stability of the memory. We show results that challenge this idea and instead suggest that motor output decays because the brain actively disengages a component of the memory. Humans adapted their reaching movements to a perturbation and were then introduced to a long period of trials in which errors were absent (error-clamp). We found that, in some subjects, motor output did not decay at the onset of the error-clamp block, but a few trials later. We manipulated the kinematics of movements in the error-clamp block and found that as movements became more similar to subjects’ natural movements in the perturbation block, the lag to decay onset became longer and eventually reached hundreds of trials. Furthermore, when there was decay in the motor output, the endpoint of decay was not zero, but a fraction of the motor memory that was last acquired. Therefore, adaptation to a perturbation installed two distinct kinds of memories: one that was disengaged when the brain detected a change in the task, and one that persisted despite it. Motor memories showed little decay in the absence of error if the brain was prevented from detecting a change in task conditions. PMID:23637163

  7. Age effects shrink when motor learning is predominantly supported by nondeclarative, automatic memory processes: evidence from golf putting.

    PubMed

    Chauvel, Guillaume; Maquestiaux, François; Hartley, Alan A; Joubert, Sven; Didierjean, André; Masters, Rich S W

    2012-01-01

    Can motor learning be equivalent in younger and older adults? To address this question, 48 younger (M = 23.5 years) and 48 older (M = 65.0 years) participants learned to perform a golf-putting task in two different motor learning situations: one that resulted in infrequent errors or one that resulted in frequent errors. The results demonstrated that infrequent-error learning predominantly relied on nondeclarative, automatic memory processes whereas frequent-error learning predominantly relied on declarative, effortful memory processes: After learning, infrequent-error learners verbalized fewer strategies than frequent-error learners; at transfer, a concurrent, attention-demanding secondary task (tone counting) left motor performance of infrequent-error learners unaffected but impaired that of frequent-error learners. The results showed age-equivalent motor performance in infrequent-error learning but age deficits in frequent-error learning. Motor performance of frequent-error learners required more attention with age, as evidenced by an age deficit on the attention-demanding secondary task. The disappearance of age effects when nondeclarative, automatic memory processes predominated suggests that these processes are preserved with age and are available even early in motor learning.

  8. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB introduces refraction, which can generate calibration error; the theory of flat refractive geometry is employed to eliminate this error, so the new method accounts for the refraction caused by the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.

  9. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
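
    As a generic sketch of interpolation-based compensation (made-up calibration data and error map; not the H-drive stage's error model or the paper's specific scheme), one can fit a cubic B-spline to positioning errors measured at calibration points and offset each commanded position by the predicted error:

    ```python
    import numpy as np
    from scipy.interpolate import make_interp_spline

    rng = np.random.default_rng(3)

    # Hypothetical calibration: positioning error (um) measured at sparse commanded positions (mm)
    cal_pos = np.linspace(0, 300, 16)
    cal_err = 2.0 * np.sin(cal_pos / 40.0) + 0.5 * (cal_pos / 300.0) ** 2   # made-up error map
    error_model = make_interp_spline(cal_pos, cal_err, k=3)                  # cubic B-spline

    def compensated_command(target_mm: float) -> float:
        """Offset the commanded position by the error predicted at the target."""
        return target_mm - 1e-3 * float(error_model(target_mm))             # um -> mm

    for t in rng.uniform(0, 300, 5):
        print(f"target {t:7.2f} mm -> command {compensated_command(t):8.4f} mm")
    ```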

  10. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
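
    As a concrete companion to this point (standard exact binomial machinery, not the paper's extended confidence-interval construction), the Clopper-Pearson interval shows what two observed decoding errors in a hypothetical million trials do and do not pin down about the true error probability:

    ```python
    from scipy.stats import beta

    def clopper_pearson(errors: int, trials: int, conf: float = 0.95):
        """Exact (Clopper-Pearson) confidence interval for an error probability."""
        a = (1 - conf) / 2
        lo = beta.ppf(a, errors, trials - errors + 1) if errors > 0 else 0.0
        hi = beta.ppf(1 - a, errors + 1, trials - errors) if errors < trials else 1.0
        return lo, hi

    # Two observed decoding errors in a million trials (hypothetical numbers)
    lo, hi = clopper_pearson(2, 1_000_000)
    print(f"observed rate 2e-06, 95% CI: [{lo:.2e}, {hi:.2e}]")
    ```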

  11. Hyperactive error responses and altered connectivity in ventromedial and frontoinsular cortices in obsessive-compulsive disorder.

    PubMed

    Stern, Emily R; Welsh, Robert C; Fitzgerald, Kate D; Gehring, William J; Lister, Jamey J; Himle, Joseph A; Abelson, James L; Taylor, Stephan F

    2011-03-15

    Patients with obsessive-compulsive disorder (OCD) show abnormal functioning in ventral frontal brain regions involved in emotional/motivational processes, including anterior insula/frontal operculum (aI/fO) and ventromedial frontal cortex (VMPFC). While OCD has been associated with an increased neural response to errors, the influence of motivational factors on this effect remains poorly understood. To investigate the contribution of motivational factors to error processing in OCD and to examine functional connectivity between regions involved in the error response, functional magnetic resonance imaging data were measured in 39 OCD patients (20 unmedicated, 19 medicated) and 38 control subjects (20 unmedicated, 18 medicated) during an error-eliciting interference task where motivational context was varied using monetary incentives (null, loss, and gain). Across all errors, OCD patients showed reduced deactivation of VMPFC and greater activation in left aI/FO compared with control subjects. For errors specifically resulting in a loss, patients further hyperactivated VMPFC, as well as right aI/FO. Independent of activity associated with task events, OCD patients showed greater functional connectivity between VMPFC and regions of bilateral aI/FO and right thalamus. Obsessive-compulsive disorder patients show greater activation in neural regions associated with emotion and valuation when making errors, which could be related to altered intrinsic functional connectivity between brain networks. These results highlight the importance of emotional/motivational responses to mistakes in OCD and point to the need for further study of network interactions in the disorder. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  12. The inference of atmospheric ozone using satellite horizon measurements in the 1042 per cm band.

    NASA Technical Reports Server (NTRS)

    Russell, J. M., III; Drayson, S. R.

    1972-01-01

    Description of a method for inferring atmospheric ozone information using infrared horizon radiance measurements in the 1042 per cm band. An analysis based on this method proves the feasibility of the horizon experiment for determining ozone information and shows that the ozone partial pressure can be determined in the altitude range from 50 down to 25 km. A comprehensive error study is conducted which considers effects of individual errors as well as the effect of all error sources acting simultaneously. The results show that in the absence of a temperature profile bias error, it should be possible to determine the ozone partial pressure to within an rms value of 15 to 20%. It may be possible to reduce this rms error to 5% by smoothing the solution profile. These results would be seriously degraded by an atmospheric temperature bias error of only 3 K; thus, great care should be taken to minimize this source of error in an experiment. It is probable, in view of recent technological developments, that these errors will be much smaller in future flight experiments and the altitude range will widen to include from about 60 km down to the tropopause region.

  13. Reducing number entry errors: solving a widespread, serious problem.

    PubMed

    Thimbleby, Harold; Cairns, Paul

    2010-10-06

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229).

  14. Post-error response inhibition in high math-anxious individuals: Evidence from a multi-digit addition task.

    PubMed

    Núñez-Peña, M Isabel; Tubau, Elisabet; Suárez-Pellicioni, Macarena

    2017-06-01

    The aim of the study was to investigate how high math-anxious (HMA) individuals react to errors in an arithmetic task. Twenty HMA and 19 low math-anxious (LMA) individuals were presented with a multi-digit addition verification task and were given response feedback. Post-error adjustment measures (response time and accuracy) were analyzed in order to study differences between groups when faced with errors in an arithmetical task. Results showed that both HMA and LMA individuals were slower to respond following an error than following a correct answer. However, post-error accuracy effects emerged only for the HMA group, showing that they were also less accurate after having committed an error than after giving the right answer. Importantly, these differences were observed only when individuals needed to repeat the same response given in the previous trial. These results suggest that, for HMA individuals, errors caused reactive inhibition of the erroneous response, facilitating performance if the next problem required the alternative response but hampering it if the response was the same. This stronger reaction to errors could be a factor contributing to the difficulties that HMA individuals experience in learning math and doing math tasks. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Research on the error model of airborne celestial/inertial integrated navigation system

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaoqiang; Deng, Xiaoguo; Yang, Xiaoxu; Dong, Qiang

    2015-02-01

    The celestial navigation subsystem of an airborne celestial/inertial integrated navigation system periodically corrects the positioning error and heading drift of the inertial navigation system, which allows the inertial navigation system to greatly improve the accuracy of long-endurance navigation. The accuracy of the airborne celestial navigation subsystem therefore directly determines the accuracy of the integrated navigation system when it operates for a long time. By building the mathematical model of the airborne celestial navigation system based on the inertial navigation system and using linear coordinate transformations, we establish the error transfer equation for the positioning algorithm of the airborne celestial system, and on this basis we build the positioning error model of the celestial navigation. Then, based on the positioning error model, the positioning error caused by errors of the star-tracking platform is analyzed and simulated with MATLAB. Finally, the positioning error model is verified using star information obtained from an optical measurement device on a range and from a device whose location is known. The analysis and simulation results show that the level accuracy and north accuracy of the tracking platform are important factors that limit the positioning accuracy of airborne celestial navigation systems, and that the positioning error has an approximately linear relationship with the level error and north error of the tracking platform. The verification error is within 1000 m, which shows that the model is correct.

  16. Proximal antecedents and correlates of adopted error approach: a self-regulatory perspective.

    PubMed

    Van Dyck, Cathy; Van Hooft, Edwin; De Gilder, Dick; Liesveld, Lillian

    2010-01-01

    The current study aims to further investigate earlier established advantages of an error mastery approach over an error aversion approach. The two main purposes of the study relate to (1) self-regulatory traits (i.e., goal orientation and action-state orientation) that may predict which error approach (mastery or aversion) is adopted, and (2) proximal, psychological processes (i.e., self-focused attention and failure attribution) that relate to adopted error approach. In the current study participants' goal orientation and action-state orientation were assessed, after which they worked on an error-prone task. Results show that learning goal orientation related to error mastery, while state orientation related to error aversion. Under a mastery approach, error occurrence did not result in cognitive resources "wasted" on self-consciousness. Rather, attention went to internal-unstable, thus controllable, improvement oriented causes of error. Participants that had adopted an aversion approach, in contrast, experienced heightened self-consciousness and attributed failure to internal-stable or external causes. These results imply that when working on an error-prone task, people should be stimulated to take on a mastery rather than an aversion approach towards errors.

  17. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    NASA Astrophysics Data System (ADS)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to help reduce the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All the simulation results coincide with the theoretical analysis.

  18. Mismeasurement and the resonance of strong confounders: uncorrelated errors.

    PubMed

    Marshall, J R; Hastrup, J L

    1996-05-15

    Greenland first documented (Am J Epidemiol 1980; 112:564-9) that error in the measurement of a confounder could resonate--that it could bias estimates of other study variables, and that the bias could persist even with statistical adjustment for the confounder as measured. An important question is raised by this finding: can such bias be more than trivial within the bounds of realistic data configurations? The authors examine several situations involving dichotomous and continuous data in which a confounder and a null variable are measured with error, and they assess the extent of resultant bias in estimates of the effect of the null variable. They show that, with continuous variables, measurement error amounting to 40% of observed variance in the confounder could cause the observed impact of the null study variable to appear to alter risk by as much as 30%. Similarly, they show, with dichotomous independent variables, that 15% measurement error in the form of misclassification could lead the null study variable to appear to alter risk by as much as 50%. Such bias would result only from strong confounding. Measurement error would obscure the evidence that strong confounding is a likely problem. These results support the need for every epidemiologic inquiry to include evaluations of measurement error in each variable considered.
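
    A minimal simulation of this "resonance" (continuous variables with made-up effect sizes and the 40%-of-observed-variance error level mentioned above; not the authors' analytical scenarios) shows a null exposure acquiring an apparent effect once the strong confounder is adjusted for only as measured:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200_000

    confounder = rng.normal(size=n)
    null_exposure = 0.7 * confounder + rng.normal(scale=0.7, size=n)   # correlated with confounder, no true effect
    outcome = 1.0 * confounder + rng.normal(size=n)                    # outcome depends only on the confounder

    # Confounder measured with classical error amounting to 40% of its observed variance
    confounder_measured = confounder + rng.normal(scale=np.sqrt(2 / 3), size=n)

    def adjusted_slope(y, x, z):
        """Coefficient of x in an OLS fit of y on [1, x, z]."""
        X = np.column_stack([np.ones_like(x), x, z])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    print("adjusting for true confounder     :", round(adjusted_slope(outcome, null_exposure, confounder), 3))          # ~0
    print("adjusting for mismeasured version :", round(adjusted_slope(outcome, null_exposure, confounder_measured), 3))  # spuriously nonzero
    ```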

  19. An Investigation into Soft Error Detection Efficiency at Operating System Level

    PubMed Central

    Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance. PMID:24574894

  20. An investigation into soft error detection efficiency at operating system level.

    PubMed

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.

  1. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    PubMed

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  2. Experimental Investigation of Jet Impingement Heat Transfer Using Thermochromic Liquid Crystals

    NASA Technical Reports Server (NTRS)

    Dempsey, Brian Paul

    1997-01-01

    Jet impingement cooling of a hypersonic airfoil leading edge is experimentally investigated using thermochromic liquid crystals (TLCs) to measure surface temperature. The experiment uses computer data acquisition with digital imaging of the TLCs to determine heat transfer coefficients during a transient experiment. The data reduction relies on analysis of a coupled transient conduction-convection heat transfer problem that characterizes the experiment. The recovery temperature of the jet is accounted for by running two experiments with different heating rates, thereby generating a second equation that is used to solve for the recovery temperature. The resulting solution requires a complicated numerical iteration that is handled by a computer. Because the computational data reduction method is complex, special attention is paid to error assessment. The error analysis considers random and systematic errors generated by the instrumentation along with errors generated by the approximate nature of the numerical methods. Results of the error analysis show that the experimentally determined heat transfer coefficients are accurate to within 15%. The error analysis also shows that the recovery temperature data may be in error by more than 50%. The results show that the recovery temperature data are only reliable when the recovery temperature of the jet is greater than 5 °C, i.e., the jet velocity is in excess of 100 m/s. Parameters that were investigated include nozzle width, distance from the nozzle exit to the airfoil surface, and jet velocity. Heat transfer data are presented in graphical and tabular forms. An engineering analysis of hypersonic airfoil leading edge cooling is performed using the results from these experiments. Several suggestions for the improvement of the experimental technique are discussed.
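
    The thesis solves a coupled transient conduction-convection problem by numerical iteration; the core idea of using two heating rates can nevertheless be illustrated with a deliberately reduced, steady Newton-cooling sketch (all numbers below are hypothetical and the model is a simplification, not the thesis method):

```python
import numpy as np

# Hypothetical data from two runs at different heating rates (illustrative numbers only).
# Each run gives one instance of Newton's law of cooling, q" = h * (T_s - T_rec),
# so two runs are enough to solve for both h and the recovery temperature T_rec.
q_flux = np.array([4000.0, 2500.0])   # surface heat flux, W/m^2
T_s    = np.array([68.0, 55.0])       # measured surface temperature, deg C

h     = (q_flux[0] - q_flux[1]) / (T_s[0] - T_s[1])   # eliminate T_rec by differencing
T_rec = T_s[0] - q_flux[0] / h

print(f"h = {h:.1f} W/(m^2 K),  recovery temperature = {T_rec:.1f} deg C")
```

    Each heating rate contributes one equation in the two unknowns, which is why a second experiment is needed to recover both the heat transfer coefficient and the recovery temperature.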

  3. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    PubMed Central

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  4. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  5. Is there any electrophysiological evidence for subliminal error processing?

    PubMed Central

    Shalgi, Shani; Deouell, Leon Y.

    2013-01-01

    The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness. PMID:24009548

  6. High-precision pointing with the Sardinia Radio Telescope

    NASA Astrophysics Data System (ADS)

    Poppi, Sergio; Pernechele, Claudio; Pisanu, Tonino; Morsiani, Marco

    2010-07-01

    We present here the systems aimed at measuring and minimizing the pointing errors of the Sardinia Radio Telescope: they consist of an optical telescope to measure errors due to deformations of the mechanical structure, and a laser system for the errors due to subreflector displacement. We show here the results of the tests that we have done on the Medicina 32-meter VLBI radio telescope. The measurements demonstrate that we can measure the pointing errors of the mechanical structure with an accuracy of about 1 arcsec. Moreover, we show the technique to measure the displacement of the subreflector, placed in the SRT at 22 meters from the main mirror, to within +/-0.1 mm of its optimal position. These measurements show that we can obtain the accuracy needed to correct also the non-repeatable pointing errors, which arise on time scales varying from seconds to minutes.

  7. Reducing measurement errors during functional capacity tests in elders.

    PubMed

    da Silva, Mariane Eichendorf; Orssatto, Lucas Bet da Rosa; Bezerra, Ewertton de Souza; Silva, Diego Augusto Santos; Moura, Bruno Monteiro de; Diefenthaeler, Fernando; Freitas, Cíntia de la Rocha

    2018-06-01

    Accuracy is essential to the validity of functional capacity measurements. The aim was to evaluate the measurement error of functional capacity tests for elders and to suggest the use of the technical error of measurement and the credibility coefficient. Twenty elders (65.8 ± 4.5 years) completed six functional capacity tests that were simultaneously filmed and timed by four evaluators by means of a chronometer. A fifth evaluator timed the tests by analyzing the videos (reference data). The means of most evaluators for most tests differed from the reference (p < 0.05), except for two evaluators in two different tests. The technical error of measurement differed between tests and evaluators. The Bland-Altman test showed differences in the agreement of results between methods. Short-duration tests showed a higher technical error of measurement than longer tests. In summary, tests timed by a chronometer underestimate the true functional capacity results. Differences between the evaluators' reaction times and their perception of the start and end of the tests would explain the measurement errors. Calculating the technical error of measurement or using the camera can increase data validity.
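
    The technical error of measurement mentioned here is commonly computed with the Dahlberg formula, TEM = sqrt(sum(d^2) / 2n), for paired measurements. A small sketch with hypothetical timing data (not the study's data):

```python
import numpy as np

def technical_error_of_measurement(x1, x2):
    """Absolute TEM between two raters/trials (Dahlberg formula): sqrt(sum(d^2) / 2n)."""
    d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return np.sqrt(np.sum(d**2) / (2 * d.size))

def relative_tem(x1, x2):
    """TEM expressed as a percentage of the grand mean of both measurement series."""
    tem = technical_error_of_measurement(x1, x2)
    grand_mean = np.mean(np.concatenate([np.asarray(x1, float), np.asarray(x2, float)]))
    return 100.0 * tem / grand_mean

# Hypothetical timed test results (seconds): video reference vs. chronometer.
reference   = [8.2, 9.1, 7.8, 10.4, 8.9]
chronometer = [8.5, 9.0, 8.3, 10.1, 9.4]

print(f"TEM  = {technical_error_of_measurement(reference, chronometer):.3f} s")
print(f"%TEM = {relative_tem(reference, chronometer):.1f} %")
```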

  8. Intact error monitoring in combat Veterans with post-traumatic stress disorder.

    PubMed

    Swick, Diane; Honzel, Nikki; Turken, U

    2015-11-30

    The error-related negativity (ERN) is a neuroelectric signature of performance monitoring during speeded response time tasks. Previous studies indicate that individuals with anxiety disorders show ERN enhancements that correlate with the degree of clinical symptomology. Less is known about the error monitoring system in post-traumatic stress disorder (PTSD). PTSD is characterized by impairments in the regulation of fear and other emotional responses, as well as deficits in maintaining cognitive control. Here, combat Veterans with PTSD were compared to control Veterans in two different versions of the flanker task (n=13 or 14 per group). Replicating and extending previous findings, PTSD patients showed an intact ERN in both experiments. In addition, task performance and error compensation behavior were intact. Finally, ERN amplitude showed no relationship with self-reported PTSD, depression, or post-concussive symptoms. These results suggest that error monitoring represents a relative strength in PTSD that can dissociate from cognitive control functions that are impaired, such as response inhibition and sustained attention. A healthy awareness of errors in external actions could be leveraged to improve interoceptive awareness of emotional state. The results could have positive implications for PTSD treatments that rely on self-monitoring abilities, such as neurofeedback and mindfulness training. Published by Elsevier Ireland Ltd.

  9. Measurement of Fracture Aperture Fields Using Transmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

    Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  10. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    NASA Astrophysics Data System (ADS)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    A slow learner whose IQ is between 71 and 89 will have difficulties in solving mathematics problems that often lead to errors. The errors can be analyzed to determine where they occur and their type. This research is a qualitative descriptive study that aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class when solving fraction problems. The subject of this research is one seventh-grade slow learner, selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semi-structured interviews. The collected data were analyzed with Newman’s Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by the slow learner.

  11. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that our method can greatly improve the key rate compared with prior results, especially under large intensity fluctuations and channel attenuation, provided that the intensity fluctuations of the different sources are correlated.

  12. Systematic Error Modeling and Bias Estimation

    PubMed Central

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386
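
    As an illustration of the general idea only (the paper's own error models and weighting scheme are not reproduced here), the sketch below estimates constant range and bearing biases from simulated observations of known landmarks using a weighted nonlinear least squares fit; the geometry, bias values, and noise levels are all hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Known landmark positions (x, y) in metres and true (unknown) sensor biases -- all hypothetical.
landmarks = np.array([[30.0, 10.0], [-20.0, 25.0], [15.0, -35.0], [-40.0, -12.0]])
true_range_bias, true_bearing_bias = 1.5, np.deg2rad(2.0)
sigma_r, sigma_b = 0.3, np.deg2rad(0.5)

# Simulated biased, noisy range/bearing measurements from a sensor at the origin.
r_true = np.hypot(landmarks[:, 0], landmarks[:, 1])
b_true = np.arctan2(landmarks[:, 1], landmarks[:, 0])
r_meas = r_true + true_range_bias + rng.normal(0, sigma_r, r_true.size)
b_meas = b_true + true_bearing_bias + rng.normal(0, sigma_b, b_true.size)

def residuals(biases):
    """Weighted residuals: measurement minus bias-corrected prediction, scaled by sigma."""
    br, bb = biases
    return np.concatenate([(r_meas - br - r_true) / sigma_r,
                           (b_meas - bb - b_true) / sigma_b])

fit = least_squares(residuals, x0=[0.0, 0.0])
print("estimated range bias   [m]  :", round(fit.x[0], 3))
print("estimated bearing bias [deg]:", round(np.rad2deg(fit.x[1]), 3))
```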

  13. Improving the quality of self-monitoring blood glucose measurement: a study in reducing calibration errors.

    PubMed

    Baum, John M; Monhaut, Nanette M; Parker, Donald R; Price, Christopher P

    2006-06-01

    Two independent studies reported that 16% of people who self-monitor blood glucose used incorrectly coded meters. The degree of analytical error, however, was not characterized. Our study objectives were to demonstrate that miscoding can cause analytical errors and to characterize the potential amount of bias that can occur. The impact of calibration error with three self-monitoring blood glucose systems (BGMSs), one of which has an autocoding feature, is reported. Fresh capillary fingerstick blood from 50 subjects, 18 men and 32 women ranging in age from 23 to 82 years, was used to measure glucose with three BGMSs. Two BGMSs required manual coding and were purposely miscoded using numbers different from the one recommended for the reagent lot used. Two properly coded meters of each BGMS were included to assess within-system variability. Different reagent lots were used to challenge a third system that had autocoding capability and could not be miscoded. Some within-system comparisons showed deviations of greater than +/-30% when results obtained with miscoded meters were compared with data obtained with ones programmed using the correct code number. Similar erroneous results were found when the miscoded meter results were compared with those obtained with a glucose analyzer. For some miscoded meter and test strip combinations, error grid analysis showed that 90% of results fell into zones indicating altered clinical action. Such inaccuracies were not found with the BGMS having the autocoding feature. When certain meter code number settings of two BGMSs were used in conjunction with test strips having code numbers that did not match, statistically and clinically inaccurate results were obtained. Coding errors resulted in analytical errors of greater than +/-30% (-31.6 to +60.9%). These results confirm the value of a BGMS with an automatic coding feature.

  14. Stack Number Influence on the Accuracy of Aster Gdem (V2)

    NASA Astrophysics Data System (ADS)

    Mirzadeh, S. M. J.; Alizadeh Naeini, A.; Fatemi, S. B.

    2017-09-01

    In this research, the influence of stack number (STKN) on the accuracy of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM (GDEM) has been investigated. For this purpose, two data sets of ASTER and reference DEMs from two study areas with varied topography (Bomehen and Tazehabad) were used. The results show that, in both study areas, a STKN of 19 yields the minimum error, although this minimum differs only slightly from the errors at other STKN values. The analysis of slope, STKN, and error values shows that there is no strong correlation between these parameters in either study area. For example, the mean absolute error increases with rougher topography and with larger slope and elevation values of the cells, whereas changes in STKN have no important effect on the error values. Furthermore, at high STKN values, the effect of slope on elevation accuracy practically decreases. There is also no strong correlation between the residuals and STKN in the ASTER GDEM.

  15. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Comparison of a single-view and a double-view aerosol optical depth retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Henderson, Bradley G.; Chylek, Petr

    2003-11-01

    We compare the results of a single-view and a double-view aerosol optical depth (AOD) retrieval algorithm applied to image pairs acquired over NASA Stennis Space Center, Mississippi. The image data were acquired by the Department of Energy's (DOE) Multispectral Thermal Imager (MTI), a pushbroom satellite imager with 15 bands from the visible to the thermal infrared. MTI has the ability to acquire imagery in pairs in which the first image is a near-nadir view and the second image is off-nadir with a zenith angle of approximately 60°. A total of 15 image pairs were used in the analysis. For a given image pair, AOD retrieval is performed twice---once using a single-view algorithm applied to the near-nadir image, then again using a double-view algorithm. Errors for both retrievals are computed by comparing the results to AERONET AOD measurements obtained at the same time and place. The single-view algorithm showed an RMS error about the mean of 0.076 in AOD units, whereas the double-view algorithm showed a modest improvement with an RMS error of 0.06. The single-view errors show a positive bias which is presumed to be a result of the empirical relationship used to determine ground reflectance in the visible. A plot of AOD error of the double-view algorithm versus time shows a noticeable trend which is interpreted to be a calibration drift. When this trend is removed, the RMS error of the double-view algorithm drops to 0.030. The single-view algorithm qualitatively appears to perform better during the spring and summer whereas the double-view algorithm seems to be less sensitive to season.

  17. Error analysis of the Golay3 optical imaging system.

    PubMed

    Wu, Quanying; Fan, Junliu; Wu, Feng; Zhao, Jun; Qian, Lin

    2013-05-01

    We use aberration theory to derive a generalized pupil function of the Golay3 imaging system when astigmatisms exist in its submirrors. Theoretical analysis and numerical simulation using ZEMAX show that the point spread function (PSF) and the modulation transfer function (MTF) of the Golay3 sparse aperture system have a periodic change when there are piston errors. When the peak-valley value of the wavefront (PV(tilt)) due to the tilt error increases from zero to λ, the PSF and the MTF change significantly, and the change direction is determined by the location of the submirror with the tilt error. When PV(tilt) becomes larger than λ, the PSF and the MTF remain unvaried. We calculate the peaks of the signal-to-noise ratio (PSNR) resulting from the piston and tilt errors according to the Strehl ratio, and show that the PSNR decreases when the errors increase.
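
    The periodic dependence of the on-axis intensity on piston error can be reproduced with a toy three-sub-aperture pupil model (the layout, sizes, and grid below are hypothetical, and the sketch ignores the astigmatism and tilt terms treated in the paper):

```python
import numpy as np

# Toy Golay-3 sparse aperture: three circular sub-pupils on a 120-degree layout (units: metres).
N, D_sub, R_layout, extent = 512, 0.30, 0.35, 1.2
coords = np.linspace(-extent, extent, N)
yy, xx = np.meshgrid(coords, coords, indexing="ij")

def pupil(piston_waves=(0.0, 0.0, 0.0)):
    """Complex pupil with an optional piston error (in waves) on each sub-aperture."""
    p = np.zeros((N, N), dtype=complex)
    for k, piston in enumerate(piston_waves):
        cx = R_layout * np.cos(2 * np.pi * k / 3)
        cy = R_layout * np.sin(2 * np.pi * k / 3)
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= (D_sub / 2) ** 2
        p[mask] = np.exp(2j * np.pi * piston)
    return p

def on_axis_intensity(piston_waves):
    """On-axis PSF value, i.e. the zero-frequency component of the pupil transform."""
    psf = np.abs(np.fft.fft2(pupil(piston_waves))) ** 2
    return psf[0, 0]

reference = on_axis_intensity((0.0, 0.0, 0.0))
for dz in (0.0, 0.25, 0.5, 1.0):                    # piston error on one sub-mirror, in waves
    ratio = on_axis_intensity((dz, 0.0, 0.0)) / reference
    print(f"piston = {dz:4.2f} wave -> on-axis intensity ratio = {ratio:.3f}")
```

    Because a piston of one full wavelength restores the original phasor sum, the on-axis intensity returns to its nominal value, consistent with the periodic behavior described above.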

  18. SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kathuria, K; Siebers, J

    2014-06-01

    Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, and are as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (same one leaf offset per segment per beam), and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to the PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or that of only one leaf per beam being offset (<.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf-pairs) being simultaneously offset in many (5) of the control points (total 10–18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2–3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery. These two sources of error are "visually complementary" and uncorrelated (albeit not additive in the final error), and one can easily incorporate error resulting from machine delivery in an error model based purely on tumor motion.
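
    For readers who want to reproduce the perturbation logic outside a treatment planning system, the sketch below applies the three systematic offset schemes described above to a toy leaf-position array (the study altered actual Pinnacle delivery files; the dimensions and offsets here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical plan geometry: 2 beams x 10 segments x 60 leaves, leaf positions in mm.
n_beams, n_segments, n_leaves = 2, 10, 60
leaf_bank = rng.uniform(-50.0, 50.0, size=(n_beams, n_segments, n_leaves))

def sequential_one_leaf(bank, offset_mm):
    """Scheme 1: a (different) single leaf offset in each segment of each beam."""
    out = bank.copy()
    for b in range(bank.shape[0]):
        for s in range(bank.shape[1]):
            out[b, s, rng.integers(bank.shape[2])] += offset_mm
    return out

def uniform_one_leaf(bank, offset_mm, leaf_index=30):
    """Scheme 2: the same single leaf offset in every segment of every beam."""
    out = bank.copy()
    out[:, :, leaf_index] += offset_mm
    return out

def random_leaves_in_some_segments(bank, offset_mm, n_leaves_hit=5, n_segments_hit=5):
    """Scheme 3: a block of adjacent leaves offset in a random subset of segments per beam."""
    out = bank.copy()
    for b in range(bank.shape[0]):
        segs = rng.choice(bank.shape[1], size=n_segments_hit, replace=False)
        start = rng.integers(bank.shape[2] - n_leaves_hit)
        out[b, segs, start:start + n_leaves_hit] += offset_mm
    return out

perturbed = uniform_one_leaf(leaf_bank, offset_mm=5.0)
print("max |leaf shift| =", np.max(np.abs(perturbed - leaf_bank)), "mm")
```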

  19. Errors Affect Hypothetical Intertemporal Food Choice in Women

    PubMed Central

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2014-01-01

    Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534

  20. Detecting imipenem resistance in Acinetobacter baumannii by automated systems (BD Phoenix, Microscan WalkAway, Vitek 2); high error rates with Microscan WalkAway

    PubMed Central

    2009-01-01

    Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparing to the validated test methods. Methods A total of 112 selected clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan correctly identified all A. baumannii strains, while Vitek 2 failed to identify one strain and Phoenix failed to identify two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher (0.3%) than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing with unacceptable error rates: 28 very major (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems. We suggest clinical laboratories using the MicroScan system for routine use should consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298

  1. Application of Parallel Adjoint-Based Error Estimation and Anisotropic Grid Adaptation for Three-Dimensional Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.

    2005-01-01

    This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.

  2. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    PubMed

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

    Performance monitoring has consistently been found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Therefore, errors in a flanker task were followed by neutral (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. Results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.

  3. Error estimates for ice discharge calculated using the flux gate approach

    NASA Astrophysics Data System (ADS)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While for estimating the errors in velocity there are well-established procedures, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use IceBridge operation GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice-velocities calculated from Sentinel-1 SAR data, to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
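
    The flux-gate discharge and its uncertainty follow from standard error propagation on D = rho * A * v, with independent errors in cross-sectional area and velocity. A minimal sketch with hypothetical gate numbers (not the Ellesmere or Devon Island data):

```python
import numpy as np

def gate_discharge(width_m, thickness_m, speed_m_yr, sigma_thickness_m, sigma_speed_m_yr,
                   rho_ice=917.0):
    """
    Ice discharge through a single flux gate and its propagated uncertainty.
    The gate is treated as width * mean thickness, with independent errors in
    cross-sectional area (via thickness) and in surface speed.
    """
    area = width_m * thickness_m                      # cross-sectional area, m^2
    sigma_area = width_m * sigma_thickness_m
    discharge = rho_ice * area * speed_m_yr * 1e-12   # Gt / yr
    rel_err = np.sqrt((sigma_area / area) ** 2 + (sigma_speed_m_yr / speed_m_yr) ** 2)
    return discharge, discharge * rel_err

# Hypothetical gate: 4 km wide, 300 +/- 45 m mean thickness (15% area error),
# 120 +/- 10 m/yr mean speed across the gate.
D, sigma_D = gate_discharge(4000.0, 300.0, 120.0, 45.0, 10.0)
print(f"discharge = {D:.4f} +/- {sigma_D:.4f} Gt/yr ({100 * sigma_D / D:.0f}% relative error)")
```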

  4. Five-equation and robust three-equation methods for solution verification of large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dutta, Rabijit; Xing, Tao

    2018-02-01

    This study evaluates the recently developed general framework for solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows a monotonic convergence behavior of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.

  5. Testing accelerometer rectification error caused by multidimensional composite inputs with double turntable centrifuge.

    PubMed

    Guan, W; Meng, X F; Dong, X M

    2014-12-01

    Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error only use one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce the constant acceleration and vibration simultaneously and we tested the rectification error due to the composite accelerations. At first, we deduced the expression of the rectification error with the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with the rectification error model. Then a detailed experimental procedure and testing results were described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of the vibration. The experimental results showed the distinguished characteristics of the rectification error caused by the composite accelerations. The linear relation between the constant acceleration and the rectification error was proved. The experimental procedure and results presented in this context can be referenced for the investigation of the characteristics of accelerometer with multiple inputs.
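
    The essence of rectification error can be sketched with a simple static model: a second-order nonlinearity turns zero-mean vibration into a DC shift of K2*A^2/2 on top of the constant-acceleration response. The coefficients and test points below are hypothetical, and the sketch omits the cross-axis terms of a full pendulous-accelerometer model:

```python
import numpy as np

# Hypothetical single-axis accelerometer with a second-order nonlinearity K2.
K1, K2 = 1.0, 2e-4          # scale factor (per g) and second-order coefficient (per g^2)

def indicated_mean(a_const_g, vib_amp_g, vib_freq_hz, duration_s=2.0, fs=100_000):
    """Time-average of the indicated acceleration for a constant plus sinusoidal input."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    a = a_const_g + vib_amp_g * np.sin(2 * np.pi * vib_freq_hz * t)
    return np.mean(K1 * a + K2 * a**2)

a0, A = 10.0, 5.0            # constant acceleration and vibration amplitude, in g
rect_error = indicated_mean(a0, A, 200.0) - indicated_mean(a0, 0.0, 200.0)
print(f"simulated rectification error = {rect_error * 1000:.3f} mg")
print(f"analytic  K2*A^2/2            = {K2 * A**2 / 2 * 1000:.3f} mg")
```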

  6. Identification and Remediation of Phonological and Motor Errors in Acquired Sound Production Impairment

    PubMed Central

    Gagnon, Bernadine; Miozzo, Michele

    2017-01-01

    Purpose This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044

  7. Optimum employment of satellite indirect soundings as numerical model input

    NASA Technical Reports Server (NTRS)

    Horn, L. H.; Derber, J. C.; Koehler, T. L.; Schmidt, B. D.

    1981-01-01

    The characteristics of satellite-derived temperature soundings that would significantly affect their use as input for numerical weather prediction models were examined. Independent evaluations of satellite soundings were emphasized to better define error characteristics. Results of a Nimbus-6 sounding study reveal an underestimation of the strength of synoptic scale troughs and ridges, and associated gradients in isobaric height and temperature fields. The most significant errors occurred near the Earth's surface and the tropopause. Soundings from the TIROS-N and NOAA-6 satellites were also evaluated. Results again showed an underestimation of upper level trough amplitudes leading to weaker thermal gradient depictions in satellite-only fields. These errors show a definite correlation to the synoptic flow patterns. In a satellite-only analysis used to initialize a numerical model forecast, it was found that these synoptically correlated errors were retained in the forecast sequence.

  8. Tropical forecasting - Predictability perspective

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1989-01-01

    Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.

  9. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
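
    The aliasing effect can be reproduced with a toy two-dimensional line-charge model of the wall image current sampled at a discrete set of point electrodes (an illustrative model, not the simulation code used for DARHT-II; the pipe radius and beam offsets are hypothetical):

```python
import numpy as np

R = 30.0  # beam-pipe radius, mm

def electrode_signals(x_beam, y_beam, n_electrodes):
    """Wall image-current density of a filament beam, sampled at n equally spaced electrodes:
    j(phi) ~ (R^2 - r^2) / (R^2 + r^2 - 2 R r cos(phi - theta))  (2-D line-charge model)."""
    phi = 2 * np.pi * np.arange(n_electrodes) / n_electrodes
    r = np.hypot(x_beam, y_beam)
    theta = np.arctan2(y_beam, x_beam)
    return (R**2 - r**2) / (R**2 + r**2 - 2 * R * r * np.cos(phi - theta))

def estimated_x(x_beam, y_beam, n_electrodes):
    """First-harmonic ('difference over sum') position estimate from the discrete array."""
    phi = 2 * np.pi * np.arange(n_electrodes) / n_electrodes
    v = electrode_signals(x_beam, y_beam, n_electrodes)
    return R * np.sum(v * np.cos(phi)) / np.sum(v)

# Systematic (aliasing) error vs. number of electrodes for a well off-centre filament beam.
x_true, y_true = 10.0, 5.0   # mm
for n in (4, 8, 16):
    err = estimated_x(x_true, y_true, n) - x_true
    print(f"{n:2d} electrodes: x error = {err:+.3f} mm")
```

    For a well off-centre beam, the four-electrode estimate carries a systematic error from the unsampled higher azimuthal harmonics; adding electrodes suppresses it rapidly, consistent with the findings summarized above.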

  10. Aberrant error processing in relation to symptom severity in obsessive–compulsive disorder: A multimodal neuroimaging study

    PubMed Central

    Agam, Yigal; Greenberg, Jennifer L.; Isom, Marlisa; Falkenstein, Martha J.; Jenike, Eric; Wilhelm, Sabine; Manoach, Dara S.

    2014-01-01

    Background Obsessive–compulsive disorder (OCD) is characterized by maladaptive repetitive behaviors that persist despite feedback. Using multimodal neuroimaging, we tested the hypothesis that this behavioral rigidity reflects impaired use of behavioral outcomes (here, errors) to adaptively adjust responses. We measured both neural responses to errors and adjustments in the subsequent trial to determine whether abnormalities correlate with symptom severity. Since error processing depends on communication between the anterior and the posterior cingulate cortex, we also examined the integrity of the cingulum bundle with diffusion tensor imaging. Methods Participants performed the same antisaccade task during functional MRI and electroencephalography sessions. We measured error-related activation of the anterior cingulate cortex (ACC) and the error-related negativity (ERN). We also examined post-error adjustments, indexed by changes in activation of the default network in trials surrounding errors. Results OCD patients showed intact error-related ACC activation and ERN, but abnormal adjustments in the post- vs. pre-error trial. Relative to controls, who responded to errors by deactivating the default network, OCD patients showed increased default network activation including in the rostral ACC (rACC). Greater rACC activation in the post-error trial correlated with more severe compulsions. Patients also showed increased fractional anisotropy (FA) in the white matter underlying rACC. Conclusions Impaired use of behavioral outcomes to adaptively adjust neural responses may contribute to symptoms in OCD. The rACC locus of abnormal adjustment and relations with symptoms suggests difficulty suppressing emotional responses to aversive, unexpected events (e.g., errors). Increased structural connectivity of this paralimbic default network region may contribute to this impairment. PMID:25057466

  11. Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging

    PubMed Central

    Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C.

    2017-01-01

    Background To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Methods Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Results Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation on the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. Conclusions A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory in removing the phase error induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed the inconsistent signal phase modulation across shots has to be resolved before shot-combination is performed. PMID:28516049

  12. Refractive optics to compensate x-ray mirror shape-errors

    NASA Astrophysics Data System (ADS)

    Laundy, David; Sawhney, Kawal; Dhamgaye, Vishal; Pape, Ian

    2017-08-01

    Elliptically profiled mirrors operating at glancing angle are frequently used at X-ray synchrotron sources to focus X-rays into sub-micrometer sized spots. Mirror figure error, defined as the height difference function between the actual mirror surface and the ideal elliptical profile, causes a perturbation of the X-ray wavefront for X-rays reflecting from the mirror. This perturbation, when propagated to the focal plane, results in an increase in the size of the focused beam. At Diamond Light Source we are developing refractive optics that can be used to locally cancel out the wavefront distortion caused by figure error from nano-focusing elliptical mirrors. These optics could be used to correct existing optical components on synchrotron radiation beamlines in order to give focused X-ray beam sizes approaching the theoretical diffraction limit. We present our latest results showing measurement of the X-ray wavefront error after reflection from X-ray mirrors and the translation of the measured wavefront into a design for refractive optical elements for correction of the X-ray wavefront. We show measurement of the focused beam with and without the corrective optics inserted, showing reduction in the size of the focus resulting from the correction to the wavefront.

  13. The comparison of cervical repositioning errors according to smartphone addiction grades.

    PubMed

    Lee, Jeonhyeong; Seo, Kyochul

    2014-04-01

    [Purpose] The purpose of this study was to compare cervical repositioning errors according to the smartphone addiction grades of adults in their 20s. [Subjects and Methods] A survey of smartphone addiction was conducted of 200 adults. Based on the survey results, 30 subjects were chosen to participate in this study, and they were divided into three groups of 10: a Normal Group, a Moderate Addiction Group, and a Severe Addiction Group. After attaching a C-ROM, we measured the cervical repositioning errors of flexion, extension, right lateral flexion, and left lateral flexion. [Results] Significant differences in the cervical repositioning errors of flexion, extension, and right and left lateral flexion were found among the Normal Group, Moderate Addiction Group, and Severe Addiction Group. In particular, the Severe Addiction Group showed the largest errors. [Conclusion] The results indicate that as smartphone addiction becomes more severe, a person is more likely to show impaired proprioception, as well as impaired ability to recognize the right posture. Thus, musculoskeletal problems due to smartphone addiction should be resolved through social cognition and intervention, and through physical therapeutic education and intervention to educate people about correct postures.

  14. Error analysis and system optimization of non-null aspheric testing system

    NASA Astrophysics Data System (ADS)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL) and a reverse iterative optimization reconstruction (ROR) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder from the non-null interferometer, obtained by the approach of error storage subtraction. Experimental results show that, after the systematic error is removed from the test result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.

  15. Analysis of frequency mixing error on heterodyne interferometric ellipsometry

    NASA Astrophysics Data System (ADS)

    Deng, Yuan-long; Li, Xue-jin; Wu, Yu-bin; Hu, Ju-guang; Yao, Jian-quan

    2007-11-01

    A heterodyne interferometric ellipsometer, with no moving parts and a transverse Zeeman laser, is demonstrated. The modified Mach-Zehnder interferometer, characterized by a separate-frequency and common-path configuration, is designed and theoretically analyzed. The experimental data show a fluctuation mainly resulting from the frequency mixing error, which is caused by the imperfection of the polarizing beam splitters (PBS), the elliptical polarization, and the non-orthogonality of the light beams. The mechanism producing the frequency mixing error and its influence on the measurement are analyzed with the Jones matrix method; the calculation indicates that it results in an error of up to several nanometres in the thickness measurement of thin films. The non-orthogonality makes no contribution to the phase difference error when it is relatively small; the elliptical polarization and the imperfection of the PBS have the major effect on the error.

  16. Prevalence and types of articulation errors in Saudi Arabic-speaking children with repaired cleft lip and palate.

    PubMed

    Albustanji, Yusuf M; Albustanji, Mahmoud M; Hegazi, Mohamed M; Amayreh, Mousa M

    2014-10-01

    The purpose of this study was to assess the prevalence and types of consonant production errors and phonological processes in Saudi Arabic-speaking children with repaired cleft lip and palate, and to determine the relationship between the frequency of errors and the type of cleft. Possible relationships between age, gender, and frequency of errors were also investigated. Eighty Saudi children with repaired cleft lip and palate aged 6-15 years (mean 6.7 years) underwent speech, language, and hearing evaluation. The diagnosis of articulation deficits was based on the results of an Arabic articulation test. Phonological processes were reported based on a productivity scale with a minimum 20% rate of occurrence. Diagnosis of nasality was based on a 5-point scale that reflects severity from 0 through 4. All participants underwent intraoral examination, informal language assessment, and hearing evaluation to assess their speech and language abilities. The chi-square test for independence was used to analyze the results of consonant production as a function of type of CLP and age. Of the 80 participants with CLP, 21 had normal articulation and resonance, and 59 (74%) showed speech abnormalities. Twenty-one of these 59 participants showed only articulation errors, 17 showed only hypernasality, and 21 showed both articulation and resonance deficits. CAs were observed in 20 participants. The productive phonological processes were consonant backing, final consonant deletion, gliding, and stopping. At age 6 and older, 37% of participants had persisting hearing loss. Despite the early age at time of surgery (mean 6.7 months) for the CLP participants in this study, a substantial number of them demonstrated articulation errors and hypernasality. The results provide findings of interest for comparison across languages; it is especially interesting to consider the prevalence of glottal stops and pharyngeal fricatives in a population for whom these sounds are phonemic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.

  18. Electrophysiological Correlates of Error Monitoring and Feedback Processing in Second Language Learning.

    PubMed

    Bultena, Sybrine; Danielmeier, Claudia; Bekkering, Harold; Lemhöfer, Kristin

    2017-01-01

    Humans monitor their behavior to optimize performance, which presumably relies on stable representations of correct responses. During second language (L2) learning, however, stable representations have yet to be formed while knowledge of the first language (L1) can interfere with learning, which in some cases results in persistent errors. In order to examine how correct L2 representations are stabilized, this study examined performance monitoring in the learning process of second language learners for a feature that conflicts with their first language. Using EEG, we investigated if L2 learners in a feedback-guided word gender assignment task showed signs of error detection in the form of an error-related negativity (ERN) before and after receiving feedback, and how feedback is processed. The results indicated that initially, response-locked negativities for correct (CRN) and incorrect (ERN) responses were of similar size, showing a lack of internal error detection when L2 representations are unstable. As behavioral performance improved following feedback, the ERN became larger than the CRN, pointing to the first signs of successful error detection. Additionally, we observed a second negativity following the ERN/CRN components, the amplitude of which followed a similar pattern as the previous negativities. Feedback-locked data indicated robust FRN and P300 effects in response to negative feedback across different rounds, demonstrating that feedback remained important in order to update memory representations during learning. We thus show that initially, L2 representations may often not be stable enough to warrant successful error monitoring, but can be stabilized through repeated feedback, which means that the brain is able to overcome L1 interference, and can learn to detect errors internally after a short training session. The results contribute a different perspective to the discussion on changes in ERN and FRN components in relation to learning, by extending the investigation of these effects to the language learning domain. Furthermore, these findings provide a further characterization of the online learning process of L2 learners.

  19. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    To address the low machining accuracy and uncontrolled thermal errors of NC machine tools, spindle thermal error measurement, modeling, and compensation are investigated for a two-turntable five-axis machine tool. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced to select the temperature variables used for thermal error modeling. To analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; the resulting ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used to predict spindle thermal errors. To test the prediction performance of the ABC-NN model, an experimental system is developed, and the predictions of LSR (least squares regression), ANN, and ABC-NN are compared with the measured spindle thermal errors. The experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with a residual error smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
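
    The abstract names grey relational analysis (GRA) as the tool for selecting the temperature variables that feed the thermal error model. As an illustration of that selection step only, the sketch below computes grey relational grades between candidate temperature sensors and the measured spindle thermal error; the distinguishing coefficient of 0.5 and the number of sensors retained in the usage comment are conventional choices assumed for illustration, not values taken from the paper.

```python
import numpy as np

def grey_relational_grades(thermal_error, temperatures, zeta=0.5):
    """Rank temperature sensors by grey relational grade against the thermal error.

    thermal_error : (n,) measured spindle thermal errors
    temperatures  : (n, m) readings, one column per temperature sensor
    zeta          : distinguishing coefficient, commonly set to 0.5
    """
    def normalize(x):
        return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

    ref = normalize(thermal_error.reshape(-1, 1)).ravel()
    cmp_ = normalize(temperatures)

    delta = np.abs(cmp_ - ref[:, None])                        # absolute difference sequences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)    # grey relational coefficients
    return coeff.mean(axis=0)                                  # one grade per sensor

# Hypothetical usage: keep the four sensors most strongly related to the drift
# grades = grey_relational_grades(err, temps)
# selected = np.argsort(grades)[::-1][:4]
```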

  20. Error-related negativities elicited by monetary loss and cues that predict loss.

    PubMed

    Dunning, Jonathan P; Hajcak, Greg

    2007-11-19

    Event-related potential studies have reported error-related negativity following both error commission and feedback indicating errors or monetary loss. The present study examined whether error-related negativities could be elicited by a predictive cue presented prior to both the decision and subsequent feedback in a gambling task. Participants were presented with a cue that indicated the probability of reward on the upcoming trial (0, 50, and 100%). Results showed a negative deflection in the event-related potential in response to loss cues compared with win cues; this waveform shared a similar latency and morphology with the traditional feedback error-related negativity.

  1. An evaluation of programmed treatment-integrity errors during discrete-trial instruction.

    PubMed

    Carroll, Regina A; Kodak, Tiffany; Fisher, Wayne W

    2013-01-01

    This study evaluated the effects of programmed treatment-integrity errors on skill acquisition for children with an autism spectrum disorder (ASD) during discrete-trial instruction (DTI). In Study 1, we identified common treatment-integrity errors that occur during academic instruction in schools. In Study 2, we simultaneously manipulated 3 integrity errors during DTI. In Study 3, we evaluated the effects of each of the 3 integrity errors separately on skill acquisition during DTI. Results showed that participants either demonstrated slower skill acquisition or did not acquire the target skills when instruction included treatment-integrity errors. © Society for the Experimental Analysis of Behavior.

  2. Analysis of Covariance: Is It the Appropriate Model to Study Change?

    ERIC Educational Resources Information Center

    Marston, Paul T.; Borich, Gary D.

    The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed that true score analysis produced a large number of Type I errors. When corrected for this error, this method showed the least power of the four. This outcome was clearly the result of the…

  3. A Case for Soft Error Detection and Correction in Computational Chemistry.

    PubMed

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
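
    The abstract proposes detection and correction mechanisms for different classes of data structures without spelling them out; the toy sketch below illustrates one generic possibility, checksum-based scrubbing of a block-partitioned array with repair from a redundant copy. The class name, block size, and use of CRC32 are illustrative assumptions, not the mechanisms used in the cited paper.

```python
import numpy as np
import zlib

class ChecksummedBlocks:
    """Toy soft-error detection on a block-partitioned array (a sketch, not the paper's scheme).

    Each block carries a CRC32 checksum; a silent bit flip is detected when the
    recomputed checksum disagrees, and the block is restored from a redundant copy.
    """
    def __init__(self, data, block=1024):
        self.block = block
        self.data = data.copy()
        self.backup = data.copy()                       # simple redundancy used for correction
        self.sums = [self._crc(b) for b in self._blocks(self.data)]

    def _blocks(self, arr):
        flat = arr.ravel().view(np.uint8)               # byte view of the contiguous buffer
        return [flat[i:i + self.block] for i in range(0, flat.size, self.block)]

    @staticmethod
    def _crc(b):
        return zlib.crc32(b.tobytes())

    def scrub(self):
        """Detect corrupted blocks and repair them from the backup copy."""
        repaired = 0
        for i, (b, good) in enumerate(zip(self._blocks(self.data), self._blocks(self.backup))):
            if self._crc(b) != self.sums[i]:
                b[:] = good                             # overwrite the corrupted block in place
                repaired += 1
        return repaired
```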

  4. Technology and medication errors: impact in nursing homes.

    PubMed

    Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis

    2014-01-01

    The purpose of this paper is to study the impact of a medication distribution technology (MDT) on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after the MDT was implemented. The errors were analysed by total number of errors, medication error type, severity, and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing the MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly, resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technologies such as electronic prescribing or bar-code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.

  5. Verification of different forecasts of Hungarian Meteorological Service

    NASA Astrophysics Data System (ADS)

    Feher, B.

    2009-09-01

    In this paper I show the results of the forecasts made by the Hungarian Meteorological Service. I focus on the general short- and medium-range forecasts, which contain cloudiness, precipitation, wind speed, and temperature for six regions of Hungary. I also show the results of some special forecasts, such as the precipitation predictions made for the catchment areas of the Danube and Tisza rivers, and the daily mean temperature predictions used by Hungarian energy companies. The product received by the user is made by the general forecaster, but these predictions are based on ALADIN and ECMWF outputs; for this reason, the forecasters' products and the model outputs were both verified. A method like this shows which weather elements are more difficult to forecast and which regions have higher errors. During the verification procedure the basic errors (mean error, mean absolute error) are calculated. The precipitation amount is classified into five categories, and scores such as POD, TS, and PC are computed from the contingency table determined by these categories. The procedure runs fully automatically; all the forecasters have to do is print the daily result each morning. Besides the daily results, verification is also performed for longer periods such as a week, a month, or a year. Analyzing the results over longer periods shows that the best predictions are made for the first few days, that precipitation forecasts are less accurate for mountainous areas, and that the forecasters' scores are sometimes even higher than those of the models. Since forecasters receive the results the next day, the verification helps them reduce mistakes and learn the weaknesses of the models. This paper contains the verification scores, their trends, the method by which these scores are calculated, and some case studies on poor forecasts.
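
    For the categorical precipitation verification described above, POD, TS, and PC follow directly from the counts in a contingency table. The sketch below shows the standard dichotomous definitions for a single category boundary; the 1 mm threshold in the comment is only an example, and the paper's five-category table would apply the same counts per category pair.

```python
import numpy as np

def verification_scores(forecast, observed, threshold):
    """Dichotomous verification scores from a 2x2 contingency table.

    forecast, observed : arrays of precipitation amounts
    threshold          : category boundary (e.g. 1 mm) above which an event counts as "yes"
    Returns POD, TS (threat score) and PC (proportion correct).
    """
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    correct_neg = np.sum(~f & ~o)

    pod = hits / (hits + misses)                        # probability of detection
    ts = hits / (hits + misses + false_alarms)          # threat score (critical success index)
    pc = (hits + correct_neg) / f.size                  # proportion correct
    return pod, ts, pc
```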

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Ke; Li Yanqiu; Wang Hai

    Characterization of the measurement accuracy of the phase-shifting point diffraction interferometer (PS/PDI) is usually performed by a two-pinhole null test. In this procedure, the geometrical coma and detector-tilt astigmatism systematic errors are almost one or two orders of magnitude higher than the desired accuracy of the PS/PDI. These errors must be accurately removed from the null test result to achieve high accuracy. Published calibration methods, which can remove the geometrical coma error successfully, have some limitations in calibrating the astigmatism error. In this paper, we propose a method to simultaneously calibrate the geometrical coma and detector-tilt astigmatism errors in the PS/PDI null test. Based on the measurement results obtained from two pinhole pairs in orthogonal directions, the method utilizes the orthogonal and rotational symmetry properties of Zernike polynomials over the unit circle to calculate the systematic errors introduced in the null test of the PS/PDI. An experiment using a PS/PDI operated at visible light is performed to verify the method. The results show that the method is effective in isolating the systematic errors of the PS/PDI, and the measurement accuracy of the calibrated PS/PDI is 0.0088λ rms (λ = 632.8 nm).

  7. Response Monitoring and Adjustment: Differential Relations with Psychopathic Traits

    PubMed Central

    Bresin, Konrad; Finy, M. Sima; Sprague, Jenessa; Verona, Edelyn

    2014-01-01

    Studies on the relation between psychopathy and cognitive functioning often show mixed results, partially because different factors of psychopathy have not been considered fully. Based on previous research, we predicted divergent results based on a two-factor model of psychopathy (interpersonal-affective traits and impulsive-antisocial traits). Specifically, we predicted that the unique variance of interpersonal-affective traits would be related to increased monitoring (i.e., error-related negativity) and adjusting to errors (i.e., post-error slowing), whereas impulsive-antisocial traits would be related to reductions in these processes. Three studies using a diverse selection of assessment tools, samples, and methods are presented to identify response monitoring correlates of the two main factors of psychopathy. In Studies 1 (undergraduates), 2 (adolescents), and 3 (offenders), interpersonal-affective traits were related to increased adjustment following errors and, in Study 3, to enhanced monitoring of errors. Impulsive-antisocial traits were not consistently related to error adjustment across the studies, although these traits were related to a deficient monitoring of errors in Study 3. The results may help explain previous mixed findings and advance implications for etiological models of psychopathy. PMID:24933282

  8. Detection of Error Related Neuronal Responses Recorded by Electrocorticography in Humans during Continuous Movements

    PubMed Central

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2013-01-01

    Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) an adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315

  9. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    NASA Astrophysics Data System (ADS)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    To address the mechanism error caused by joint clearance in the motion pairs of a planar 2-DOF five-bar mechanism, the method of treating the joint clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of the clearance of the moving pair on the output error of the mechanism is studied, and the calculation method and basis for the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the joint-space error rotation space, which provides a new way to analyze planar parallel mechanism errors caused by joint clearance.

  10. Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students

    ERIC Educational Resources Information Center

    Muzangwa, Jonatan; Chifamba, Peter

    2012-01-01

    This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 BEd Mathematics students at Great Zimbabwe University. Data were gathered through the use of two exercises on Calculus 1 and 2. The analysis of the results from the tests showed that a majority of the errors were due…

  11. Effect of gyro verticality error on lateral autoland tracking performance for an inertially smoothed control law

    NASA Technical Reports Server (NTRS)

    Thibodeaux, J. J.

    1977-01-01

    The results of a simulation study performed to determine the effects of gyro verticality error on lateral autoland tracking and landing performance are presented. A first-order vertical gyro error model was used to generate the measurement of the roll attitude feedback signal normally supplied by an inertial navigation system. The lateral autoland law used was an inertially smoothed control design. The effect of initial angular gyro tilt errors (2 deg, 3 deg, 4 deg, and 5 deg), introduced prior to localizer capture, was investigated by use of a small-perturbation aircraft simulation. These errors represent the deviations which could occur in the conventional attitude sensor as a result of maneuver-induced spin-axis misalignment and drift. Results showed that for a 1.05 deg per minute erection rate and a 5 deg initial tilt error, the ON COURSE autoland control logic was not satisfied. Failure to attain the ON COURSE mode precluded high control loop gains and localizer beam path integration and resulted in unacceptable beam standoff at touchdown.

  12. An empirical examination of WISE/NEOWISE asteroid analysis and results

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2017-10-01

    Observations made by the WISE space telescope and subsequent analysis by the NEOWISE project represent the largest corpus of asteroid data to date, describing the diameter, albedo, and other properties of the ~164,000 asteroids in the collection. I present a critical reanalysis of the WISE observational data and the NEOWISE results published in numerous papers and in the JPL Planetary Data System (PDS). This analysis reveals shortcomings and a lack of clarity, both in the original analysis and in the presentation of results. The procedures used to generate NEOWISE results fall short of established thermal modelling standards. Rather than using a uniform protocol, 10 modelling methods were applied to 12 combinations of WISE band data. Over half the NEOWISE results are based on a single band of data. Most NEOWISE curve fits are of poor quality, frequently missing many or all of the data points. About 30% of the single-band results miss all the data; 43% of the results derived from the most common multiple-band combinations miss all the data in at least one band. The NEOWISE data processing procedures rely on inconsistent assumptions and introduce bias by systematically discarding much of the original data. I show that the error estimates for the WISE observational data are ~1.2 to 1.9 times larger than previously described, and that the error estimates do not fit a normal distribution. These issues call into question the validity of the NEOWISE Monte-Carlo error analysis. Comparing published NEOWISE diameters to published estimates using radar, occultation, or spacecraft measurements (ROS) reveals 150 asteroids for which the NEOWISE diameters were copied exactly from the ROS source. My findings show that the accuracy of diameter estimates in the NEOWISE results depends heavily on the choice of data bands and model. Systematic errors in the diameter estimates are much larger than previously described: systematic errors for diameters in the PDS range from -3% to +27%, and random errors range from -14% to +19% when using all four WISE bands, and from -45% to +74% in cases using only the W2 band. The results presented here show that much work remains to be done towards understanding asteroid data from WISE/NEOWISE.

  13. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
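
    The global hierarchical error problem is solved only approximately, with a few symmetric Gauss-Seidel iterations. The dense sketch below shows what one such sweep (a forward followed by a backward pass) looks like for a generic linear system; a real finite element implementation would of course operate on the sparse assembled matrix, and the default of three sweeps is merely an assumption echoing the "few iterations" observation in the abstract.

```python
import numpy as np

def symmetric_gauss_seidel(A, b, x0=None, sweeps=3):
    """A few symmetric Gauss-Seidel sweeps for A x = b (dense illustration).

    One sweep consists of a forward and a backward Gauss-Seidel pass; a small
    number of sweeps is used here to approximate the solution of the global
    error problem rather than solve it exactly.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n):                      # forward pass
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        for i in reversed(range(n)):            # backward pass
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x
```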

  14. Students’ Errors in Geometry Viewed from Spatial Intelligence

    NASA Astrophysics Data System (ADS)

    Riastuti, N.; Mardiyana, M.; Pramudya, I.

    2017-09-01

    Geometry is one of the more difficult topics because students must be able to visualize, describe images, draw shapes, and recognize kinds of shapes. This study aims to describe student errors, based on Newman's Error Analysis, in solving geometry problems viewed from spatial intelligence. The research uses a descriptive qualitative method with a purposive sampling technique. The data in this research are the results of a geometry test and interviews with 8th graders of a Junior High School in Indonesia. The results of this study show that each category of spatial intelligence is associated with a different type of error in solving geometry problems. Errors are mostly made by students with low spatial intelligence because they have deficiencies in visual abilities. Analysis of student errors viewed from spatial intelligence is expected to help students reflect when solving geometry problems.

  15. Neuromotor Noise Is Malleable by Amplifying Perceived Errors

    PubMed Central

    Zhang, Zhaoran; Abe, Masaki O.; Sternad, Dagmar

    2016-01-01

    Variability in motor performance results from the interplay of error correction and neuromotor noise. This study examined whether visual amplification of error, previously shown to improve performance, affects not only error correction, but also neuromotor noise, typically regarded as inaccessible to intervention. Seven groups of healthy individuals, with six participants in each group, practiced a virtual throwing task for three days until reaching a performance plateau. Over three more days of practice, six of the groups received different magnitudes of visual error amplification; three of these groups also had noise added. An additional control group was not subjected to any manipulations for all six practice days. The results showed that the control group did not improve further after the first three practice days, but the error amplification groups continued to decrease their error under the manipulations. Analysis of the temporal structure of participants’ corrective actions based on stochastic learning models revealed that these performance gains were attained by reducing neuromotor noise and, to a considerably lesser degree, by increasing the size of corrective actions. Based on these results, error amplification presents a promising intervention to improve motor function by decreasing neuromotor noise after performance has reached an asymptote. These results are relevant for patients with neurological disorders and the elderly. More fundamentally, these results suggest that neuromotor noise may be accessible to practice interventions. PMID:27490197

  16. AXAF Alignment Test System Autocollimating Flat Error Correction

    NASA Technical Reports Server (NTRS)

    Lewis, Timothy S.

    1995-01-01

    The alignment test system for the advanced x ray astrophysics facility (AXAF) high-resolution mirror assembly (HRMA) determines the misalignment of the HRMA by measuring the displacement of a beam of light reflected by the HRMA mirrors and an autocollimating flat (ACF). This report shows how to calibrate the system to compensate for errors introduced by the ACF, using measurements taken with the ACF in different positions. It also shows what information can be obtained from alignment test data regarding errors in the shapes of the HRMA mirrors. Simulated results based on measured ACF surface data are presented.

  17. Factors correlated with traffic accidents as a basis for evaluating Advanced Driver Assistance Systems.

    PubMed

    Staubach, Maria

    2009-09-01

    This study aims to identify factors which influence and cause errors in traffic accidents and to use these as a basis for information to guide the application and design of driver assistance systems. A total of 474 accidents were examined in depth for this study by means of a psychological survey, data from accident reports, and technical reconstruction information. An error analysis was subsequently carried out, taking into account the driver, environment, and vehicle sub-systems. Results showed that all accidents were influenced by errors as a consequence of distraction and reduced activity. For crossroad accidents, there were further errors resulting from sight obstruction, masked stimuli, focus errors, and law infringements. Lane departure crashes were additionally caused by errors as a result of masked stimuli, law infringements, expectation errors as well as objective and action slips, while same direction accidents occurred additionally because of focus errors, expectation errors, and objective and action slips. Most accidents were influenced by multiple factors. There is a safety potential for Advanced Driver Assistance Systems (ADAS), which support the driver in information assimilation and help to avoid distraction and reduced activity. The design of the ADAS is dependent on the specific influencing factors of the accident type.

  18. When idols look into the future: fair treatment modulates the affective forecasting error in talent show candidates.

    PubMed

    Feys, Marjolein; Anseel, Frederik

    2015-03-01

    People's affective forecasts are often inaccurate because they tend to overestimate how they will feel after an event. As life decisions are often based on affective forecasts, it is crucial to find ways to manage forecasting errors. We examined the impact of a fair treatment on forecasting errors in candidates in a Belgian reality TV talent show. We found that perceptions of fair treatment increased the forecasting error for losers (a negative audition decision) but decreased it for winners (a positive audition decision). For winners, this effect was even more pronounced when candidates were highly invested in their self-view as a future pop idol whereas for losers, the effect was more pronounced when importance was low. The results in this study point to a potential paradox between maximizing happiness and decreasing forecasting errors. A fair treatment increased the forecasting error for losers, but actually made them happier. © 2014 The British Psychological Society.

  19. Risk prediction and aversion by anterior cingulate cortex.

    PubMed

    Brown, Joshua W; Braver, Todd S

    2007-12-01

    The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.

  20. Error-related brain activity and error awareness in an error classification paradigm.

    PubMed

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. An improved procedure for the validation of satellite-based precipitation estimates

    NASA Astrophysics Data System (ADS)

    Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad

    2015-09-01

    The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics of precipitation estimates and therefore do not adequately inform either data producers or users. The proposed validation procedure has two steps: 1) an error decomposition approach separates the total retrieval error into three independent components, hit error, false precipitation, and missed precipitation; and 2) the hit error is further analyzed with a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters; in this way, the model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the two recent versions (Version 6 and 7) of TRMM's Multi-sensor Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that the winter total bias is dominated by missed precipitation over the west coastal areas and the Rocky Mountains, and by false precipitation over large areas of the Midwest; the summer total bias comes largely from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, to varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows obvious improvements in reducing random errors in both the winter and summer seasons compared to its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on these results, we recommend the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure to work effectively for the higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
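
    The two-step procedure, decomposition into hit, false, and missed components followed by a multiplicative model for the hit error, can be sketched compactly. In the sketch below the 0.1 mm/day rain/no-rain threshold and the log-log least-squares fit of sat = alpha * ref^beta * exp(eps) are plausible but assumed choices; the exact parameterization used in the study may differ.

```python
import numpy as np

def decompose_and_fit(sat, ref, wet=0.1):
    """Error decomposition plus a multiplicative hit-error model (a sketch).

    sat, ref : satellite estimate and gauge reference (same units, e.g. mm/day)
    wet      : rain/no-rain threshold; 0.1 mm/day is an assumption, not from the paper
    """
    hit = (sat >= wet) & (ref >= wet)
    false_p = (sat >= wet) & (ref < wet)
    missed = (sat < wet) & (ref >= wet)

    hit_bias = np.sum(sat[hit] - ref[hit])
    false_precip = np.sum(sat[false_p])
    missed_precip = np.sum(ref[missed])
    total_bias = hit_bias + false_precip - missed_precip

    # multiplicative model for hits: sat = alpha * ref**beta * exp(eps)
    x, y = np.log(ref[hit]), np.log(sat[hit])
    beta, ln_alpha = np.polyfit(x, y, 1)                 # systematic error parameters
    eps = y - (ln_alpha + beta * x)                      # residuals capture the random error
    return {"total_bias": total_bias, "hit_bias": hit_bias,
            "false": false_precip, "missed": missed_precip,
            "alpha": np.exp(ln_alpha), "beta": beta, "sigma_eps": eps.std()}
```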

  2. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-systems (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude, or laser beam incidence angle increases. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at the scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below the error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
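
    As a much reduced illustration of why the modeled vertical errors grow with scan angle and altitude, the toy propagation below combines a range error term, an angular (scanner/IMU) term, and a GPS height term for a single pulse over flat terrain. The one-sigma values and the simplified geometry are assumptions for illustration, not the full GPS/IMU/scanner/ranger error model developed in the study.

```python
import numpy as np

def vertical_error_flat_terrain(rng, scan_deg, sigma_range=0.02,
                                sigma_angle_deg=0.005, sigma_gps_z=0.05):
    """First-order vertical uncertainty for a single pulse over flat terrain (toy model).

    rng        : slant range from sensor to ground (m)
    scan_deg   : scan angle from nadir (degrees)
    The default one-sigma values are illustrative assumptions.
    """
    theta = np.radians(scan_deg)
    sigma_theta = np.radians(sigma_angle_deg)
    dz_range = np.cos(theta) * sigma_range          # range error projected onto the vertical
    dz_angle = rng * np.sin(theta) * sigma_theta    # angular error displaces the footprint
    return np.sqrt(dz_range**2 + dz_angle**2 + sigma_gps_z**2)
```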

  3. Bias correction of bounded location errors in presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Brost, Brian M.; Hooten, Mevin B.

    2017-01-01

    Location error occurs when the true location is different from the reported location. Because habitat characteristics at the true location may be different from those at the reported location, ignoring location error may lead to unreliable inference concerning species–habitat relationships. We explain how a transformation known in the spatial statistics literature as a change of support (COS) can be used to correct for location errors when the true locations are points with unknown coordinates contained within arbitrarily shaped polygons. We illustrate the flexibility of the COS by modelling the resource selection of Whooping Cranes (Grus americana) using citizen-contributed records with locations that were reported with error. We also illustrate the COS with a simulation experiment. In our analysis of Whooping Crane resource selection, we found that location error can result in up to a five-fold change in coefficient estimates. Our simulation study shows that location error can result in coefficient estimates that have the wrong sign, but a COS can efficiently correct for the bias.
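
    One way to picture the change of support is to replace the covariate value at the (erroneous) reported point with its average over the polygon known to contain the true location. The Monte Carlo sketch below does this by rejection sampling inside the polygon; the sample size, the shapely-based point-in-polygon test, and the simple averaging are illustrative assumptions rather than the authors' estimator.

```python
import numpy as np
from shapely.geometry import Point, Polygon

def polygon_averaged_covariate(polygon_coords, covariate_at, n=500, seed=None):
    """Monte Carlo change-of-support: average a habitat covariate over a polygon.

    polygon_coords : list of (x, y) vertices of the polygon containing the true location
    covariate_at   : callable (x, y) -> covariate value (hypothetical helper)
    """
    rng = np.random.default_rng(seed)
    poly = Polygon(polygon_coords)
    minx, miny, maxx, maxy = poly.bounds
    vals = []
    while len(vals) < n:
        x = rng.uniform(minx, maxx)
        y = rng.uniform(miny, maxy)
        if poly.contains(Point(x, y)):            # rejection sampling inside the polygon
            vals.append(covariate_at(x, y))
    return float(np.mean(vals))
```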

  4. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    T-Method is one of the techniques governed under the Mahalanobis Taguchi System that was developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. Users of the T-Method need to understand the population data trend clearly, since the method does not consider the effect of outliers. Outliers may cause apparent non-normality, and classical methods break down in their presence. Robust parameter estimates exist that provide satisfactory results when the data contain outliers as well as when they are free of them; among them are the robust location and scale estimators known as Shamos-Bickel (SB) and Hodges-Lehmann (HL), which can be used in place of the classical mean and standard deviation. Embedding these into the T-Method normalization stage may help enhance the accuracy of the T-Method and allows the robustness of the T-Method itself to be analysed. However, the results of the higher-sample-size case study show that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error differences compared to the T-Method. The prediction error trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers are always at low risk, the T-Method performs much better, while for a higher sample size with extreme outliers the T-Method also shows better prediction than the alternatives. For the case studies conducted in this research, normalization with the T-Method shows satisfactory results, and it is not worthwhile to adapt HL and SB (or the normal mean and standard deviation) into it, since they change the error percentages only minimally. Normalization using the T-Method is still considered to carry a lower risk with respect to outlier effects.
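
    The two robust estimators named above have simple closed forms: the Hodges-Lehmann location estimate is the median of the pairwise (Walsh) averages, and the Shamos-Bickel scale estimate is the median of the pairwise absolute differences, often multiplied by roughly 1.048 for consistency with the normal standard deviation. The sketch below implements both and notes, in a comment, how they could replace the mean and standard deviation in a normalization step; the exact way they are embedded into the T-Method here is an assumption, not the paper's recipe.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: median of pairwise (Walsh) averages."""
    x = np.asarray(x, dtype=float)
    pairs = [(a + b) / 2.0 for a, b in combinations(x, 2)]
    return float(np.median(np.concatenate([x, pairs])))

def shamos_bickel(x, consistent=True):
    """Shamos-Bickel scale estimate: median of pairwise absolute differences.

    The 1.048 factor makes the estimate consistent for the normal standard
    deviation; dropping it gives the raw Shamos estimator.
    """
    x = np.asarray(x, dtype=float)
    diffs = [abs(a - b) for a, b in combinations(x, 2)]
    s = float(np.median(diffs))
    return 1.048 * s if consistent else s

# Assumed robust normalization step (replacing mean and standard deviation per variable):
# x_norm = (x - hodges_lehmann(x)) / shamos_bickel(x)
```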

  5. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes

    NASA Astrophysics Data System (ADS)

    Marvian, Milad; Lidar, Daniel A.

    2017-01-01

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  6. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.

    PubMed

    Marvian, Milad; Lidar, Daniel A

    2017-01-20

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  7. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of the average point-to-surface residual to reduce the random measurement error and thus approach the true registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, in order to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are the most commonly used. However, the point-to-point error in general consists of at least two components: random measurement error and a systematic error resulting from a remaining error in the estimated rigid-body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and the quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and cheaper computational cost when the hypothesis set is contaminated with more outliers.
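
    The average point-to-surface residual used as the accuracy metric can be approximated by projecting each transformed source point onto a local plane fitted through its nearest target points. In the sketch below the k-nearest-neighbour plane fit and k = 10 are illustrative simplifications; the paper does not prescribe this particular surface estimate.

```python
import numpy as np
from scipy.spatial import cKDTree

def avg_point_to_surface_residual(source, target, R, t, k=10):
    """Average point-to-surface residual between a transformed source cloud and a target cloud.

    source, target : (n, 3) and (m, 3) point arrays
    R, t           : estimated rotation (3x3) and translation (3,)
    Local target surfaces are approximated by the best-fit plane through the k
    nearest neighbours of each matched point (a simplifying assumption).
    """
    src = source @ R.T + t                      # apply the estimated rigid transform
    tree = cKDTree(target)
    _, idx = tree.query(src)                    # nearest target point for each source point

    residuals = []
    for p, i in zip(src, idx):
        _, nbr_idx = tree.query(target[i], k=k)
        nbrs = target[nbr_idx]
        centroid = nbrs.mean(axis=0)
        cov = np.cov((nbrs - centroid).T)
        normal = np.linalg.eigh(cov)[1][:, 0]   # eigenvector of the smallest eigenvalue
        residuals.append(abs(np.dot(p - centroid, normal)))
    return float(np.mean(residuals))
```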

  8. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students' competence in solving mathematical problems is still considered weak, as pointed out by the results of international assessments such as TIMSS. This might be caused by the various types of errors made. Hence, this study aimed to identify students' errors in solving mathematical problems in TIMSS on the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students with the most errors on the test indicators, drawn from 34 8th graders. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving applying-level problems, the errors students made were operational errors. In addition, for reasoning-level problems, three types of errors were made: conceptual errors, operational errors, and principle errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

  9. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    PubMed

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient in addressing such modelling problems.

  10. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    PubMed Central

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-01-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3–5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient in addressing such modelling problems. PMID:28587086

  11. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    NASA Astrophysics Data System (ADS)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient in addressing such modelling problems.

  12. Mimicking aphasic semantic errors in normal speech production: evidence from a novel experimental paradigm.

    PubMed

    Hodgson, Catherine; Lambon Ralph, Matthew A

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method, tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced for living items than for non-living items, a pattern seen in both semantic dementia and semantically impaired stroke aphasic patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance, whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than from the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.

  13. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Perricone, Berry T.

    1983-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  14. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Perricone, B. T.

    1982-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  15. Tropospheric Correction for InSAR Using Interpolated ECMWF Data and GPS Zenith Total Delay

    NASA Technical Reports Server (NTRS)

    Webb, Frank H.; Fishbein, Evan F.; Moore, Angelyn W.; Owen, Susan E.; Fielding, Eric J.; Granger, Stephanie L.; Bjorndahl, Fredrik; Lofgren Johan

    2011-01-01

    To mitigate atmospheric errors caused by the troposphere, which is a limiting error source for spaceborne interferometric synthetic aperture radar (InSAR) imaging, a tropospheric correction method has been developed using data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Global Positioning System (GPS). The ECMWF data were interpolated using a Stretched Boundary Layer Model (SBLM), and ground-based GPS estimates of the tropospheric delay from the Southern California Integrated GPS Network were interpolated using modified Gaussian and inverse distance weighted interpolations. The resulting Zenith Total Delay (ZTD) correction maps have been evaluated, both separately and using a combination of the two data sets, for three short-interval InSAR pairs from Envisat during 2006 over an area stretching northeast from the Los Angeles basin towards Death Valley. Results show that the root mean square (rms) in the InSAR images was greatly reduced, meaning a significant reduction in the atmospheric noise of up to 32 percent. However, for some of the images, the rms increased and large errors remained after applying the tropospheric correction. The residuals showed a constant gradient over the area, suggesting that a remaining orbit error from Envisat was present. The orbit reprocessing in ROI_pac and the plane fitting both require that the only remaining error in the InSAR image be the orbit error. If this is not fulfilled, the correction can be made anyway, but it will be done using all remaining errors, assuming them to be orbit errors. By correcting for tropospheric noise, the biggest error source is removed, and the orbit error becomes apparent and can be corrected for.

  16. Sensitivity analysis of non-cohesive sediment transport formulae

    NASA Astrophysics Data System (ADS)

    Pinto, Lígia; Fortunato, André B.; Freire, Paula

    2006-10-01

    Sand transport models are often based on semi-empirical equilibrium transport formulae that relate sediment fluxes to physical properties such as velocity, depth and characteristic sediment grain sizes. In engineering applications, errors in these physical properties affect the accuracy of the sediment fluxes. The present analysis quantifies error propagation from the input physical properties to the sediment fluxes, determines which ones control the final errors, and provides insight into the relative strengths, weaknesses and limitations of four total load formulae (Ackers and White, Engelund and Hansen, van Rijn, and Karim and Kennedy) and one bed load formulation (van Rijn). The various sources of uncertainty are first investigated individually, in order to pinpoint the key physical properties that control the errors. Since the strong non-linearity of most sand transport formulae precludes analytical approaches, a Monte Carlo method is validated and used in the analysis. Results show that the accuracy in total sediment transport evaluations is mainly determined by errors in the current velocity and in the sediment median grain size. For the bed load transport using the van Rijn formula, errors in the current velocity alone control the final accuracy. In a final set of tests, all physical properties are allowed to vary simultaneously in order to analyze the combined effect of errors. The combined effect of errors in all the physical properties is then compared to an estimate of the errors due to the intrinsic limitations of the formulae. Results show that errors in the physical properties can be dominant for typical uncertainties associated with these properties, particularly for small depths. A comparison between the various formulae reveals that the van Rijn formula is more sensitive to basic physical properties. Hence, it should only be used when physical properties are known with precision.
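
    The Monte Carlo idea described above can be sketched in a few lines: perturb the input physical properties with relative errors and propagate them through a transport relation. A generic power-law relation stands in here for the actual formulae compared in the paper, and the exponents and error levels are illustrative assumptions only.

```python
# Hedged sketch of Monte Carlo error propagation through a sediment transport relation.
import numpy as np

rng = np.random.default_rng(7)

def transport(U, d50, k=1e-4, a=4.0, b=-0.5):
    # Placeholder total-load relation q_s = k * U**a * d50**b (illustrative, not a published formula).
    return k * U ** a * d50 ** b

U0, d50_0 = 1.0, 2.5e-4                   # nominal velocity (m/s) and median grain size (m) - assumed
n = 50_000
for sigma_U, sigma_d in [(0.05, 0.0), (0.0, 0.10), (0.05, 0.10)]:
    U = U0 * (1 + sigma_U * rng.standard_normal(n))        # perturbed current velocity
    d50 = d50_0 * (1 + sigma_d * rng.standard_normal(n))   # perturbed median grain size
    q = transport(U, d50)
    print(f"velocity error {sigma_U:.0%}, grain-size error {sigma_d:.0%}: "
          f"relative spread in flux = {q.std() / q.mean():.1%}")
```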

  17. Documentation of study medication dispensing in a prospective large randomized clinical trial: experiences from the ARISTOTLE Trial.

    PubMed

    Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B

    2013-09-01

    In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.

  18. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of the tool influence function (TIF) under given polishing conditions is proposed, based on the smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under given polishing conditions. Comparative analysis shows that the optimized method is simpler in form and obtains results of the same accuracy with less computing time than the previous method.

  19. Spatial autocorrelation among automated geocoding errors and its effects on testing for disease clustering

    PubMed Central

    Li, Jie; Fang, Xiangming

    2010-01-01

    Automated geocoding of patient addresses is an important data assimilation component of many spatial epidemiologic studies. Inevitably, the geocoding process results in positional errors. Positional errors incurred by automated geocoding tend to reduce the power of tests for disease clustering and otherwise affect spatial analytic methods. However, there are reasons to believe that the errors may often be positively spatially correlated and that this may mitigate their deleterious effects on spatial analyses. In this article, we demonstrate explicitly that the positional errors associated with automated geocoding of a dataset of more than 6000 addresses in Carroll County, Iowa are spatially autocorrelated. Furthermore, through two simulation studies of disease processes, including one in which the disease process is overlain upon the Carroll County addresses, we show that spatial autocorrelation among geocoding errors maintains the power of two tests for disease clustering at a level higher than that which would occur if the errors were independent. Implications of these results for cluster detection, privacy protection, and measurement-error modeling of geographic health data are discussed. PMID:20087879

  20. Simulating Children's Retrieval Errors in Picture-Naming: A Test of Foygel and Dell's (2000) Semantic/Phonological Model of Speech Production

    ERIC Educational Resources Information Center

    Budd, Mary-Jane; Hanley, J. Richard; Griffiths, Yvonne

    2011-01-01

    This study investigated whether Foygel and Dell's (2000) interactive two-step model of speech production could simulate the number and type of errors made in picture-naming by 68 children of elementary-school age. Results showed that the model provided a satisfactory simulation of the mean error profile of children aged five, six, seven, eight and…

  1. Demonstration of spectral calibration for stellar interferometry

    NASA Technical Reports Server (NTRS)

    Demers, Richard T.; An, Xin; Tang, Hong; Rud, Mayer; Wayne, Leonard; Kissil, Andrew; Kwack, Eug-Yun

    2006-01-01

    A breadboard is under development to demonstrate the calibration of spectral errors in microarcsecond stellar interferometers. Analysis shows that thermally and mechanically stable hardware in addition to careful optical design can reduce the wavelength dependent error to tens of nanometers. Calibration of the hardware can further reduce the error to the level of picometers. The results of thermal, mechanical and optical analysis supporting the breadboard design will be shown.

  2. Aging and the intrusion superiority effect in visuo-spatial working memory.

    PubMed

    Cornoldi, Cesare; Bassani, Chiara; Berto, Rita; Mammarella, Nicola

    2007-01-01

    This study investigated the active component of visuo-spatial working memory (VSWM) in younger and older adults, testing the hypotheses that elderly individuals perform more poorly than younger ones and that errors in active VSWM tasks depend, at least partially, on difficulties in avoiding intrusions (i.e., avoiding already activated information). In two experiments, participants were presented with sequences of matrices on which three positions were pointed out sequentially: their task was to process all the positions but indicate only the final position of each sequence. Results showed a poorer performance in the elderly compared to the younger group and a higher number of intrusion errors (errors due to activated but irrelevant positions) than invention errors (pointing out a position never indicated by the experimenter). The number of errors increased when a concurrent task was introduced (Experiment 1) and was affected by different patterns of matrices (Experiment 2). In general, the results show that elderly people have an impaired VSWM and produce a large number of errors due to inhibition failures. However, both the younger and the older adults' visuo-spatial working memory was affected by the presence of activated irrelevant information, the reduction of the available resources, and task constraints.

  3. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition

    PubMed Central

    2016-01-01

    Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation was conducted of the anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the simulated populations; the standard deviation increased with each error, and the increase grew exponentially with the magnitude of the error. The potential magnitude of the resulting error in the reported prevalence of malnutrition was compared with published international data and found to be sufficient to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define “health”, has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training and supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis, and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
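
    The core of the simulation can be illustrated with a short Monte Carlo sketch (not the authors' code): random measurement error is added to a distribution of z-scores and the prevalence below a fixed cut-off is recomputed. The cut-off and error magnitudes below are illustrative assumptions.

```python
# Minimal sketch: random measurement error inflates prevalence defined by a fixed cut-off.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
true_z = rng.normal(loc=0.0, scale=1.0, size=n)     # true anthropometric z-scores (assumed N(0,1))
cutoff = -2.0                                        # "malnutrition" if z < -2 (illustrative cut-off)

for error_sd in (0.0, 0.2, 0.4, 0.6):
    observed = true_z + rng.normal(0.0, error_sd, size=n)   # add random measurement error
    prevalence = np.mean(observed < cutoff)
    print(f"error SD = {error_sd:.1f}: observed SD = {observed.std():.2f}, "
          f"prevalence = {100 * prevalence:.2f}%")
```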

  4. Learning to Fail in Aphasia: An Investigation of Error Learning in Naming

    PubMed Central

    Middleton, Erica L.; Schwartz, Myrna F.

    2013-01-01

    Purpose To determine if the naming impairment in aphasia is influenced by error learning and if error learning is related to type of retrieval strategy. Method Nine participants with aphasia and ten neurologically-intact controls named familiar proper noun concepts. When experiencing tip-of-the-tongue naming failure (TOT) in an initial TOT-elicitation phase, participants were instructed to adopt phonological or semantic self-cued retrieval strategies. In the error learning manipulation, items evoking TOT states during TOT-elicitation were randomly assigned to a short or long time condition where participants were encouraged to continue to try to retrieve the name for either 20 seconds (short interval) or 60 seconds (long). The incidence of TOT on the same items was measured on a post test after 48-hours. Error learning was defined as a higher rate of recurrent TOTs (TOT at both TOT-elicitation and post test) for items assigned to the long (versus short) time condition. Results In the phonological condition, participants with aphasia showed error learning whereas controls showed a pattern opposite to error learning. There was no evidence for error learning in the semantic condition for either group. Conclusion Error learning is operative in aphasia, but dependent on the type of strategy employed during naming failure. PMID:23816662

  5. The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.

    ERIC Educational Resources Information Center

    Kaskowitz, Gary S.; De Ayala, R. J.

    2001-01-01

    Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…

  6. Throughput of Coded Optical CDMA Systems with AND Detectors

    NASA Astrophysics Data System (ADS)

    Memon, Kehkashan A.; Umrani, Fahim A.; Umrani, A. W.; Umrani, Naveed A.

    2012-09-01

    Conventional detection techniques used in optical code-division multiple access (OCDMA) systems are not optimal and result in poor bit error rate performance. This paper analyzes the coded performance of optical CDMA systems with AND detectors for enhanced throughput efficiency and improved error rate performance. The results show that the use of AND detectors significantly improves the performance of an optical channel.

  7. A high-accuracy two-position alignment inertial navigation system for lunar rovers aided by a star sensor with a calibration and positioning function

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming

    2016-12-01

    An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter. The difference between the computed star vector and the measured star vector is taken as the measurement. With the aid of a star sensor and the two positions, the attitudinal and positional errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. The semi-physical simulation results show that the attitudinal and positional errors converge to within 0.07″ and 0.1 m, respectively, when the given initial positional error is 1 km and the attitudinal error is 10°. These good results show that the proposed method can accomplish alignment, positioning and calibration functions simultaneously. Thus the proposed two-position initialization method has the potential for application in lunar rover navigation.

  8. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
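
    As a hedged illustration of the error-bar idea (not necessarily the authors' exact derivation), the usual normal-theory approximation gives the standard error of a sample variance s² from n measurements as s²·sqrt(2/(n-1)), which shrinks slowly with n and remains large for the small sample sizes criticized above.

```python
# Illustrative error bars on a sample variance, using the normal-theory approximation.
import numpy as np

def variance_error_bar(sample):
    """Return (s2, se): the sample variance and its approximate standard error."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    s2 = sample.var(ddof=1)
    se = s2 * np.sqrt(2.0 / (n - 1))      # normal-theory approximation for SE of the variance
    return s2, se

rng = np.random.default_rng(1)
for n in (10, 20, 50, 200):               # small samples give very wide variance error bars
    s2, se = variance_error_bar(rng.normal(size=n))
    print(f"n = {n:3d}: variance = {s2:.3f} +/- {se:.3f}")
```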

  9. Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D

    DOE PAGES

    Haye, R. J. La; Paz-Soldan, C.; Strait, E. J.

    2015-01-23

    DIII-D experiments show that fully penetrated resonant n=1 error field locked modes in Ohmic plasmas with safety factor q95 ≳ 3 grow to similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n=2/1) static error fields are shielded in Ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption.

  10. Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals

    NASA Astrophysics Data System (ADS)

    Huerta, E. A.; Gair, Jonathan R.

    2009-04-01

    We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10 M⊙ compact object captured by a 10^6 M⊙ SMBH at a signal-to-noise ratio of 30) we expect to determine the two masses to within a fractional error of ~10^-4, measure the spin parameter q to ~10^-4.5, and determine the location of the source on the sky and the spin orientation to within 10^-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R<1 for all ten parameters in the model. For the few systems with larger errors typically R<3 and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.

  11. Dwell time method based on Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Ma, Zhen

    2017-10-01

    When the noise in the surface error data given by the interferometer has no effect on the iterative convergence of the RL algorithm, the RL algorithm for deconvolution in image restoration can be applied to the CCOS model to solve for the dwell time. Extending the initial error function at the edge and denoising the surface error data given by the interferometer makes the result more reliable. The simulation results show a final residual error of 10.7912 nm PV and 0.4305 nm RMS, when the initial surface error is 107.2414 nm PV and 15.1331 nm RMS. The convergence rates of the PV and RMS values reach 89.9% and 96.0%, respectively. The algorithm therefore satisfies the requirements of fabrication very well.
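
    A minimal one-dimensional sketch of the idea is given below: the CCOS removal model, error = TIF convolved with dwell time, is inverted with the standard Richardson-Lucy iteration. The tool influence function, surface error and iteration count are illustrative assumptions, and the edge extension and denoising emphasized in the abstract are omitted.

```python
# Illustrative 1-D Richardson-Lucy solution of the CCOS dwell-time problem E = TIF (*) dwell.
import numpy as np

def richardson_lucy(error_map, tif, n_iter=200):
    """Estimate a non-negative dwell-time map such that dwell (*) tif ~ error_map."""
    error_map = np.clip(error_map, 1e-12, None)        # RL requires non-negative data
    dwell = np.full_like(error_map, error_map.mean())  # flat initial guess
    tif_flipped = tif[::-1]
    for _ in range(n_iter):
        model = np.convolve(dwell, tif, mode="same")
        ratio = error_map / np.clip(model, 1e-12, None)
        dwell *= np.convolve(ratio, tif_flipped, mode="same")   # multiplicative RL update
    return dwell

x = np.linspace(-1, 1, 401)
tif = np.exp(-np.linspace(-3, 3, 61) ** 2)             # Gaussian-like tool influence function (assumed)
tif /= tif.sum()
surface_error = 1.0 + 0.5 * np.cos(4 * np.pi * x)      # synthetic non-negative surface error
dwell = richardson_lucy(surface_error, tif)
residual = surface_error - np.convolve(dwell, tif, mode="same")  # edge effects are ignored here
print("residual PV:", residual.max() - residual.min(), "RMS:", residual.std())
```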

  12. Simulation of an automatically-controlled STOL aircraft in a microwave landing system multipath environment

    NASA Technical Reports Server (NTRS)

    Toda, M.; Brown, S. C.; Burrous, C. N.

    1976-01-01

    The simulated response of a STOL aircraft to Microwave Landing System (MLS) multipath errors during final approach and touchdown is described. The MLS azimuth, elevation, and DME multipath errors were computed for a relatively severe multipath environment at Crissy Field, California, utilizing an MLS multipath simulation at MIT Lincoln Laboratory. A NASA/Ames six-degree-of-freedom simulation of an automatically-controlled deHavilland C-8A STOL aircraft was used to determine the response to these errors. The results show that the aircraft response to all of the Crissy Field MLS multipath errors was small. The small MLS azimuth and elevation multipath errors did not result in any discernible aircraft motion, and the aircraft response to the relatively large (200-ft (61-m) peak) DME multipath was noticeable but small.

  13. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system, the difficulty of which lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent algorithm (SPGD) to correct the co-phase error. Compared with current methods, the SPGD method avoids having to detect the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality based on a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced. An adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
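
    The SPGD update itself is compact enough to sketch: each iteration applies a random parallel perturbation to the correction parameters, measures the change in a performance metric, and steps against it. The quadratic metric below is a stand-in for an image-quality metric, and the gain and disturbance amplitude are the two key parameters discussed above.

```python
# Illustrative SPGD loop for piston/tilt co-phase correction (toy metric, assumed parameters).
import numpy as np

rng = np.random.default_rng(2)

def metric(u, true_phase):
    # Stand-in performance metric: residual co-phase error after applying correction u (lower is better).
    # A real system would evaluate an image-sharpness metric instead.
    return np.sum((true_phase - u) ** 2)

true_phase = np.array([0.8, -0.3, 0.5])   # unknown piston/tilt errors (illustrative values)
u = np.zeros(3)                           # correction currently applied by the actuators
gain, amplitude = 0.5, 0.1                # the two key SPGD parameters

for _ in range(1000):
    delta = amplitude * rng.choice([-1.0, 1.0], size=u.shape)        # random parallel perturbation
    dJ = metric(u + delta, true_phase) - metric(u - delta, true_phase)
    u -= gain * dJ * delta                # step against the measured metric increase
print("residual co-phase error:", np.abs(true_phase - u))
```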

  14. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    NASA Astrophysics Data System (ADS)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-01

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  15. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE PAGES

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-21

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. As applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Finally, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  16. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. As applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Finally, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  17. Writing errors as a result of frontal dysfunction in Japanese patients with amyotrophic lateral sclerosis.

    PubMed

    Tsuji-Akimoto, Sachiko; Hamada, Shinsuke; Yabe, Ichiro; Tamura, Itaru; Otsuki, Mika; Kobashi, Syoji; Sasaki, Hidenao

    2010-12-01

    Loss of communication is a critical problem for advanced amyotrophic lateral sclerosis (ALS) patients. This loss of communication is mainly caused by severe dysarthria and disability of the dominant hand. However, reports show that about 50% of ALS patients have mild cognitive dysfunction, and there are a considerable number of case reports on Japanese ALS patients with agraphia. To clarify writing disabilities in non-demented ALS patients, 18 non-demented ALS patients and 16 controls without neurological disorders were examined for frontal cognitive function and writing ability. To assess writing errors statistically, we scored composition ability with an original writing error index (WEI). The ALS and control groups did not differ significantly with regard to age, years of education, or general cognitive level. Two patients could not write a letter because of disability of the dominant hand. The WEI and the results of picture arrangement tests indicated significant impairment in the ALS patients. Auditory comprehension (Western Aphasia Battery; WAB IIC) and kanji dictation also showed mild impairment. Patients' writing errors consisted of both syntactic and letter-writing mistakes. Omission, substitution, displacement, and inappropriate placement of the phonic marks of kana were observed; these features have often been reported in Japanese patients with agraphia resulting from a frontal lobe lesion. The most frequent type of error was an omission of kana; the next most common was a missing subject. Writing errors might be a specific deficit in some non-demented ALS patients.

  18. Evidence of Selection against Complex Mitotic-Origin Aneuploidy during Preimplantation Development

    PubMed Central

    McCoy, Rajiv C.; Demko, Zachary P.; Ryan, Allison; Banjevic, Milena; Hill, Matthew; Sigurjonsson, Styrmir; Rabinowitz, Matthew; Petrov, Dmitri A.

    2015-01-01

    Whole-chromosome imbalances affect over half of early human embryos and are the leading cause of pregnancy loss. While these errors frequently arise in oocyte meiosis, many such whole-chromosome abnormalities affecting cleavage-stage embryos are the result of chromosome missegregation occurring during the initial mitotic cell divisions. The first wave of zygotic genome activation at the 4–8 cell stage results in the arrest of a large proportion of embryos, the vast majority of which contain whole-chromosome abnormalities. Thus, the full spectrum of meiotic and mitotic errors can only be detected by sampling after the initial cell divisions, but prior to this selective filter. Here, we apply 24-chromosome preimplantation genetic screening (PGS) to 28,052 single-cell day-3 blastomere biopsies and 18,387 multi-cell day-5 trophectoderm biopsies from 6,366 in vitro fertilization (IVF) cycles. We precisely characterize the rates and patterns of whole-chromosome abnormalities at each developmental stage and distinguish errors of meiotic and mitotic origin without embryo disaggregation, based on informative chromosomal signatures. We show that mitotic errors frequently involve multiple chromosome losses that are not biased toward maternal or paternal homologs. This outcome is characteristic of spindle abnormalities and chaotic cell division detected in previous studies. In contrast to meiotic errors, our data also show that mitotic errors are not significantly associated with maternal age. PGS patients referred due to previous IVF failure had elevated rates of mitotic error, while patients referred due to recurrent pregnancy loss had elevated rates of meiotic error, controlling for maternal age. These results support the conclusion that mitotic error is the predominant mechanism contributing to pregnancy losses occurring prior to blastocyst formation. This high-resolution view of the full spectrum of whole-chromosome abnormalities affecting early embryos provides insight into the cytogenetic mechanisms underlying their formation and the consequences for human fertility. PMID:26491874

  19. Reaching nearby sources: comparison between real and virtual sound and visual targets

    PubMed Central

    Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.

    2014-01-01

    Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855

  20. Updating of aversive memories after temporal error detection is differentially modulated by mTOR across development

    PubMed Central

    Tallot, Lucille; Diaz-Mataix, Lorenzo; Perry, Rosemarie E.; Wood, Kira; LeDoux, Joseph E.; Mouly, Anne-Marie; Sullivan, Regina M.; Doyère, Valérie

    2017-01-01

    The updating of a memory is triggered whenever it is reactivated and a mismatch from what is expected (i.e., prediction error) is detected, a process that can be unraveled through the memory's sensitivity to protein synthesis inhibitors (i.e., reconsolidation). As noted in previous studies, in Pavlovian threat/aversive conditioning in adult rats, prediction error detection and its associated protein synthesis-dependent reconsolidation can be triggered by reactivating the memory with the conditioned stimulus (CS), but without the unconditioned stimulus (US), or by presenting a CS–US pairing with a different CS–US interval than during the initial learning. Whether similar mechanisms underlie memory updating in the young is not known. Using similar paradigms with rapamycin (an mTORC1 inhibitor), we show that preweaning rats (PN18–20) do form a long-term memory of the CS–US interval, and detect a 10-sec versus 30-sec temporal prediction error. However, the resulting updating/reconsolidation processes become adult-like after adolescence (PN30–40). Our results thus show that while temporal prediction error detection exists in preweaning rats, specific infant-type mechanisms are at play for associative learning and memory. PMID:28202715

  1. Measurement-device-independent quantum key distribution with source state errors and statistical fluctuation

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2017-03-01

    We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely only on the range of a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.

  2. On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A novel general approach to round-off error analysis using error complexity concepts is described. The approach is applied to the analysis of the Gaussian elimination and Gauss-Jordan schemes for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
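
    For readers unfamiliar with the two schemes being compared, a compact sketch of both (without pivoting, assuming a well-conditioned matrix) is given below; it illustrates the algorithms only, not the error complexity analysis itself.

```python
# Illustrative implementations of Gaussian elimination and Gauss-Jordan reduction (no pivoting).
import numpy as np

def gaussian_elimination(A, b):
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n):                      # forward elimination below the pivot
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back-substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def gauss_jordan(A, b):
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n):
        pivot = A[k, k]
        A[k] /= pivot
        b[k] /= pivot
        for i in range(n):                  # eliminate above and below the pivot
            if i != k:
                factor = A[i, k]
                A[i] -= factor * A[k]
                b[i] -= factor * b[k]
    return b                                # b now holds the solution

A = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.0], [2.0, 0.0, 5.0]])
b = np.array([7.0, 4.0, 7.0])
print(gaussian_elimination(A, b), gauss_jordan(A, b), np.linalg.solve(A, b))
```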

  3. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.

    PubMed

    Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2017-06-30

    Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
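
    The error-simulation step can be sketched as follows, with synthetic data and a random forest standing in for the curated data sets and QSAR models actually used; the endpoint, descriptors and error ratios are illustrative assumptions.

```python
# Hedged sketch: randomize a fraction of activities and watch cross-validated performance degrade.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X, y = make_regression(n_samples=400, n_features=50, noise=5.0, random_state=0)  # synthetic "QSAR" set

for error_ratio in (0.0, 0.1, 0.2, 0.4):
    y_err = y.copy()
    idx = rng.choice(len(y), size=int(error_ratio * len(y)), replace=False)
    y_err[idx] = rng.permutation(y_err[idx])        # simulated experimental errors: scrambled activities
    r2 = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=0),
                         X, y_err, cv=5, scoring="r2").mean()
    print(f"error ratio {error_ratio:.0%}: five-fold CV R^2 = {r2:.2f}")
```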

  4. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do

    PubMed Central

    2017-01-01

    Numerous chemical data sets have become available for quantitative structure–activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting. PMID:28691113

  5. A method to estimate the effect of deformable image registration uncertainties on daily dose mapping

    PubMed Central

    Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin

    2012-01-01

    Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
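
    The sampling procedure can be sketched in a few lines: observed error maps are decomposed with PCA, the decorrelated mode coefficients are sampled independently, and synthetic spatially-correlated error maps are reconstructed. The toy error maps below are illustrative stand-ins for measured DVF error maps.

```python
# Illustrative PCA-based sampling of spatially-correlated registration error maps.
import numpy as np

rng = np.random.default_rng(4)

# Toy "observed" error maps: n_maps samples of a smooth correlated field with n_voxels each.
n_maps, n_voxels = 40, 500
basis = np.vstack([np.sin(np.linspace(0, (m + 1) * np.pi, n_voxels)) for m in range(5)])
observed = rng.normal(size=(n_maps, 5)) @ basis + 0.05 * rng.normal(size=(n_maps, n_voxels))

# PCA via SVD of the mean-centered maps.
mean_map = observed.mean(axis=0)
U, s, Vt = np.linalg.svd(observed - mean_map, full_matrices=False)
mode_std = s / np.sqrt(n_maps - 1)          # standard deviation of each decorrelated mode coefficient

def synthetic_error_map():
    """Sample each mode independently and reconstruct a spatially-correlated error map."""
    coeffs = rng.normal(0.0, mode_std)
    return mean_map + coeffs @ Vt

sample = synthetic_error_map()
print("synthetic map std:", sample.std(), "observed map std:", observed.std())
```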

  6. Precision and Error of Three-dimensional Phenotypic Measures Acquired from 3dMD Photogrammetric Images

    PubMed Central

    Aldridge, Kristina; Boyadjiev, Simeon A.; Capone, George T.; DeLeon, Valerie B.; Richtsmeier, Joan T.

    2015-01-01

    The genetic basis for complex phenotypes is currently of great interest for both clinical investigators and basic scientists. In order to acquire a thorough understanding of the translation from genotype to phenotype, highly precise measures of phenotypic variation are required. New technologies, such as 3D photogrammetry are being implemented in phenotypic studies due to their ability to collect data rapidly and non-invasively. Before these systems can be broadly implemented the error associated with data collected from images acquired using these technologies must be assessed. This study investigates the precision, error, and repeatability associated with anthropometric landmark coordinate data collected from 3D digital photogrammetric images acquired with the 3dMDface System. Precision, error due to the imaging system, error due to digitization of the images, and repeatability are assessed in a sample of children and adults (N=15). Results show that data collected from images with the 3dMDface System are highly repeatable and precise. The average error associated with the placement of landmarks is sub-millimeter; both the error due to digitization and to the imaging system are very low. The few measures showing a higher degree of error include those crossing the labial fissure, which are influenced by even subtle movement of the mandible. These results suggest that 3D anthropometric data collected using the 3dMDface System are highly reliable and therefore useful for evaluation of clinical dysmorphology and surgery, analyses of genotype-phenotype correlations, and inheritance of complex phenotypes. PMID:16158436

  7. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric errors and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is satisfactory overall. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast strongly depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating between systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
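
    The stage-error part of the propagation can be sketched with a simple Monte Carlo over a power-law rating curve; the curve parameters and the systematic/non-systematic error magnitudes below are illustrative assumptions, and the rating-curve parametric and structural uncertainties of the full Bayesian framework are not modelled here.

```python
# Hedged sketch: propagate systematic and non-systematic stage errors through a rating curve.
import numpy as np

rng = np.random.default_rng(5)
a, b, c = 30.0, 0.2, 1.6                    # rating-curve parameters Q = a*(h-b)^c (assumed values)
stage = np.array([0.8, 1.2, 1.7, 2.5])      # observed stage series in metres (illustrative)

n_sim = 10_000
systematic = rng.normal(0.0, 0.01, size=(n_sim, 1))                 # gauge-calibration error, shared in time
non_systematic = rng.normal(0.0, 0.005, size=(n_sim, stage.size))   # resolution/wave noise, per reading
discharge = a * np.clip(stage + systematic + non_systematic - b, 0.0, None) ** c

q_median = np.median(discharge, axis=0)
q_lo, q_hi = np.percentile(discharge, [2.5, 97.5], axis=0)
for h, qm, lo, hi in zip(stage, q_median, q_lo, q_hi):
    print(f"h = {h:.1f} m: Q = {qm:.1f} [{lo:.1f}, {hi:.1f}] m^3/s")
```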

  8. a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that builds a Kd-tree, searches it with a k-nearest neighbor algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data, decreases memory consumption, and improves efficiency.
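
    A minimal sketch of the screening idea is given below using SciPy's KD-tree: points whose mean distance to their k nearest neighbours exceeds a global threshold are flagged as gross errors. The value of k and the threshold rule are illustrative choices, not the paper's settings.

```python
# Illustrative KD-tree / k-nearest-neighbour gross error screening for a point cloud.
import numpy as np
from scipy.spatial import cKDTree

def remove_gross_errors(points, k=8, n_sigma=3.0):
    """Keep points whose mean k-NN distance is within n_sigma of the global mean; return flagged indices."""
    tree = cKDTree(points)                       # build the KD-tree once
    dists, _ = tree.query(points, k=k + 1)       # k+1 because the nearest hit is the point itself
    mean_knn = dists[:, 1:].mean(axis=1)
    threshold = mean_knn.mean() + n_sigma * mean_knn.std()
    keep = mean_knn <= threshold
    return points[keep], np.flatnonzero(~keep)

rng = np.random.default_rng(6)
cloud = rng.uniform(0, 10, size=(5000, 3))       # dense "terrain" points (synthetic)
outliers = rng.uniform(-50, 60, size=(20, 3))    # sparse gross errors far from the cloud
all_points = np.vstack([cloud, outliers])
cleaned, flagged = remove_gross_errors(all_points)
print(f"flagged {flagged.size} of {all_points.shape[0]} points as gross errors")
```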

  9. Students' Errors in Solving the Permutation and Combination Problems Based on Problem Solving Steps of Polya

    ERIC Educational Resources Information Center

    Sukoriyanto; Nusantara, Toto; Subanji; Chandra, Tjang Daniel

    2016-01-01

    This article was written based on the results of a study evaluating students' errors in problem solving of permutation and combination in terms of problem solving steps according to Polya. Twenty-five students were asked to do four problems related to permutation and combination. The research results showed that the students still made mistakes in…

  10. Design Consideration and Performance of Networked Narrowband Waveforms for Tactical Communications

    DTIC Science & Technology

    2010-09-01

    Simulation results are presented for the four proposed CPM modes, with perfect acquisition parameters, for both coherent and noncoherent detection using an iterative receiver. [Recovered figure and table captions: Figure 1, bit error rate performance of various CPM modes with coherent (crosses) and noncoherent (diamonds) detection; Figure 3, the corresponding relationship (text truncated); Table 2, parameter summary.]

  11. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  12. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  13. Effects of sharing information on drug administration errors in pediatric wards: a pre–post intervention study

    PubMed Central

    Chua, Siew-Siang; Choo, Sim-Mei; Sulaiman, Che Zuraini; Omar, Asma; Thong, Meow-Keong

    2017-01-01

    Background and purpose Drug administration errors are more likely to reach the patient than other medication errors. The main aim of this study was to determine whether the sharing of information on drug administration errors among health care providers would reduce such problems. Patients and methods This study involved direct, undisguised observations of drug administrations in two pediatric wards of a major teaching hospital in Kuala Lumpur, Malaysia. This study consisted of two phases: Phase 1 (pre-intervention) and Phase 2 (post-intervention). Data were collected by two observers over a 40-day period in both Phase 1 and Phase 2 of the study. Both observers were pharmacy graduates: Observer 1 had just completed her undergraduate pharmacy degree, whereas Observer 2 was doing her one-year internship as a provisionally registered pharmacist in the hospital under study. A drug administration error was defined as a discrepancy between the drug regimen received by the patient and that intended by the prescriber, and also as drug administration procedures that did not follow standard hospital policies and procedures. Results from Phase 1 of the study were analyzed, presented and discussed with the ward staff before commencement of data collection in Phase 2. Results A total of 1,284 and 1,401 doses of drugs were administered in Phase 1 and Phase 2, respectively. The rate of drug administration errors reduced significantly from Phase 1 to Phase 2 (44.3% versus 28.6%, respectively; P<0.001). Logistic regression analysis showed that the adjusted odds of drug administration errors in Phase 1 of the study were almost three times that in Phase 2 (P<0.001). The most common types of errors were incorrect administration technique and incorrect drug preparation. Nasogastric and intravenous routes of drug administration contributed significantly to the rate of drug administration errors. Conclusion This study showed that sharing of the types of errors that had occurred was significantly associated with a reduction in drug administration errors. PMID:28356748

  14. Composite Interval Mapping Based on Lattice Design for Error Control May Increase Power of Quantitative Trait Locus Detection.

    PubMed

    He, Jianbo; Li, Jijie; Huang, Zhongwen; Zhao, Tuanjie; Xing, Guangnan; Gai, Junyi; Guan, Rongzhan

    2015-01-01

    Experimental error control is very important in quantitative trait locus (QTL) mapping. Although numerous statistical methods have been developed for QTL mapping, a QTL detection model based on an appropriate experimental design that emphasizes error control has not been developed. Lattice design is very suitable for experiments with large sample sizes, which are usually required for accurate mapping of quantitative traits. However, the lack of a QTL mapping method based on lattice design has meant that the arithmetic mean or adjusted mean of each line of observations in the lattice design had to be used as the response variable, resulting in low QTL detection power. As an improvement, we developed a QTL mapping method termed composite interval mapping based on lattice design (CIMLD). In the lattice design, experimental errors are decomposed into random errors and block-within-replication errors. Four levels of block-within-replication errors were simulated to show the power of QTL detection under different error controls. The simulation results showed that the arithmetic mean method, which is equivalent to a method under randomized complete block design (RCBD), was very sensitive to the size of the block variance: as the block variance increased, the power of QTL detection decreased from 51.3% to 9.4%. In contrast to the RCBD method, the power of CIMLD and the adjusted mean method did not change for different block variances. The CIMLD method showed 1.2- to 7.6-fold higher power of QTL detection than the arithmetic or adjusted mean methods. Our proposed method was applied to real soybean (Glycine max) data as an example, and 10 QTLs for biomass were identified that explained 65.87% of the phenotypic variation, while only three and two QTLs were identified by the arithmetic and adjusted mean methods, respectively.

  15. An empirical model for estimating solar radiation in the Algerian Sahara

    NASA Astrophysics Data System (ADS)

    Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous

    2018-05-01

    The present work aims to assess the empirical model R.sun, which allows us to evaluate the solar radiation fluxes on a horizontal plane under clear-sky conditions for the city of Adrar, Algeria (27°18 N and 0°11 W), and to compare them with the measurements taken at that site. The expected results of this comparison are important for investment studies of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), the root mean square error (RMSE) and the coefficient of determination. The results show that for global radiation, the daily correlation coefficient is 0.9984, the mean absolute percentage error is 9.44%, the daily mean bias error is -7.94% and the daily root mean square error is 12.31%.
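    The statistical indicators quoted for the model validation (MBE, RMSE, MAPE and the correlation coefficient) can be computed as below. A minimal sketch with hypothetical arrays; the percentage forms are obtained by normalising with the mean of the measurements, which is one common convention and may differ from the paper's.

```python
import numpy as np

def validation_indicators(measured, estimated):
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    diff = estimated - measured
    mbe = diff.mean()                                # mean bias error
    rmse = np.sqrt((diff**2).mean())                 # root mean square error
    mape = np.mean(np.abs(diff / measured)) * 100.0  # mean absolute percentage error
    r = np.corrcoef(measured, estimated)[0, 1]       # correlation coefficient
    return {"MBE_%": 100.0 * mbe / measured.mean(),
            "RMSE_%": 100.0 * rmse / measured.mean(),
            "MAPE_%": mape,
            "R": r}

# Hypothetical daily global radiation values (Wh/m^2), not the Adrar data.
measured = [6100, 6400, 5900, 7000, 6800]
modelled = [5800, 6500, 5600, 6700, 7100]
print(validation_indicators(measured, modelled))
```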

  16. A Phonological Exploration of Oral Reading Errors.

    ERIC Educational Resources Information Center

    Moscicki, Eve K.; Tallal, Paula

    1981-01-01

    Presents a study exploring oral reading errors of normally developing readers to determine any developmental differences in learning phoneme-grapheme units; to discover if the grapheme representations of some phonemes are more difficult to read than others; and to replicate results reported by Fowler et al. Findings show most oral reading errors…

  17. Application of CCG Sensors to a High-Temperature Structure Subjected to Thermo-Mechanical Load.

    PubMed

    Xie, Weihua; Meng, Songhe; Jin, Hua; Du, Chong; Wang, Libin; Peng, Tao; Scarpa, Fabrizio; Xu, Chenghai

    2016-10-13

    This paper presents a simple methodology to perform a high-temperature coupled thermo-mechanical test using ultra-high temperature ceramic (UHTC) specimens equipped with chemical composition grating (CCG) sensors. The methodology also considers the presence of coupled loading within the response provided by the CCG sensors. The theoretical strain of the UHTC specimens calculated with this technique shows a maximum relative error of 2.15% between the analytical and experimental data. To further verify the validity of the results from the tests, a Finite Element (FE) model has been developed to simulate the temperature, stress and strain fields within the UHTC structure equipped with the CCG. The results show that the compressive stress exceeds the material strength at the bonding area, which causes the supporting structure to fail by fracture in the hot environment. The results related to the strain fields show that the relative error with respect to the experimental data decreases as the temperature increases. The relative error is less than 15% when the temperature is higher than 200 °C, and only 6.71% at 695 °C.

  18. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
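    The article's maximum composite likelihood estimator jointly infers θ, the growth rate and the error rate; reproducing it is beyond a short sketch. The toy below only illustrates the qualitative point that sequencing error inflates the apparent number of segregating sites: it applies a naive first-order correction (each error at an otherwise monomorphic site creates a spurious singleton) before computing Watterson's estimator. Sample size, sequence length and rates are hypothetical, and this is not the paper's MCLE.

```python
import numpy as np

def watterson_theta(S, n):
    """Watterson's estimator theta_W = S / a_n, with a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = np.sum(1.0 / np.arange(1, n))
    return S / a_n

def naive_error_corrected_theta(S_obs, n, L, eps):
    """Subtract the expected number of error-induced spurious segregating sites
    (~ n * L * eps for small eps) before applying Watterson's estimator."""
    S_corr = max(S_obs - n * L * eps, 0.0)
    return watterson_theta(S_corr, n)

n, L, eps = 100, 10_000, 1e-4   # sequences, sites, per-base error rate (made up)
S_obs = 180                     # observed segregating sites (made up)
print(watterson_theta(S_obs, n), naive_error_corrected_theta(S_obs, n, L, eps))
```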

  19. Exploiting data representation for fault tolerance

    DOE PAGES

    Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...

    2015-01-06

    Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
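    The premise, a single bit flip in an IEEE 754 number, is easy to reproduce: flipping a low mantissa bit barely changes a value near one, while flipping a high exponent bit makes the corruption enormous and obvious, which is why normalized dot-product inputs give errors that are either tiny or easily detected. A standalone sketch, not the paper's GMRES analysis.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0 = least significant mantissa bit, 63 = sign bit)
    in the IEEE 754 binary64 representation of x."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))[0]

x = 1.0
print(flip_bit(x, 0))    # mantissa LSB: 1.0000000000000002 (negligible error)
print(flip_bit(x, 51))   # mantissa MSB: 1.5 (moderate error)
print(flip_bit(x, 62))   # exponent MSB: inf (catastrophic but easy to detect)
```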

  20. Safety evaluation of driver cognitive failures and driving errors on right-turn filtering movement at signalized road intersections based on Fuzzy Cellular Automata (FCA) model.

    PubMed

    Chai, Chen; Wong, Yiik Diew; Wang, Xuesong

    2017-07-01

    This paper proposes a simulation-based approach to estimate the safety impact of driver cognitive failures and driving errors. Fuzzy Logic, which involves linguistic terms and uncertainty, is incorporated with a Cellular Automata model to simulate the decision-making process of right-turn filtering movement at signalized intersections. Simulation experiments are conducted to estimate the relationships of cognitive failures and driving errors with safety performance. Simulation results show that different types of cognitive failures have varied relationships with driving errors and safety performance. For right-turn filtering movement, cognitive failures are more likely to result in driving errors with a denser conflicting traffic stream. Moreover, different driving errors are found to have different safety impacts. The study serves to provide a novel approach to linguistically assess cognitions and replicate decision-making procedures of the individual driver. Compared to crash analysis, the proposed FCA model allows quantitative estimation of particular cognitive failures, and of the impact of cognitions on driving errors and safety performance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Q-adjusting technique applied to vertical deflections estimation in a single-axis rotation INS/GPS integrated system

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Wang, Xingshu; Wang, Jun; Dai, Dongkai; Xiong, Hao

    2016-10-01

    Previous studies have proved that the attitude error in a single-axis rotation INS/GPS integrated system tracks the high frequency component of the deflections of the vertical (DOV) with a fixed delay and tracking error. This paper analyses the influence of the nominal process noise covariance matrix Q on the tracking error as well as the response delay, and proposes a Q-adjusting technique to obtain an attitude error that tracks the DOV better. Simulation results show that different settings of Q lead to different response delays and tracking errors; there exists an optimal Q which leads to a minimum tracking error and a comparatively short response delay; and for systems with different accuracies, different Q-adjusting strategies should be adopted. In this way, the DOV estimation accuracy obtained by using the attitude error as the observation can be improved. According to the simulation results, the DOV estimation accuracy after using the Q-adjusting technique is improved by approximately 23% and 33%, respectively, compared to that of the Earth Model EGM2008 and the direct attitude difference method.
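    The role of the nominal process-noise covariance Q described here can be illustrated with a scalar Kalman filter tracking a slowly drifting signal (a stand-in for the DOV-driven attitude error): a very small Q makes the filter lag the signal, a very large Q passes through measurement noise, and an intermediate Q minimizes the tracking error. The model, noise levels and Q values below are illustrative assumptions, not the paper's INS/GPS system.

```python
import numpy as np

def track(truth, meas_noise_std, Q):
    """Scalar random-walk Kalman filter: x[k] = x[k-1] + w,  z[k] = x[k] + v."""
    rng = np.random.default_rng(0)
    R = meas_noise_std**2
    z = truth + rng.standard_normal(truth.size) * meas_noise_std
    x_hat, P, est = 0.0, 1.0, np.empty_like(truth)
    for k, zk in enumerate(z):
        P += Q                       # predict: state modelled as a random walk
        K = P / (P + R)              # Kalman gain
        x_hat += K * (zk - x_hat)    # measurement update
        P *= (1.0 - K)
        est[k] = x_hat
    return est

t = np.arange(10000)
truth = np.sin(2 * np.pi * t / 2000.0)   # slowly varying "DOV-like" signal
for Q in (1e-7, 1e-4, 1e-1):             # too small, intermediate, too large
    rmse = np.sqrt(np.mean((track(truth, 0.5, Q) - truth) ** 2))
    print(f"Q = {Q:g}:  tracking RMSE = {rmse:.3f}")
```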

  2. Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.

    PubMed

    Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

    2013-08-01

    Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.

  3. Performance Analysis of Amplify-and-Forward Relaying FSO/SC-QAM Systems over Weak Turbulence Channels and Pointing Error Impairments

    NASA Astrophysics Data System (ADS)

    Trung, Ha Duyen

    2017-12-01

    In this paper, the end-to-end performance of a free-space optical (FSO) communication system that combines Amplify-and-Forward (AF)-assisted, fixed-gain relaying with subcarrier quadrature amplitude modulation (SC-QAM) over weak atmospheric turbulence channels, modeled by a log-normal distribution with pointing error impairments, is studied. More specifically, unlike previous studies on AF relaying FSO communication systems that neglect pointing errors, the pointing error effect is studied by taking into account the influence of beamwidth, aperture size and jitter variance. In addition, these models are combined to analyze the joint effect of atmospheric turbulence and pointing errors on AF relaying FSO/SC-QAM systems. Finally, an analytical expression is derived to evaluate the average symbol error rate (ASER) performance of such systems. The numerical results show the impact of pointing errors on the performance of AF relaying FSO/SC-QAM systems and how proper values of aperture size and beamwidth can be used to improve that performance. Some analytical results are confirmed by Monte-Carlo simulations.
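    The ASER evaluation described here can be approximated numerically by averaging a standard conditional symbol-error-rate expression for square M-QAM over samples of a log-normal irradiance. The sketch below ignores the relaying gain and the pointing-error geometry and uses the nearest-neighbour SER approximation, so it is an assumption-laden illustration rather than the paper's closed-form analysis.

```python
import numpy as np
from scipy.stats import norm

def qam_ser(snr_linear, M=16):
    """Nearest-neighbour approximation of the symbol error rate of square M-QAM."""
    arg = np.sqrt(3.0 * snr_linear / (M - 1))
    return 4.0 * (1.0 - 1.0 / np.sqrt(M)) * norm.sf(arg)   # norm.sf is the Q-function

def aser_lognormal(mean_snr_db, sigma_x=0.2, M=16, n_samples=200_000, seed=0):
    """Average SER over log-normal irradiance I = exp(2X),
    X ~ N(-sigma_x**2, sigma_x**2), so that E[I] = 1 (weak-turbulence model)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(-sigma_x**2, sigma_x, n_samples)
    I = np.exp(2.0 * X)
    snr = 10 ** (mean_snr_db / 10.0) * I**2   # intensity-modulated electrical SNR ~ I^2
    return qam_ser(snr, M).mean()

for snr_db in (10, 15, 20):
    print(snr_db, "dB ->", aser_lognormal(snr_db))
```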

  4. [The quality of medication orders--can it be improved?].

    PubMed

    Vaknin, Ofra; Wingart-Emerel, Efrat; Stern, Zvi

    2003-07-01

    Medication errors are a common cause of morbidity and mortality among patients. Medication administration in hospitals is a complicated procedure with the possibility of error at each step. Errors are most commonly found at the prescription and transcription stages, although it is known that most errors can easily be avoided through strict adherence to standardized procedure guidelines. In an examination of medication errors reported in the hospital in the year 2000, we found that 38% were reported to have resulted from transcription errors. In the year 2001, the hospital initiated a program designed to identify faulty order processes in an effort to improve the quality and effectiveness of the medication administration process. As part of this program, it was decided to check and evaluate the quality of the written doctors' orders and the transcription of those orders by the nursing staff, in various hospital units. The study was conducted using a questionnaire which checked compliance with hospital standards with regard to the medication administration process, as applied to 6 units over the course of 8 weeks. Results of the survey showed poor compliance with guidelines on the part of doctors and nurses. Only 18% of doctors' orders in the study and 37% of the nurses' transcriptions were written according to standards. The Emergency Department showed an even lower compliance, with only 3% of doctors' orders and 25% of nurses' transcriptions complying with standards. As a result of this study, it was decided to initiate an intensive in-service teaching course to refresh the staff's knowledge of medication administration guidelines. In the future it is recommended that hand-written orders be replaced by computerized orders in an effort to limit the chance of error.

  5. [Improving blood safety: errors management in transfusion medicine].

    PubMed

    Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana

    2014-01-01

    The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to the type, frequency and part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. The error reporting system has an important role in error management and in the reduction of the transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. Errors in transfusion medicine can largely be avoided, and prevention is cost-effective, systematic and applicable.

  6. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kry, S; Dromgoole, L; Alvarez, P

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly in areas highlighted herein that show a tendency for errors.

  7. Suppression of Striatal Prediction Errors by the Prefrontal Cortex in Placebo Hypoalgesia.

    PubMed

    Schenk, Lieven A; Sprenger, Christian; Onat, Selim; Colloca, Luana; Büchel, Christian

    2017-10-04

    Classical learning theories predict extinction after the discontinuation of reinforcement through prediction errors. However, placebo hypoalgesia, although mediated by associative learning, has been shown to be resistant to extinction. We tested the hypothesis that this is mediated by the suppression of prediction error processing through the prefrontal cortex (PFC). We compared pain modulation through treatment cues (placebo hypoalgesia, treatment context) with pain modulation through stimulus intensity cues (stimulus context) during functional magnetic resonance imaging in 48 male and female healthy volunteers. During acquisition, our data show that expectations are correctly learned and that this is associated with prediction error signals in the ventral striatum (VS) in both contexts. However, in the nonreinforced test phase, pain modulation and expectations of pain relief persisted to a larger degree in the treatment context, indicating that the expectations were not correctly updated in the treatment context. Consistently, we observed significantly stronger neural prediction error signals in the VS in the stimulus context compared with the treatment context. A connectivity analysis revealed negative coupling between the anterior PFC and the VS in the treatment context, suggesting that the PFC can suppress the expression of prediction errors in the VS. Consistent with this, a participant's conceptual views and beliefs about treatments influenced the pain modulation only in the treatment context. Our results indicate that in placebo hypoalgesia contextual treatment information engages prefrontal conceptual processes, which can suppress prediction error processing in the VS and lead to reduced updating of treatment expectancies, resulting in less extinction of placebo hypoalgesia. SIGNIFICANCE STATEMENT In aversive and appetitive reinforcement learning, learned effects show extinction when reinforcement is discontinued. This is thought to be mediated by prediction errors (i.e., the difference between expectations and outcome). Although reinforcement learning has been central in explaining placebo hypoalgesia, placebo hypoalgesic effects show little extinction and persist after the discontinuation of reinforcement. Our results support the idea that conceptual treatment beliefs bias the neural processing of expectations in a treatment context compared with a more stimulus-driven processing of expectations with stimulus intensity cues. We provide evidence that this is associated with the suppression of prediction error processing in the ventral striatum by the prefrontal cortex. This provides a neural basis for persisting effects in reinforcement learning and placebo hypoalgesia. Copyright © 2017 the authors 0270-6474/17/379715-09$15.00/0.

  8. Monetary Incentives in Speeded Perceptual Decision: Effects of Penalizing Errors Versus Slow Responses

    PubMed Central

    Dambacher, Michael; Hübner, Ronald; Schlösser, Jan

    2011-01-01

    The influence of monetary incentives on performance has been widely investigated among various disciplines. While the results reveal positive incentive effects only under specific conditions, the exact nature, and the contribution of mediating factors are largely unexplored. The present study examined influences of payoff schemes as one of these factors. In particular, we manipulated penalties for errors and slow responses in a speeded categorization task. The data show improved performance for monetary over symbolic incentives when (a) penalties are higher for slow responses than for errors, and (b) neither slow responses nor errors are punished. Conversely, payoff schemes with stronger punishment for errors than for slow responses resulted in worse performance under monetary incentives. The findings suggest that an emphasis of speed is favorable for positive influences of monetary incentives, whereas an emphasis of accuracy under time pressure has the opposite effect. PMID:21980316

  9. The hypercorrection effect in younger and older adults.

    PubMed

    Eich, Teal S; Stern, Yaakov; Metcalfe, Janet

    2013-01-01

    The hypercorrection effect, which refers to the finding that errors committed with high confidence are more likely to be corrected than are low confidence errors, has been replicated many times, and with both young adults and children. In the present study, we contrasted older with younger adults. Participants answered general-information questions, made confidence ratings about their answers, were given corrective feedback, and then were retested on questions that they had gotten wrong. While younger adults showed the hypercorrection effect, older adults, despite higher overall accuracy on the general-information questions and excellent basic metacognitive ability, showed a diminished hypercorrection effect. Indeed, the correspondence between their confidence in their errors and the probability of correction was not significantly greater than zero, showing, for the first time, that a particular participant population is selectively impaired on this error correction task. These results potentially offer leverage both on the mechanisms underlying the hypercorrection effect and on reasons for older adults' memory impairments, as well as on memory functions that are spared.

  10. [Study on Differential Optical Absorption Spectroscopy Data Processing Based on Chirp-Z Transformation].

    PubMed

    Zheng, Hai-ming; Li, Guang-jie; Wu, Hao

    2015-06-01

    Differential optical absorption spectroscopy (DOAS) is a commonly used atmospheric pollution monitoring method. Denoising the monitored spectral data improves the inversion accuracy. Fourier transform filtering can effectively remove the noise in the spectral data, but the algorithm itself can introduce errors. In this paper, a chirp-z transform method is put forward. By locally refining the Fourier spectrum, it retains the denoising effect of the Fourier transform while compensating for the error of the algorithm, which further improves the inversion accuracy. The paper studies the retrieval of SO2 and NO2 concentrations. The results show that simple division causes larger errors and is not very stable, and the chirp-z transform is proved to be more accurate than the Fourier transform. Frequency spectrum analysis shows that the Fourier transform cannot resolve the distortion and weakening of the characteristic absorption spectrum, whereas the chirp-z transform is able to finely reconstruct a specific part of the frequency spectrum.
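    The local refinement of the Fourier spectrum can be reproduced with SciPy's chirp-z transform, which evaluates the spectrum on an arbitrarily fine grid over a narrow band instead of the fixed FFT grid. A minimal sketch assuming SciPy ≥ 1.8 (scipy.signal.czt); the signal, band edges and sampling rate are made up and are unrelated to the SO2/NO2 retrieval itself.

```python
import numpy as np
from scipy.signal import czt

fs = 1000.0                        # sampling rate (Hz), illustrative
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 102.3 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Zoom the spectrum onto the band [95, 110] Hz with 512 points:
f1, f2, m = 95.0, 110.0, 512
w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))   # ratio between successive evaluation points
a = np.exp(2j * np.pi * f1 / fs)                  # starting point on the unit circle
X_zoom = czt(x, m=m, w=w, a=a)
freqs = f1 + np.arange(m) * (f2 - f1) / m

print("peak found at %.2f Hz" % freqs[np.argmax(np.abs(X_zoom))])
# The plain FFT bin spacing here is fs/len(x) ~ 0.49 Hz; the CZT grid is ~ 0.03 Hz.
```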

  11. Quantifying the Contributions of Environmental Parameters to Ceres Surface Net Radiation Error in China

    NASA Astrophysics Data System (ADS)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer when cloud is abundant. For NLW, owing to the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. The total precipitable water has a weak influence on the Rn error even though the algorithm is highly sensitive to it. In order to improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be reduced.

  12. Research on error control and compensation in magnetorheological finishing.

    PubMed

    Dai, Yifan; Hu, Hao; Peng, Xiaoqiang; Wang, Jianmin; Shi, Feng

    2011-07-01

    Although magnetorheological finishing (MRF) is a deterministic finishing technology, the machining results always fall short of the simulated precision in the actual process, and the precision requirements cannot be met by a single treatment but only after several iterations. We investigate the reasons for this problem through simulations and experiments. By controlling and compensating for the chief errors in the manufacturing procedure, such as the removal function calculation error, the positioning error of the removal function, and the dynamic performance limitation of the CNC machine, the residual error convergence ratio (ratio of figure error before and after processing) in a single process is markedly increased, and higher figure precision is achieved. Finally, an improved technical process is presented based on this research, and a verification experiment is carried out on the experimental device we developed. The part is a circular plane mirror of fused silica, and the surface figure error is improved from the initial λ/5 [peak-to-valley (PV), λ=632.8 nm], λ/30 [root-mean-square (rms)] to the final λ/40 (PV), λ/330 (rms) in just one iteration of 4.4 min. The results show that a higher convergence ratio and processing precision can be obtained by adopting error control and compensation techniques in MRF.

  13. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning

    PubMed Central

    Deng, Zhongliang

    2018-01-01

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization. PMID:29361718
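    The ranging error mechanism described, an extra attenuation when the body blocks the link, can be illustrated with the common log-distance path-loss model: if the shadowing loss is not removed from the RSS before inverting the model, the estimated distance is biased upward. The sketch below is a generic model with made-up parameters (reference power, path-loss exponent, body loss); it is not the BP-BEC algorithm or its IMU-based detector.

```python
import numpy as np

P0, N_EXP, D0 = -45.0, 2.2, 1.0   # RSS at d0 (dBm), path-loss exponent, reference distance (m)
L_BODY = 8.0                      # assumed extra loss when the body shadows the link (dB)

def rss(d, body_shadowed):
    """Log-distance path-loss model with an optional body-shadowing loss."""
    return P0 - 10.0 * N_EXP * np.log10(d / D0) - (L_BODY if body_shadowed else 0.0)

def estimate_distance(rss_dbm, compensate_body=False):
    """Invert the log-distance model; optionally add back the body-shadowing loss."""
    if compensate_body:
        rss_dbm += L_BODY
    return D0 * 10 ** ((P0 - rss_dbm) / (10.0 * N_EXP))

true_d = 5.0
measured = rss(true_d, body_shadowed=True)
print("no compensation  :", estimate_distance(measured))          # overestimates (~11.6 m)
print("with compensation:", estimate_distance(measured, True))    # ~5.0 m
```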

  14. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning.

    PubMed

    Deng, Zhongliang; Fu, Xiao; Wang, Hanhua

    2018-01-20

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.

  15. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  16. ‘Why should I care?’ Challenging free will attenuates neural reaction to errors

    PubMed Central

    Pourtois, Gilles; Brass, Marcel

    2015-01-01

    Whether human beings have free will has been a philosophical question for centuries. The debate about free will has recently entered the public arena through mass media and newspaper articles commenting on scientific findings that leave little to no room for free will. Previous research has shown that encouraging such a deterministic perspective influences behavior, namely by promoting cursory and antisocial behavior. Here we propose that such behavioral changes may, at least partly, stem from a more basic neurocognitive process related to response monitoring, namely a reduced error detection mechanism. Our results show that the error-related negativity, a neural marker of error detection, was reduced in individuals led to disbelieve in free will. This finding shows that reducing the belief in free will has a specific impact on error detection mechanisms. More generally, it suggests that abstract beliefs about intentional control can influence basic and automatic processes related to action control. PMID:24795441

  17. Full-field stress determination in photoelasticity with phase shifting technique

    NASA Astrophysics Data System (ADS)

    Guo, Enhai; Liu, Yonggang; Han, Yongsheng; Arola, Dwayne; Zhang, Dongsheng

    2018-04-01

    Photoelasticity is an effective method for evaluating the stress and its spatial variations within a stressed body. In the present study, a method to determine the stress distribution by means of phase shifting and a modified shear-difference method is proposed. First, the orientation of the first principal stress and the retardation between the principal stresses are determined over the full field through phase shifting. Then, through bicubic interpolation and derivation of a modified shear-difference method, the internal stress is calculated starting from a point on a free boundary along its normal direction. A method to reduce the integration error in the shear-difference scheme is proposed and compared to the existing methods; the integration error is reduced when using theoretical photoelastic parameters to calculate the stress component with the same points. Results show that when the value of Δx/Δy approaches one, the error is at a minimum, and although the interpolation error is inevitable, it has limited influence on the accuracy of the result. Finally, examples are presented for determining the stresses in a circular plate and ring subjected to diametric loading. Results show that the proposed approach provides a complete solution for determining the full-field stresses in photoelastic models.

  18. A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.

    PubMed

    Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang

    2014-07-31

    In the last few years, rotary encoders based on two-dimensional complementary metal oxide semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed for contactless measurement of absolute angle. Various error factors influence the measuring accuracy, and they are difficult to locate after the encoder has been assembled. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational load. The simulation and experimental results show that this diagnosis method is feasible for quantifying the causes of the error and significantly reduces the number of iterations.

  19. Transition year labeling error characterization study. [Kansas, Minnesota, Montana, North Dakota, South Dakota, and Oklahoma

    NASA Technical Reports Server (NTRS)

    Clinton, N. J. (Principal Investigator)

    1980-01-01

    Labeling errors made in the large area crop inventory experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, North and South Dakota). The image interpretation was generally well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts who were following the interpretation procedures. The odd signatures, the largest error cause group, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated, showing the distribution of labels for all crops.

  20. [Refractive errors in patients with cerebral palsy].

    PubMed

    Mrugacz, Małgorzata; Bandzul, Krzysztof; Kułak, Wojciech; Poppe, Ewa; Jurowski, Piotr

    2013-04-01

    Ocular changes are common in patients with cerebral palsy (CP), occurring in about 50% of cases. The most common are refractive errors and strabismus. The aim of the paper was to assess the relationship between refractive errors and neurological pathologies in patients with selected types of CP. MATERIAL AND METHODS. The analysis covered refractive errors in patients within two groups of CP, diplegia spastica and tetraparesis, with nervous system pathologies taken into account. RESULTS. The study demonstrated correlations between refractive errors and the type of CP as well as its severity classified on the GMFCS scale. Refractive errors were more common in patients with tetraparesis than in those with diplegia spastica. Myopia and astigmatism were more common in the diplegia spastica group, whereas hyperopia was more common in tetraparesis.

  1. Theory and compensation method of axial magnetic error induced by axial magnetic field in a polarization-maintaining fiber optic gyro

    NASA Astrophysics Data System (ADS)

    Zhou, Yanru; Zhao, Yuxiang; Tian, Hui; Zhang, Dengwei; Huang, Tengchao; Miao, Lijun; Shu, Xiaowu; Che, Shuangliang; Liu, Cheng

    2016-12-01

    In an axial magnetic field (AMF), which is perpendicular to the plane of the fiber coil, a polarization-maintaining fiber optic gyro (PM-FOG) exhibits an axial magnetic error. This error is linearly related to the intensity of the AMF, the radius of the fiber coil, and the light wavelength, and is also influenced by the distribution of fiber twist. Once a PM-FOG has been manufactured, the error shows only a linear correlation with the AMF. A real-time compensation model is established to eliminate the error, and the experimental results show that the axial magnetic error of the PM-FOG is decreased from 5.83 to 0.09 deg/h in a 12 G AMF, an 18-dB suppression.

  2. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
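    The ranking step can be illustrated with a plain (crisp) TOPSIS calculation: normalize the decision matrix, weight it, measure each alternative's distance to the ideal and anti-ideal solutions, and rank by relative closeness. The fuzzy extension used in the paper replaces crisp scores with fuzzy numbers; the error factors, scores and weights below are entirely hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS. matrix: alternatives x criteria; benefit[j] is True when a
    larger score on criterion j is preferable."""
    M = np.asarray(matrix, dtype=float)
    norm = M / np.sqrt((M**2).sum(axis=0))            # vector normalization
    V = norm * np.asarray(weights, dtype=float)       # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                    # closeness coefficient

# Hypothetical error factors scored against four made-up criteria.
scores = [[7, 6, 3, 4],    # adverse physiological states
          [6, 5, 4, 3],    # physical/mental limitations
          [8, 7, 2, 6]]    # coordination, communication, and planning
closeness = topsis(scores, weights=[0.3, 0.3, 0.2, 0.2],
                   benefit=[True, True, False, True])
print(np.argsort(closeness)[::-1])   # factors ranked by closeness coefficient
```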

  3. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is very important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames and the corresponding error frames are compared for both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
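    The two objective quality measures named here are straightforward to compute with scikit-image; the sketch compares a reference frame against a concealed frame, both represented as 8-bit grayscale arrays. The frame contents are synthetic placeholders, and a concealment step would normally precede this comparison.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(288, 352), dtype=np.uint8)   # "original" frame
concealed = reference.copy()
concealed[100:116, 150:166] = 128        # a 16x16 block replaced by a flat estimate

psnr = peak_signal_noise_ratio(reference, concealed, data_range=255)
ssim = structural_similarity(reference, concealed, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```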

  4. Reduced error signalling in medication-naive children with ADHD: associations with behavioural variability and post-error adaptations

    PubMed Central

    Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom

    2016-01-01

    Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332

  5. Error Cost Escalation Through the Project Life Cycle

    NASA Technical Reports Server (NTRS)

    Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory

    2004-01-01

    It is well known that the costs to fix errors increase as the project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3 - 8 units; at the manufacturing/build phase, the cost to fix the error is 7 - 16 units; at the integration and test phase, the cost to fix the error becomes 21 - 78 units; and at the operations phase, the cost to fix the requirements error ranges from 29 units to more than 1500 units.

  6. "Fragment errors" in deep dysgraphia: further support for a lexical hypothesis.

    PubMed

    Bormann, Tobias; Wallesch, Claus-W; Blanken, Gerhard

    2008-07-01

    In addition to various lexical errors, the writing of patients with deep dysgraphia may include a large number of segmental spelling errors, which increase towards the end of the word. Frequently, these errors involve deletion of two or more letters resulting in so-called "fragment errors". Different positions have been brought forward regarding their origin, including rapid decay of activation in the graphemic buffer and an impairment of more central (i.e., lexical or semantic) processing. We present data from a patient (M.D.) with deep dysgraphia who showed an increase of segmental spelling errors towards the end of the word. Several tasks were carried out to explore M.D.'s underlying functional impairment. Errors affected word-final positions in tasks like backward spelling and fragment completion. In a delayed copying task, length of the delay had no influence. In addition, when asked to recall three serially presented letters, a task which had not been carried out before, M.D. exhibited a preference for the first and the third letter and poor performance for the second letter. M.D.'s performance on these tasks contradicts the rapid decay account and instead supports a lexical-semantic account of segmental errors in deep dysgraphia. In addition, the results fit well with an implemented computational model of deep dysgraphia and segmental spelling errors.

  7. Effects of Optical Combiner and IPD Change for Convergence on Near-Field Depth Perception in an Optical See-Through HMD.

    PubMed

    Lee, Sangyoon; Hu, Xinda; Hua, Hong

    2016-05-01

    Many error sources have been explored with regard to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first source of error arises from the difference in refraction for virtual and see-through optical paths caused by an optical combiner, which OST-HMDs require. The second arises from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for these two types of errors. Furthermore, we investigate their effectiveness through an experiment comparing the conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven different near-field distances (40∼200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, the results demonstrated that the near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.

  8. Radial orbit error reduction and sea surface topography determination using satellite altimetry

    NASA Technical Reports Server (NTRS)

    Engelis, Theodossios

    1987-01-01

    A method is presented for satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The radial orbit error is modeled using linearized Lagrangian perturbation theory. Secular and second order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.

  9. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
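    The off-line learning rule described, updating the command with the previous cycle's error and error rate, corresponds to a PD-type iterative learning control (ILC) update, u_{k+1}(t) = u_k(t) + kp*e_k(t) + kd*de_k/dt. The discretized sketch below applies it to a simple second-order servo model; the plant parameters and gains are illustrative assumptions, not those of the PUMA 560 study.

```python
import numpy as np

def servo_response(u, dt=0.01, wn=8.0, zeta=0.6):
    """Second-order servo y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u, Euler-integrated."""
    y = np.zeros_like(u)
    yd = 0.0
    for i in range(1, len(u)):
        ydd = wn**2 * (u[i - 1] - y[i - 1]) - 2 * zeta * wn * yd
        yd += ydd * dt
        y[i] = y[i - 1] + yd * dt
    return y

dt = 0.01
t = np.arange(0, 2, dt)
reference = np.sin(np.pi * t) ** 2          # desired trajectory, repeated every cycle
u = reference.copy()                        # initial command = reference

kp, kd = 0.8, 0.05
for cycle in range(6):
    y = servo_response(u, dt)
    e = reference - y
    print(f"cycle {cycle}: max |error| = {np.abs(e).max():.4f}")
    u = u + kp * e + kd * np.gradient(e, dt)   # PD-type ILC command update
```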

  10. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback

    PubMed Central

    Lee, Jackson C.; Mittelman, Talia; Stepp, Cara E.; Bohland, Jason W.

    2017-01-01

    Purpose Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Method Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. Results New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. Conclusions This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. Supplemental Material https://doi.org/10.23641/asha.5103067 PMID:28655038

  11. Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.

    2018-03-01

    Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end inhalation and the end exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper investigates the sensitivity of the relative Jacobian error to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random smooth biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors averaging 0.53 mm may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system for all subjects are positively correlated (close to +1), i.e. regions with high sensitivity have more error in the Jacobian determinant on average.
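    The quantity being analysed, the Jacobian determinant of the registration transform, can be computed voxel-wise from a displacement field as det(I + grad u); values above 1 indicate local expansion (inhalation) and below 1 local contraction. The sketch below uses finite differences on a synthetic field with unit voxel spacing; it is not the paper's registration or its sensitivity matrix.

```python
import numpy as np

def jacobian_determinant(u):
    """u: displacement field of shape (3, nx, ny, nz) in voxel units.
    Returns det(I + du/dx) at every voxel."""
    grads = np.array([np.gradient(u[i]) for i in range(3)])  # grads[i][j] = du_i/dx_j
    J = np.eye(3)[:, :, None, None, None] + grads             # shape (3, 3, nx, ny, nz)
    J = np.moveaxis(J, (0, 1), (-2, -1))                       # shape (nx, ny, nz, 3, 3)
    return np.linalg.det(J)

# Synthetic example: a uniform 1% expansion along z should give det ~ 1.01.
nx = ny = nz = 32
z = np.arange(nz)
u = np.zeros((3, nx, ny, nz))
u[2] = 0.01 * z                        # u_z = 0.01 * z  =>  du_z/dz = 0.01
detJ = jacobian_determinant(u)
print(detJ.mean())                     # ~1.01
```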

  12. Evaluating the Effective Factors for Reporting Medical Errors among Midwives Working at Teaching Hospitals Affiliated to Isfahan University of Medical Sciences.

    PubMed

    Khorasani, Fahimeh; Beigi, Marjan

    2017-01-01

    Recently, the hospital evaluation and accreditation system has placed special emphasis on reporting malpractice and sharing errors or lessons learnt from errors, but because a systematic approach to solving problems within the same system has not been promoted, this issue has remained unattended. This study was conducted to determine the effective factors for reporting medical errors among midwives. This project was a descriptive cross-sectional observational study. Data gathering tools were a standard checklist and two researcher-made questionnaires. Sampling was conducted from all the midwives who worked at teaching hospitals affiliated to Isfahan University of Medical Sciences through the census method (convenient) and lasted for 3 months. Data were analyzed using descriptive and inferential statistics through SPSS 16. Results showed that 79.1% of the staff reported errors and that the highest rate of errors was in the process of patients' tests. In this study, the mean score of midwives' knowledge about errors was 79.1 and the mean score of their attitude toward reporting errors was 70.4. There was a direct relation between the midwifery staff's knowledge and attitude scores and error reporting. Based on the results of this study on the appropriate knowledge and attitude of the midwifery staff regarding errors and reporting them, it is recommended to strengthen the system for handling errors and hospital risks.

  13. Error-compensation model for simultaneous measurement of five degrees of freedom motion errors of a rotary axis

    NASA Astrophysics Data System (ADS)

    Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin

    2018-07-01

    This paper introduces an error-compensation model for our measurement method to measure five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in a matrix form using the homogeneous coordinate transformation theory. The influences of the installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are significantly suppressed by more than 50% after compensation. The repeatability experiments of five degrees of freedom motion errors and the comparison experiments of two degrees of freedom motion errors of an indexing table were performed by our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error ε_z and the tilt motion error around the Y axis ε_y are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δ_y and δ_z, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis ε_x is 3.8″.
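
    As a rough illustration of the homogeneous coordinate transformation idea mentioned above, the sketch below composes an ideal rotation about the axis with a small-angle error transform built from five assumed motion-error values; the first-order approximation and all numerical values are illustrative, not the paper's model.

        import numpy as np

        def small_motion_transform(eps_x, eps_y, eps_z, dx, dy, dz):
            """4x4 homogeneous transform for small rotations (rad) and translations,
            using the first-order small-angle approximation common in error modelling."""
            T = np.eye(4)
            T[:3, :3] = [[1, -eps_z, eps_y],
                         [eps_z, 1, -eps_x],
                         [-eps_y, eps_x, 1]]
            T[:3, 3] = [dx, dy, dz]
            return T

        def rotation_z(theta):
            c, s = np.cos(theta), np.sin(theta)
            T = np.eye(4)
            T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
            return T

        # actual pose = ideal rotation about Z composed with the motion-error transform
        # (illustrative error values; angles in radians, translations in arbitrary length units)
        ideal = rotation_z(np.deg2rad(30.0))
        error = small_motion_transform(*np.deg2rad([1e-3, 2e-3, 1.5e-3]), 1e-3, 0.5e-3, 0.2e-3)
        actual = ideal @ error
        print(actual - ideal)   # deviation attributable to the five motion errors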

  14. Survey and Method for Determination of Trajectory Predictor Requirements

    NASA Technical Reports Server (NTRS)

    Rentas, Tamika L.; Green, Steven M.; Cate, Karen Tung

    2009-01-01

    A survey of air-traffic-management researchers, representing a broad range of automation applications, was conducted to document trajectory-predictor requirements for future decision-support systems. Results indicated that the researchers were unable to articulate a basic set of trajectory-prediction requirements for their automation concepts. Survey responses showed the need to establish a process to help developers determine the trajectory-predictor-performance requirements for their concepts. Two methods for determining trajectory-predictor requirements are introduced. A fast-time simulation method is discussed that captures the sensitivity of a concept to the performance of its trajectory-prediction capability. A characterization method is proposed to provide quicker, yet less precise results, based on analysis and simulation to characterize the trajectory-prediction errors associated with key modeling options for a specific concept. Concept developers can then identify the relative sizes of errors associated with key modeling options, and qualitatively determine which options lead to significant errors. The characterization method is demonstrated for a case study involving future airport surface traffic management automation. Of the top four sources of error, results indicated that the error associated with accelerations to and from turn speeds was unacceptable, the error associated with the turn path model was acceptable, and the error associated with taxi-speed estimation was of concern and needed a higher fidelity concept simulation to obtain a more precise result.

  15. Influence of Tooth Spacing Error on Gears With and Without Profile Modifications

    NASA Technical Reports Server (NTRS)

    Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.

    2000-01-01

    A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modification considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.

  16. Crosslinking EEG time-frequency decomposition and fMRI in error monitoring.

    PubMed

    Hoffmann, Sven; Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian

    2014-03-01

    Recent studies implicate a common response monitoring system, being active during erroneous and correct responses. Converging evidence from time-frequency decompositions of the response-related ERP revealed that evoked theta activity at fronto-central electrode positions differentiates correct from erroneous responses in simple tasks, but also in more complex tasks. However, up to now it is unclear how different electrophysiological parameters of error processing, especially at the level of neural oscillations are related, or predictive for BOLD signal changes reflecting error processing at a functional-neuroanatomical level. The present study aims to provide crosslinks between time domain information, time-frequency information, MRI BOLD signal and behavioral parameters in a task examining error monitoring due to mistakes in a mental rotation task. The results show that BOLD signal changes reflecting error processing on a functional-neuroanatomical level are best predicted by evoked oscillations in the theta frequency band. Although the fMRI results in this study account for an involvement of the anterior cingulate cortex, middle frontal gyrus, and the Insula in error processing, the correlation of evoked oscillations and BOLD signal was restricted to a coupling of evoked theta and anterior cingulate cortex BOLD activity. The current results indicate that although there is a distributed functional-neuroanatomical network mediating error processing, only distinct parts of this network seem to modulate electrophysiological properties of error monitoring.

  17. The Role of Multimodel Combination in Improving Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Li, W.

    2008-12-01

    Model errors are an inevitable part of any prediction exercise. One approach currently gaining attention for reducing model errors is to optimally combine multiple models to develop improved predictions. The rationale behind this approach lies primarily on the premise that optimal weights can be derived for each model so that the resulting multimodel predictions will have improved predictability. In this study, we present a new approach to combine multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated by two different objective functions, to develop multimodel streamflow predictions. The performance of multimodel predictions is compared with individual model predictions using correlation, root mean square error and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions the multimodel predictions result in improved predictions, we evaluate the proposed algorithm by testing it against streamflow generated from a known model ('abcd' model or VIC model) with errors being homoscedastic or heteroscedastic. Results from the study show that streamflow simulated from individual models performed better than the multimodel when there was almost no model error. Under increased model error, the multimodel consistently performed better than the single model prediction in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins from NC as well as in two arid basins from Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach better predicts the observed streamflow in comparison to the single-model predictions.
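
    The sketch below illustrates two of the performance measures named in the abstract (RMSE and the Nash-Sutcliffe coefficient) together with a very simple inverse-MSE weighting of two models on synthetic data; the weighting is only a stand-in for, not a reproduction of, the authors' predictor-state-contingent combination.

        import numpy as np

        def nash_sutcliffe(obs, sim):
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def rmse(obs, sim):
            return np.sqrt(np.mean((obs - sim) ** 2))

        def combine(models_train, obs_train, models_test):
            """Weight each model by its inverse training MSE (an illustrative simplification)."""
            mse = np.array([np.mean((obs_train - m) ** 2) for m in models_train])
            w = (1.0 / mse) / np.sum(1.0 / mse)
            return np.tensordot(w, models_test, axes=1)

        rng = np.random.default_rng(1)
        truth = 10 + 5 * np.sin(np.linspace(0, 6, 200))          # synthetic "observed" streamflow
        m1 = truth + rng.normal(0.0, 1.5, 200)                   # two imperfect model simulations
        m2 = truth + rng.normal(0.5, 2.5, 200)
        multi = combine([m1[:100], m2[:100]], truth[:100], [m1[100:], m2[100:]])
        print(rmse(truth[100:], multi), nash_sutcliffe(truth[100:], multi))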

  18. GOSAT CO2 retrieval results using TANSO-CAI aerosol information over East Asia

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, W.; Jung, Y.; Lee, S.; Kim, J.; Lee, H.; Boesch, H.; Goo, T. Y.

    2015-12-01

    In satellite remote sensing of CO2, incorrect aerosol information can induce large errors, as previous studies have suggested. Many factors, such as aerosol type, the wavelength dependence of AOD, and the aerosol polarization effect, have been the main error sources. Because of these aerosol effects, a large number of retrieved data are screened out in quality control, or retrieval errors tend to increase if they are not, especially in East Asia where aerosol concentrations are fairly high. To reduce these aerosol-induced errors, a CO2 retrieval algorithm using simultaneous TANSO-CAI aerosol information is developed. This algorithm adopts AOD and aerosol type information from the CAI aerosol retrieval algorithm as a priori information. The CO2 retrieval algorithm is based on the optimal estimation method and VLIDORT, a vector discrete ordinate radiative transfer model. The CO2 algorithm, developed with various state vectors to find accurate CO2 concentrations, shows reasonable results when compared with other datasets. This study concentrates on the validation of the retrieved results against ground-based TCCON measurements in East Asia and the comparison with previous retrievals from ACOS, NIES, and UoL. Although the retrieved CO2 concentration is lower than previous results by a few ppm, it shows a similar trend and high correlation with previous results. Retrieved data and TCCON measurements are compared at three stations, Tsukuba, Saga, and Anmyeondo, in East Asia, with collocation criteria of ±2° in latitude/longitude and ±1 hour of GOSAT passing time. The compared results also show a similar trend with good correlation. Based on the TCCON comparison results, a bias correction equation is calculated and applied to the East Asia data.

  19. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
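
    A minimal sketch of the kind of Monte Carlo uncertainty propagation described above, using invented inputs and judgment-based distributions (a log-normal case count and a triangular under-reporting factor) rather than the paper's actual foodborne-illness inputs.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # Hypothetical inputs with judgment-based (not sampling-based) uncertainty:
        # a reported case count known only approximately, and an under-reporting multiplier.
        reported_cases = rng.lognormal(mean=np.log(50_000), sigma=0.2, size=n)
        underreport_factor = rng.triangular(left=5, mode=10, right=20, size=n)

        total_incidence = reported_cases * underreport_factor

        lo, med, hi = np.percentile(total_incidence, [2.5, 50, 97.5])
        print(f"median {med:,.0f}, 95% uncertainty interval {lo:,.0f} to {hi:,.0f}")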

  20. Efficient error correction for next-generation sequencing of viral amplicons

    PubMed Central

    2012-01-01

    Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430

  1. Student Distractor Choices on the Mathematics Virginia Standards of Learning Middle School Assessments

    ERIC Educational Resources Information Center

    Lewis, Virginia Vimpeny

    2011-01-01

    Number Concepts; Measurement; Geometry; Probability; Statistics; and Patterns, Functions and Algebra. Procedural Errors were further categorized into the following content categories: Computation; Measurement; Statistics; and Patterns, Functions, and Algebra. The results of the analysis showed the main sources of error for 6th, 7th, and 8th…

  2. Medial-Frontal Stimulation Enhances Learning in Schizophrenia by Restoring Prediction Error Signaling.

    PubMed

    Reinhart, Robert M G; Zhu, Julia; Park, Sohee; Woodman, Geoffrey F

    2015-09-02

    Posterror learning, associated with medial-frontal cortical recruitment in healthy subjects, is compromised in neuropsychiatric disorders. Here we report novel evidence for the mechanisms underlying learning dysfunctions in schizophrenia. We show that, by noninvasively passing direct current through human medial-frontal cortex, we could enhance the event-related potential related to learning from mistakes (i.e., the error-related negativity), a putative index of prediction error signaling in the brain. Following this causal manipulation of brain activity, the patients learned a new task at a rate that was indistinguishable from healthy individuals. Moreover, the severity of delusions interacted with the efficacy of the stimulation to improve learning. Our results demonstrate a causal link between disrupted prediction error signaling and inefficient learning in schizophrenia. These findings also demonstrate the feasibility of nonpharmacological interventions to address cognitive deficits in neuropsychiatric disorders. When there is a difference between what we expect to happen and what we actually experience, our brains generate a prediction error signal, so that we can map stimuli to responses and predict outcomes accurately. Theories of schizophrenia implicate abnormal prediction error signaling in the cognitive deficits of the disorder. Here, we combine noninvasive brain stimulation with large-scale electrophysiological recordings to establish a causal link between faulty prediction error signaling and learning deficits in schizophrenia. We show that it is possible to improve learning rate, as well as the neural signature of prediction error signaling, in patients to a level quantitatively indistinguishable from that of healthy subjects. The results provide mechanistic insight into schizophrenia pathophysiology and suggest a future therapy for this condition. Copyright © 2015 the authors 0270-6474/15/3512232-09$15.00/0.

  3. The Frame Constraint on Experimentally Elicited Speech Errors in Japanese.

    PubMed

    Saito, Akie; Inoue, Tomoyoshi

    2017-06-01

    The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which is separately operating from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages, and there are few studies examining the effect in experimentally elicited speech errors and in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibits the syllable position effect. In Japanese, the sub-syllabic unit known as "mora" is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.

  4. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
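
    A toy numerical illustration of the idea, assuming a 2-D Poisson problem with homogeneous Dirichlet boundaries solved by Jacobi iteration: the same solver is run on a clean and on a noise-perturbed source term, and the relative change in the "pressure" field is reported. The grid, source term, and noise level are illustrative, not the paper's analysis.

        import numpy as np

        def solve_poisson(f, iters=5000):
            """Jacobi iteration for -∇²p = f on the unit square with p = 0 on the boundary."""
            n = f.shape[0]
            h2 = (1.0 / (n - 1)) ** 2
            p = np.zeros_like(f)
            for _ in range(iters):
                p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                        p[1:-1, :-2] + p[1:-1, 2:] + h2 * f[1:-1, 1:-1])
            return p

        n = 41
        x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
        f_true = np.sin(np.pi * x) * np.sin(np.pi * y)     # stand-in for the PIV-derived source term
        f_noisy = f_true + 0.05 * np.random.default_rng(0).standard_normal(f_true.shape)

        p_true, p_noisy = solve_poisson(f_true), solve_poisson(f_noisy)
        rel_err = np.linalg.norm(p_noisy - p_true) / np.linalg.norm(p_true)
        print(f"relative error in the pressure field: {rel_err:.3f}")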

  5. Error monitoring issues for common channel signaling

    NASA Astrophysics Data System (ADS)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion on their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
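
    A heavily simplified, hypothetical leaky-bucket link error monitor in the spirit of the procedures analysed above; the increment, leak interval, and threshold values are illustrative placeholders and are not taken from the SS7 specification.

        import random

        def error_monitor(error_prob, n_units, threshold=64, leak_interval=256, seed=0):
            """Toy leaky-bucket link error monitor: the counter goes up on each errored
            signal unit, leaks by one every `leak_interval` units, and the link is taken
            out of service (a changeover) when the counter reaches `threshold`.
            (Parameter values are illustrative, not the SS7 values.)"""
            rng = random.Random(seed)
            counter, changeovers = 0, 0
            for unit in range(1, n_units + 1):
                if rng.random() < error_prob:
                    counter += 1
                if unit % leak_interval == 0 and counter > 0:
                    counter -= 1
                if counter >= threshold:
                    changeovers += 1
                    counter = 0          # link restored and monitoring restarts
            return changeovers

        for p in (1e-4, 3e-3, 1e-2):
            print(p, error_monitor(p, 1_000_000))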

  6. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model.

    PubMed

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander

    2015-04-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.

  7. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model

    PubMed Central

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher

    2015-01-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106

  8. Star centroiding error compensation for intensified star sensors.

    PubMed

    Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun

    2016-12-26

    A star sensor provides high-precision attitude information by capturing a stellar image; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. Regarding the intensified star sensor, the image intensifier is utilized to improve the sensitivity, thereby further improving the dynamic performance of the star sensor. However, the introduction of image intensifier results in star centroiding accuracy decrease, further influencing the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce the influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on the orthographic projection through the analysis of errors introduced by the image intensifier. Thereafter, the position errors at the target points based on the model are obtained by using the Levenberg-Marquardt (LM) optimization method. Last, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error of the image plane. Laboratory calibration result and night sky experiment result show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of the intensified star sensors.

  9. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  10. Quantification and characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wood, Christopher J.; Gambetta, Jay M.

    2018-03-01

    We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.

  11. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Comparison of Procedures for Dual and Triple Closely Spaced Parallel Runways

    NASA Technical Reports Server (NTRS)

    Verma, Savita; Ballinger, Deborah; Subramanian, Shobana; Kozon, Thomas

    2012-01-01

    A human-in-the-loop high fidelity flight simulation experiment was conducted, which investigated and compared breakout procedures for Very Closely Spaced Parallel Approaches (VCSPA) with two and three runways. To understand the feasibility, usability and human factors of two and three runway VCSPA, data were collected and analyzed on the dependent variables of breakout cross track error and pilot workload. Independent variables included number of runways, cause of breakout and location of breakout. Results indicated larger cross track error and higher workload using three runways as compared to 2-runway operations. Significant interaction effects involving breakout cause and breakout location were also observed. Across all conditions, cross track error values showed high levels of breakout trajectory accuracy and pilot workload remained manageable. Results suggest possible avenues of future adaptation for adopting these procedures (e.g., pilot training), while also showing potential promise of the concept.

  13. Color-motion feature-binding errors are mediated by a higher-order chromatic representation

    PubMed Central

    Shevell, Steven K.; Wang, Wei

    2017-01-01

    Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004)]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014)]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism. PMID:26974945

  14. The Binding of Learning to Action in Motor Adaptation

    PubMed Central

    Gonzalez Castro, Luis Nicolas; Monsen, Craig Bryant; Smith, Maurice A.

    2011-01-01

    In motor tasks, errors between planned and actual movements generally result in adaptive changes which reduce the occurrence of similar errors in the future. It has commonly been assumed that the motor adaptation arising from an error occurring on a particular movement is specifically associated with the motion that was planned. Here we show that this is not the case. Instead, we demonstrate the binding of the adaptation arising from an error on a particular trial to the motion experienced on that same trial. The formation of this association means that future movements planned to resemble the motion experienced on a given trial benefit maximally from the adaptation arising from it. This reflects the idea that actual rather than planned motions are assigned ‘credit’ for motor errors because, in a computational sense, the maximal adaptive response would be associated with the condition credited with the error. We studied this process by examining the patterns of generalization associated with motor adaptation to novel dynamic environments during reaching arm movements in humans. We found that these patterns consistently matched those predicted by adaptation associated with the actual rather than the planned motion, with maximal generalization observed where actual motions were clustered. We followed up these findings by showing that a novel training procedure designed to leverage this newfound understanding of the binding of learning to action, can improve adaptation rates by greater than 50%. Our results provide a mechanistic framework for understanding the effects of partial assistance and error augmentation during neurologic rehabilitation, and they suggest ways to optimize their use. PMID:21731476

  15. Neural Correlates of User-initiated Motor Success and Failure - A Brain-Computer Interface Perspective.

    PubMed

    Yazmir, Boris; Reiner, Miriam

    2018-05-15

    Any motor action is, by nature, potentially accompanied by human errors. In order to facilitate development of error-tailored Brain-Computer Interface (BCI) correction systems, we focused on internal, human-initiated errors, and investigated EEG correlates of user outcome successes and errors during a continuous 3D virtual tennis game against a computer player. We used a multisensory, 3D, highly immersive environment. Missing and repelling the tennis ball were considered, as 'error' (miss) and 'success' (repel). Unlike most previous studies, where the environment "encouraged" the participant to perform a mistake, here errors happened naturally, resulting from motor-perceptual-cognitive processes of incorrect estimation of the ball kinematics, and can be regarded as user internal, self-initiated errors. Results show distinct and well-defined Event-Related Potentials (ERPs), embedded in the ongoing EEG, that differ across conditions by waveforms, scalp signal distribution maps, source estimation results (sLORETA) and time-frequency patterns, establishing a series of typical features that allow valid discrimination between user internal outcome success and error. The significant delay in latency between positive peaks of error- and success-related ERPs, suggests a cross-talk between top-down and bottom-up processing, represented by an outcome recognition process, in the context of the game world. Success-related ERPs had a central scalp distribution, while error-related ERPs were centro-parietal. The unique characteristics and sharp differences between EEG correlates of error/success provide the crucial components for an improved BCI system. The features of the EEG waveform can be used to detect user action outcome, to be fed into the BCI correction system. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. Random errors of oceanic monthly rainfall derived from SSM/I using probability distribution functions

    NASA Technical Reports Server (NTRS)

    Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.

    1993-01-01

    Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategy (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
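
    A small sketch of the error-estimation idea based on independent morning and afternoon estimates: for two equally noisy, independent estimates of the same monthly mean, half the variance of their difference estimates the per-estimate error variance. The synthetic data below only illustrate this relation and do not reproduce the SSM/I numbers.

        import numpy as np

        rng = np.random.default_rng(3)
        true_rain = rng.gamma(shape=2.0, scale=2.0, size=500)        # "truth" for 5x5 degree boxes
        am = true_rain + rng.normal(0, 1.2, true_rain.size)          # morning estimate
        pm = true_rain + rng.normal(0, 1.2, true_rain.size)          # afternoon estimate

        # For two independent, equally noisy estimates, var(am - pm) = 2 * var(error),
        # so the random error of each estimate and of their mean can be recovered:
        err_var_single = 0.5 * np.var(am - pm)
        err_var_monthly = 0.5 * err_var_single                       # error of the (am + pm) / 2 mean
        print(np.sqrt(err_var_single), np.sqrt(err_var_monthly))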

  17. Fault tolerance with noisy and slow measurements and preparation.

    PubMed

    Paz-Silva, Gerardo A; Brennen, Gavin K; Twamley, Jason

    2010-09-03

    It is not so well known that measurement-free quantum error correction protocols can be designed to achieve fault-tolerant quantum computing. Despite their potential advantages in terms of the relaxation of accuracy, speed, and addressing requirements, they have usually been overlooked since they are expected to yield a very bad threshold. We show that this is not the case. We design fault-tolerant circuits for the 9-qubit Bacon-Shor code and find an error threshold for unitary gates and preparation of p_thresh^(p,g) = 3.76×10^-5 (30% of the best known result for the same code using measurement) while admitting up to 1/3 error rates for measurements and allocating no constraints on measurement speed. We further show that demanding gate error rates sufficiently below the threshold pushes the preparation threshold up to p_thresh^(p) = 1/3.

  18. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Abstract Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476

  19. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    NASA Astrophysics Data System (ADS)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques are recently becoming very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro and macro scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts with 80% of data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step, to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error, and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
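
    A minimal sketch of the feed-forward neural network workflow described above, using scikit-learn, an 80/20 split, and a log-transformed permeability target; the features and data are synthetic stand-ins for the Thomeer parameters, air porosity, and grain density, and the error measure shown is the average absolute relative error.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Synthetic stand-ins for air porosity, grain density and a Thomeer-like parameter.
        rng = np.random.default_rng(7)
        n = 500
        X = np.column_stack([rng.uniform(0.05, 0.30, n),    # porosity
                             rng.uniform(2.6, 2.9, n),      # grain density
                             rng.uniform(0.1, 10.0, n)])    # Thomeer-like parameter
        log_k = 2.0 + 8.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.3, n)   # log10 permeability

        X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2, random_state=0)
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=0))
        model.fit(X_tr, y_tr)

        k_pred, k_true = 10 ** model.predict(X_te), 10 ** y_te
        aare = np.mean(np.abs(k_pred - k_true) / k_true)    # average absolute relative error
        print(f"AARE on the test set: {aare:.2%}")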

  20. Orthographic recognition in late adolescents: an assessment through event-related brain potentials.

    PubMed

    González-Garrido, Andrés Antonio; Gómez-Velázquez, Fabiola Reveca; Rodríguez-Santillán, Elizabeth

    2014-04-01

    Reading speed and efficiency are achieved through the automatic recognition of written words. Difficulties in learning and recognizing the orthography of words can arise despite reiterative exposure to texts. This study aimed to investigate, in native Spanish-speaking late adolescents, how different levels of orthographic knowledge might result in behavioral and event-related brain potential differences during the recognition of orthographic errors. Forty-five healthy high school students were selected and divided into 3 equal groups (High, Medium, Low) according to their performance on a 5-test battery of orthographic knowledge. All participants performed an orthographic recognition task consisting of the sequential presentation of a picture (object, fruit, or animal) followed by a correctly, or incorrectly, written word (orthographic mismatch) that named the picture just shown. Electroencephalogram (EEG) recording took place simultaneously. Behavioral results showed that the Low group had a significantly lower number of correct responses and increased reaction times while processing orthographical errors. Tests showed significant positive correlations between higher performance on the experimental task and faster and more accurate reading. The P150 and P450 components showed higher voltages in the High group when processing orthographic errors, whereas N170 seemed less lateralized to the left hemisphere in the lower orthographic performers. Also, trials with orthographic errors elicited a frontal P450 component that was only evident in the High group. The present results show that higher levels of orthographic knowledge correlate with high reading performance, likely because of faster and more accurate perceptual processing, better visual orthographic representations, and top-down supervision, as the event-related brain potential findings seem to suggest.

  1. Active Control of Inlet Noise on the JT15D Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Smith, Jerome P.; Hutcheson, Florence V.; Burdisso, Ricardo A.; Fuller, Chris R.

    1999-01-01

    This report presents the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the year from November 1997 to December 1998 on the Active Noise Control of Turbofan Engines research project funded by NASA Langley Research Center. The concept of implementing active noise control techniques with fuselage-mounted error sensors is investigated both analytically and experimentally. The analytical part of the project involves the continued development of an advanced modeling technique to provide prediction and design guidelines for application of active noise control techniques to large, realistic high bypass engines of the type on which active control methods are expected to be applied. Results from the advanced analytical model are presented that show the effectiveness of the control strategies, and the analytical results presented for fuselage error sensors show good agreement with the experimentally observed results and provide additional insight into the control phenomena. Additional analytical results are presented for active noise control used in conjunction with a wavenumber sensing technique. The experimental work is carried out on a running JT15D turbofan jet engine in a test stand at Virginia Tech. The control strategy used in these tests was the feedforward Filtered-X LMS algorithm. The control inputs were supplied by single and multiple circumferential arrays of acoustic sources equipped with neodymium iron cobalt magnets mounted upstream of the fan. The reference signal was obtained from an inlet mounted eddy current probe. The error signals were obtained from a number of pressure transducers flush-mounted in a simulated fuselage section mounted in the engine test cell. The active control methods are investigated when implemented with the control sources embedded within the acoustically absorptive material on a passively-lined inlet. The experimental results show that the combination of active control techniques with fuselage-mounted error sensors and passive control techniques is an effective means of reducing radiated noise from turbofan engines. Strategic selection of the location of the error transducers is shown to be effective for reducing the radiation towards particular directions in the farfield. An analytical model is used to predict the behavior of the control system and to guide the experimental design configurations, and the analytical results presented show good agreement with the experimentally observed results.
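
    A single-channel sketch of the feedforward Filtered-X LMS update named above; the primary and secondary path filters, adaptive filter length, and step size are illustrative, and the secondary path is assumed to be known exactly (the experiment itself used multiple control sources and fuselage-mounted error sensors).

        import numpy as np

        rng = np.random.default_rng(0)
        N = 20000
        x = rng.standard_normal(N)                  # reference (e.g., inlet probe signal)
        P = np.array([0.0, 0.9, 0.4, 0.2])          # primary path (illustrative)
        S = np.array([0.8, 0.3])                    # secondary path, assumed known exactly here
        d = np.convolve(x, P)[:N]                   # disturbance at the error sensor

        L = 16                                      # adaptive filter length
        w = np.zeros(L)
        mu = 0.005
        xbuf = np.zeros(L)                          # reference history
        ybuf = np.zeros(len(S))                     # control-signal history for the secondary path
        fxbuf = np.zeros(L)                         # filtered-reference history
        sxbuf = np.zeros(len(S))

        e_hist = np.empty(N)
        for n in range(N):
            xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
            y = w @ xbuf                            # control output
            ybuf = np.roll(ybuf, 1); ybuf[0] = y
            e = d[n] + S @ ybuf                     # error sensor = disturbance + controlled field
            sxbuf = np.roll(sxbuf, 1); sxbuf[0] = x[n]
            fx = S @ sxbuf                          # reference filtered through secondary-path model
            fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
            w -= mu * e * fxbuf                     # FxLMS weight update
            e_hist[n] = e

        print(np.mean(e_hist[:2000] ** 2), np.mean(e_hist[-2000:] ** 2))   # error power drops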

  2. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  3. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, floods, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with a distinctive output, in particular a GIS-based mapping process with information about the current weather status at given coordinates in each region and the capability to forecast seven days ahead. The data used in this research are retrieved in real time from the OpenWeatherMap server and BMKG. To obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error is 0.28 for minimum temperature and 0.15 for maximum temperature. Meanwhile, the error for minimum humidity is 0.38 and for maximum humidity is 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the better the accuracy.

  4. The Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis is straightforward and considerably outperforms the standard DerSimonian-Laird method

    PubMed Central

    2014-01-01

    Background The DerSimonian and Laird approach (DL) is widely used for random effects meta-analysis, but this often results in inappropriate type I error rates. The method described by Hartung, Knapp, Sidik and Jonkman (HKSJ) is known to perform better when trials of similar size are combined. However, evidence in realistic situations, where one trial might be much larger than the other trials, is lacking. We aimed to evaluate the relative performance of the DL and HKSJ methods when studies of different sizes are combined and to develop a simple method to convert DL results to HKSJ results. Methods We evaluated the performance of the HKSJ versus DL approach in simulated meta-analyses of 2–20 trials with varying sample sizes and between-study heterogeneity, and allowing trials to have various sizes, e.g. 25% of the trials being 10-times larger than the smaller trials. We also compared the number of “positive” (statistically significant at p < 0.05) findings using empirical data of recent meta-analyses with at least 3 studies of interventions from the Cochrane Database of Systematic Reviews. Results The simulations showed that the HKSJ method consistently resulted in more adequate error rates than the DL method. When the significance level was 5%, the HKSJ error rates at most doubled, whereas for DL they could be over 30%. DL, and, far less so, HKSJ had more inflated error rates when the combined studies had unequal sizes and between-study heterogeneity. The empirical data from 689 meta-analyses showed that 25.1% of the significant findings for the DL method were non-significant with the HKSJ method. DL results can be easily converted into HKSJ results. Conclusions Our simulations showed that the HKSJ method consistently results in more adequate error rates than the DL method, especially when the number of studies is small, and can easily be applied routinely in meta-analyses. Even with the HKSJ method, extra caution is needed when there are 5 or fewer studies of very unequal sizes. PMID:24548571
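
    A small sketch of the two approaches on invented data, using the standard DerSimonian-Laird estimate and the HKSJ variance modification with a t-based interval; the formulas follow the usual textbook definitions, not code from the paper.

        import numpy as np
        from scipy import stats

        def dl_and_hksj(y, v):
            """Random-effects meta-analysis: DerSimonian-Laird point estimate,
            with both the usual DL confidence interval and the HKSJ-modified one."""
            w = 1.0 / v
            yw = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - yw) ** 2)
            k = len(y)
            tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
            ws = 1.0 / (v + tau2)
            mu = np.sum(ws * y) / np.sum(ws)
            se_dl = np.sqrt(1.0 / np.sum(ws))
            q = np.sum(ws * (y - mu) ** 2) / (k - 1)
            se_hksj = np.sqrt(q / np.sum(ws))
            ci_dl = mu + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_dl
            ci_hksj = mu + np.array([-1, 1]) * stats.t.ppf(0.975, k - 1) * se_hksj
            return mu, ci_dl, ci_hksj

        # invented effect sizes (e.g., log odds ratios) and within-study variances
        y = np.array([0.10, 0.35, -0.05, 0.60, 0.25])
        v = np.array([0.04, 0.03, 0.05, 0.20, 0.02])
        print(dl_and_hksj(y, v))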

  5. Correcting AUC for Measurement Error.

    PubMed

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
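
    The sketch below shows a simplified normal-theory version of the idea: under a binormal, equal-variance model with nondifferential measurement error, the observed AUC is attenuated and can be corrected given the reliability of the biomarker. This is only an assumption-laden illustration, not the authors' proposed method, which avoids the normality assumption.

        import numpy as np
        from scipy.stats import norm

        def correct_auc_binormal(auc_obs, reliability):
            """Correct an observed AUC for nondifferential measurement error under a
            binormal, equal-variance model.  reliability = var(true) / var(observed)."""
            delta_obs = norm.ppf(auc_obs)
            return norm.cdf(delta_obs / np.sqrt(reliability))

        # quick check by simulation
        rng = np.random.default_rng(0)
        n = 200_000
        true_ctrl, true_case = rng.normal(0, 1, n), rng.normal(1, 1, n)
        err = lambda: rng.normal(0, 0.7, n)
        obs_ctrl, obs_case = true_ctrl + err(), true_case + err()

        auc_true = np.mean(rng.permutation(true_case) > true_ctrl)   # about Phi(1/sqrt(2)) ~ 0.76
        auc_obs = np.mean(rng.permutation(obs_case) > obs_ctrl)      # attenuated by the error
        reliability = 1.0 / (1.0 + 0.7 ** 2)
        print(auc_obs, correct_auc_binormal(auc_obs, reliability), auc_true)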

  6. Hierarchical learning induces two simultaneous, but separable, prediction errors in human basal ganglia.

    PubMed

    Diuk, Carlos; Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew; Niv, Yael

    2013-03-27

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously.

  7. Medication prescribing errors in the medical intensive care unit of Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia.

    PubMed

    Sada, Oumer; Melkie, Addisu; Shibeshi, Workineh

    2015-09-16

    Medication errors (MEs) are important problems in all hospitalized populations, especially in the intensive care unit (ICU). Little is known about the prevalence of medication prescribing errors in the ICUs of hospitals in Ethiopia. The aim of this study was to assess medication prescribing errors in the ICU of Tikur Anbessa Specialized Hospital using a retrospective cross-sectional analysis of patient cards and medication charts. A total of 220 patient charts were reviewed, covering 1311 patient-days and 882 prescription episodes. In total, 359 MEs were detected, a prevalence of 40 per 100 orders. The most common prescribing errors were omission errors, 154 (42.89%), followed by wrong combination, 101 (28.13%), wrong abbreviation, 48 (13.37%), wrong dose, 30 (8.36%), wrong frequency, 18 (5.01%), and wrong indication, 8 (2.23%). The present study shows that medication errors are common in the medical ICU of Tikur Anbessa Specialized Hospital. These results suggest future targets for prevention strategies to reduce the rate of medication errors.

  8. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with randomly introduced proportions of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
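
    A small Monte Carlo sketch of the kind of simulation described above, assuming a Thurstone-type choice model and hypothetical scale values: binomial choice noise is injected into each pairwise proportion and the resulting spread of the recovered scale values is reported.

```python
# Monte Carlo sketch of scaling error in a Thurstone-type paired-comparison study:
# binomial choice noise perturbs each pairwise proportion, and the standard
# deviation of the recovered scale values is measured. The stimulus count, sample
# size and scale values below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
s_true = np.array([0.0, 0.3, 0.7, 1.2, 1.6])   # true scale values (hypothetical)
m = len(s_true)
n_judges = 30                                   # judgments per pair (sampling size)
n_reps = 500

errors = []
for _ in range(n_reps):
    z = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            p_ij = norm.cdf(s_true[i] - s_true[j])              # choice probability
            p_hat = rng.binomial(n_judges, p_ij) / n_judges
            p_hat = np.clip(p_hat, 0.5 / n_judges, 1 - 0.5 / n_judges)  # avoid +/- inf
            z[i, j] = norm.ppf(p_hat)
    s_hat = z.mean(axis=1)                                       # Case V scale estimates
    errors.append(s_hat - (s_true - s_true.mean()))              # compare after centering

errors = np.array(errors)
print("average standard deviation of scaled values:", errors.std(axis=0).mean())
```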

  9. Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995

    NASA Technical Reports Server (NTRS)

    Blerman, Gregory S.

    1995-01-01

    Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve-fitting is used to find the parameters of a first order Markov process that models the average errors from the collected data. The results show that a first order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 sq meters.
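
    A minimal sketch of a first-order Gauss-Markov process of the type fitted in the thesis, using the time constant and process variance quoted in the abstract; the sampling interval is an assumption, and the distance dependence is omitted for brevity.

```python
# Minimal sketch of a first-order Gauss-Markov process of the kind used above to
# model DGPS error growth with time delay, using the constants quoted in the
# abstract (time constant ~3847 s, process variance ~3.73 m^2). The discretization
# step is an assumption for illustration.
import numpy as np

tau = 3847.1      # time correlation constant [s]
var = 3.73        # stationary process variance [m^2]
dt = 10.0         # sampling interval [s] (assumed)
n = 2000

rng = np.random.default_rng(2)
phi = np.exp(-dt / tau)                  # state transition over one step
q = var * (1.0 - phi**2)                 # driving-noise variance keeping var stationary
x = np.zeros(n)
for k in range(1, n):
    x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))

# The empirical autocorrelation at lag L should decay roughly as exp(-L*dt/tau)
lag = 50
rho = np.corrcoef(x[:-lag], x[lag:])[0, 1]
print(f"sample variance {x.var():.2f} m^2, lag-{lag} autocorrelation {rho:.2f}, "
      f"theoretical {np.exp(-lag * dt / tau):.2f}")
```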

  10. Human errors and occupational injuries of older female workers in the residential healthcare facilities for the elderly.

    PubMed

    Kim, Jun Sik; Jeong, Byung Yong

    2018-05-03

    The study aimed to describe the characteristics of occupational injuries among female workers in residential healthcare facilities for the elderly and to analyze human errors as causes of accidents. From the national industrial accident compensation data, 506 injured female workers were analyzed by age and occupation. The results showed that medical service workers were the most prevalent occupation (54.1%), followed by social welfare workers (20.4%). Among the injured, 55.7% had <1 year of work experience, and 37.9% were ≥60 years old. Slips/falls were the most common type of accident (42.7%), and the proportion injured by slips/falls increases with age. Among human errors, action errors were the primary cause, followed by perception errors and cognition errors. In addition, the proportions of injuries caused by perception errors and by action errors both increase with age. The findings of this study suggest that there is a need to design workplaces that accommodate the characteristics of older female workers.

  11. Calibration method of microgrid polarimeters with image interpolation.

    PubMed

    Chen, Zhenyue; Wang, Xia; Liang, Rongguang

    2015-02-10

    Microgrid polarimeters have large advantages over conventional polarimeters because of their snapshot nature and because they have no moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera with uniform illumination so that the response of the sensor is uniform over the entire field of view without IFOV error. Then a spline interpolation method is implemented to minimize the IFOV error. Experimental results show that the proposed method can effectively minimize the FPN and PRNU.
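
    An illustrative sketch, not the authors' exact pipeline: per-pixel gain/offset (PRNU/FPN) correction followed by spline interpolation of each of the four analyzer channels of a 2x2 microgrid mosaic onto the full pixel grid, using synthetic arrays.

```python
# Illustrative sketch (not the authors' exact method): flat-field the raw microgrid
# frame with per-pixel gain/offset maps, then spline-interpolate each of the four
# analyzer channels back onto the full pixel grid to reduce the instantaneous
# field-of-view (IFOV) mismatch. All arrays here are synthetic.
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(3)
H, W = 64, 64
raw = rng.uniform(0.4, 0.6, (H, W))          # synthetic raw mosaic
gain = rng.normal(1.0, 0.02, (H, W))         # PRNU map from uniform-illumination calibration
offset = rng.normal(0.0, 0.005, (H, W))      # FPN (dark/offset) map

corrected = (raw - offset) / gain            # per-pixel flat-field correction

# Each (dy, dx) offset in the 2x2 superpixel is assumed to be one analyzer orientation.
channels = {}
full_r, full_c = np.arange(H), np.arange(W)
for name, (dy, dx) in {"0": (0, 0), "45": (0, 1), "90": (1, 1), "135": (1, 0)}.items():
    sub = corrected[dy::2, dx::2]
    rows, cols = np.arange(dy, H, 2), np.arange(dx, W, 2)
    spline = RectBivariateSpline(rows, cols, sub, kx=3, ky=3)
    channels[name] = spline(full_r, full_c)  # channel resampled onto every pixel

print({k: v.shape for k, v in channels.items()})
```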

  12. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

    The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation for the upper bound of the absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas it increased with an increase in environmental integrated radiant flux density. Change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals: 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  13. [Comparison of image distortion between three magnetic resonance imaging systems of different magnetic field strengths for use in stereotactic irradiation of brain].

    PubMed

    Takemura, Akihiro; Sasamoto, Kouhei; Nakamura, Kaori; Kuroda, Tatsunori; Shoji, Saori; Matsuura, Yukihiro; Matsushita, Tatsuhiko

    2013-06-01

    In this study, we evaluated the image distortion of three magnetic resonance imaging (MRI) systems with magnetic field strengths of 0.4 T, 1.5 T and 3 T, during stereotactic irradiation of the brain. A quality assurance phantom for MRI image distortion in radiosurgery was used for these measurements of image distortion. Images were obtained from a 0.4-T MRI (APERTO Eterna, HITACHI), a 1.5-T MRI (Signa HDxt, GE Healthcare) and a 3-T MRI (Signa HDx 3.0 T, GE Healthcare) system. Imaging sequences for the 0.4-T and 3-T MRI were based on the 1.5-T MRI sequence used for stereotactic irradiation in the clinical setting. The same phantom was scanned using a computed tomography (CT) system (Aquilion L/B, Toshiba) as the standard. The results showed that mean errors in the Z direction were the largest of all directions. The largest error, 4.0 mm, was observed for the 1.5-T MRI in the Z direction at -110 mm in the axial plane. The maximum errors for the 0.4-T and 3-T MRI were 1.7 mm and 2.8 mm, respectively. The errors in the plane were not uniform and did not show linearity, suggesting that simple distortion correction using outside markers is unlikely to be effective. The 0.4-T MRI showed the lowest image distortion of the three. However, other items, such as image noise, contrast and study duration, need to be evaluated in MRI systems when applying frameless stereotactic irradiation.

  14. Acetaminophen attenuates error evaluation in cortex

    PubMed Central

    Kam, Julia W.Y.; Heine, Steven J.; Inzlicht, Michael; Handy, Todd C.

    2016-01-01

    Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double blind placebo controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants’ ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and the magnitude of an individual’s Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors relative to the ERN. PMID:26892161

  15. Suboptimal schemes for atmospheric data assimilation based on the Kalman filter

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo; Cohn, Stephen E.

    1994-01-01

    This work is directed toward approximating the evolution of forecast error covariances for data assimilation. The performance of different algorithms based on simplification of the standard Kalman filter (KF) is studied. These are suboptimal schemes (SOSs) when compared to the KF, which is optimal for linear problems with known statistics. The SOSs considered here are several versions of optimal interpolation (OI), a scheme for height error variance advection, and a simplified KF in which the full height error covariance is advected. To employ a methodology for exact comparison among these schemes, a linear environment is maintained, in which a beta-plane shallow-water model linearized about a constant zonal flow is chosen for the test-bed dynamics. The results show that constructing dynamically balanced forecast error covariances rather than using conventional geostrophically balanced ones is essential for successful performance of any SOS. A posteriori initialization of SOSs to compensate for model - data imbalance sometimes results in poor performance. Instead, properly constructed dynamically balanced forecast error covariances eliminate the need for initialization. When the SOSs studied here make use of dynamically balanced forecast error covariances, the difference among their performances progresses naturally from conventional OI to the KF. In fact, the results suggest that even modest enhancements of OI, such as including an approximate dynamical equation for height error variances while leaving height error correlation structure homogeneous, go a long way toward achieving the performance of the KF, provided that dynamically balanced cross-covariances are constructed and that model errors are accounted for properly. The results indicate that such enhancements are necessary if unconventional data are to have a positive impact.

  16. Intensive Treatment with Ultrasound Visual Feedback for Speech Sound Errors in Childhood Apraxia

    PubMed Central

    Preston, Jonathan L.; Leece, Megan C.; Maas, Edwin

    2016-01-01

    Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients with additional knowledge about their tongue shapes when attempting to produce sounds that are erroneous. The additional feedback may assist children with childhood apraxia of speech (CAS) in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants aged 10–14 years diagnosed with CAS attended 16 h of speech therapy over a 2-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor patterns and improved articulation of sounds previously in error, with varying levels of generalization and retention. PMID:27625603

  17. Sensitivity to prediction error in reach adaptation

    PubMed Central

    Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza

    2012-01-01

    It has been proposed that the brain predicts the sensory consequences of a movement and compares it to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782

  18. Are phonological influences on lexical (mis)selection the result of a monitoring bias?

    PubMed Central

    Ratinckx, Elie; Ferreira, Victor S.; Hartsuiker, Robert J.

    2009-01-01

    A monitoring bias account is often used to explain speech error patterns that seem to be the result of an interactive language production system, like phonological influences on lexical selection errors. A biased monitor is suggested to detect and covertly correct certain errors more often than others. For instance, this account predicts that errors which are phonologically similar to intended words are harder to detect than ones that are phonologically dissimilar. To test this, we tried to elicit phonological errors under the same conditions that show other kinds of lexical selection errors. In five experiments, we presented participants with high cloze probability sentence fragments followed by a picture that was either semantically related, a homophone of a semantically related word, or phonologically related to the (implicit) last word of the sentence. All experiments elicited semantic completions or homophones of semantic completions, but none elicited phonological completions. This finding is hard to reconcile with a monitoring bias account and is better explained with an interactive production system. Additionally, this finding constrains the amount of bottom-up information flow in interactive models. PMID:18942035

  19. Angular rate optimal design for the rotary strapdown inertial navigation system.

    PubMed

    Yu, Fei; Sun, Qian

    2014-04-22

    Owing to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key parameters, the rotating angular rate strongly influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper based on the Laplace transform and the inverse Laplace transform. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS.

  20. Separate Medication Preparation Rooms Reduce Interruptions and Medication Errors in the Hospital Setting: A Prospective Observational Study.

    PubMed

    Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja

    2016-12-21

    Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.

  1. An electrophysiological signal that precisely tracks the emergence of error awareness

    PubMed Central

    Murphy, Peter R.; Robertson, Ian H.; Allen, Darren; Hester, Robert; O'Connell, Redmond G.

    2012-01-01

    Recent electrophysiological research has sought to elucidate the neural mechanisms necessary for the conscious awareness of action errors. Much of this work has focused on the error positivity (Pe), a neural signal that is specifically elicited by errors that have been consciously perceived. While awareness appears to be an essential prerequisite for eliciting the Pe, the precise functional role of this component has not been identified. Twenty-nine participants performed a novel variant of the Go/No-go Error Awareness Task (EAT) in which awareness of commission errors was indicated via a separate speeded manual response. Independent component analysis (ICA) was used to isolate the Pe from other stimulus- and response-evoked signals. Single-trial analysis revealed that Pe peak latency was highly correlated with the latency at which awareness was indicated. Furthermore, the Pe was more closely related to the timing of awareness than it was to the initial erroneous response. This finding was confirmed in a separate study which derived IC weights from a control condition in which no indication of awareness was required, thus ruling out motor confounds. A receiver-operating-characteristic (ROC) curve analysis showed that the Pe could reliably predict whether an error would be consciously perceived up to 400 ms before the average awareness response. Finally, Pe latency and amplitude were found to be significantly correlated with overall error awareness levels between subjects. Our data show for the first time that the temporal dynamics of the Pe trace the emergence of error awareness. These findings have important implications for interpreting the results of clinical EEG studies of error processing. PMID:22470332

  2. Differential Effects of Incentives on Response Error, Response Rate, and Reliability of a Mailed Questionnaire.

    ERIC Educational Resources Information Center

    Brown, Darine F.; Hartman, Bruce

    1980-01-01

    Investigated issues associated with stimulating increased return rates to a mail questionnaire among school counselors. Results show that as the number of incentives received increased, the return rates increased in a linear fashion. The incentives did not introduce response error or affect the reliability of the Counselor Function Inventory.…

  3. Analysis of Errors Made by Students Solving Genetics Problems.

    ERIC Educational Resources Information Center

    Costello, Sandra Judith

    The purpose of this study was to analyze the errors made by students solving genetics problems. A sample of 10 non-science undergraduate students was obtained from a private college in Northern New Jersey. The results support prior research in the area of genetics education and show that a weak understanding of the relationship of meiosis to…

  4. Secondary School Teachers' Pedagogical Content Knowledge of Some Common Student Errors and Misconceptions in Sets

    ERIC Educational Resources Information Center

    Kolitsoe Moru, Eunice; Qhobela, Makomosela

    2013-01-01

    The study investigated teachers' pedagogical content knowledge of common students' errors and misconceptions in sets. Five mathematics teachers from one Lesotho secondary school were the sample of the study. Questionnaires and interviews were used for data collection. The results show that teachers were able to identify the following students'…

  5. Effects of Feedback on the Vigilance Task Performance of Hyperactive and Hypoactive Children.

    ERIC Educational Resources Information Center

    Ozolins, Delmar A.; Anderson, Robert P.

    1980-01-01

    The effects of feedback on the approaches of 20 hyperactive and 20 hypoactive children (ages 6 to 10) to a vigilance task were studied. Results showed that hyperactive Ss had more errors than hypoactive Ss under the feedback for correct responses condition and fewer errors under the feedback for false alarms condition. (PHR)

  6. When linearity prevails over hierarchy in syntax

    PubMed Central

    Willer Gold, Jana; Arsenijević, Boban; Batinić, Mia; Becker, Michael; Čordalija, Nermina; Kresić, Marijana; Leko, Nedžad; Marušič, Franc Lanko; Milićev, Tanja; Milićević, Nataša; Mitić, Ivana; Peti-Stantić, Anita; Stanković, Branimir; Šuligoj, Tina; Tušek, Jelena; Nevins, Andrew

    2018-01-01

    Hierarchical structure has been cherished as a grammatical universal. We use experimental methods to show where linear order is also a relevant syntactic relation. An identical methodology and design were used across six research sites on South Slavic languages. Experimental results show that in certain configurations, grammatical production can in fact favor linear order over hierarchical structure. However, these findings are limited to coordinate structures and distinct from the kind of production errors found with comparable configurations such as “attraction” errors. The results demonstrate that agreement morphology may be computed in a series of steps, one of which is partly independent from syntactic hierarchy. PMID:29288218

  7. Application of CCG Sensors to a High-Temperature Structure Subjected to Thermo-Mechanical Load

    PubMed Central

    Xie, Weihua; Meng, Songhe; Jin, Hua; Du, Chong; Wang, Libin; Peng, Tao; Scarpa, Fabrizio; Xu, Chenghai

    2016-01-01

    This paper presents a simple methodology to perform a high-temperature coupled thermo-mechanical test using ultra-high temperature ceramic material specimens (UHTCs), which are equipped with chemical composition grating (CCG) sensors. The methodology also considers the presence of coupled loading within the response provided by the CCG sensors. The theoretical strain of the UHTC specimens calculated with this technique shows a maximum relative error of 2.15% between the analytical and experimental data. To further verify the validity of the results from the tests, a Finite Element (FE) model has been developed to simulate the temperature, stress and strain fields within the UHTC structure equipped with the CCG. The results show that the compressive stress exceeds the material strength at the bonding area, which causes failure by fracture of the supporting structure in the hot environment. The results related to the strain fields show that the relative error with respect to the experimental data decreases with increasing temperature. The relative error is less than 15% when the temperature is higher than 200 °C, and only 6.71% at 695 °C. PMID:27754356

  8. Hyperbolic Positioning with Antenna Arrays and Multi-Channel Pseudolite for Indoor Localization

    PubMed Central

    Fujii, Kenjirou; Sakamoto, Yoshihiro; Wang, Wei; Arie, Hiroaki; Schmitz, Alexander; Sugano, Shigeki

    2015-01-01

    A hyperbolic positioning method with antenna arrays consisting of proximately-located antennas and a multi-channel pseudolite is proposed in order to overcome the problems of indoor positioning with conventional pseudolites (ground-based GPS transmitters). A two-dimensional positioning experiment using actual devices is conducted. The experimental result shows that the positioning accuracy varies from centimeter- to meter-level according to the geometric relation between the pseudolite antennas and the receiver. It also shows that the bias error of the carrier-phase difference observables is more serious than their random error. Based on the size of the bias error of carrier-phase difference that is inverse-calculated from the experimental result, three-dimensional positioning performance is evaluated by computer simulation. In addition, in the three-dimensional positioning scenario, an initial value convergence analysis of the non-linear least squares is conducted. Its result shows that initial values that converge to the correct position exist, at least under the proposed antenna setup. The simulated values and evaluation methods introduced in this work can be applied to various antenna setups; therefore, by using them, positioning performance can be predicted in advance of installing an actual system. PMID:26437405
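
    A minimal sketch of hyperbolic (range-difference) positioning solved by nonlinear least squares, in the spirit of the setup above; the antenna geometry, receiver position and bias magnitude are illustrative assumptions.

```python
# Minimal sketch of hyperbolic (range-difference) positioning with nonlinear least
# squares. The antenna geometry, true receiver position and the 2 cm bias-like
# error are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.optimize import least_squares

antennas = np.array([[0.0, 0.0, 2.5],
                     [4.0, 0.0, 2.5],
                     [4.0, 4.0, 2.5],
                     [0.0, 4.0, 2.5]])          # transmitter antenna positions [m]
p_true = np.array([1.2, 2.3, 0.8])              # true receiver position [m]

dist = np.linalg.norm(antennas - p_true, axis=1)
range_diff = dist[1:] - dist[0]                 # ideal carrier-phase-difference observables
range_diff_meas = range_diff + 0.02             # add a 2 cm bias-like error (assumed)

def residuals(p):
    d = np.linalg.norm(antennas - p, axis=1)
    return (d[1:] - d[0]) - range_diff_meas

# The initial value matters for convergence of the nonlinear solver (cf. the
# abstract's initial-value convergence analysis); here we start from the room centre.
sol = least_squares(residuals, x0=np.array([2.0, 2.0, 1.0]))
print("estimated position:", sol.x, " position error [m]:", np.linalg.norm(sol.x - p_true))
```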

  9. Sex differences in the shoulder joint position sense acuity: a cross-sectional study.

    PubMed

    Vafadar, Amir K; Côté, Julie N; Archambault, Philippe S

    2015-09-30

    Work-related musculoskeletal disorders (WMSDs) are the most expensive form of work disability. Female sex has been considered an individual risk factor for the development of WMSDs, specifically in the neck and shoulder region. One of the factors that might contribute to the higher injury rate in women is possible differences in neuromuscular control. Accordingly, the purpose of this study was to estimate the effect of sex on shoulder joint position sense acuity (as a part of shoulder neuromuscular control) in healthy individuals. Twenty-eight healthy participants, 14 females and 14 males, were recruited for this study. To test position sense acuity, subjects were asked to flex their dominant shoulder to one of three pre-defined angle ranges (low, mid and high ranges) with eyes closed, hold their arm in that position for three seconds, go back to the starting position and then immediately replicate the same joint flexion angle, while the difference between the reproduced and original angle was taken as the measure of position sense error. The errors were measured using a Vicon motion capture system. Subjects reproduced nine positions in total (3 ranges × 3 trials each). Calculation of absolute repositioning error (magnitude of error) showed no significant difference between men and women (p-value ≥ 0.05). However, the analysis of the direction of error (constant error) showed a significant difference between the sexes, as women tended to mostly overestimate the target, whereas men tended to both overestimate and underestimate the target (p-value ≤ 0.01, observed power = 0.79). The results also showed that men had a significantly more variable error, indicating more variability in their position sense, compared to women (p-value ≤ 0.05, observed power = 0.78). Differences observed in the constant JPS error suggest that men and women might use different neuromuscular control strategies in the upper limb. In addition, higher JPS variability observed in men might be one of the factors that could contribute to their lower rate of musculoskeletal disorders, compared to women. The result of this study showed that shoulder position sense, as part of the neuromuscular control system, differs between men and women. This finding can help us better understand the reasons behind the higher rate of musculoskeletal disorders in women, especially in working environments.

  10. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    PubMed Central

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background: Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives: We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods: Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results: Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions: Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033

  11. Error management in blood establishments: results of eight years of experience (2003–2010) at the Croatian Institute of Transfusion Medicine

    PubMed Central

    Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena

    2012-01-01

    Background. Continuous and efficient error management, including procedures from error detection to their resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion, though without any harmful consequences for the patients. All errors were, therefore, evaluated as “near miss” and “no harm” events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regard to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regard to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID:22395352

  12. Experimental study on an FBG strain sensor

    NASA Astrophysics Data System (ADS)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. Real-time, early-warning monitoring of landslides is therefore important for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of FBGs, an FBG strain sensor is designed that combines FBGs with an inclinometer. The sensor was regarded as a cantilever beam with one end fixed. According to the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using the elastic mechanics principle. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The landslide displacement can be calculated with the established theoretical formula from the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, and its corresponding variance was 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, and its corresponding variance was 0.50. The maximum error between the theoretical and the measured displacements decreases gradually, and the variance of the error also decreases gradually. This indicates that the theoretical results become increasingly reliable. It also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision and early-warning monitoring of the slope.

  13. Voxel-based statistical analysis of uncertainties associated with deformable image registration

    NASA Astrophysics Data System (ADS)

    Li, Shunshan; Glide-Hurst, Carri; Lu, Mei; Kim, Jinkoo; Wen, Ning; Adams, Jeffrey N.; Gordon, James; Chetty, Indrin J.; Zhong, Hualiang

    2013-09-01

    Deformable image registration (DIR) algorithms have inherent uncertainties in their displacement vector fields (DVFs). The purpose of this study is to develop an optimal metric to estimate DIR uncertainties. Six computational phantoms have been developed from the CT images of lung cancer patients using a finite element method (FEM). The FEM-generated DVFs were used as a standard for registrations performed on each of these phantoms. A mechanics-based metric, unbalanced energy (UE), was developed to evaluate these registration DVFs. The potential correlation between UE and DIR errors was explored using multivariate analysis, and the results were validated by a landmark approach and compared with two other error metrics: DVF inverse consistency (IC) and image intensity difference (ID). Landmark-based validation was performed using the POPI-model. The results show that the Pearson correlation coefficient between UE and DIR error is rUE-error = 0.50. This is higher than rIC-error = 0.29 for IC and DIR error and rID-error = 0.37 for ID and DIR error. The Pearson correlation coefficient between UE and the product of the DIR displacements and errors is rUE-error × DVF = 0.62 for the six patients and rUE-error × DVF = 0.73 for the POPI-model data. It has been demonstrated that UE has a strong correlation with DIR errors, and the UE metric outperforms the IC and ID metrics in estimating DIR uncertainties. The quantified UE metric can be a useful tool for adaptive treatment strategies, including probability-based adaptive treatment planning.

  14. Error Correcting Optical Mapping Data.

    PubMed

    Mukherjee, Kingshuk; Washimkar, Darshan; Muggli, Martin D; Salmela, Leena; Boucher, Christina

    2018-05-26

    Optical mapping is a unique system that is capable of producing high-resolution, high-throughput genomic map data that gives information about the structure of a genome [21]. Recently it has been used for scaffolding contigs and assembly validation for large-scale sequencing projects, including the maize [32], goat [6], and amborella [4] genomes. However, a major impediment in the use of this data is the variety and quantity of errors in the raw optical mapping data, which are called Rmaps. The challenges associated with using Rmap data are analogous to dealing with insertions and deletions in the alignment of long reads. Moreover, they are arguably harder to tackle since the data is numerical and susceptible to inaccuracy. We develop cOMET to error correct Rmap data, which to the best of our knowledge is the only optical mapping error correction method. Our experimental results demonstrate that cOMET has high precision and corrects 82.49% of insertion errors and 77.38% of deletion errors in Rmap data generated from the E. coli K-12 reference genome. Out of the deletion errors corrected, 98.26% are true errors. Similarly, out of the insertion errors corrected, 82.19% are true errors. It also successfully scales to large genomes, improving the quality of 78% and 99% of the Rmaps in the plum and goat genomes, respectively. Lastly, we show the utility of error correction by demonstrating how it improves the assembly of Rmap data. Error-corrected Rmap data result in an assembly that is more contiguous and covers a larger fraction of the genome.

  15. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada, Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error and can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small and fail to cover the target. The systematic evaluation of the different components of the inversion model can help in the understanding of the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion for annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.

  16. Analysis of the “naming game” with learning errors in communications

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Chen, Guanrong

    2015-07-01

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required by each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning error rate beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
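
    A minimal sketch of a naming game with communication errors on a complete graph (the paper uses random, small-world and scale-free networks); the corruption model here is an illustrative reading of the NGLE model, not the authors' exact formulation.

```python
# Minimal naming-game sketch with learning errors: with probability p_err the
# listener stores a corrupted variant of the transmitted word. Complete-graph
# interactions and the parameter values are illustrative assumptions.
import random

def naming_game(n_agents=50, p_err=0.02, max_steps=200_000, seed=4):
    rng = random.Random(seed)
    lexicons = [set() for _ in range(n_agents)]
    next_word, peak = 0, 0
    for step in range(1, max_steps + 1):
        s, l = rng.sample(range(n_agents), 2)           # speaker and listener
        if not lexicons[s]:
            lexicons[s].add(next_word); next_word += 1  # speaker invents a new name
        word = rng.choice(tuple(lexicons[s]))
        if rng.random() < p_err:                        # learning error: word corrupted
            word = next_word; next_word += 1
        if word in lexicons[l]:                         # success: both collapse to the word
            lexicons[s] = {word}; lexicons[l] = {word}
        else:
            lexicons[l].add(word)
        peak = max(peak, max(len(lex) for lex in lexicons))   # memory requirement
        if all(len(lex) == 1 for lex in lexicons) and len(set.union(*lexicons)) == 1:
            return step, peak                           # global consensus reached
    return None, peak

steps, peak = naming_game()
print("converged after", steps, "interactions; peak per-agent memory:", peak)
```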

  17. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
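
    A simplified sketch of the idea behind NAPSO-SVM: a plain particle swarm searches the SVM regressor's (C, gamma) hyperparameters to minimise cross-validated prediction error on a synthetic error series; the paper's NAPSO additionally incorporates natural selection and simulated annealing, which are omitted here.

```python
# Simplified sketch: plain PSO tuning of SVR hyperparameters (C, gamma) by
# cross-validated RMSE on a synthetic dynamic-error signal. This is not the
# paper's NAPSO; it only illustrates the swarm-optimised SVM idea.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 300)
y = np.sin(1.7 * t) * np.exp(-0.1 * t) + rng.normal(0, 0.05, t.size)  # synthetic signal
X = t.reshape(-1, 1)

def fitness(params):
    c, g = 10.0 ** params  # search in log10-space
    score = cross_val_score(SVR(C=c, gamma=g), X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    return -score          # cross-validated RMSE to minimise

n_particles, n_iter = 12, 20
pos = rng.uniform([-1, -3], [3, 1], (n_particles, 2))   # log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -3], [3, 1])
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best log10(C), log10(gamma):", gbest, " CV RMSE:", pbest_val.min())
```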

  18. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required by each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning error rate beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.

  19. Fault tolerance in an inner-outer solver: A GVR-enabled case study

    DOE PAGES

    Zhang, Ziming; Chien, Andrew A.; Teranishi, Keita

    2015-04-18

    Resilience is a major challenge for large-scale systems. It is particularly important for iterative linear solvers, since they take much of the time of many scientific applications. We show that single bit flip errors in the Flexible GMRES iterative linear solver can lead to high computational overhead or even failure to converge to the right answer. Informed by these results, we design and evaluate several strategies for fault tolerance in both inner and outer solvers appropriate across a range of error rates. We implement them, extending Trilinos’ solver library with the Global View Resilience (GVR) programming model, which provides multi-stream snapshots and multi-version data structures with portable and rich error checking/recovery. Lastly, experimental results validate correct execution with low performance overhead under varied error conditions.

  20. Seeing Your Error Alters My Pointing: Observing Systematic Pointing Errors Induces Sensori-Motor After-Effects

    PubMed Central

    Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro

    2011-01-01

    During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: As a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of “feeling” the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649

  1. Writing abilities in intellectual disabilities: a comparison between Down and Williams syndrome.

    PubMed

    Varuzza, Cristiana; De Rose, Paola; Vicari, Stefano; Menghini, Deny

    2015-02-01

    Writing is a complex task that requires the integration of multiple cognitive, linguistic, and motor abilities. Until now, only a few studies have investigated writing abilities in individuals with Intellectual Disability (ID). The aim of the present exploratory study was to provide knowledge on the organization of writing in two populations with ID, Down syndrome (DS) and Williams syndrome (WS), trying to disentangle different components of the process. A battery tapping diverse writing demands, from low-level transcription skills to high-level writing skills, was proposed to 13 individuals with WS, 12 individuals with DS and 11 mental-age-matched typically developing (TD) children. Results showed that the two groups with genetic syndromes did not differ from TD in writing a list of objects placed in a bedroom, in the number of errors in text composition, in a text copying task and in the kinds of errors made. However, in a word dictation task, individuals with DS made more errors than individuals with WS and TD children. In a pseudoword dictation task, both individuals with DS and individuals with WS made more errors than TD children. Our results showed good abilities in individuals with ID in different aspects of writing, involving not only low-level transcription skills but also high-level composition skills. Contrary to the pessimistic view that considers individuals with ID as vulnerable to failure, our results indicate that the presence of ID does not prevent the achievement of writing skills. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Cost-Effectiveness Analysis of an Automated Medication System Implemented in a Danish Hospital Setting.

    PubMed

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    To evaluate the cost-effectiveness of an automated medication system (AMS) implemented in a Danish hospital setting. An economic evaluation was performed alongside a controlled before-and-after effectiveness study with one control ward and one intervention ward. The primary outcome measure was the number of errors in the medication administration process observed prospectively before and after implementation. To determine the difference in proportion of errors after implementation of the AMS, logistic regression was applied with the presence of error(s) as the dependent variable. Time, group, and interaction between time and group were the independent variables. The cost analysis used the hospital perspective with a short-term incremental costing approach. The total 6-month costs with and without the AMS were calculated as well as the incremental costs. The number of avoided administration errors was related to the incremental costs to obtain the cost-effectiveness ratio expressed as the cost per avoided administration error. The AMS resulted in a statistically significant reduction in the proportion of errors in the intervention ward compared with the control ward. The cost analysis showed that the AMS increased the ward's 6-month cost by €16,843. The cost-effectiveness ratio was estimated at €2.01 per avoided administration error, €2.91 per avoided procedural error, and €19.38 per avoided clinical error. The AMS was effective in reducing errors in the medication administration process at a higher overall cost. The cost-effectiveness analysis showed that the AMS was associated with affordable cost-effectiveness rates. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. The influence of monetary punishment on cognitive control in abstinent cocaine-users*

    PubMed Central

    Hester, Robert; Bell, Ryan P.; Foxe, John J.; Garavan, Hugh

    2013-01-01

    Background: Dependent drug users show a diminished neural response to punishment, in both limbic and cortical regions, though it remains unclear how such changes influence cognitive processes critical to addiction. To assess this relationship, we examined the influence of monetary punishment on inhibitory control and adaptive post-error behaviour in abstinent cocaine dependent (CD) participants. Methods: 15 abstinent CD and 15 matched control participants performed a Go/No-go response inhibition task, which administered monetary fines for failed response inhibition, during collection of fMRI data. Results: CD participants showed reduced inhibitory control and significantly less adaptive post-error slowing in response to punishment, when compared to controls. The diminished behavioural punishment sensitivity shown by CD participants was associated with significant hypoactive error-related BOLD responses in the dorsal anterior cingulate cortex (ACC), right insula and right prefrontal regions. Specifically, CD participants’ error-related response in these regions was not modulated by the presence of punishment, whereas control participants’ response showed a significant BOLD increase during punished errors. Conclusions: CD participants showed a blunted response to failed control (errors) that was not modulated by punishment. Consistent with previous findings of reduced sensitivity to monetary loss in cocaine users, we further demonstrate that such insensitivity is associated with an inability to increase cognitive control in the face of negative consequences, a core symptom of addiction. The pattern of deficits in the CD group may have implications for interventions that attempt to improve cognitive control in drug dependent groups via positive/negative incentives. PMID:23791040

  4. Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials

    PubMed Central

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels

    2013-01-01

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants’ math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN. PMID:24236212

  5. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  6. Temperature Dependence of Faraday Effect-Induced Bias Error in a Fiber Optic Gyroscope

    PubMed Central

    Li, Xuyou; Guang, Xingxing; Xu, Zhenlong; Li, Guangchun

    2017-01-01

    Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environments, such as magnetic field and temperature field variation, is necessary for its practical applications. This paper presents an investigation of Faraday effect-induced bias error of IFOG under varying temperature. The Jones matrix method is utilized to formulate the temperature dependence of Faraday effect-induced bias error. Theoretical results show that the Faraday effect-induced bias error changes with the temperature in the non-skeleton polarization maintaining (PM) fiber coil. This phenomenon is caused by the temperature dependence of linear birefringence and Verdet constant of PM fiber. Particularly, Faraday effect-induced bias errors of two polarizations always have opposite signs that can be compensated optically regardless of the changes of the temperature. Two experiments with a 1000 m non-skeleton PM fiber coil are performed, and the experimental results support these theoretical predictions. This study is promising for improving the bias stability of IFOG. PMID:28880203

  7. Temperature Dependence of Faraday Effect-Induced Bias Error in a Fiber Optic Gyroscope.

    PubMed

    Li, Xuyou; Liu, Pan; Guang, Xingxing; Xu, Zhenlong; Guan, Lianwu; Li, Guangchun

    2017-09-07

    Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environments, such as magnetic field and temperature field variation, is necessary for its practical applications. This paper presents an investigation of Faraday effect-induced bias error of IFOG under varying temperature. The Jones matrix method is utilized to formulate the temperature dependence of Faraday effect-induced bias error. Theoretical results show that the Faraday effect-induced bias error changes with the temperature in the non-skeleton polarization maintaining (PM) fiber coil. This phenomenon is caused by the temperature dependence of linear birefringence and Verdet constant of PM fiber. Particularly, Faraday effect-induced bias errors of two polarizations always have opposite signs that can be compensated optically regardless of the changes of the temperature. Two experiments with a 1000 m non-skeleton PM fiber coil are performed, and the experimental results support these theoretical predictions. This study is promising for improving the bias stability of IFOG.

  8. The many places of frequency: evidence for a novel locus of the lexical frequency effect in word production.

    PubMed

    Knobel, Mark; Finkbeiner, Matthew; Caramazza, Alfonso

    2008-03-01

    The effect of lexical frequency on language-processing tasks is exceptionally reliable. For example, pictures with higher frequency names are named faster and more accurately than those with lower frequency names. Experiments with normal participants and patients strongly suggest that this production effect arises at the level of lexical access. Further work has suggested that within lexical access this effect arises at the level of lexical representations. Here we present patient E.C. who shows an effect of lexical frequency on his nonword error rate. The best explanation of his performance is that there is an additional locus of frequency at the interface of lexical and segmental representational levels. We confirm this hypothesis by showing that only computational models with frequency at this new locus can produce a similar error pattern to that of patient E.C. Finally, in an analysis of a large group of Italian patients, we show that there exist patients who replicate E.C.'s pattern of results and others who show the complementary pattern of frequency effects on semantic error rates. Our results combined with previous findings suggest that frequency plays a role throughout the process of lexical access.

  9. Optimal error analysis of the intraseasonal convection due to uncertainties of the sea surface temperature in a coupled model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojing; Tang, Youmin; Yao, Zhixiong

    2017-04-01

    The predictability of the convection related to the Madden-Julian Oscillation (MJO) is studied using the coupled model CESM (Community Earth System Model) and the climatically relevant singular vector (CSV) approach. The CSV approach is an ensemble-based strategy for calculating the optimal initial error on climate scales. In this study, we focus on the optimal initial error of the sea surface temperature in the Indian Ocean, where the MJO onset occurs. Six MJO events are chosen from 10 years of model simulation output. The results show that the large values of the SVs are mainly located in the Bay of Bengal and the south-central Indian Ocean (around 25°S, 90°E), forming a meridional dipole-like pattern. The fast error growth of the CSVs has important impacts on the prediction of the convection related to the MJO. Initial perturbations with the SV pattern cause the deep convection to damp more quickly in the east Pacific Ocean. Moreover, sensitivity studies of the CSVs show that different initial fields do not noticeably affect the CSVs, while the perturbation domain is a more influential factor. The rapid growth of the CSVs is found to be related to the western Bay of Bengal, where the wind stress is first perturbed by the CSV initial error. These results contribute to the establishment of an ensemble prediction system, as well as an optimal observation network. In addition, the analysis of the error growth can provide some insight into the relationship between SST and the intraseasonal convection related to the MJO.

  10. Assessment of Metronidazole Susceptibility in Helicobacter pylori: Statistical Validation and Error Rate Analysis of Breakpoints Determined by the Disk Diffusion Test

    PubMed Central

    Chaves, Sandra; Gadanho, Mário; Tenreiro, Rogério; Cabrita, José

    1999-01-01

    Metronidazole susceptibility of 100 Helicobacter pylori strains was assessed by determining the inhibition zone diameters by disk diffusion test and the MICs by agar dilution and PDM Epsilometer test (E test). Linear regression analysis was performed, allowing the definition of significant linear relations, and revealed correlations of disk diffusion results with both E-test and agar dilution results (r2 = 0.88 and 0.81, respectively). No significant differences (P = 0.84) were found between MICs defined by E test and those defined by agar dilution, taken as a standard. Reproducibility comparison between E-test and disk diffusion tests showed that they are equivalent and with good precision. Two interpretative susceptibility schemes (with or without an intermediate class) were compared by an interpretative error rate analysis method. The susceptibility classification scheme that included the intermediate category was retained, and breakpoints were assessed for diffusion assay with 5-μg metronidazole disks. Strains with inhibition zone diameters less than 16 mm were defined as resistant (MIC > 8 μg/ml), those with zone diameters equal to or greater than 16 mm but less than 21 mm were considered intermediate (4 μg/ml < MIC ≤ 8 μg/ml), and those with zone diameters of 21 mm or greater were regarded as susceptible (MIC ≤ 4 μg/ml). Error rate analysis applied to this classification scheme showed occurrence frequencies of 1% for major errors and 7% for minor errors, when the results were compared to those obtained by agar dilution. No very major errors were detected, suggesting that disk diffusion might be a good alternative for determining the metronidazole sensitivity of H. pylori strains. PMID:10203543
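
    The breakpoint scheme above translates directly into a small classification rule. The Python sketch below encodes the proposed zone-diameter cutoffs; the function name and structure are illustrative, not part of the study.

        def classify_metronidazole(zone_diameter_mm: float) -> str:
            """Classify an H. pylori isolate from its 5-ug metronidazole disk
            inhibition zone, using the breakpoints proposed in the study above."""
            if zone_diameter_mm < 16:
                return "resistant"      # MIC > 8 ug/ml
            elif zone_diameter_mm < 21:
                return "intermediate"   # 4 ug/ml < MIC <= 8 ug/ml
            else:
                return "susceptible"    # MIC <= 4 ug/ml

        print(classify_metronidazole(18))  # -> "intermediate"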

  11. Hydrological modelling of the Chaohe Basin in China: Statistical model formulation and Bayesian inference

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Reichert, Peter; Abbaspour, Karim C.; Yang, Hong

    2007-07-01

    Summary Calibration of hydrologic models is very difficult because of measurement errors in input and response, errors in model structure, and the large number of non-identifiable parameters of distributed models. The difficulties even increase in arid regions with high seasonal variation of precipitation, where the modelled residuals often exhibit high heteroscedasticity and autocorrelation. On the other hand, support of water management by hydrologic models is important in arid regions, particularly if there is increasing water demand due to urbanization. The use and assessment of model results for this purpose require a careful calibration and uncertainty analysis. Extending earlier work in this field, we developed a procedure to overcome (i) the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, (ii) the problem of heteroscedasticity of errors by combining a Box-Cox transformation of results and data with seasonally dependent error variances, (iii) the problems of autocorrelated errors, missing data and outlier omission with a continuous-time autoregressive error model, and (iv) the problem of the seasonal variation of error correlations with seasonally dependent characteristic correlation times. The technique was tested with the calibration of the hydrologic sub-model of the Soil and Water Assessment Tool (SWAT) in the Chaohe Basin in North China. The results demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of statistical assumptions of the error model. A comparison with an independent error model and with error models that only considered a subset of the suggested techniques clearly showed the superiority of the approach based on all the features (i)-(iv) mentioned above.

  12. Event-Related-Potential (ERP) Correlates of Performance Monitoring in Adults With Attention-Deficit Hyperactivity Disorder (ADHD)

    PubMed Central

    Marquardt, Lynn; Eichele, Heike; Lundervold, Astri J.; Haavik, Jan; Eichele, Tom

    2018-01-01

    Introduction: Attention-deficit hyperactivity disorder (ADHD) is one of the most frequent neurodevelopmental disorders in children and tends to persist into adulthood. Evidence from neuropsychological, neuroimaging, and electrophysiological studies indicates that alterations of error processing are core symptoms in children and adolescents with ADHD. To test whether adults with ADHD show persisting deficits and compensatory processes, we investigated performance monitoring during stimulus-evaluation and response-selection, with a focus on errors, as well as within-group correlations with symptom scores. Methods: Fifty-five participants (27 ADHD and 28 controls) aged 19–55 years performed a modified flanker task during EEG recording with 64 electrodes, and the ADHD and control groups were compared on measures of behavioral task performance, event-related potentials of performance monitoring (N2, P3), and error processing (ERN, Pe). Adult ADHD Self-Report Scale (ASRS) was used to assess ADHD symptom load. Results: Adults with ADHD showed higher error rates in incompatible trials, and these error rates correlated positively with the ASRS scores. Also, we observed lower P3 amplitudes in incompatible trials, which were inversely correlated with symptom load in the ADHD group. Adults with ADHD also displayed reduced error-related ERN and Pe amplitudes. There were no significant differences in reaction time (RT) and RT variability between the two groups. Conclusion: Our findings show deviations of electrophysiological measures, suggesting reduced effortful engagement of attentional and error-monitoring processes in adults with ADHD. Associations between ADHD symptom scores, event-related potential amplitudes, and poorer task performance in the ADHD group further support this notion. PMID:29706908

  13. A new open-loop fiber optic gyro error compensation method based on angular velocity error modeling.

    PubMed

    Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing

    2015-02-27

    With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well when it comes to pretty large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we researched before, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.42%, respectively, so the performance of this method is better than that of the direct modeling for gyro angular velocity.

  14. A New Open-Loop Fiber Optic Gyro Error Compensation Method Based on Angular Velocity Error Modeling

    PubMed Central

    Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing

    2015-01-01

    With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well when it comes to pretty large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we researched before, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.2%, respectively, so the performance of this method is better than that of the direct modeling for gyro angular velocity. PMID:25734642
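
    As a rough illustration of the modeling scheme described above, the Python sketch below fits a minimal Gaussian radial-basis-function regression mapping (output voltage u, temperature T) to the angular velocity error Δω, then subtracts the predicted error from a new reading. The synthetic training data, the number of centers, and the kernel width are assumptions for illustration only; they are not the authors' data or network configuration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic training samples: columns are OFOG output voltage u and temperature T
        X = rng.uniform([-1.0, 10.0], [1.0, 40.0], size=(200, 2))
        delta_omega = 0.05 * X[:, 0] ** 3 + 0.002 * (X[:, 1] - 25.0)  # synthetic error signal

        # Standardize features so a single kernel width works for both inputs
        mean, std = X.mean(axis=0), X.std(axis=0)
        Xs = (X - mean) / std

        def rbf_design(Xs, centers, width):
            # Gaussian RBF activations of each sample at each center
            d2 = ((Xs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * width ** 2))

        centers = Xs[rng.choice(len(Xs), size=20, replace=False)]      # 20 hidden units
        Phi = rbf_design(Xs, centers, width=0.5)
        weights, *_ = np.linalg.lstsq(Phi, delta_omega, rcond=None)    # output-layer weights

        # Compensation: subtract the predicted angular-velocity error from a new reading
        x_new = (np.array([[0.3, 30.0]]) - mean) / std
        predicted_error = (rbf_design(x_new, centers, width=0.5) @ weights)[0]
        print(f"predicted delta-omega to subtract: {predicted_error:.4f}")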

  15. Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.

    PubMed

    Mehranian, Abolfazl; Zaidi, Habib

    2015-04-01

    Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology with great capabilities offered by TOF to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF both corrected for attenuation using reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors between non-TOF and TOF MRAC reconstructions were compared with their reference CT-based attenuation correction reconstructions. The bias was locally and globally evaluated using volumes of interest (VOIs) defined on lesions and normal tissues and CT-derived tissue classes containing all voxels in a given tissue, respectively. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and improves significantly the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
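
    For reference, a 4-class attenuation map like the one described above can be sketched as a simple threshold rule over CT numbers, with a predefined linear attenuation coefficient assigned per class. The Hounsfield-unit cutoffs and the 511-keV coefficients below are commonly used illustrative values, not necessarily those adopted in this study.

        import numpy as np

        MU_511KEV = {"air": 0.0, "lung": 0.018, "fat": 0.086, "soft": 0.096}  # cm^-1, assumed values

        def four_class_mu_map(ct_hu: np.ndarray) -> np.ndarray:
            """Assign a predefined attenuation coefficient to each voxel class."""
            mu = np.full(ct_hu.shape, MU_511KEV["soft"], dtype=float)  # default: soft tissue
            mu[ct_hu < -950] = MU_511KEV["air"]                        # background air
            mu[(ct_hu >= -950) & (ct_hu < -350)] = MU_511KEV["lung"]   # lung
            mu[(ct_hu >= -350) & (ct_hu < -20)] = MU_511KEV["fat"]     # fat
            return mu

        print(four_class_mu_map(np.array([-1000.0, -600.0, -100.0, 40.0])))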

  16. Correlation of clinical predictions and surgical results in maxillary superior repositioning.

    PubMed

    Tabrizi, Reza; Zamiri, Barbad; Kazemi, Hamidreza

    2014-05-01

    This is a prospective study to evaluate the accuracy of clinical predictions related to surgical results in subjects who underwent maxillary superior repositioning without anterior-posterior movement. Surgeons' predictions according to clinical (tooth show at rest and at the maximum smile) and cephalometric evaluation were documented for the amount of maxillary superior repositioning. Overcorrection or undercorrection was documented for every subject 1 year after the operations. Receiver operating characteristic curve test was used to find a cutoff point in prediction errors and to determine positive predictive value (PPV) and negative predictive value. Forty subjects (14 males and 26 females) were studied. Results showed a significant difference between changes in the tooth show at rest and at the maximum smile line before and after surgery. Analysis of the data demonstrated no correlation between the predictive data and the surgical results. The incidence of undercorrection (25%) was more common than overcorrection (7.5%). The cutoff point for errors in predictions was 5 mm for tooth show at rest and 15 mm at the maximum smile. When the amount of the presurgical tooth show at rest was more than 5 mm, 50.5% of clinical predictions did not match the clinical results (PPV), and 75% of clinical predictions showed the same results when the tooth show was less than 5 mm (negative predictive value). When the amount of presurgical tooth shown in the maximum smile line was more than 15 mm, 75% of clinical predictions did not match with clinical results (PPV), and 25% of the predictions had the same results because the tooth show at the maximum smile was lower than 15 mm. Clinical predictions according to the tooth show at rest and at the maximum smile have a poor correlation with clinical results in maxillary superior repositioning for vertical maxillary excess. The risk of errors in predictions increased when the amount of superior repositioning of the maxilla increased. Generally, surgeons have a tendency to undercorrect rather than overcorrect, although clinical prediction is an original guideline for surgeons, and it may be associated with variable clinical results.
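
    The predictive values quoted above follow from a simple 2 x 2 summary, as in the Python sketch below. Here a "positive" means a presurgical tooth show above the cutoff and the outcome of interest is a mismatch between prediction and surgical result; the counts are hypothetical, and only the formulas are implied by the abstract.

        def predictive_values(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
            """PPV = P(mismatch | tooth show above cutoff), NPV = P(match | below cutoff)."""
            ppv = tp / (tp + fp)
            npv = tn / (tn + fn)
            return ppv, npv

        # Hypothetical counts for the 5 mm at-rest cutoff
        ppv, npv = predictive_values(tp=10, fp=10, tn=15, fn=5)
        print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")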

  17. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1988-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  18. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1987-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  19. Effect of lethality on the extinction and on the error threshold of quasispecies.

    PubMed

    Tejero, Hector; Marín, Arturo; Montero, Francisco

    2010-02-21

    In this paper the effect of lethality on error threshold and extinction has been studied in a population of error-prone self-replicating molecules. For given lethality and a simple fitness landscape, three dynamic regimes can be obtained: quasispecies, error catastrophe, and extinction. Using a simple model in which molecules are classified as master, lethal and non-lethal mutants, it is possible to obtain the mutation rates of the transitions between the three regimes analytically. The numerical resolution of the extended model, in which molecules are classified depending on their Hamming distance to the master sequence, confirms the results obtained in the simple model and shows how an error catastrophe regime changes when lethality is taken into account. (c) 2009 Elsevier Ltd. All rights reserved.

  20. Test-to-Test Repeatability of Results From a Subsonic Wing-Body Configuration in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Pendergraft, Odis C., Jr.

    2000-01-01

    Results from three wind tunnel tests in the National Transonic Facility of a model of an advanced-technology, subsonic-transport wing-body configuration have been analyzed to assess the test-to-test repeatability of several aerodynamic parameters. The scatter, as measured by the prediction interval, in the longitudinal force and moment coefficients increases as the Mach number increases. Residual errors with and without the ESP tubes installed suggest a bias leading to lower drag with the tubes installed. Residual errors as well as average values of the longitudinal force and moment coefficients show that there are small bias errors between the different tests.

  1. Error Augmentation Enhances Adaptability in Adults With Low Motor Ability.

    PubMed

    Lee, Chi-Mei; Bo, Jin

    2016-01-01

    The authors focused on young adults with varying degrees of motor difficulty and examined their adaptability in a visuomotor adaptation task in which the visual feedback of participants' movement error was presented at either a 1:1 ratio (i.e., regular feedback schedule) or a 1:2 ratio (i.e., enhanced feedback schedule). A within-subject design was used, with the two feedback schedules counterbalanced and separated by 10 days. Results revealed that participants with greater motor difficulties showed less adaptability than those with normal motor abilities in the regular feedback schedule; however, all participants demonstrated a similar level of adaptability in the enhanced feedback schedule. The results suggest that error augmentation enhances adaptability in adults with low motor ability.

  2. Hierarchical Learning Induces Two Simultaneous, But Separable, Prediction Errors in Human Basal Ganglia

    PubMed Central

    Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew

    2013-01-01

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously. PMID:23536092

  3. Transfer Alignment Error Compensator Design Based on Robust State Estimation

    NASA Astrophysics Data System (ADS)

    Lyou, Joon; Lim, You-Chol

    This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship’s roll and pitch. Major error sources for velocity and attitude matching are the lever arm effect, measurement time delay, and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity and attitude matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and dominant Y-axis flexure, and by augmenting the delay state and flexure state into conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties of time delay and the ship-body flexure. The simulation results show that this method considerably decreases azimuth alignment errors.

  4. Application of genetic algorithm in the evaluation of the profile error of archimedes helicoid surface

    NASA Astrophysics Data System (ADS)

    Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao

    2011-05-01

    According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the surface equation are obtained through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through GA optimization. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method correctly evaluates the profile error of an Archimedes helicoid surface and satisfies the evaluation standard of the minimum zone method. It can be applied to the measured profile-error data of complex surfaces obtained by three-coordinate measuring machines (CMMs).

  5. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  6. Effects of shape, size, and chromaticity of stimuli on estimated size in normally sighted, severely myopic, and visually impaired students.

    PubMed

    Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching

    2010-06-01

    Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.

  7. Model-based influences on humans’ choices and striatal prediction errors

    PubMed Central

    Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.

    2011-01-01

    Summary The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. PMID:21435563

  8. Demand forecasting of electricity in Indonesia with limited historical data

    NASA Astrophysics Data System (ADS)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity that lets electricity providers anticipate future demand. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt’s exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
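
    To make the grey model concrete, the Python sketch below fits a standard GM(1,1) to a short series and reports MAPE on the fitted values. The demand figures are made up purely for illustration; only the GM(1,1) procedure itself (accumulated generating operation, least squares fit of the whitening equation, and inverse differencing) follows the usual textbook form.

        import numpy as np

        def gm11_forecast(x0: np.ndarray, horizon: int) -> np.ndarray:
            """Fit GM(1,1) to x0 and return fitted values plus `horizon` forecasts."""
            n = len(x0)
            x1 = np.cumsum(x0)                                  # accumulated generating operation
            z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
            B = np.column_stack([-z1, np.ones(n - 1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development coefficient, grey input
            k = np.arange(n + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # predicted cumulative series
            return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])

        def mape(actual, fitted):
            return float(np.mean(np.abs((actual - fitted) / actual)) * 100.0)

        demand = np.array([200.0, 212.0, 225.0, 240.0, 254.0])  # hypothetical demand series
        pred = gm11_forecast(demand, horizon=3)
        print("forecasts:", pred[len(demand):])
        print("MAPE on fitted values: %.2f%%" % mape(demand, pred[:len(demand)]))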

  9. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

    NASA Technical Reports Server (NTRS)

    Menard, Richard; Chang, Lang-Ping

    1998-01-01

    A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder (CLAES) instrument and the Halogen Occultation Experiment (HALOE) instrument on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ2 criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousand kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlations rather than on evolving the error variance.

  10. Atmospheric modeling to assess wind dependence in tracer dilution method measurements of landfill methane emissions.

    PubMed

    Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T

    2018-03-01

    The short-term temporal variability of landfill methane emissions is not well understood due to uncertainty in measurement methods. Significant variability is seen over short-term measurement campaigns with the tracer dilution method (TDM), but this variability may be due in part to measurement error rather than fluctuations in the actual landfill emissions. In this study, landfill methane emissions and TDM-measured emissions are simulated over a real landfill in Delaware, USA using the Weather Research and Forecasting model (WRF) for two emissions scenarios. In the steady emissions scenario, a constant landfill emissions rate is prescribed at each model grid point on the surface of the landfill. In the unsteady emissions scenario, emissions are calculated at each time step as a function of the local surface wind speed, resulting in variable emissions over each 1.5-h measurement period. The simulation output is used to assess the standard deviation and percent error of the TDM-measured emissions. Eight measurement periods are simulated over two different days to look at different conditions. Results show that standard deviation of the TDM- measured emissions does not increase significantly from the steady emissions simulations to the unsteady emissions scenarios, indicating that the TDM may have inherent errors in its prediction of emissions fluctuations. Results also show that TDM error does not increase significantly from the steady to the unsteady emissions simulations. This indicates that introducing variability to the landfill emissions does not increase errors in the TDM at this site. Across all simulations, TDM errors range from -15% to 43%, consistent with the range of errors seen in previous TDM studies. Simulations indicate diurnal variations of methane emissions when wind effects are significant, which may be important when developing daily and annual emissions estimates from limited field data. Copyright © 2017 Elsevier Ltd. All rights reserved.
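
    For context, the tracer dilution method referenced above recovers the landfill methane emission rate by scaling the known tracer release rate with the ratio of the background-corrected, cross-plume integrated mixing ratios of methane and tracer, converted between molar masses. The Python sketch below shows that calculation; the gas choice (acetylene) and all numbers are illustrative assumptions, not values from the study.

        M_CH4, M_TRACER = 16.04, 26.04   # g/mol; acetylene assumed as the tracer gas

        def tdm_emission_rate(q_tracer_kg_h: float,
                              ch4_plume_integral: float,
                              tracer_plume_integral: float) -> float:
            """CH4 emission rate (kg/h) from a known tracer release rate and the integrated
            (background-corrected) downwind mixing ratios of CH4 and tracer."""
            return q_tracer_kg_h * (ch4_plume_integral / tracer_plume_integral) * (M_CH4 / M_TRACER)

        # Illustrative numbers only
        print(tdm_emission_rate(q_tracer_kg_h=0.5,
                                ch4_plume_integral=1.8e4,
                                tracer_plume_integral=9.0e2))   # -> ~6.2 kg CH4 / h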

  11. Color-motion feature-binding errors are mediated by a higher-order chromatic representation.

    PubMed

    Shevell, Steven K; Wang, Wei

    2016-03-01

    Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004), doi:10.1038/429262a]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014), doi:10.1364/JOSAA.31.000A60]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism.

  12. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
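
    The factor-of-N variance reduction stated above is easy to illustrate numerically. The Python snippet below draws synthetic, unbiased, uncorrelated boundary estimates and compares the variance of a single estimate with that of an N-classifier average; it is a toy demonstration of the statement, not a reproduction of the chapter's experiments.

        import numpy as np

        rng = np.random.default_rng(1)
        N, trials = 10, 100_000
        true_boundary = 0.0

        single = true_boundary + rng.normal(0.0, 1.0, size=trials)                    # one classifier
        averaged = (true_boundary + rng.normal(0.0, 1.0, size=(trials, N))).mean(axis=1)

        print("variance of a single boundary estimate :", round(single.var(), 4))
        print("variance of the N-averaged estimate    :", round(averaged.var(), 4))   # ~1/N as large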

  13. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.

  14. Highly improved staggered quarks on the lattice with applications to charm physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follana, E.; Davies, C.; Wong, K.

    2007-03-01

    We use perturbative Symanzik improvement to create a new staggered-quark action (HISQ) that has greatly reduced one-loop taste-exchange errors, no tree-level order a^2 errors, and no tree-level order (am)^4 errors to leading order in the quark's velocity v/c. We demonstrate with simulations that the resulting action has taste-exchange interactions that are 3-4 times smaller than the widely used ASQTAD action. We show how to bound errors due to taste exchange by comparing ASQTAD and HISQ simulations, and demonstrate with simulations that such errors are likely no more than 1% when HISQ is used for light quarks at lattice spacings of 1/10 fm or less. The suppression of (am)^4 errors also makes HISQ the most accurate discretization currently available for simulating c quarks. We demonstrate this in a new analysis of the ψ-η_c mass splitting using the HISQ action on lattices where am_c = 0.43 and 0.66, with full-QCD gluon configurations (from MILC). We obtain a result of 111(5) MeV, which compares well with experiment. We discuss applications of this formalism to D physics and present our first high-precision results for D_s mesons.

  15. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
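
    One of the corrected standard errors discussed above, the bootstrap, can be sketched generically as below (Python). The `estimator` argument stands in for re-running the full two-stage (TSRI) fit on each resample; the placeholder estimator and synthetic data are assumptions for illustration, not the authors' procedure.

        import numpy as np

        def bootstrap_se(data: np.ndarray, estimator, n_boot: int = 2000, seed: int = 0) -> float:
            """Nonparametric bootstrap standard error of `estimator` applied to resamples of `data`."""
            rng = np.random.default_rng(seed)
            n = len(data)
            stats = [estimator(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
            return float(np.std(stats, ddof=1))

        # Toy example: SE of a sample mean (stand-in for a TSRI causal-effect estimate)
        x = np.random.default_rng(1).normal(loc=0.0, scale=1.0, size=500)
        print(bootstrap_se(x, np.mean))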

  16. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boehnke, E McKenzie; DeMarco, J; Steers, J

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1 mm to ±5 mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.

  17. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting

    PubMed Central

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-01-01

    Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806
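
    One of the comparisons reported above can be re-checked directly from the quoted counts (16 of 18 nurses erred on syringe-volume verification pre-intervention versus 11 of 19 post-intervention). The abstract does not state whether the Fisher's exact test was one- or two-sided; the one-sided form reproduces the quoted p = 0.038, as the SciPy sketch below assumes.

        from scipy.stats import fisher_exact

        table = [[16, 18 - 16],   # pre-intervention: errors, no errors
                 [11, 19 - 11]]   # post-intervention: errors, no errors

        odds_ratio, p_value = fisher_exact(table, alternative="greater")
        print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3f}")   # p ~= 0.038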

  18. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large as or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.

  19. DNAPL MAPPING AND WATER SATURATION MEASUREMENTS IN 2-D MODELS USING LIGHT TRANSMISSION VISUALIZATION (LTV) TECHNIQUE

    EPA Science Inventory

    • LTV can be used to characterize free phase PCE architecture in 2-D flow chambers without using a dye. • Results to date suggest that error in PCE detection using LTV can be less than 10% if the imaging system is optimized. • Mass balance calculations show a maximum error of 9...

  20. 26 CFR 1.668(b)-3A - Computation of the beneficiary's income and tax for a prior taxable year.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... either the exact method or the short-cut method shall be determined by reference to the information... under section 6501 has expired, and such return shows a mathematical error on its face which resulted in... after the correction of such mathematical errors, and the beneficiary shall be credited for the correct...

  1. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.
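
    Both post-processing ideas described above are simple to express in code. The Python sketch below weights each model's intensity forecast inversely by its PRIME-predicted absolute error and, separately, subtracts the PRIME-predicted bias before averaging; all forecast and error values are hypothetical.

        import numpy as np

        model_forecasts = np.array([95.0, 105.0, 110.0, 88.0])   # kt, four intensity models (hypothetical)
        predicted_abs_error = np.array([8.0, 15.0, 5.0, 20.0])   # kt, PRIME absolute-error predictions
        predicted_bias = np.array([-4.0, 6.0, 2.0, -10.0])       # kt, PRIME bias predictions

        # Ensemble 1: inverse-weight models by predicted absolute error
        w = 1.0 / predicted_abs_error
        w /= w.sum()
        inverse_weighted = float(w @ model_forecasts)

        # Ensemble 2: remove each model's predicted bias, then take the mean
        bias_corrected = float((model_forecasts - predicted_bias).mean())

        print(f"inverse-error-weighted: {inverse_weighted:.1f} kt")
        print(f"bias-corrected mean   : {bias_corrected:.1f} kt")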

  2. Influence of incident angle on the decoding in laser polarization encoding guidance

    NASA Astrophysics Data System (ADS)

    Zhou, Muchun; Chen, Yanru; Zhao, Qi; Xin, Yu; Wen, Hongyuan

    2009-07-01

    Dynamic detection of polarization states is essential for laser polarization coding guidance systems. In this paper, a dynamic polarization decoding and detection system for laser polarization coding guidance was designed. The detection process for normally incident polarized light is analyzed with the Jones matrix formalism, and the system effectively detects changes in polarization. The influence of non-normal incidence on the performance of the decoding and detection system is also studied; the analysis shows that changes in incident angle degrade the measurement results, mainly because of second-order birefringence and polarization-sensitivity effects generated in the phase-delay plate and the beam-splitting prism. Using the Fresnel equations, the decoding errors for linearly, elliptically, and circularly polarized light entering the detector at different incident angles are calculated; the results show that the decoding errors increase with increasing incident angle. The decoding errors depend on the geometry and refractive index of the wave plate and the polarization beam-splitting prism, and can be reduced by using a thin low-order wave plate. Simulations of polarized-light detection at different incident angles confirm these conclusions.

  3. Comparative analysis of operational forecasts versus actual weather conditions in airline flight planning, volume 4

    NASA Technical Reports Server (NTRS)

    Keitz, J. F.

    1982-01-01

    The impact of more timely and accurate weather data on airline flight planning, with emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 4 of the four major tasks included in the study. Task 4 uses flight plan segment wind and temperature differences as indicators of dates and geographic areas for which significant forecast errors may have occurred. An in-depth analysis is then conducted for the days identified. The analysis shows that significant errors occur in the operational forecast on 15 of the 33 arbitrarily selected days included in the study. Wind speeds in an area of maximum winds are underestimated by at least 20 to 25 kts. on 14 of these days. The analysis also shows that there is a tendency to repeat the same forecast errors from prog to prog. Also, some perceived forecast errors from the flight plan comparisons could not be verified by visual inspection of the corresponding National Meteorological Center forecast and analysis charts, and it is likely that they are the result of weather data interpolation techniques or some other data processing procedure in the airlines' flight planning systems.

  4. Accounting for independent nondifferential misclassification does not increase certainty that an observed association is in the correct direction.

    PubMed

    Greenland, Sander; Gustafson, Paul

    2006-07-01

    Researchers sometimes argue that their exposure-measurement errors are independent of other errors and are nondifferential with respect to disease, resulting in estimation bias toward the null. Among well-known problems with such arguments are that independence and nondifferentiality are harder to satisfy than ordinarily appreciated (e.g., because of correlation of errors in questionnaire items, and because of uncontrolled covariate effects on error rates); small violations of independence or nondifferentiality may lead to bias away from the null; and, if exposure is polytomous, the bias produced by independent nondifferential error is not always toward the null. The authors add to this list by showing that, in a 2 x 2 table (for which independent nondifferential error produces bias toward the null), accounting for independent nondifferential error does not reduce the p value even though it increases the point estimate. Thus, such accounting should not increase certainty that an association is present.

  5. Global optimization method based on ray tracing to achieve optimum figure error compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin

    2017-02-01

    Figure error degrades the performance of an optical system. When predicting performance and assembling a system, compensation by clocking optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking, while existing automatic figure-error balancing methods introduce approximation errors and require a complex, time-consuming optimization model. To overcome these limitations, an accurate and automatic global optimization method for figure error balancing is proposed. The method uses exact ray tracing, rather than approximate calculation, to evaluate the wavefront error for a given combination of element rotation angles, with the composite root-mean-square (RMS) wavefront error as the cost function. A simulated annealing algorithm seeks the optimal combination of rotation angles for each optical element. The method can be applied to all rotationally symmetric optics, and optimization results show that it performs 49% better than the previous approximate analytical method.
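
    The optimization loop described above can be sketched as a standard simulated annealing over the clocking angles. The cooling schedule, perturbation size and the placeholder wavefront_rms callback (standing in for the exact ray-tracing evaluation) are assumptions for illustration only.

      import math
      import random

      # Minimal simulated-annealing sketch for figure-error balancing by clocking.
      def anneal_clocking(wavefront_rms, n_elements, t0=1.0, t_min=1e-4, cooling=0.95, steps=50):
          angles = [random.uniform(0.0, 360.0) for _ in range(n_elements)]
          best = current = wavefront_rms(angles)
          best_angles = list(angles)
          t = t0
          while t > t_min:
              for _ in range(steps):
                  k = random.randrange(n_elements)
                  trial = list(angles)
                  trial[k] = (trial[k] + random.gauss(0.0, 20.0)) % 360.0  # perturb one clocking angle
                  cost = wavefront_rms(trial)
                  if cost < current or random.random() < math.exp((current - cost) / t):
                      angles, current = trial, cost   # accept downhill, or uphill with Boltzmann probability
                      if cost < best:
                          best, best_angles = cost, list(angles)
              t *= cooling                            # geometric cooling schedule
          return best_angles, best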

  6. Event-related potentials for post-error and post-conflict slowing.

    PubMed

    Chang, Andrew; Chen, Chien-Chung; Li, Hsin-Hung; Li, Chiang-Shan R

    2014-01-01

    In a reaction time task, people typically slow down following an error or conflict, each called post-error slowing (PES) and post-conflict slowing (PCS). Despite many studies of the cognitive mechanisms, the neural responses of PES and PCS continue to be debated. In this study, we combined high-density array EEG and a stop-signal task to examine event-related potentials of PES and PCS in sixteen young adult participants. The results showed that the amplitude of N2 is greater during PES but not PCS. In contrast, the peak latency of N2 is longer for PCS but not PES. Furthermore, error-positivity (Pe) but not error-related negativity (ERN) was greater in the stop error trials preceding PES than non-PES trials, suggesting that PES is related to participants' awareness of the error. Together, these findings extend earlier work of cognitive control by specifying the neural correlates of PES and PCS in the stop signal task.

  7. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
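
    To make the two-parameter CV error surface concrete, the sketch below shows the traditional grid-search baseline that CV-SES improves on, with the two CS-SVM regularization parameters emulated via per-class weights in scikit-learn's SVC. Labels are assumed to be 0/1 NumPy arrays; the kernel, grid and fold count are illustrative assumptions, not the paper's setup.

      import numpy as np
      from sklearn.model_selection import StratifiedKFold
      from sklearn.svm import SVC

      # Grid approximation of the CV error surface over (C+, C-).
      def cv_error_surface(X, y, c_pos_grid, c_neg_grid, n_folds=5):
          surface = np.zeros((len(c_pos_grid), len(c_neg_grid)))
          cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
          for i, cp in enumerate(c_pos_grid):
              for j, cn in enumerate(c_neg_grid):
                  errs = []
                  for train, test in cv.split(X, y):
                      clf = SVC(kernel="rbf", C=1.0, class_weight={1: cp, 0: cn})
                      clf.fit(X[train], y[train])
                      errs.append(np.mean(clf.predict(X[test]) != y[test]))
                  surface[i, j] = np.mean(errs)          # CV error for this (C+, C-) pair
          i, j = np.unravel_index(np.argmin(surface), surface.shape)
          return surface, c_pos_grid[i], c_neg_grid[j]   # grid minimum of the CV error surface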

  8. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
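
    The interleaving step that spreads burst errors across codewords is a simple block interleaver. A minimal sketch follows; the depth/width parameters and bit representation are illustrative and not tied to the paper's specific FEC codes.

      # Block interleaver: write codeword bits row-by-row into a depth x width array
      # and read them out column-by-column, so a burst of consecutive channel errors
      # is spread across many codewords.
      def interleave(bits, depth):
          width = len(bits) // depth
          rows = [bits[r * width:(r + 1) * width] for r in range(depth)]
          return [rows[r][c] for c in range(width) for r in range(depth)]

      def deinterleave(bits, depth):
          width = len(bits) // depth
          cols = [bits[c * depth:(c + 1) * depth] for c in range(width)]
          return [cols[c][r] for r in range(depth) for c in range(width)]

      # Round-trip check: a burst in the transmitted stream lands in different codewords.
      msg = list(range(24))
      assert deinterleave(interleave(msg, depth=4), depth=4) == msg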

  9. A study of the local pressure field in turbulent shear flow and its relation to aerodynamic noise generation

    NASA Technical Reports Server (NTRS)

    Jones, B. G.; Planchon, H. P., Jr.

    1973-01-01

    Work during the period of this report has been in three areas: (1) pressure transducer error analysis, (2) fluctuating velocity and pressure measurements in the NASA Lewis 6-inch diameter quiet jet facility, and (3) measurement analysis. A theory was developed and experimentally verified to quantify the pressure transducer velocity interference error. The theory and supporting experimental evidence show that the errors are a function of the velocity field's turbulent structure. It is shown that near the mixing layer center the errors are negligible. Turbulent velocity and pressure measurements were made in the NASA Lewis quiet jet facility. Some preliminary results are included.

  10. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  11. An improved VSS NLMS algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a trade-off between convergence speed and steady-state error that limits the performance of the NLMS algorithm. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily tuned parameters, and it effectively resolves the trade-off in NLMS. The simulation results show that the proposed algorithm has good tracking ability, a fast convergence rate and low steady-state error simultaneously.
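
    A variable-step-size NLMS filter can be sketched as below. The specific step-size schedule (shrinking the step with a smoothed error power) is an illustrative assumption and not the exact rule proposed in the paper; the normalized update itself is the standard NLMS recursion.

      import numpy as np

      def vss_nlms(x, d, taps=32, mu_max=1.0, mu_min=0.05, eps=1e-8):
          """x: input signal, d: desired signal (same length)."""
          w = np.zeros(taps)
          y = np.zeros(len(x))
          e = np.zeros(len(x))
          err_power = 1.0
          for n in range(taps, len(x)):
              u = x[n - taps:n][::-1]                 # most recent input samples
              y[n] = w @ u
              e[n] = d[n] - y[n]
              err_power = 0.95 * err_power + 0.05 * e[n] ** 2        # smoothed error power
              mu = mu_min + (mu_max - mu_min) * err_power / (1.0 + err_power)  # large error -> large step
              w += mu * e[n] * u / (eps + u @ u)      # normalized LMS update
          return w, y, e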

  12. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead correction and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead compensation link shortens the response to input signals and thus reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) track the servo motor state in real time; the module gathers video signals and transmits them to the host computer, which displays the motor running state in a Visual Basic 6.0 window. A detailed analysis of the main error sources is also given. A quantitative analysis of the errors contributed by bandwidth and by the gyro sensor makes the proportion of each error in the total error more intuitive and, consequently, helps reduce the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
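
    The AR-model-plus-Kalman-filter idea can be sketched with a scalar filter that tracks the gyro drift as an AR(1) process and removes it from the raw rate. The AR coefficient and noise variances below are placeholders (in practice they would be identified from static gyro records), and treating the measurement as drift plus white noise is a static-test assumption made only for this sketch.

      import numpy as np

      def kalman_ar1(gyro_rates, phi=0.98, q=1e-4, r=1e-2):
          x_hat, p = 0.0, 1.0                     # drift estimate and its variance
          filtered = np.empty(len(gyro_rates), dtype=float)
          for k, z in enumerate(gyro_rates):
              # predict: drift follows AR(1) dynamics x_k = phi * x_{k-1} + w_k
              x_pred = phi * x_hat
              p_pred = phi * p * phi + q
              # update with the measured rate (modeled here as drift + white noise)
              k_gain = p_pred / (p_pred + r)
              x_hat = x_pred + k_gain * (z - x_pred)
              p = (1.0 - k_gain) * p_pred
              filtered[k] = z - x_hat             # compensated rate after removing estimated drift
          return filtered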

  13. Age-related changes in error processing in young children: a school-based investigation.

    PubMed

    Grammer, Jennie K; Carrasco, Melisa; Gehring, William J; Morrison, Frederick J

    2014-07-01

    Growth in executive functioning (EF) skills plays a role in children's academic success, and the transition to elementary school is an important time for the development of these abilities. Despite this, evidence concerning the development of the ERP components linked to EF, including the error-related negativity (ERN) and the error positivity (Pe), over this period is inconclusive. Data were recorded in a school setting from 3- to 7-year-old children (N=96, mean age=5 years 11 months) as they performed a Go/No-Go task. Results revealed the presence of the ERN and Pe on error relative to correct trials at all age levels. Older children showed increased response inhibition as evidenced by faster, more accurate responses. Although developmental changes in the ERN were not identified, the Pe increased with age. In addition, girls made fewer mistakes and showed elevated Pe amplitudes relative to boys. Based on a representative school-based sample, findings indicate that the ERN is present in children as young as 3, and that development can be seen in the Pe between ages 3 and 7. Results varied as a function of gender, providing insight into the range of factors associated with developmental changes in the complex relations between behavioral and electrophysiological measures of error processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Artificial neural networks as alternative tool for minimizing error predictions in manufacturing ultradeformable nanoliposome formulations.

    PubMed

    León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa

    2018-01-01

    This work was aimed at determining the feasibility of artificial neural networks (ANN), implemented with backpropagation algorithms and default settings, for generating better predictive models than multiple linear regression (MLR) analysis. The study hypothesis was tested on timolol-loaded liposomes. The causal factors were used as training data and fed into the ANN software. The number of training cycles was identified in order to optimize the performance of the ANN; the optimization was performed by minimizing the error between the predicted and real response values in the training step. The results showed that training was stopped at 10 000 training cycles with 80% of the pattern values, because at this point the ANN generalizes better. The minimum validation error was achieved with 12 hidden neurons in a single layer. MLR has great prediction ability, with errors between predicted and real values lower than 1% for some of the parameters evaluated. Thus, the performance of the ANN model was compared to that of the MLR using a factorial design. Optimal formulations were identified by minimizing the distance between measured and theoretical parameters, by estimating the prediction errors. Results indicate that the ANN shows much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of combining ANN with design of experiments, compared to conventional MLR modeling techniques.
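
    A generic version of the ANN-versus-MLR comparison can be sketched with scikit-learn: fit both models on the formulation factors and compare prediction errors on held-out formulations. The 12-neuron single hidden layer mirrors the abstract; the 80/20 split, error metric and all names are illustrative assumptions rather than the authors' exact protocol.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      def compare_models(factors, responses):
          X_tr, X_te, y_tr, y_te = train_test_split(factors, responses, train_size=0.8, random_state=0)
          mlr = LinearRegression().fit(X_tr, y_tr)
          ann = MLPRegressor(hidden_layer_sizes=(12,), max_iter=10000, random_state=0).fit(X_tr, y_tr)
          def err(model):
              return np.mean(np.abs(model.predict(X_te) - y_te))   # mean absolute prediction error
          return {"MLR": err(mlr), "ANN": err(ann)}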

  15. Image-guided spatial localization of heterogeneous compartments for magnetic resonance

    PubMed Central

    An, Li; Shen, Jun

    2015-01-01

    Purpose: Image-guided localization SPectral Localization Achieved by Sensitivity Heterogeneity (SPLASH) allows rapid measurement of signals from irregularly shaped anatomical compartments without using phase encoding gradients. Here, the authors propose a novel method to address the issue of heterogeneous signal distribution within the localized compartments. Methods: Each compartment was subdivided into multiple subcompartments and their spectra were solved by Tikhonov regularization to enforce smoothness within each compartment. The spectrum of a given compartment was generated by combining the spectra of the components of that compartment. The proposed method was first tested using Monte Carlo simulations and then applied to reconstructing in vivo spectra from irregularly shaped ischemic stroke and normal tissue compartments. Results: Monte Carlo simulations demonstrate that the proposed regularized SPLASH method significantly reduces localization and metabolite quantification errors. In vivo results show that the intracompartment regularization results in ∼40% reduction of error in metabolite quantification. Conclusions: The proposed method significantly reduces localization errors and metabolite quantification errors caused by intracompartment heterogeneous signal distribution. PMID:26328977
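
    The within-compartment smoothness constraint described above is a standard Tikhonov-regularized least-squares problem. A minimal sketch of such a solve follows; the operator names (A mapping sub-compartment signals to measured coil data, L penalizing differences between neighbouring sub-compartments) and the normal-equation form are illustrative assumptions, not the SPLASH implementation.

      import numpy as np

      def tikhonov_solve(A, b, L, lam):
          # minimize ||A s - b||^2 + lam * ||L s||^2   =>   (A^H A + lam L^T L) s = A^H b
          lhs = A.conj().T @ A + lam * (L.T @ L)
          rhs = A.conj().T @ b
          return np.linalg.solve(lhs, rhs)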

  16. The effect of photometric redshift uncertainties on galaxy clustering and baryonic acoustic oscillations

    NASA Astrophysics Data System (ADS)

    Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos

    2018-07-01

    In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAOs). Using analytic expressions and results from 1000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAOs, and the cosmological information in them. We find that (a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; (b) photo-z errors decrease the smearing of BAOs due to non-linear redshift-space distortions (RSDs) by giving less weight to line-of-sight modes; and (c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.

  17. The effect of photometric redshift uncertainties on galaxy clustering and baryonic acoustic oscillations

    NASA Astrophysics Data System (ADS)

    Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos

    2018-04-01

    In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAO). Using analytic expressions and results from 1 000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAO, and the cosmological information in them. We find that: a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; b) photo-z errors decrease the smearing of BAO due to non-linear redshift-space distortions (RSD) by giving less weight to line-of-sight modes; and c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.

  18. Refractive errors among children, adolescents and adults attending eye clinics in Mexico

    PubMed Central

    Gomez-Salazar, Francisco; Campos-Romero, Abraham; Gomez-Campaña, Humberto; Cruz-Zamudio, Cinthia; Chaidez-Felix, Mariano; Leon-Sicairos, Nidia; Velazquez-Roman, Jorge; Flores-Villaseñor, Hector; Muro-Amador, Secundino; Guadron-Llanos, Alma Marlene; Martinez-Garcia, Javier J.; Murillo-Llanes, Joel; Sanchez-Cuen, Jaime; Llausas-Vargas, Alejando; Alapizco-Castro, Gerardo; Irineo-Cabrales, Ana; Graue-Hernandez, Enrique; Ramirez-Luquin, Tito; Canizalez-Roman, Adrian

    2017-01-01

    AIM To assess the proportion of refractive errors in the Mexican population that visited primary care optometry clinics in fourteen states of Mexico. METHODS Refractive data from 676 856 patients aged 6 to 90y were collected from optometry clinics in fourteen states of Mexico between 2014 and 2015. The refractive errors were classified by the spherical equivalent (SE), defined as sphere + ½ cylinder: myopia (SE<-0.50 D), hyperopia (SE>+0.50 D), emmetropia (-0.50≤SE≤+0.50), and astigmatism alone (cylinder≤-0.25 D). Negative cylinder notation was used. RESULTS The proportion (95% confidence interval) among all of the subjects was hyperopia 21.0% (20.9-21.0), emmetropia 40.7% (40.5-40.8), myopia 24.8% (24.7-24.9) and astigmatism alone 13.5% (13.4-13.5). Myopia was the most common refractive error, and its frequency seemed to increase among the young population (10 to 29 years old); hyperopia increased among the aging population (40 to 79 years old), and astigmatism alone showed a decreasing trend with age (6 to 90y; from 19.7% to 10.8%). There was a relationship between age and all refractive errors (approximately 60%, aged 50 and older). The proportion of any clinically important refractive error was higher in males (61.2%) than in females (58.3%; P<0.0001). Among the fourteen states that provided information, the proportion of refractive error varied across different geographical areas of Mexico. CONCLUSION Myopia is the most common refractive error in the population studied. This study provides the first data on refractive error in Mexico. Further programs and studies must be developed to address the refractive error needs of the Mexican population. PMID:28546940

  19. AfterQC: automatic filtering, trimming, error removing and quality control for fastq data.

    PubMed

    Chen, Shifu; Huang, Tanxiao; Zhou, Yanqing; Han, Yue; Xu, Mingyan; Gu, Jia

    2017-03-14

    Applications that require highly accurate sequencing data, especially clinical ones, have to deal with unavoidable sequencing errors. Several tools have been proposed to profile sequencing quality, but few of them can quantify or correct sequencing errors. This unmet requirement motivated us to develop AfterQC, a tool with functions to profile sequencing errors and correct most of them, plus highly automated quality control and data filtering features. Unlike most tools, AfterQC analyses the overlapping of paired sequences for pair-end sequencing data. Based on overlapping analysis, AfterQC can detect and cut adapters, and it furthermore provides a novel function to correct wrong bases in the overlapping regions. Another new feature is to detect and visualise sequencing bubbles, which are commonly found on the flowcell lanes and may cause sequencing errors. Besides normal per-cycle quality and base-content plotting, AfterQC also provides features like polyX (a long sub-sequence of a same base X) filtering, automatic trimming and K-MER based strand bias profiling. For each single FastQ file or pair of FastQ files, AfterQC filters out bad reads, detects and eliminates the sequencer's bubble effects, trims reads at the front and tail, detects sequencing errors and corrects part of them, and finally outputs clean data and generates HTML reports with interactive figures. AfterQC can run in batch mode with multiprocess support; it can run with a single FastQ file, a single pair of FastQ files (for pair-end sequencing), or a folder, in which case all included FastQ files are processed automatically. Based on overlapping analysis, AfterQC can estimate the sequencing error rate and profile the error transform distribution. The results of our error profiling tests show that the error distribution is highly platform dependent. Much more than just another quality control (QC) tool, AfterQC is able to perform quality control, data filtering, error profiling and base correction automatically. Experimental results show that AfterQC helps to eliminate sequencing errors in pair-end sequencing data, providing much cleaner outputs and consequently helping to reduce false-positive variants, especially for low-frequency somatic mutations. While providing rich configurable options, AfterQC can detect and set all the options automatically and requires no arguments in most cases.

  20. On how to avoid input and structural uncertainties corrupt the inference of hydrological parameters using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Hernández, Mario R.; Francés, Félix

    2015-04-01

    One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares, SLS) introduces noise into the estimation of the parameters. The main sources of this noise are input errors and structural deficiencies of the hydrological model. The biased calibrated parameters thus cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes. In other words, one obtains a calibrated hydrological model that works well, but not for the right reasons. Besides, an unsuitable error model yields an unreliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. The hydrological model used is a conceptual distributed model called TETIS, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS. The MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by obtaining the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in the error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that applying BJI with a GA error model improves the robustness of the hydrological parameters (diminishing the model divergence phenomenon) and improves the reliability of the streamflow predictive distribution, with respect to the results of an unsuitable error model such as SLS. Finally, the most likely prediction in a validation period shows similar performance for both the BJI+GA and SLS error models.
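
    The flavour of a general additive error model likelihood can be sketched as below: a flow-dependent bias and standard deviation (non-stationarity) plus AR(1) correlation of the standardized residuals. The parameter names and the linear dependence on simulated flow are assumptions for illustration, and Gaussian innovations are kept for brevity even though the paper's GA model also allows non-normality.

      import numpy as np

      def ga_loglik(obs, sim, mu0, mu1, s0, s1, phi):
          raw = obs - sim
          bias = mu0 + mu1 * sim                     # non-stationary bias
          sigma = s0 + s1 * sim                      # non-stationary standard deviation
          eta = (raw - bias) / sigma                 # standardized residuals
          innov = eta[1:] - phi * eta[:-1]           # AR(1) de-correlation
          var = 1.0 - phi ** 2
          ll = -0.5 * np.sum(innov ** 2 / var + np.log(2 * np.pi * var))
          ll += -0.5 * (eta[0] ** 2 + np.log(2 * np.pi))   # first residual, marginal N(0, 1)
          return ll - np.sum(np.log(sigma))          # Jacobian of the standardization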

  1. Achievable flatness in a large microwave power transmitting antenna

    NASA Technical Reports Server (NTRS)

    Ried, R. C.

    1980-01-01

    A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions, along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting line of sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variations in material properties, particularly part-to-part variation in the coefficient of thermal expansion, are more significant than the actual value.

  2. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  3. The Wind Forecast Improvement Project (WFIP). A Public/Private Partnership for Improving Short Term Wind Energy Forecasts and Quantifying the Benefits of Utility Operations -- the Northern Study Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finley, Cathy

    2014-04-30

    This report contains the results from research aimed at improving short-range (0-6 hour) hub-height wind forecasts in the NOAA weather forecast models through additional data assimilation and model physics improvements for use in wind energy forecasting. Additional meteorological observing platforms including wind profilers, sodars, and surface stations were deployed for this study by NOAA and DOE, and additional meteorological data at or near wind turbine hub height were provided by South Dakota State University and WindLogics/NextEra Energy Resources over a large geographical area in the U.S. Northern Plains for assimilation into NOAA research weather forecast models. The resulting improvements in wind energy forecasts based on the research weather forecast models (with the additional data assimilation and model physics improvements) were examined in many different ways and compared with wind energy forecasts based on the current operational weather forecast models to quantify the forecast improvements important to power grid system operators and wind plant owners/operators participating in energy markets. Two operational weather forecast models (OP_RUC, OP_RAP) and two research weather forecast models (ESRL_RAP, HRRR) were used as the base wind forecasts for generating several different wind power forecasts for the NextEra Energy wind plants in the study area. Power forecasts were generated from the wind forecasts in a variety of ways, from very simple to quite sophisticated, as they might be used by a wide range of both general users and commercial wind energy forecast vendors. The error characteristics of each of these types of forecasts were examined and quantified using bulk error statistics for both the local wind plant and the system aggregate forecasts. The wind power forecast accuracy was also evaluated separately for high-impact wind energy ramp events. The overall bulk error statistics calculated over the first six hours of the forecasts at both the individual wind plant and at the system-wide aggregate level over the one year study period showed that the research weather model-based power forecasts (all types) had lower overall error rates than the current operational weather model-based power forecasts, both at the individual wind plant level and at the system aggregate level. The bulk error statistics of the various model-based power forecasts were also calculated by season and model runtime/forecast hour as power system operations are more sensitive to wind energy forecast errors during certain times of year and certain times of day. The results showed that there were significant differences in seasonal forecast errors between the various model-based power forecasts. The results from the analysis of the various wind power forecast errors by model runtime and forecast hour showed that the forecast errors were largest during the times of day that have increased significance to power system operators (the overnight hours and the morning/evening boundary layer transition periods), but the research weather model-based power forecasts showed improvement over the operational weather model-based power forecasts at these times.

  4. Effect of Vertical Rate Error on Recovery from Loss of Well Clear Between UAS and Non-Cooperative Intruders

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2016-01-01

    When an Unmanned Aircraft System (UAS) encounters an intruder and is unable to maintain required temporal and spatial separation between the two vehicles, it is referred to as a loss of well-clear. In this state, the UAS must make its best attempt to regain separation while maximizing the minimum separation between itself and the intruder. When encountering a non-cooperative intruder (an aircraft operating under visual flight rules without ADS-B or an active transponder) the UAS must rely on the radar system to provide the intruder's location, velocity, and heading information. As many UAS have limited climb and descent performance, vertical position and/or vertical rate errors make it difficult to determine whether an intruder will pass above or below them. To account for that, there is a proposal by RTCA Special Committee 228 to prohibit guidance systems from providing vertical guidance to regain well-clear to UAS in an encounter with a non-cooperative intruder unless their radar system has vertical position error below 175 feet (95%) and vertical velocity errors below 200 fpm (95%). Two sets of fast-time parametric studies were conducted, each with 54000 pairwise encounters between a UAS and non-cooperative intruder, to determine the suitability of offering vertical guidance to regain well clear to a UAS in the presence of radar sensor noise. The UAS was not allowed to maneuver until it received well-clear recovery guidance. The maximum severity of the loss of well-clear was logged and used as the primary indicator of the separation achieved by the UAS. One set of 54000 encounters allowed the UAS to maneuver either vertically or horizontally, while the second permitted horizontal maneuvers only. Comparing the two data sets allowed researchers to see the effect of allowing vertical guidance to a UAS for a particular encounter and vertical rate error. Study results show there is a small reduction in the average severity of a loss of well-clear when vertical maneuvers are suppressed, for all vertical error rate thresholds examined. However, results also show that in roughly 35% of the encounters where a vertical maneuver was selected, forcing the UAS to do a horizontal maneuver instead increased the severity of the loss of well-clear for that encounter. Finally, results showed a small reduction in the number of severe losses of well-clear when the high performance UAS (2000 fpm climb and descent rate) was allowed to maneuver vertically, and the vertical rate error was below 500 fpm. Overall, the results show that using a single vertical rate threshold is not advisable, and that limiting a UAS to horizontal maneuvers when vertical rate errors are above 175 fpm can make a UAS less safe about a third of the time. It is suggested that the hard limit be removed, and system manufacturers instructed to account for their own UAS performance, as well as vertical rate error and encounter geometry, when determining whether or not to provide vertical guidance to regain well-clear.

  5. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar

    With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy of the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
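
    The core idea, blending that is allowed to depend on the weather situation, can be sketched by training a regressor on the individual model forecasts together with atmospheric state features. The choice of gradient boosting, the hyperparameters and the array names below are illustrative assumptions, not the reported implementation.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      # Situation-dependent blending: the learned combination can down-weight a model
      # in the weather situations where that model tends to err.
      def train_blender(model_forecasts, state_features, observed_power):
          X = np.hstack([model_forecasts, state_features])     # forecasts + situation descriptors
          blender = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
          blender.fit(X, observed_power)
          return blender

      def blended_forecast(blender, model_forecasts, state_features):
          X = np.hstack([model_forecasts, state_features])
          return blender.predict(X)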

  6. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and direction of accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.
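
    The four evaluation criteria can be written out explicitly as below. The DA definition used here (fraction of periods in which the forecast moves in the same direction as the actual price) is one common convention and is an assumption about the paper's exact formula.

      import numpy as np

      def rmse(y, yhat):
          return np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2))

      def mae(y, yhat):
          return np.mean(np.abs(np.asarray(y) - np.asarray(yhat)))

      def mape(y, yhat):
          y, yhat = np.asarray(y, float), np.asarray(yhat, float)
          return 100.0 * np.mean(np.abs((y - yhat) / y))

      def directional_accuracy(y, yhat):
          # fraction of months where the forecast change has the same sign as the actual change
          dy, dyhat = np.diff(y), np.diff(yhat)
          return np.mean(np.sign(dy) == np.sign(dyhat))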

  7. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  8. The effect of toe marker placement error on joint kinematics and muscle forces using OpenSim gait simulation.

    PubMed

    Xu, Hang; Merryweather, Andrew; Bloswick, Donald; Mao, Qi; Wang, Tong

    2015-01-01

    Marker placement can be a significant source of error in biomechanical studies of human movement. The toe marker placement error is amplified by footwear since the toe marker placement on the shoe only relies on an approximation of underlying anatomical landmarks. Three total knee replacement subjects were recruited and three self-speed gait trials per subject were collected. The height variation between toe and heel markers of four types of footwear was evaluated from the results of joint kinematics and muscle forces using OpenSim. The reference condition was considered as the same vertical height of toe and heel markers. The results showed that the residual variances for joint kinematics had an approximately linear relationship with toe marker placement error for lower limb joints. Ankle dorsiflexion/plantarflexion is most sensitive to toe marker placement error. The influence of toe marker placement error is generally larger for hip flexion/extension and rotation than hip abduction/adduction and knee flexion/extension. The muscle forces responded to the residual variance of joint kinematics to various degrees based on the muscle function for specific joint kinematics. This study demonstrates the importance of evaluating marker error for joint kinematics and muscle forces when explaining relative clinical gait analysis and treatment intervention.

  9. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) spaced-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various times scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile spaced-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
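
    The error variance separation adapted in this study rests on a simple identity: if the radar estimation error and the gauge area-point sampling error are uncorrelated, the variance of the radar-minus-gauge differences splits into the two components. A hedged statement of that identity, with the notation assumed for illustration, is

      \operatorname{Var}(R - G) \;=\; \sigma_{\mathrm{radar}}^{2} \;+\; \sigma_{\mathrm{area\text{-}point}}^{2},

    where R is the radar area-averaged rain rate, G the gauge point rain rate, and the two terms on the right are the radar estimation error variance and the gauge area-point error variance, respectively.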

  10. Translation fidelity coevolves with longevity.

    PubMed

    Ke, Zhonghe; Mallik, Pramit; Johnson, Adam B; Luna, Facundo; Nevo, Eviatar; Zhang, Zhengdong D; Gladyshev, Vadim N; Seluanov, Andrei; Gorbunova, Vera

    2017-10-01

    Whether errors in protein synthesis play a role in aging has been a subject of intense debate. It has been suggested that rare mistakes in protein synthesis in young organisms may result in errors in the protein synthesis machinery, eventually leading to an increasing cascade of errors as organisms age. Studies that followed generally failed to identify a dramatic increase in translation errors with aging. However, whether translation fidelity plays a role in aging remained an open question. To address this issue, we examined the relationship between translation fidelity and maximum lifespan across 17 rodent species with diverse lifespans. To measure translation fidelity, we utilized sensitive luciferase-based reporter constructs with mutations in an amino acid residue critical to luciferase activity, wherein misincorporation of amino acids at this mutated codon re-activated the luciferase. The frequency of amino acid misincorporation at the first and second codon positions showed strong negative correlation with maximum lifespan. This correlation remained significant after phylogenetic correction, indicating that translation fidelity coevolves with longevity. These results give new life to the role of protein synthesis errors in aging: Although the error rate may not significantly change with age, the basal rate of translation errors is important in defining lifespan across mammals. © 2017 The Authors. Aging Cell published by the Anatomical Society and John Wiley & Sons Ltd.

  11. Acetaminophen attenuates error evaluation in cortex.

    PubMed

    Randles, Daniel; Kam, Julia W Y; Heine, Steven J; Inzlicht, Michael; Handy, Todd C

    2016-06-01

    Acetaminophen has recently been recognized as having impacts that extend into the affective domain. In particular, double blind placebo controlled trials have revealed that acetaminophen reduces the magnitude of reactivity to social rejection, frustration, dissonance and to both negatively and positively valenced attitude objects. Given this diversity of consequences, it has been proposed that the psychological effects of acetaminophen may reflect a widespread blunting of evaluative processing. We tested this hypothesis using event-related potentials (ERPs). Sixty-two participants received acetaminophen or a placebo in a double-blind protocol and completed the Go/NoGo task. Participants' ERPs were observed following errors on the Go/NoGo task, in particular the error-related negativity (ERN; measured at FCz) and error-related positivity (Pe; measured at Pz and CPz). Results show that acetaminophen inhibits the Pe, but not the ERN, and the magnitude of an individual's Pe correlates positively with omission errors, partially mediating the effects of acetaminophen on the error rate. These results suggest that recently documented affective blunting caused by acetaminophen may best be described as an inhibition of evaluative processing. They also contribute to the growing work suggesting that the Pe is more strongly associated with conscious awareness of errors relative to the ERN. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  12. Preliminary GOES-R ABI navigation and registration assessment results

    NASA Astrophysics Data System (ADS)

    Tan, B.; Dellomo, J.; Wolfe, R. E.; Reth, A. D.

    2017-12-01

    The US Geostationary Operational Environmental Satellite - R Series (GOES-R) was launched on November 19, 2016, and was designated GOES-16 upon reaching geostationary orbit ten days later. The Advanced Baseline Imager (ABI) is the primary instrument on the GOES-R series for imaging Earth's surface and atmosphere to aid in weather prediction and climate monitoring. We developed algorithms and software for independent verification of the ABI Image Navigation and Registration (INR). Since late January 2017, four INR metrics have been continuously generated to monitor the ABI INR performance: navigation (NAV) error, channel-to-channel registration (CCR) error, frame-to-frame registration (FFR) error, and within-frame registration (WIFR) error. In this paper, we will describe the fundamental algorithm used for the image registration and briefly discuss the processing flow of the INR Performance Assessment Tool Set (IPATS) developed for ABI INR. The accuracy assessment shows that the IPATS measurement error is about 1/20 of the size of a pixel. Then the GOES-16 NAV assessment results (the primary metric) from January to August 2017 will be presented. The INR has improved over time as post-launch tests were performed and corrections were applied. The mean NAV error of the visible and near infrared (VNIR) channels dropped from 20 μrad in January to around 5 μrad (+/-4 μrad, 1 σ) in June, while the mean NAV error of the long wave infrared (LWIR) channels dropped from around 70 μrad in January to around 5 μrad (+/-15 μrad, 1 σ) in June. A full global ABI image is composed of 22 east-west swaths. The swath-wise NAV error analysis shows that there was some variation in the mean swath-wise NAV errors; the variations are about as much as 20% of the scene mean NAV errors. As expected, the swaths over the tropical area have far fewer valid assessments (matchups) than those in the mid-latitude region due to cloud coverage. It was also found that there was a rotation (clocking) of the LWIR focal plane that was seen in both the NAV and CCR results. The rotation was corrected by an INR update in June 2017. Through deep-dive examinations of the scenes with large mean and/or variation in INR errors, we validated that IPATS is an excellent tool for assessing and improving the GOES-16 ABI INR and is also useful in INR long-term monitoring.

  13. Strong Converse Exponents for a Quantum Channel Discrimination Problem and Quantum-Feedback-Assisted Communication

    NASA Astrophysics Data System (ADS)

    Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.

    2016-06-01

    This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel $\mathcal{N}$ as the optimal Type II error exponent when discriminating between a large number of independent instances of $\mathcal{N}$ and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.

  14. Moisture Forecast Bias Correction in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D.

    1999-01-01

    Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated in an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
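
    The flavour of a sequential bias estimator can be illustrated with a scalar toy: relax a running bias estimate toward each new batch of observed-minus-forecast departures and subtract it from the next background. The gain gamma and scalar form are assumptions for the sketch; the operational GEOS DAS scheme works on gridded fields and is not reproduced here.

      import numpy as np

      def update_bias(bias, departures, gamma=0.1):
          # departures: rawinsonde-minus-forecast moisture values for the latest cycle
          return bias + gamma * (np.mean(departures) - bias)   # relax toward the latest mean departure

      def debias_background(background, bias):
          return background - bias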

  15. ReQON: a Bioconductor package for recalibrating quality scores from next-generation sequencing data

    PubMed Central

    2012-01-01

    Background: Next-generation sequencing technologies have become important tools for genome-wide studies. However, the quality scores that are assigned to each base have been shown to be inaccurate. If the quality scores are used in downstream analyses, these inaccuracies can have a significant impact on the results. Results: Here we present ReQON, a tool that recalibrates the base quality scores from an input BAM file of aligned sequencing data using logistic regression. ReQON also generates diagnostic plots showing the effectiveness of the recalibration. We show that ReQON produces quality scores that are both more accurate, in the sense that they more closely correspond to the probability of a sequencing error, and do a better job of discriminating between sequencing errors and non-errors than the original quality scores. We also compare ReQON to other available recalibration tools and show that ReQON is less biased and performs favorably in terms of quality score accuracy. Conclusion: ReQON is an open source software package, written in R and available through Bioconductor, for recalibrating base quality scores for next-generation sequencing data. ReQON produces a new BAM file with more accurate quality scores, which can improve the results of downstream analysis, and produces several diagnostic plots showing the effectiveness of the recalibration. PMID:22946927

  16. An ILP based Algorithm for Optimal Customer Selection for Demand Response in SmartGrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Prasanna, Viktor K.

    Demand Response (DR) events are initiated by utilities during peak demand periods to curtail consumption. They ensure system reliability and minimize the utility's expenditure. Selection of the right customers and strategies is critical for a DR event. An effective DR scheduling algorithm minimizes the curtailment error, which is the absolute difference between the achieved curtailment value and the target. State-of-the-art heuristics exist for customer selection; however, their curtailment errors are unbounded and can be as high as 70%. In this work, we develop an Integer Linear Programming (ILP) formulation for optimally selecting customers and curtailment strategies that minimize the curtailment error during DR events in SmartGrids. We perform experiments on real world data obtained from the University of Southern California's SmartGrid and show that our algorithm achieves near exact curtailment values with errors in the range of 10^-7 to 10^-5, which are within the range of numerical errors. We compare our results against the state-of-the-art heuristic being deployed in practice in the USC SmartGrid. We show that for the same set of available customer strategy pairs our algorithm performs 10^3 to 10^7 times better in terms of the curtailment errors incurred.
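
    The core of such a formulation can be sketched as a small integer program. The following is a minimal, hypothetical illustration (not the authors' implementation) that minimizes the absolute difference between achieved and target curtailment using the open-source PuLP library; the customer/strategy curtailment values and the target are made-up numbers.

```python
# Hedged sketch of an ILP for DR customer/strategy selection minimizing the
# curtailment error |achieved - target|; data below are hypothetical.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Curtailment (kW) achieved if customer i runs strategy j (hypothetical values).
curtailment = {
    ("c1", "precool"): 12.0, ("c1", "dim_lights"): 4.5,
    ("c2", "precool"): 9.0,  ("c2", "dim_lights"): 3.0,
    ("c3", "precool"): 15.5, ("c3", "dim_lights"): 6.0,
}
target = 25.0  # target curtailment for the DR event (kW)

prob = LpProblem("dr_customer_selection", LpMinimize)

# x[i, j] = 1 if customer i is assigned strategy j.
x = {key: LpVariable(f"x_{key[0]}_{key[1]}", cat=LpBinary) for key in curtailment}
# Auxiliary variable representing the absolute curtailment error.
err = LpVariable("abs_error", lowBound=0)
prob += err  # objective: minimize the curtailment error

achieved = lpSum(curtailment[key] * x[key] for key in curtailment)
# Linearization of |achieved - target| <= err.
prob += achieved - target <= err
prob += target - achieved <= err
# Each customer runs at most one strategy.
for cust in {key[0] for key in curtailment}:
    prob += lpSum(x[key] for key in curtailment if key[0] == cust) <= 1

prob.solve()
print("error:", err.value(), "selection:", [k for k in x if x[k].value() == 1])
```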

  17. Suppression (but Not Reappraisal) Impairs Subsequent Error Detection: An ERP Study of Emotion Regulation's Resource-Depleting Effect

    PubMed Central

    Wang, Yan; Yang, Lixia; Wang, Yan

    2014-01-01

    Past event-related potentials (ERPs) research shows that, after exerting effortful emotion inhibition, the neural correlates of performance monitoring (e.g. error-related negativity) were weakened. An undetermined issue is whether all forms of emotion regulation uniformly impair later performance monitoring. The present study compared the cognitive consequences of two emotion regulation strategies, namely suppression and reappraisal. Participants were instructed to suppress their emotions while watching a sad movie, or to adopt a neutral and objective attitude toward the movie, or to just watch the movie carefully. Then after a mood scale, all participants completed an ostensibly unrelated Stroop task, during which ERPs (i.e. error-related negativity (ERN), post-error positivity (Pe) and N450) were obtained. Reappraisal group successfully decreased their sad emotion, relative to the other two groups. Compared with participants in the control group and the reappraisal group, those who suppressed their emotions during the sad movie showed reduced ERN after error commission. Participants in the suppression group also made more errors in incongruent Stroop trials than the other two groups. There were no significant main effects or interactions of group for reaction time, Pe and N450. Results suggest that reappraisal is both more effective and less resource-depleting than suppression. PMID:24777113

  18. Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.

    PubMed

    Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth

    2016-06-01

    Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.

  19. Suppression (but not reappraisal) impairs subsequent error detection: an ERP study of emotion regulation's resource-depleting effect.

    PubMed

    Wang, Yan; Yang, Lixia; Wang, Yan

    2014-01-01

    Past event-related potentials (ERPs) research shows that, after exerting effortful emotion inhibition, the neural correlates of performance monitoring (e.g. error-related negativity) were weakened. An undetermined issue is whether all forms of emotion regulation uniformly impair later performance monitoring. The present study compared the cognitive consequences of two emotion regulation strategies, namely suppression and reappraisal. Participants were instructed to suppress their emotions while watching a sad movie, or to adopt a neutral and objective attitude toward the movie, or to just watch the movie carefully. Then after a mood scale, all participants completed an ostensibly unrelated Stroop task, during which ERPs (i.e. error-related negativity (ERN), post-error positivity (Pe) and N450) were obtained. Reappraisal group successfully decreased their sad emotion, relative to the other two groups. Compared with participants in the control group and the reappraisal group, those who suppressed their emotions during the sad movie showed reduced ERN after error commission. Participants in the suppression group also made more errors in incongruent Stroop trials than the other two groups. There were no significant main effects or interactions of group for reaction time, Pe and N450. Results suggest that reappraisal is both more effective and less resource-depleting than suppression.

  20. Interaction between motor ability and skill learning in children: Application of implicit and explicit approaches.

    PubMed

    Maxwell, Jon P; Capio, Catherine M; Masters, Rich S W

    2017-05-01

    The benefits of implicit and explicit motor learning approaches in young adults have been studied extensively, but much less in children. This study investigated the relationship between fundamental motor ability and implicit/explicit learning in children using the errorless learning paradigm. First, the motor ability of 261 children (142 boys, 119 girls) aged 9-12 years (M = 9.74, SD = 0.67) was measured. Second, children with motor ability scores in the upper and lower quartile learned a golf-putting skill in either an errorless (implicit) or errorful (explicit) learning condition. Four groups were formed: Errorless High-Ability (n = 13), Errorless Low-Ability (n = 11), Errorful High-Ability (n = 10), and Errorful Low-Ability (n = 11). Learning consisted of 300 practice trials, while testing included a 50-trial retention test, followed by a 50-trial secondary task transfer test, and another 50-trial retention test. The results showed that for high- and low-ability errorless learners, motor performance was unaffected by the secondary task, as was the case for high-ability errorful learners. Low-ability errorful learners performed worse with a secondary task and were significantly poorer than the corresponding high-ability group. These results suggest that implicit motor learning (errorless) may be beneficial for children with low motor ability. The findings also show a trend that children of high motor ability might benefit from learning explicitly (errorful). Further research is recommended to examine the compatibility of implicit and explicit approaches for children of different abilities.

  1. Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197

    PubMed Central

    Bedini, José Luis; Wallace, Jane F.; Pardo, Scott; Petruschke, Thorsten

    2015-01-01

    Background: Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients’ health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Methods: Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. Results: All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs between reference values compared to the other 2 BGMS. Insulin dosing errors were lowest for the Contour Next USB than compared to the other systems. Conclusions: All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and CEG criterion. However, taking together all analyses, differences in performance of potential clinical relevance may be observed. Results showed that Contour Next USB had lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. PMID:26445813
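
    For reference, the MARD statistic used above is simply the mean of the absolute relative differences between meter readings and the laboratory reference, expressed as a percentage. A minimal sketch with invented sample values, not study data:

```python
# Hedged sketch of the mean absolute relative difference (MARD); values are made up.
import numpy as np

reference = np.array([90.0, 145.0, 210.0, 65.0, 300.0])   # mg/dL, hexokinase reference
meter     = np.array([95.0, 138.0, 220.0, 65.0, 285.0])   # mg/dL, BGMS readings

relative_diff = np.abs(meter - reference) / reference      # absolute relative differences
mard = 100.0 * relative_diff.mean()                        # expressed as a percentage
print(f"MARD = {mard:.1f}%")
```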

  2. Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise

    NASA Technical Reports Server (NTRS)

    Sedlak, J.; Hashmall, J.

    1997-01-01

    Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
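
    The exponentially-correlated (first-order Gauss-Markov) noise model mentioned above can be illustrated with a short simulation. This is a hedged sketch under assumed parameter values, not the paper's filter; in a Kalman filter such a state would be appended to the state vector with the same transition coefficient and process noise.

```python
# Hedged sketch of a first-order Gauss-Markov (exponentially correlated) error state,
# the kind used to absorb systematic field model errors; tau, sigma, dt are illustrative.
import numpy as np

dt, tau, sigma = 1.0, 300.0, 0.5           # time step [s], correlation time [s], steady-state std
phi = np.exp(-dt / tau)                    # state transition coefficient for the correlated error
q = sigma**2 * (1.0 - phi**2)              # process noise variance keeping stationary Var(b) = sigma^2

rng = np.random.default_rng(0)
b = np.zeros(20000)
for k in range(1, b.size):
    # In a Kalman filter, phi goes on the diagonal of the transition matrix for this
    # state and q into the corresponding entry of the process noise covariance.
    b[k] = phi * b[k - 1] + rng.normal(scale=np.sqrt(q))

print("sample std of correlated error:", b.std())
```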

  3. Predictive error detection in pianists: a combined ERP and motion capture study

    PubMed Central

    Maidhof, Clemens; Pitkäniemi, Anni; Tervaniemi, Mari

    2013-01-01

    Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported pre-error negativity already occurring approximately 70–100 ms before the error had been committed and audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error-monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an upcoming error. PMID:24133428

  4. Adaptive color halftoning for minimum perceived error using the blue noise mask

    NASA Astrophysics Data System (ADS)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that there are no conventional screen angle or Moire patterns produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes will be studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We will demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying 2 mutually exclusive BNMs on two different color planes and applying an adaptive scheme on the other planes to reduce color error. We will show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
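
    To make the mask-based thresholding concrete, here is a minimal sketch (an illustration under assumed data, not the authors' algorithm) of the shift scheme: each color plane is compared against a spatially shifted copy of a single mask. A random array stands in for a true blue noise mask.

```python
# Hedged sketch of mask-based halftoning with per-plane mask shifts ("shift scheme").
import numpy as np

rng = np.random.default_rng(1)
mask = rng.random((64, 64))                     # placeholder for a real 64x64 blue noise mask
image = rng.random((256, 256, 3))               # continuous-tone color planes in [0, 1]

shifts = [(0, 0), (32, 0), (0, 32)]             # per-plane shifts to decorrelate dot patterns
halftone = np.empty_like(image, dtype=bool)
for plane, (dy, dx) in enumerate(shifts):
    shifted = np.roll(mask, (dy, dx), axis=(0, 1))
    tiled = np.tile(shifted, (image.shape[0] // 64, image.shape[1] // 64))
    halftone[:, :, plane] = image[:, :, plane] > tiled   # dot placed where tone exceeds threshold

print("dot coverage per plane:", halftone.mean(axis=(0, 1)))
```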

  5. Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained.

    PubMed

    Yang, Yana; Hua, Changchun; Guan, Xinping

    2016-03-01

    Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the work performance of such teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; for example, tele-surgery requires satisfactorily high speed and high-precision control to guarantee the patient's health status. To obtain satisfactory performance, error constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, high performance, such as high convergence speed, small overshoot, and an arbitrarily small predefined residual synchronization error, can be achieved simultaneously. Nevertheless, like many classical control schemes, error constrained control achieves only asymptotic/exponential convergence, i.e., the synchronization errors converge to zero only as time goes to infinity. It is clear that finite time convergence is more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite time control method is developed in this paper for the teleoperation system with constrained position error. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove the stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are also presented to show the effectiveness of the proposed method.

  6. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

    PubMed Central

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei

    2018-01-01

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
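
    The overall idea of swarm-optimized SVM hyperparameters can be sketched with plain PSO (the paper's NAPSO additionally mixes in natural selection and simulated annealing). The following is a hedged, self-contained illustration with a synthetic error series, not the authors' code or data.

```python
# Hedged sketch: tune SVR (C, gamma) with a plain particle swarm to predict a
# synthetic "dynamic measurement error" series; all parameters are illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 400).reshape(-1, 1)
err_series = np.sin(t).ravel() + 0.1 * rng.normal(size=t.shape[0])   # synthetic error signal
X_tr, X_te, y_tr, y_te = train_test_split(t, err_series, test_size=0.3, random_state=0)

def fitness(log_params):
    c, gamma = np.exp(log_params)                                     # search in log space
    model = SVR(C=c, gamma=gamma).fit(X_tr, y_tr)
    return np.sqrt(mean_squared_error(y_te, model.predict(X_te)))     # RMSE to minimize

n_particles, n_iter = 12, 30
pos = rng.uniform(-3, 3, size=(n_particles, 2))          # particles in (log C, log gamma) space
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 6)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (C, gamma):", np.exp(gbest), "RMSE:", pbest_val.min())
```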

  7. Is comprehension necessary for error detection? A conflict-based account of monitoring in speech production

    PubMed Central

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in the aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients’ error-detection ability and the model’s characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error-detection in linguistic, as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015

  8. Sonority contours in word recognition

    NASA Astrophysics Data System (ADS)

    McLennan, Sean

    2003-04-01

    Contrary to the Generativist distinction between competence and performance which asserts that speech or perception errors are due to random, nonlinguistic factors, it seems likely that errors are principled and possibly governed by some of the same constraints as language. A preliminary investigation of errors modeled after the child's "Chain Whisper" game (a degraded stimulus task) suggests that a significant number of recognition errors can be characterized as an improvement in syllable sonority contour towards the linguistically least-marked, voiceless-stop-plus-vowel syllable. An independent study of sonority contours showed that approximately half of the English lexicon can be uniquely identified by their contour alone. Additionally, "sororities" (groups of words that share a single sonority contour), surprisingly, show no correlation to familiarity or frequency in either size or membership. Together these results imply that sonority contours may be an important factor in word recognition and in defining word "neighborhoods." Moreover, they suggest that linguistic markedness constraints may be more prevalent in performance-related phenomena than previously accepted.

  9. Comparing Methods to Assess Intraobserver Measurement Error of 3D Craniofacial Landmarks Using Geometric Morphometrics Through a Digitizer Arm.

    PubMed

    Menéndez, Lumila Paula

    2017-05-01

    Intraobserver error (INTRA-OE) is the difference between repeated measurements of the same variable made by the same observer. The objective of this work was to evaluate INTRA-OE from 3D landmarks registered with a Microscribe, in different datasets: (A) the 3D coordinates, (B) linear measurements calculated from A, and (C) the first six principal component axes. INTRA-OE was analyzed by digitizing 42 landmarks from 23 skulls in three events two weeks apart from each other. Systematic error was tested through repeated measures ANOVA (ANOVA-RM), while random error was tested through the intraclass correlation coefficient. Results showed that the largest differences between the three observations were found in the first dataset. Some anatomical points, such as nasion, ectoconchion, temporosphenoparietal, asterion, and temporomandibular, presented the highest INTRA-OE. In the second dataset, local distances had higher INTRA-OE than global distances, while the third dataset showed the lowest INTRA-OE. © 2016 American Academy of Forensic Sciences.

  10. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of data points in kernel space plus a constant. Thus, the k-means centers of data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
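
    The Nyström construction itself is compact enough to sketch. Below is a hedged illustration (not the authors' code) that uses ordinary k-means centers as landmark points for a Gaussian kernel; the paper clusters in kernel space, so plain k-means here is only a simple stand-in.

```python
# Hedged sketch of Nystrom approximation of a Gaussian kernel matrix with k-means landmarks.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
gamma, m = 0.1, 25                                   # kernel width and number of landmarks

centers = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
C = rbf_kernel(X, centers, gamma=gamma)              # n x m cross kernel
W = rbf_kernel(centers, centers, gamma=gamma)        # m x m landmark kernel
K_approx = C @ np.linalg.pinv(W) @ C.T               # Nystrom approximation of K

K = rbf_kernel(X, X, gamma=gamma)
print("relative Frobenius error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))
```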

  11. Neurophysiological correlates of error monitoring and inhibitory processing in juvenile violent offenders.

    PubMed

    Vilà-Balló, Adrià; Hdez-Lafuente, Prado; Rostan, Carles; Cunillera, Toni; Rodriguez-Fornells, Antoni

    2014-10-01

    Performance monitoring is crucial for well-adapted behavior. Offenders typically have a pervasive repetition of harmful-impulsive behaviors, despite an awareness of the negative consequences of their actions. However, the link between performance monitoring and aggressive behavior in juvenile offenders has not been closely investigated. Event-related brain potentials (ERPs) were used to investigate performance monitoring in juvenile non-psychopathic violent offenders compared with a well-matched control group. Two ERP components associated with error monitoring, error-related negativity (ERN) and error-positivity (Pe), and two components related to inhibitory processing, the stop-N2 and stop-P3 components, were evaluated using a combined flanker-stop-signal task. The results showed that the amplitudes of the ERN, the stop-N2, the stop-P3, and the standard P3 components were clearly reduced in the offenders group. Remarkably, no differences were observed for the Pe. At the behavioral level, slower stop-signal reaction times were identified for offenders, which indicated diminished inhibitory processing. The present results suggest that the monitoring of one's own behavior is affected in juvenile violent offenders. Specifically, we determined that different aspects of executive function were affected in the studied offenders, including error processing (reduced ERN) and response inhibition (reduced N2 and P3). However, error awareness and compensatory post-error adjustment processes (error correction) were unaffected. The current pattern of results highlights the role of performance monitoring in the acquisition and maintenance of externalizing harmful behavior that is frequently observed in juvenile offenders. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise, where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity), we provide closed form solutions to the estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
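
    For the special case noted above, in which each control point's covariance is a scalar multiple of the identity, generalized least squares reduces to a weighted least squares fit of the affine parameters. A hedged sketch with simulated control points (illustrative values only, not the paper's estimator or data):

```python
# Hedged sketch: weighted least squares affine registration with per-point scalar covariances.
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0, 100, size=(30, 2))                       # control points in image 1
A_true, t_true = np.array([[1.02, 0.05], [-0.04, 0.98]]), np.array([3.0, -2.0])
sigma2 = rng.uniform(0.01, 0.5, size=30)                    # per-point localization variances
q = p @ A_true.T + t_true + rng.normal(size=(30, 2)) * np.sqrt(sigma2)[:, None]

# Solve q ~ [p, 1] * theta with weights 1/sigma2 (heteroscedastic, scalar covariances).
D = np.hstack([p, np.ones((30, 1))])                        # design matrix
w = 1.0 / sigma2
theta = np.linalg.solve(D.T @ (w[:, None] * D), D.T @ (w[:, None] * q))
A_hat, t_hat = theta[:2].T, theta[2]

print("estimated A:\n", A_hat, "\nestimated t:", t_hat)
```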

  13. Alternating phase-shift mask and binary mask for 45-nm node and beyond: the impact on the mask error control

    NASA Astrophysics Data System (ADS)

    Kojima, Yosuke; Shirasaki, Masanori; Chiba, Kazuaki; Tanaka, Tsuyoshi; Inazuki, Yukio; Yoshikawa, Hiroki; Okazaki, Satoshi; Iwase, Kazuya; Ishikawa, Kiichi; Ozawa, Ken

    2007-05-01

    For the 45 nm node and beyond, the alternating phase-shift mask (alt. PSM), one of the most anticipated resolution enhancement technologies (RET) because of its high image contrast and small mask error enhancement factor (MEEF), and the binary mask (BIM) attract attention. Reducing CD errors, registration errors, and defects are their critical issues. As a solution, a new blank for the alt. PSM and BIM has been developed. The top film of the new blank is thin Cr, and the antireflection film and shielding film composed of MoSi are deposited under the Cr film. The mask CD performance is evaluated for through pitch, CD linearity, CD uniformity, global loading, resolution, and pattern fidelity, and the blank performance is evaluated for optical density, reflectivity, sheet resistance, flatness, and defect level. It is found that the performance of the new blank is equal to or better than that of the conventional blank in all items. The mask CD performance shows significant improvement. The lithography performance of the new blank is confirmed by wafer printing and AIMS measurement. A full dry type alt. PSM has been used as the test plate, and the test results show that the new blank can almost meet the specifications of pi-0 CD difference, CD uniformity, and process margin for the 45 nm node. Additionally, the new blank shows better pattern fidelity than the conventional blank on wafer. AIMS results are almost the same as the wafer results except for the narrowest pattern. Considering the results above, this new blank can reduce the mask error factors of alt. PSM and BIM for the 45 nm node and beyond.

  14. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    PubMed

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  15. Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng

    2015-08-15

    This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. The fluctuation of cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature could be suppressed by setting the optical depth close to one. In our experiment, we adjust the probe frequency to obtain various optical depths and then measure the variation of scale factor with respect to the corresponding cell temperature changes. Our experimental results show a good agreement with our theoretical analysis. Under our experimental condition, the error has been reduced significantly compared with those when the probe wavelength is adjusted to maximize the probe signal. The cost of this method is the reduction of the scale factor of the magnetometer. However, according to our analysis, it only has a minor effect on the sensitivity under proper operating parameters.

  16. The impact of different background errors in the assimilation of satellite radiances and in-situ observational data using WRFDA for three rainfall events over Iran

    NASA Astrophysics Data System (ADS)

    Zakeri, Zeinab; Azadi, Majid; Ghader, Sarmad

    2018-01-01

    Satellite radiances and in-situ observations are assimilated through Weather Research and Forecasting Data Assimilation (WRFDA) system into Advanced Research WRF (ARW) model over Iran and its neighboring area. Domain specific background error based on x and y components of wind speed (UV) control variables is calculated for WRFDA system and some sensitivity experiments are carried out to compare the impact of global background error and the domain specific background errors, both on the precipitation and 2-m temperature forecasts over Iran. Three precipitation events that occurred over the country during January, September and October 2014 are simulated in three different experiments and the results for precipitation and 2-m temperature are verified against the verifying surface observations. Results show that using domain specific background error improves 2-m temperature and 24-h accumulated precipitation forecasts consistently, while global background error may even degrade the forecasts compared to the experiments without data assimilation. The improvement in 2-m temperature is more evident during the first forecast hours and decreases significantly as the forecast length increases.

  17. Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System

    PubMed Central

    Yu, Fei; Sun, Qian

    2014-01-01

    Due to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key technologies, the rotating angular rate seriously influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform in this paper. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115

  18. Local and global evaluation for remote sensing image segmentation

    NASA Astrophysics Data System (ADS)

    Su, Tengfei; Zhang, Shengwei

    2017-08-01

    In object-based image analysis, how to produce accurate segmentation is usually a very important issue that needs to be solved before image classification or target recognition. The study of segmentation evaluation methods is key to solving this issue. Almost all of the existing evaluation strategies focus only on global performance assessment. However, these methods are ineffective in situations where two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that can both locally and globally quantify segmentation incorrectness. In doing so, region-overlapping metrics are utilized to quantify each reference geo-object's over- and under-segmentation errors. These quantified error values are used to produce segmentation error maps, which have effective illustrative power to delineate local segmentation error patterns. The error values for all of the reference geo-objects are aggregated through area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high resolution images showed that the global evaluation part of the proposed approach was almost as effective as the other two global evaluation methods, and the local part was a useful complement for comparing different segmentation results.
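
    A hedged sketch of the kind of per-object, region-overlap bookkeeping described above (not the paper's exact metrics): for each reference object, the best-matching segment is found, over- and under-segmentation errors are computed from the overlap, and global indicators are formed by area-weighted summation. The tiny label arrays are toy data.

```python
# Hedged sketch of per-object over/under-segmentation errors with area-weighted aggregation.
import numpy as np

reference = np.array([[1, 1, 2, 2],
                      [1, 1, 2, 2],
                      [3, 3, 3, 0]])
segmented = np.array([[1, 1, 2, 2],
                      [1, 2, 2, 2],
                      [3, 3, 0, 0]])

local_errors, areas = {}, {}
for obj in np.unique(reference[reference > 0]):
    ref_mask = reference == obj
    # Best-matching segment = the labeled segment with the largest overlap with this object.
    seg_ids, counts = np.unique(segmented[ref_mask & (segmented > 0)], return_counts=True)
    seg_mask = segmented == seg_ids[counts.argmax()]
    inter = np.logical_and(ref_mask, seg_mask).sum()
    over = 1.0 - inter / ref_mask.sum()     # part of the object not covered by its segment (split)
    under = 1.0 - inter / seg_mask.sum()    # part of the segment spilling outside the object (merge)
    local_errors[obj] = (over, under)       # entries of the local segmentation error map
    areas[obj] = ref_mask.sum()

total = sum(areas.values())
global_over = sum(areas[o] * local_errors[o][0] for o in areas) / total
global_under = sum(areas[o] * local_errors[o][1] for o in areas) / total
print(local_errors, global_over, global_under)
```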

  19. Monitoring of land subsidence and ground fissures in Xian, China 2005-2006: Mapped by sar Interferometry

    USGS Publications Warehouse

    Zhao, C.Y.; Zhang, Q.; Ding, X.-L.; Lu, Z.; Yang, C.S.; Qi, X.M.

    2009-01-01

    The City of Xian, China, has been experiencing significant land subsidence and ground fissure activity since the 1960s, which has brought various severe geohazards, including damage to buildings, bridges, and other facilities. Monitoring of land subsidence and ground fissure activity can provide useful information for assessing the extent of, and mitigating, such geohazards. In order to achieve robust Synthetic Aperture Radar Interferometry (InSAR) results, six interferometric pairs of Envisat ASAR data covering 2005–2006 are first collected to analyze the InSAR processing errors, such as temporal and spatial decorrelation error, external DEM error, atmospheric error, and unwrapping error. Then the annual subsidence rate during 2005–2006 is calculated by weighted averaging of two pairs of D-InSAR results with similar time spans. Lastly, GPS measurements are applied to calibrate the InSAR results, and centimeter precision is achieved. As for the ground fissure monitoring, five InSAR cross-sections are designed to demonstrate the relative subsidence difference across ground fissures. In conclusion, the final InSAR subsidence map during 2005–2006 shows four large subsidence zones in Xian hi-tech zones in the western, eastern, and southern suburbs of Xian City, among which two subsidence cones are newly detected and two ground fissures are deduced to have extended westward in the Yuhuazhai subsidence cone. This study shows that the land subsidence and ground fissures are highly correlated spatially and temporally, and both are correlated with hi-tech zone construction in Xian during 2005–2006.

  20. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between the HUST08a and HUST11a models, and between the HUST04 and HUST05 models, are all smaller than 1 cm, which demonstrates that geophysical model error only slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission with range rate accuracy of 2.5 × 10^-7 m/s, observation error is the main reason for stripe error. However, when the range rate accuracy improves to 5.0 × 10^-8 m/s in the future mission, geophysical model error will be the main source of stripe error, which will limit the accuracy and spatial resolution of temporal gravity models. Therefore, observation error should be the primary error source taken into account at the current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for the future mission.

  1. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied with regard to misalignment of the interrogation beam and the frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
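
    The beat-frequency extraction step lends itself to a short illustration. The following hedged sketch (invented signal parameters, not the authors' data or code) estimates the beat frequency of a damped, LITA-like signal from the peak of a zero-padded DFT and reports the resulting error.

```python
# Hedged sketch: beat-frequency estimation of a short, damped signal via the DFT peak.
import numpy as np

fs, f_beat, tau = 5.0e9, 180.0e6, 40.0e-9            # sample rate, true beat frequency, damping time
t = np.arange(0, 200e-9, 1 / fs)
signal = np.exp(-t / tau) * np.cos(2 * np.pi * f_beat * t)

# Zero-padding refines the frequency grid but not the underlying resolution limit.
spectrum = np.abs(np.fft.rfft(signal, n=8 * signal.size))
freqs = np.fft.rfftfreq(8 * signal.size, d=1 / fs)
f_est = freqs[spectrum.argmax()]

print(f"true {f_beat/1e6:.1f} MHz, DFT estimate {f_est/1e6:.1f} MHz, "
      f"error {100 * (f_est - f_beat) / f_beat:.2f}%")
```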

  2. Boundary overlap for medical image segmentation evaluation

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina

    2017-03-01

    All medical image segmentation algorithms need to be validated and compared, and yet no evaluation framework is widely accepted within the imaging community. Collections of segmentation results often need to be compared and ranked by their effectiveness. Evaluation measures which are popular in the literature are based on region overlap or boundary distance. None of these are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, shape) but no single measure covers all error types. We introduce a new family of measures, with hybrid characteristics. These measures quantify similarity/difference of segmented regions by considering their overlap around the region boundaries. This family is more sensitive than other measures in the literature to combinations of segmentation error types. We compare measure performance on collections of segmentation results sourced from carefully compiled 2D synthetic data, and also on 3D medical image volumes. We show that our new measure: (1) penalises errors successfully, especially those around region boundaries; (2) gives a low similarity score when existing measures disagree, thus avoiding overly inflated scores; and (3) scores segmentation results over a wider range of values. We consider a representative measure from this family and the effect of its only free parameter on error sensitivity, typical value range, and running time.
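
    One simple member of such a boundary-oriented family can be sketched as follows (an assumption for illustration, not the paper's exact definition): extract the region boundaries, dilate them by a tolerance, and combine the mutual boundary overlap into an F1-style score.

```python
# Hedged sketch of a boundary-overlap similarity score with a dilation tolerance.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary(mask):
    return mask & ~binary_erosion(mask)                    # one-pixel-wide region boundary

def boundary_overlap_score(seg, ref, theta=2):
    b_seg, b_ref = boundary(seg), boundary(ref)
    zone_seg = binary_dilation(b_seg, iterations=theta)    # tolerance band around each boundary
    zone_ref = binary_dilation(b_ref, iterations=theta)
    precision = (b_seg & zone_ref).sum() / max(b_seg.sum(), 1)
    recall = (b_ref & zone_seg).sum() / max(b_ref.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

ref = np.zeros((40, 40), dtype=bool)
ref[10:30, 10:30] = True
seg = np.zeros((40, 40), dtype=bool)
seg[12:32, 10:30] = True                                   # segmentation shifted by two pixels
print("boundary overlap score:", round(boundary_overlap_score(seg, ref), 3))
```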

  3. Corrections of clinical chemistry test results in a laboratory information system.

    PubMed

    Wang, Sihe; Ho, Virginia

    2004-08-01

    The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. To characterize the errors that have caused corrections to be made to pediatric clinical chemistry results in the laboratory information system, Misys. To provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system, Misys. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to the corrections. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory. This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. The results also suggested that a direct interface of the instruments to the laboratory information system has favorable effects on reducing laboratory errors.

  4. Assessment of the relative merits of a few methods to detect evolutionary trends.

    PubMed

    Laurin, Michel

    2010-12-01

    Some of the most basic questions about the history of life concern evolutionary trends. These include determining whether or not metazoans have become more complex over time, whether or not body size tends to increase over time (the Cope-Depéret rule), or whether or not brain size has increased over time in various taxa, such as mammals and birds. Despite the proliferation of studies on such topics, assessment of the reliability of results in this field is hampered by the variability of techniques used and the lack of statistical validation of these methods. To solve this problem, simulations are performed using a variety of evolutionary models (gradual Brownian motion, speciational Brownian motion, and Ornstein-Uhlenbeck), with or without a drift of variable amplitude, with variable variance of tips, and with bounds placed close or far from the starting values and final means of simulated characters. These are used to assess the relative merits (power, Type I error rate, bias, and mean absolute value of error on slope estimate) of several statistical methods that have recently been used to assess the presence of evolutionary trends in comparative data. Results show widely divergent performance of the methods. The simple, nonphylogenetic regression (SR) and variance partitioning using phylogenetic eigenvector regression (PVR) with a broken stick selection procedure have greatly inflated Type I error rate (0.123-0.180 at a 0.05 threshold), which invalidates their use in this context. However, they have the greatest power. Most variants of Felsenstein's independent contrasts (FIC; five of which are presented) have adequate Type I error rate, although two have a slightly inflated Type I error rate with at least one of the two reference trees (0.064-0.090 error rate at a 0.05 threshold). The power of all contrast-based methods is always much lower than that of SR and PVR, except under Brownian motion with a strong trend and distant bounds. Mean absolute value of error on slope of all FIC methods is slightly higher than that of phylogenetic generalized least squares (PGLS), SR, and PVR. PGLS performs well, with low Type I error rate, low error on regression coefficient, and power comparable with some FIC methods. Four variants of skewness analysis are examined, and a new method to assess significance of results is presented. However, all have consistently low power, except in rare combinations of trees, trend strength, and distance between final means and bounds. Globally, the results clearly show that FIC-based methods and PGLS are globally better than nonphylogenetic methods and variance partitioning with PVR. FIC methods and PGLS are sensitive to the model of evolution (and, hence, to branch length errors). Our results suggest that regressing raw character contrasts against raw geological age contrasts yields a good combination of power and Type I error rate. New software to facilitate batch analysis is presented.

  5. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, combined with error models in a Bayesian joint probability framework, is presented to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connections amongst model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability when compared to the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.

  6. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.

  7. A review on Black-Scholes model in pricing warrants in Bursa Malaysia

    NASA Astrophysics Data System (ADS)

    Gunawan, Nur Izzaty Ilmiah Indra; Ibrahim, Siti Nur Iqmal; Rahim, Norhuda Abdul

    2017-01-01

    This paper studies the accuracy of the Black-Scholes (BS) model and the dilution-adjusted Black-Scholes (DABS) model in pricing warrants traded in the Malaysian market. The Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used to compare the two models. Results show that the DABS model is more accurate than the BS model for the selected data.
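
    For readers who want to reproduce this kind of comparison, the sketch below shows the plain Black-Scholes call price together with the MAE and MAPE metrics. It is only a minimal illustration with hypothetical inputs; the dilution adjustment used by the DABS model is not reproduced here.

        import numpy as np
        from scipy.stats import norm

        def bs_call(S, K, r, sigma, T):
            """Plain Black-Scholes price of a European call."""
            d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
            d2 = d1 - sigma * np.sqrt(T)
            return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

        def mae(model, market):
            return np.mean(np.abs(model - market))

        def mape(model, market):
            return np.mean(np.abs((model - market) / market)) * 100.0

        # Hypothetical warrant quotes (market) versus model prices
        strikes = np.array([1.20, 1.50, 1.80])
        market = np.array([0.52, 0.31, 0.18])
        model = bs_call(S=1.50, K=strikes, r=0.03, sigma=0.35, T=0.5)
        print(mae(model, market), mape(model, market))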

  8. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then introduce a method to check the errors in the input nodes of the decoder using the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Because the error-checking equations admit multiple solutions, we formulate a CRC-aided optimization problem of finding the optimal solution with three different objective functions, so as to improve the accuracy of error checking. In addition, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than some existing decoding algorithms with the same code length. PMID:25540813

  9. An empirical examination of WISE/NEOWISE asteroid analysis and results

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2018-11-01

    Asteroid observations by the WISE space telescope and the analysis of those observations by the NEOWISE project have provided more information about the diameter, albedo, and other properties of approximately 164,000 asteroids, more than all other sources combined. The raw data set from this mission will likely be the largest and most important such data on asteroids available for many years. To put this trove of data to productive use, we must understand its strengths and weaknesses, and we need clear and reproducible methods for analyzing the data set. This study critically examines the WISE observational data and the NEOWISE results published in both the original papers and the NASA Planetary Data System (PDS). There seem to be multiple areas where the analysis might benefit from improvement or independent verification. The NEOWISE results were obtained by the application of 10 different modeling methods, many of which are not adequately explained or even defined, to 12 different combinations of WISE band data. More than half of NEOWISE results are based on a single band of data. The majority of curve fits to the data in the NEOWISE results are of poor quality, frequently missing most or all of the data points on which they are based. Complete misses occur for about 30% of single-band results, and among the results derived from the most common multiple-band combinations, about 43% miss all data points in at least one band. The NEOWISE data analysis relies on assumptions that are in many cases inconsistent with each other. A substantial fraction of WISE data was systematically excluded from the NEOWISE analysis. Building on methods developed by Hanuš et al. (2015), I show that error estimates for the WISE observational data were not well characterized, and all observations have true uncertainty at least a factor of 1.3-2.5 times larger than previously described, depending on the band. I also show that the error distribution is not well fit by a normal distribution. These findings are important because the Monte Carlo error-analysis method used by the NEOWISE project depends on both the observational errors and the normal distribution. An empirical comparison of published NEOWISE diameters to those in the literature that were estimated by using radar, occultation, or spacecraft (ROS) measurements shows that, for 129 results involving 105 asteroids, the NEOWISE diameters presented in tables of thermal-modeling results exactly match prior ROS results from the literature. While these are only a tiny fraction (0.06%) of the asteroids analyzed, they are important because they represent the only independent check on NEOWISE diameter accuracy. After removing the exact matches and adding additional ROS results, I find that the accuracy of diameter estimates for NEOWISE results depends strongly on the choice of data bands and on which of the 10 models was used. I show that systematic errors in the diameter estimates are much larger than previously described and range from -5% to +23%. In addition, random errors range from -15% to +19% when all four WISE bands were used, and from -39% to +57% in cases employing only the W2 band. The empirical results presented here show that much work remains to be done in analyzing data from the WISE/NEOWISE mission and interpreting it for asteroid science.

  10. Pharmacogenetic excitation of dorsomedial prefrontal cortex restores fear prediction error.

    PubMed

    Yau, Joanna Oi-Yue; McNally, Gavan P

    2015-01-07

    Pavlovian conditioning involves encoding the predictive relationship between a conditioned stimulus (CS) and an unconditioned stimulus, so that synaptic plasticity and learning are instructed by prediction error. Here we used pharmacogenetic techniques to show a causal relation between activity of rat dorsomedial prefrontal cortex (dmPFC) neurons and fear prediction error. We expressed the excitatory hM3Dq designer receptor exclusively activated by a designer drug (DREADD) in dmPFC and isolated actions of prediction error by using an associative blocking design. Rats were trained to fear the visual CS (CSA) in stage I via pairings with footshock. Then in stage II, rats received compound presentations of visual CSA and auditory CS (CSB) with footshock. This prior fear conditioning of CSA reduced the prediction error during stage II to block fear learning to CSB. The group of rats that received the AAV-hSYN-eYFP vector and was treated with clozapine-N-oxide (CNO; 3 mg/kg, i.p.) before stage II showed blocking when tested in the absence of CNO the next day. In contrast, the groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq and were treated with CNO before stage II training did not show blocking; learning toward CSB was restored. This restoration of prediction error and fear learning was specific to the injection of CNO because groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq and were injected with vehicle before stage II training did show blocking. These effects were not attributable to the DREADD manipulation enhancing learning or arousal, increasing fear memory strength or asymptotic levels of fear learning, or altering fear memory retrieval. Together, these results identify a causal role for dmPFC in a signature of adaptive behavior: using the past to predict future danger and learning from errors in these predictions. Copyright © 2015 the authors.

  11. Calculation and Error Analysis of a Digital Elevation Model of Hofsjokull, Iceland from SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Barton, Jonathan S.; Hall, Dorothy K.; Sigurðsson, Oddur; Williams, Richard S., Jr.; Smith, Laurence C.; Garvin, James B.

    1999-01-01

    Two ascending European Space Agency (ESA) Earth Resources Satellites (ERS)-1/-2 tandem-mode, synthetic aperture radar (SAR) pairs are used to calculate the surface elevation of Hofsjokull, an ice cap in central Iceland. The motion component of the interferometric phase is calculated using the 30 arc-second resolution USGS GTOPO30 global digital elevation product and one of the ERS tandem pairs. The topography is then derived by subtracting the motion component from the other tandem pair. In order to assess the accuracy of the resultant digital elevation model (DEM), a geodetic airborne laser-altimetry swath is compared with the elevations derived from the interferometry. The DEM is also compared with elevations derived from a digitized topographic map of the ice cap from the University of Iceland Science Institute. Results show that low temporal correlation is a significant problem for the application of interferometry to small, low-elevation ice caps, even over a one-day repeat interval, and especially at the higher elevations. Results also show that an uncompensated error in the phase, ramping from northwest to southeast, present after tying the DEM to ground-control points, has resulted in a systematic error across the DEM.

  12. Calculation and error analysis of a digital elevation model of Hofsjokull, Iceland, from SAR interferometry

    USGS Publications Warehouse

    Barton, Jonathan S.; Hall, Dorothy K.; Sigurðsson, Oddur; Williams, Richard S.; Smith, Laurence C.; Garvin, James B.; Taylor, Susan; Hardy, Janet

    1999-01-01

    Two ascending European Space Agency (ESA) Earth Resources Satellites (ERS)-1/-2 tandem-mode, synthetic aperture radar (SAR) pairs are used to calculate the surface elevation of Hofsjokull, an ice cap in central Iceland. The motion component of the interferometric phase is calculated using the 30 arc-second resolution USGS GTOPO30 global digital elevation product and one of the ERS tandem pairs. The topography is then derived by subtracting the motion component from the other tandem pair. In order to assess the accuracy of the resultant digital elevation model (DEM), a geodetic airborne laser-altimetry swath is compared with the elevations derived from the interferometry. The DEM is also compared with elevations derived from a digitized topographic map of the ice cap from the University of Iceland Science Institute. Results show that low temporal correlation is a significant problem for the application of interferometry to small, low-elevation ice caps, even over a one-day repeat interval, and especially at the higher elevations. Results also show that an uncompensated error in the phase, ramping from northwest to southeast, present after tying the DEM to ground-control points, has resulted in a systematic error across the DEM.

  13. A hybrid frame concealment algorithm for H.264/AVC.

    PubMed

    Yan, Bo; Gharavi, Hamid

    2010-01-01

    In packet-based video transmissions, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed in order to combat channel errors; however, most of the existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. In order to resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame, and it is able to provide more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.

  14. Evaluation of snow cover and snow depth on the Qinghai-Tibetan Plateau derived from passive microwave remote sensing

    NASA Astrophysics Data System (ADS)

    Dai, Liyun; Che, Tao; Ding, Yongjian; Hao, Xiaohua

    2017-08-01

    Snow cover on the Qinghai-Tibetan Plateau (QTP) plays a significant role in the global climate system and is an important water resource for rivers in the high-elevation region of Asia. At present, passive microwave (PMW) remote sensing data are the only efficient way to monitor temporal and spatial variations in snow depth at large scale. However, existing snow depth products show the largest uncertainties across the QTP. In this study, MODIS fractional snow cover product, point, line and intense sampling data are synthesized to evaluate the accuracy of snow cover and snow depth derived from PMW remote sensing data and to analyze the possible causes of uncertainties. The results show that the accuracy of snow cover extents varies spatially and depends on the fraction of snow cover. Based on the assumption that grids with MODIS snow cover fraction > 10 % are regarded as snow cover, the overall accuracy in snow cover is 66.7 %, overestimation error is 56.1 %, underestimation error is 21.1 %, commission error is 27.6 % and omission error is 47.4 %. The commission and overestimation errors of snow cover primarily occur in the northwest and southeast areas with low ground temperature. Omission error primarily occurs in cold desert areas with shallow snow, and underestimation error mainly occurs in glacier and lake areas. With the increase of snow cover fraction, the overestimation error decreases and the omission error increases. A comparison between snow depths measured in field experiments, measured at meteorological stations and estimated across the QTP shows that agreement between observation and retrieval improves with an increasing number of observation points in a PMW grid. The misclassification and errors between observed and retrieved snow depth are associated with the relatively coarse resolution of PMW remote sensing, ground temperature, snow characteristics and topography. To accurately understand the variation in snow depth across the QTP, new algorithms should be developed to retrieve snow depth with higher spatial resolution and should consider the variation in brightness temperatures at different frequencies emitted from ground with changing ground features.
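
    The accuracy measures quoted above can be reproduced from a simple confusion matrix between the PMW retrieval and a MODIS-based reference. The Python sketch below uses the usual definitions (overall accuracy, commission = falsely retrieved snow over all retrieved snow, omission = missed snow over all reference snow); the grid counts are hypothetical, and the paper's own definitions of overestimation and underestimation error may differ in detail.

        import numpy as np

        def snow_cover_errors(pmw, ref):
            """Binary snow-cover comparison: True = snow, False = no snow."""
            pmw, ref = np.asarray(pmw, bool), np.asarray(ref, bool)
            tp = np.sum(pmw & ref)          # snow correctly retrieved
            fp = np.sum(pmw & ~ref)         # snow retrieved but absent in the reference
            fn = np.sum(~pmw & ref)         # snow missed by the retrieval
            tn = np.sum(~pmw & ~ref)
            overall = (tp + tn) / pmw.size
            commission = fp / (tp + fp)
            omission = fn / (tp + fn)
            return overall, commission, omission

        # Hypothetical grid classifications (PMW retrieval vs. MODIS reference)
        rng = np.random.default_rng(1)
        ref = rng.random(10000) < 0.3
        pmw = np.where(rng.random(10000) < 0.8, ref, ~ref)   # roughly 80 % agreement
        print(snow_cover_errors(pmw, ref))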

  15. A semiempirical error estimation technique for PWV derived from atmospheric radiosonde data

    NASA Astrophysics Data System (ADS)

    Castro-Almazán, Julio A.; Pérez-Jordán, Gabriel; Muñoz-Tuñón, Casiana

    2016-09-01

    A semiempirical method for estimating the error and optimum number of sampled levels in precipitable water vapour (PWV) determinations from atmospheric radiosoundings is proposed. Two terms have been considered: the uncertainties in the measurements and the sampling error. Also, the uncertainty has been separated into the variance and covariance components. The sampling and covariance components have been modelled from an empirical dataset of 205 high-vertical-resolution radiosounding profiles, acquired with Vaisala RS80 and RS92 sondes at four different locations: Güímar (GUI) in Tenerife, at sea level, and the astronomical observatory at Roque de los Muchachos (ORM, 2300 m a.s.l.) on La Palma (both on the Canary Islands, Spain), Lindenberg (LIN) in continental Germany, and Ny-Ålesund (NYA) in the Svalbard Islands, within the Arctic Circle. The balloons at the ORM were launched during intensive and unique site-testing runs carried out in 1990 and 1995, while the data for the other sites were obtained from radiosounding stations operating for a period of 1 year (2013-2014). The PWV values ranged between ˜ 0.9 and ˜ 41 mm. The method sub-samples the profile for error minimization. The result is the minimum error and the optimum number of levels. The results obtained at the four sites studied showed that the ORM is the driest of the four locations and the one with the fastest vertical decay of PWV. The exponential autocorrelation pressure lags ranged from 175 hPa (ORM) to 500 hPa (LIN). The results show a coherent behaviour with no biases as a function of the profile. The final error is roughly proportional to PWV, whereas the optimum number of levels (N0) varies inversely. The value of N0 is less than 400 for 77 % of the profiles and the absolute errors are always < 0.6 mm. The median relative error is 2.0 ± 0.7 % and the 90th percentile P90 = 4.6 %. Therefore, provided that a radiosounding samples at least N0 uniform vertical levels (which depends on the water vapour content and distribution of the atmosphere), the error in the PWV estimate is likely to stay below ≈ 3 %, even for dry conditions.
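
    As background, the PWV value that this error analysis refers to is obtained by vertically integrating the humidity profile of the sounding. A minimal Python sketch of that integration is given below (PWV = (1/(rho_w*g)) * integral of q dp, evaluated with the trapezoidal rule); the sounding levels shown are hypothetical and the sketch does not include the error model of the paper.

        import numpy as np

        G = 9.80665        # gravitational acceleration, m s-2
        RHO_W = 1000.0     # density of liquid water, kg m-3

        def pwv_mm(pressure_hpa, specific_humidity):
            """Precipitable water vapour (mm) from a sounding profile."""
            p = np.asarray(pressure_hpa, float) * 100.0    # hPa -> Pa
            q = np.asarray(specific_humidity, float)       # kg/kg
            pwv_m = abs(np.trapz(q, p)) / (RHO_W * G)      # metres of liquid water
            return pwv_m * 1000.0                          # mm

        # Hypothetical sounding levels (hPa) and specific humidities (kg/kg)
        p_levels = [1000, 925, 850, 700, 500, 300, 200]
        q_levels = [0.010, 0.008, 0.006, 0.003, 0.001, 0.0002, 0.00005]
        print(round(pwv_mm(p_levels, q_levels), 2))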

  16. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, T; Kumaraswamy, L

    Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (3° uniform shift). 2D and 3D gamma evaluations were performed for all plans with SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of the 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: The 2D evaluation appeared to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between the original plan and the error-induced plans in both the Eclipse and 3DVH analyses. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, which a conventional gamma-based pre-treatment QA might not necessarily detect.
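
    The gamma evaluation referred to above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. The following 1D global-gamma sketch is only an illustration of that idea, not the SNC Patient or 3DVH implementation; the dose profiles and criteria are hypothetical.

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=2.0):
            """1D global gamma: for each reference point, minimise
            sqrt((dose difference / (dd * Dmax))^2 + (distance / dta)^2)
            over all evaluated points.  dd is fractional, dta is in mm."""
            dmax = np.max(d_ref)
            gammas = np.empty(len(d_ref))
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                dose_term = (d_eval - dr) / (dd * dmax)
                dist_term = (x_eval - xr) / dta
                gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
            return gammas

        # Hypothetical planned vs. measured dose profiles (positions in mm, dose in Gy)
        x = np.linspace(0.0, 100.0, 201)
        planned = 2.0 * np.exp(-((x - 50.0) / 30.0) ** 2)
        measured = 1.02 * 2.0 * np.exp(-((x - 51.0) / 30.0) ** 2)  # 2 % scale, 1 mm shift
        gamma = gamma_1d(x, planned, x, measured)
        print("gamma pass rate:", np.mean(gamma <= 1.0))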

  17. Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.

  18. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.

  19. Correction of energy-dependent systematic errors in dual-energy X-ray CT using a basis material coefficients transformation method

    NASA Astrophysics Data System (ADS)

    Goh, K. L.; Liew, S. C.; Hasegawa, B. H.

    1997-12-01

    Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of attenuation coefficient synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high-atomic-number element (e.g., iodine from radiographic contrast media) was present in the body. The errors were reduced when a basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficients transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients were first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients were then numerically transformed to those for a more desirable set of materials. The transformation was done at the energies of the low and high energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as the desired basis set, computer simulation results showed that an accuracy of better than 2% could be achieved even when iodine was present in the body at a concentration as high as 10% by mass. Simulation work was also carried out on the more inhomogeneous 2D thorax phantom derived from the 3D MCAT phantom. The results on the accuracy of quantitation are also presented here.

  20. Fast online generalized multiscale finite element method using constraint energy minimization

    NASA Astrophysics Data System (ADS)

    Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat

    2018-02-01

    Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced, where multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desirable to need only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that, by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve on this. Using our recently proposed approach [4] and a special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting the oversampling regions. Our numerical results show that one can achieve a three-order-of-magnitude error reduction, which is better than our previous methods. We also develop an adaptive algorithm that enriches the approximation space in selected regions with large residuals. For our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this by numerical simulations. The analysis of the method is presented.

  1. It Pays to Go Off-Track: Practicing with Error-Augmenting Haptic Feedback Facilitates Learning of a Curve-Tracing Task

    PubMed Central

    Williams, Camille K.; Tremblay, Luc; Carnahan, Heather

    2016-01-01

    Researchers in the domain of haptic training are now entering the long-standing debate regarding whether or not it is best to learn a skill by experiencing errors. Haptic training paradigms provide fertile ground for exploring how various theories about feedback, errors and physical guidance intersect during motor learning. Our objective was to determine how error minimizing, error augmenting and no haptic feedback while learning a self-paced curve-tracing task impact performance on delayed (1 day) retention and transfer tests, which indicate learning. We assessed performance using movement time and tracing error to calculate a measure of overall performance – the speed accuracy cost function. Our results showed that despite exhibiting the worst performance during skill acquisition, the error augmentation group had significantly better accuracy (but not overall performance) than the error minimization group on delayed retention and transfer tests. The control group’s performance fell between that of the two experimental groups but was not significantly different from either on the delayed retention test. We propose that the nature of the task (requiring online feedback to guide performance) coupled with the error augmentation group’s frequent off-target experience and rich experience of error-correction promoted information processing related to error-detection and error-correction that are essential for motor learning. PMID:28082937

  2. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes

    NASA Astrophysics Data System (ADS)

    Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang

    2018-02-01

    This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influences of rotation errors of the gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, the laser ranging measurement error caused by rotation errors of the gimbal mount axes can easily be extracted from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to it are driven by the error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes was recorded in the readings of the laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm if the radial and axial error motions were within ±10 μm. The method simplified the experimental procedure, and the spherical mirror reduced the influences of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.

  3. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. This model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of different errors are analyzed through this model. An accuracy study of the model shows that the predicted deviation of the pointing error for each error source is less than 4.1×10-5° when the error amplitude is 0.1°. Detailed analyses indicate that different error sources affect the pointing accuracy to varying degrees, and that the major error source is the incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotation angles equals 0 or π, and relatively small when the difference equals π/2. These results suggest that our analysis can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.

  4. The role of visual spatial attention in adult developmental dyslexia.

    PubMed

    Collis, Nathan L; Kohnen, Saskia; Kinoshita, Sachiko

    2013-01-01

    The present study investigated the nature of visual spatial attention deficits in adults with developmental dyslexia, using a partial report task with five-letter, digit, and symbol strings. Participants responded by a manual key press to one of nine alternatives, which included other characters in the string, allowing an assessment of position errors as well as intrusion errors. The results showed that the dyslexic adults performed significantly worse than age-matched controls with letter and digit strings but not with symbol strings. Both groups produced W-shaped serial position functions with letter and digit strings. The dyslexics' deficits with letter string stimuli were limited to position errors, specifically at the string-interior positions 2 and 4. These errors correlated with letter transposition reading errors (e.g., reading slat as "salt"), but not with the Rapid Automatized Naming (RAN) task. Overall, these results suggest that the dyslexic adults have a visual spatial attention deficit; however, the deficit does not reflect a reduced span in visual-spatial attention, but a deficit in processing a string of letters in parallel, probably due to difficulty in the coding of letter position.

  5. Quantifying errors in trace species transport modeling.

    PubMed

    Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M

    2008-12-16

    One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO(2) using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error.

  6. Correcting For Seed-Particle Lag In LV Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.

    1994-01-01

    Two experiments were conducted to evaluate the effects of seed-particle size on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods were used to evaluate the errors. The first experiment focused on measurement of the decelerating stagnation streamline of a low-speed flow around a circular cylinder with a two-dimensional afterbody. The second was performed in transonic flow and involved measurement of the decelerating stagnation streamline of a hemisphere with a cylindrical afterbody. It was concluded that mean-quantity LV measurements are subject to large errors directly attributable to particle size. Predictions of particle-response theory showed good agreement with the experimental results, indicating that the velocity-error-correction technique used in the study is viable for increasing the accuracy of laser velocimetry measurements. The technique is simple and useful in any research facility in which flow velocities are measured.

  7. Research on correction algorithm of laser positioning system based on four quadrant detector

    NASA Astrophysics Data System (ADS)

    Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia

    2018-02-01

    This paper first introduces the basic principle of the four-quadrant detector, and a laser positioning experimental system based on the four-quadrant detector is built. In practical applications, the four-quadrant laser positioning system suffers not only from background light interference and detector dark-current noise, but also from random noise, limited system stability and spot equivalent error, none of which can be ignored, so system calibration and correction are very important. This paper analyzes the various factors contributing to the system positioning error and then proposes an algorithm for correcting the system error. The simulation and experimental results show that the correction algorithm reduces the effect of system error on positioning and improves the positioning accuracy.
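
    For context, the uncorrected spot position on a four-quadrant detector is usually estimated from sum and difference ratios of the four photocurrents. The sketch below shows only this classic estimate with a hypothetical quadrant labelling and scale factor; the correction algorithm proposed in the paper is not reproduced.

        def spot_position(i_a, i_b, i_c, i_d, k=1.0):
            """Classic four-quadrant displacement estimate.
            Assumed labelling: A = upper right, B = upper left,
            C = lower left, D = lower right.  k is a scale factor that
            depends on spot size and detector geometry."""
            total = i_a + i_b + i_c + i_d
            x = k * ((i_a + i_d) - (i_b + i_c)) / total
            y = k * ((i_a + i_b) - (i_c + i_d)) / total
            return x, y

        # Hypothetical photocurrents (arbitrary units)
        print(spot_position(2.6, 2.2, 1.9, 2.3))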

  8. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Second, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing error results show that fourth-order polynomial interpolation provides the best fit to option prices, as it has the lowest pricing error.
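
    A leave-one-out cross-validation error of the kind proposed above can be sketched in a few lines: each quote is withheld in turn, the interpolant is refitted to the remaining quotes, and the squared prediction error at the withheld point is averaged. The sketch below fits polynomials to a hypothetical implied-volatility smile; the paper's full pipeline (repricing options from the fitted smile to obtain the RND) is not reproduced.

        import numpy as np

        def loocv_error(strikes, values, degree):
            """Leave-one-out CV error of a polynomial fit of the given degree."""
            strikes = np.asarray(strikes, float)
            values = np.asarray(values, float)
            errors = []
            for i in range(len(strikes)):
                mask = np.arange(len(strikes)) != i
                coeffs = np.polyfit(strikes[mask], values[mask], degree)
                pred = np.polyval(coeffs, strikes[i])
                errors.append((pred - values[i]) ** 2)
            return float(np.mean(errors))

        # Hypothetical implied-volatility smile
        strikes = [80, 90, 95, 100, 105, 110, 120]
        vols = [0.32, 0.27, 0.25, 0.24, 0.245, 0.26, 0.30]
        for deg in (2, 4):
            print(deg, loocv_error(strikes, vols, deg))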

  9. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    PubMed Central

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  10. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
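
    The screening-and-spread step described above is simple to express in code. In the sketch below, products within ±50 % of the GPCP value are retained, the standard deviation s of the retained set is taken as the bias-error estimate, and s/m gives the relative error; the product values are hypothetical, and whether the GPCP estimate itself is included in the spread is an assumption of this sketch.

        import numpy as np

        def bias_error_estimate(gpcp, others, tol=0.5):
            """Bias-error estimate from the spread of screened products."""
            others = np.asarray(others, float)
            kept = others[np.abs(others - gpcp) <= tol * gpcp]
            included = np.append(kept, gpcp)      # assume GPCP is part of the set
            s = included.std()                    # estimated bias error
            return s, s / gpcp                    # absolute and relative estimates

        # Hypothetical zonal-mean precipitation estimates (mm/day)
        gpcp_value = 5.2
        other_products = [4.8, 5.9, 6.1, 9.5, 3.9]   # 9.5 lies outside ±50 % and is dropped
        print(bias_error_estimate(gpcp_value, other_products))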

  11. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, for example, that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation in dialysis patients. The study shows the importance of correcting for the volume flow errors that are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Multiple-rule bias in the comparison of classification rules

    PubMed Central

    Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.

    2011-01-01

    Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390

  13. Peripheral dysgraphia characterized by the co-occurrence of case substitutions in uppercase and letter substitutions in lowercase writing.

    PubMed

    Di Pietro, M; Schnider, A; Ptak, R

    2011-10-01

    Patients with peripheral dysgraphia due to impairment at the allographic level produce writing errors that affect the letter-form and are characterized by case confusions or the failure to write in a specific case or style (e.g., cursive). We studied the writing errors of a patient with pure peripheral dysgraphia who had entirely intact oral spelling, but produced many well-formed letter errors in written spelling. The comparison of uppercase print and lowercase cursive spelling revealed an uncommon pattern: while most uppercase errors were case substitutions (e.g., A - a), almost all lowercase errors were letter substitutions (e.g., n - r). Analyses of the relationship between target letters and substitution errors showed that errors were neither influenced by consonant-vowel status nor by letter frequency, though word length affected error frequency in lowercase writing. Moreover, while graphomotor similarity did not predict either the occurrence of uppercase or lowercase errors, visuospatial similarity was a significant predictor of lowercase errors. These results suggest that lowercase representations of cursive letter-forms are based on a description of entire letters (visuospatial features) and are not - as previously found for uppercase letters - specified in terms of strokes (graphomotor features). Copyright © 2010 Elsevier Srl. All rights reserved.

  14. Dosimetry audit simulation of treatment planning system in multicenters radiotherapy

    NASA Astrophysics Data System (ADS)

    Kasmuri, S.; Pawiro, S. A.

    2017-07-01

    The Treatment Planning System (TPS) is an important modality that determines radiotherapy outcome. A TPS requires input data obtained through commissioning, and errors can potentially occur at this stage. Errors at this stage may result in systematic errors. The aim of this study was to verify TPS dosimetry and determine the range of deviation between calculated and measured dose. This study used the CIRS 002LFC phantom, representing the human thorax, and simulated all external beam radiotherapy stages. The phantom was scanned using a CT scanner, and 8 test cases similar to clinical practice situations were planned and tested in four radiotherapy centers. Dose measurements were performed using a 0.6 cc ionization chamber. The results of this study showed that the deviations of all test cases in the four centers were generally within the agreement criteria, with average deviations of about -0.17±1.59 %, -1.64±1.92 %, 0.34±1.34 % and 0.13±1.81 %. The conclusion of this study is that all TPSs involved showed good performance. The superposition algorithm showed somewhat poorer performance than either the analytic anisotropic algorithm (AAA) or the convolution algorithm, with average deviations of about -1.64±1.92 %, -0.17±1.59 % and -0.27±1.51 %, respectively.

  15. Computer simulation results of attitude estimation of earth orbiting satellites

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1976-01-01

    Computer simulation results of attitude estimation of Earth-orbiting satellites (including the Space Telescope) subjected to environmental disturbances and noise are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and were run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., fewer integration and roundoff errors) over a Kalman filter.
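
    For readers unfamiliar with the estimation tools mentioned above, a scalar Kalman filter can be written in a few lines. The sketch below is a generic one-dimensional filter with a random-walk state model and hypothetical noise settings; it is not the decomposed linear recursive filter of the study.

        import numpy as np

        def kalman_1d(measurements, q=1e-4, r=2.5e-3, x0=0.0, p0=1.0):
            """Scalar Kalman filter: process noise variance q, measurement noise variance r."""
            x, p = x0, p0
            estimates = []
            for z in measurements:
                p = p + q                    # predict (random-walk state)
                k = p / (p + r)              # Kalman gain
                x = x + k * (z - x)          # update with the innovation
                p = (1.0 - k) * p
                estimates.append(x)
            return np.array(estimates)

        # Hypothetical noisy attitude-angle measurements (rad)
        rng = np.random.default_rng(2)
        true_angle = 0.1
        z = true_angle + rng.normal(0.0, 0.05, size=200)
        print(abs(kalman_1d(z)[-1] - true_angle))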

  16. SU-F-T-383: Robustness for Patient Setup Error in Total Body Irradiation Using Volumetric Modulated Arc Therapy (VMAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Y; National Cancer Center, Kashiwa, Chiba; Tachibana, H

    Purpose: Total body irradiation (TBI) and total marrow irradiation (TMI) using Tomotherapy have been reported. A gantry-based linear accelerator uses one isocenter during one rotational irradiation; thus, 3–5 isocenter points should be used for a whole VMAT-TBI plan while smoothing out the junctional dose distribution. IGRT provides accurate and precise patient setup for the multiple junctions; however, some setup errors inevitably occur and affect the accuracy of the dose distribution in the junctional area. In this study, we evaluated the robustness of VMAT-TBI to patient setup error. Methods: VMAT-TBI planning was performed for an adult whole-body human phantom using Eclipse. Eight full arcs with four isocenter points using 6 MV X-rays were used to cover the entire body. The dose distribution was optimized using two structures, the patient's body as PTV and the lung. Two arcs shared one isocenter, and each pair of arcs was overlapped by 5 cm with the adjacent pair. Point absolute dose measurements using an ionization chamber and planar relative dose distribution measurements using film were performed in the junctional regions with a water-equivalent slab phantom. In the measurements, setup errors of +5 to -5 mm were introduced. Results: The chamber measurements show that the deviations were within ±3% when the setup errors were within ±3 mm. In the planar evaluation, the gamma pass rate (3%/2 mm) was more than 90% when the errors were within ±3 mm. However, there were hot/cold areas at the edge of the junction even with an acceptable gamma pass rate. A 5 mm setup error caused larger hot and cold areas, and the dosimetrically acceptable areas decreased in the overlapped regions. Conclusion: VMAT-TBI can be clinically acceptable when the patient setup error is within ±3 mm. Averaging effects from random patient setup errors would help to blur the hot/cold areas in the junction.

  17. State space truncation with quantified errors for accurate solutions to discrete Chemical Master Equation

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of 1) the birth and death model, 2) the single gene expression model, 3) the genetic toggle switch model, and 4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
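
    For the birth and death model listed above, the stationary distribution is known in closed form, so the idea of relating the truncation error to the probability mass at the reflecting boundary can be illustrated directly. The sketch below is only an illustration under hypothetical rates, not the authors' solver: it builds the reflecting-boundary steady state of a birth-death dCME (constant birth rate k, death rate gamma*n) and compares the boundary-state probability with the exact tail mass discarded by the truncation.

        import numpy as np
        from scipy.stats import poisson

        def truncated_steady_state(k, gamma, n_max):
            """Steady state of the truncated birth-death dCME with a reflecting
            boundary; detailed balance gives pi_n proportional to (k/gamma)^n / n!."""
            n = np.arange(n_max + 1)
            log_w = n * np.log(k / gamma) - np.cumsum(np.log(np.maximum(n, 1)))
            w = np.exp(log_w - log_w.max())
            return w / w.sum()

        k, gamma, n_max = 20.0, 1.0, 35        # hypothetical rates and truncation size
        pi = truncated_steady_state(k, gamma, n_max)
        boundary_prob = pi[-1]                                 # mass on the reflecting boundary
        discarded_tail = 1.0 - poisson.cdf(n_max, k / gamma)   # mass lost to truncation
        print(boundary_prob, discarded_tail)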

  18. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  19. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-04-22

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  20. The fitness cost of mis-splicing is the main determinant of alternative splicing patterns.

    PubMed

    Saudemont, Baptiste; Popa, Alexandra; Parmley, Joanna L; Rocher, Vincent; Blugeon, Corinne; Necsulea, Anamaria; Meyer, Eric; Duret, Laurent

    2017-10-30

    Most eukaryotic genes are subject to alternative splicing (AS), which may contribute to the production of protein variants or to the regulation of gene expression via nonsense-mediated messenger RNA (mRNA) decay (NMD). However, a fraction of splice variants might correspond to spurious transcripts and the question of the relative proportion of splicing errors to functional splice variants remains highly debated. We propose a test to quantify the fraction of AS events corresponding to errors. This test is based on the fact that the fitness cost of splicing errors increases with the number of introns in a gene and with expression level. We analyzed the transcriptome of the intron-rich eukaryote Paramecium tetraurelia. We show that in both normal and in NMD-deficient cells, AS rates strongly decrease with increasing expression level and with increasing number of introns. This relationship is observed for AS events that are detectable by NMD as well as for those that are not, which invalidates the hypothesis of a link with the regulation of gene expression. Our results show that in genes with a median expression level, 92-98% of observed splice variants correspond to errors. We observed the same patterns in human transcriptomes and we further show that AS rates correlate with the fitness cost of splicing errors. These observations indicate that genes under weaker selective pressure accumulate more maladaptive substitutions and are more prone to splicing errors. Thus, to a large extent, patterns of gene expression variants simply reflect the balance between selection, mutation, and drift.

  1. The neural basis of impaired self-awareness after traumatic brain injury

    PubMed Central

    Ham, Timothy E.; Bonnelle, Valerie; Hellyer, Peter; Jilka, Sagar; Robertson, Ian H.; Leech, Robert

    2014-01-01

    Self-awareness is commonly impaired after traumatic brain injury. This is an important clinical issue as awareness affects long-term outcome and limits attempts at rehabilitation. It can be investigated by studying how patients respond to their errors and monitor their performance on tasks. As awareness is thought to be an emergent property of network activity, we tested the hypothesis that impaired self-awareness is associated with abnormal brain network function. We investigated a group of subjects with traumatic brain injury (n = 63) split into low and high performance-monitoring groups based on their ability to recognize and correct their own errors. Brain network function was assessed using resting-state and event-related functional magnetic resonance imaging. This allowed us to investigate baseline network function, as well as the evoked response of networks to specific events including errors. The low performance-monitoring group underestimated their disability and showed broad attentional deficits. Neural activity within what has been termed the fronto-parietal control network was abnormal in patients with impaired self-awareness. The dorsal anterior cingulate cortex is a key part of this network that is involved in performance-monitoring. This region showed reduced functional connectivity to the rest of the fronto-parietal control network at ‘rest’. In addition, the anterior insulae, which are normally tightly linked to the dorsal anterior cingulate cortex, showed increased activity following errors in the impaired group. Interestingly, the traumatic brain injury patient group with normal performance-monitoring showed abnormally high activation of the right middle frontal gyrus, putamen and caudate in response to errors. The impairment of self-awareness was not explained either by the location of focal brain injury, or the amount of traumatic axonal injury as demonstrated by diffusion tensor imaging. The results suggest that impairments of self-awareness after traumatic brain injury result from breakdown of functional interactions between nodes within the fronto-parietal control network. PMID:24371217

  2. The neural basis of impaired self-awareness after traumatic brain injury.

    PubMed

    Ham, Timothy E; Bonnelle, Valerie; Hellyer, Peter; Jilka, Sagar; Robertson, Ian H; Leech, Robert; Sharp, David J

    2014-02-01

    Self-awareness is commonly impaired after traumatic brain injury. This is an important clinical issue as awareness affects long-term outcome and limits attempts at rehabilitation. It can be investigated by studying how patients respond to their errors and monitor their performance on tasks. As awareness is thought to be an emergent property of network activity, we tested the hypothesis that impaired self-awareness is associated with abnormal brain network function. We investigated a group of subjects with traumatic brain injury (n = 63) split into low and high performance-monitoring groups based on their ability to recognize and correct their own errors. Brain network function was assessed using resting-state and event-related functional magnetic resonance imaging. This allowed us to investigate baseline network function, as well as the evoked response of networks to specific events including errors. The low performance-monitoring group underestimated their disability and showed broad attentional deficits. Neural activity within what has been termed the fronto-parietal control network was abnormal in patients with impaired self-awareness. The dorsal anterior cingulate cortex is a key part of this network that is involved in performance-monitoring. This region showed reduced functional connectivity to the rest of the fronto-parietal control network at 'rest'. In addition, the anterior insulae, which are normally tightly linked to the dorsal anterior cingulate cortex, showed increased activity following errors in the impaired group. Interestingly, the traumatic brain injury patient group with normal performance-monitoring showed abnormally high activation of the right middle frontal gyrus, putamen and caudate in response to errors. The impairment of self-awareness was not explained either by the location of focal brain injury, or the amount of traumatic axonal injury as demonstrated by diffusion tensor imaging. The results suggest that impairments of self-awareness after traumatic brain injury result from breakdown of functional interactions between nodes within the fronto-parietal control network.

  3. Negative control exposure studies in the presence of measurement error: implications for attempted effect estimate calibration

    PubMed Central

    Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George

    2018-01-01

    Background: Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. Methods: We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Results: Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Conclusions: Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present. PMID:29088358
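
A minimal Monte Carlo sketch of the bias mechanism described above, assuming simple linear data-generating models; the variable names, effect sizes, and noise levels are illustrative and not taken from the paper. An unmeasured confounder U drives both the exposure X and the negative control Z; the outcome Y depends on X and U but not on Z; classical measurement error is then added to X and Z before the two associations with Y are compared.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

u = rng.normal(size=n)                    # unmeasured confounder
x = u + rng.normal(size=n)                # true exposure
z = u + rng.normal(size=n)                # true negative control (no effect on y)
y = 0.5 * x + u + rng.normal(size=n)      # outcome: causal effect of x only

def slope(w, y):
    """Univariate OLS slope of y on w."""
    w = w - w.mean()
    return np.dot(w, y - y.mean()) / np.dot(w, w)

for sd_err in (0.0, 0.5, 1.0):            # measurement-error standard deviations
    x_obs = x + rng.normal(0.0, sd_err, n) if sd_err > 0 else x
    z_obs = z + rng.normal(0.0, sd_err, n) if sd_err > 0 else z
    print(f"meas. error sd={sd_err:3.1f}  "
          f"exposure assoc={slope(x_obs, y):.3f}  "
          f"negative-control assoc={slope(z_obs, y):.3f}")
```

Both associations attenuate as the measurement-error variance grows, illustrating why the exposure versus negative-control comparison cannot be used to precisely quantify the causal effect.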

  4. The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.

    PubMed

    Hutton, Kevin; Ding, Qian; Wellman, Gregory

    2017-02-24

    Adoption of bar-coding technology has risen drastically in U.S. health systems in the past decade. However, few studies have addressed the impact of bar-coding technology with strong prospective methodologies, and the research that has been conducted covers both in-pharmacy and bedside implementations. This systematic literature review examines the effectiveness of bar-coding technology in preventing medication errors and the types of medication errors that may be prevented in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with outcomes other than medication errors, such as efficiency or workarounds, were excluded. The outcomes were measured and findings were summarized for each retained study. A total of 2603 articles were initially identified, and 10 studies, which used a prospective before-and-after study design, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining one was conducted in the United Kingdom. One research article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly by preventing targeted wrong dose, wrong drug, wrong patient, unauthorized drug, and wrong route errors.

  5. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response.

    PubMed

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2014-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.
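
Post-error slowing itself is a simple behavioral quantity: mean reaction time on trials that follow an error minus mean reaction time on trials that follow a correct response. The sketch below computes this classical definition from per-trial data; the reaction times are illustrative, and the delayed variant examined in the paper (e.g., the second post-error Go trial) corresponds to a larger lag.

```python
import numpy as np

def post_error_slowing(rt_ms, correct, lag=1):
    """PES: mean RT on trials `lag` positions after an error minus
    mean RT on trials `lag` positions after a correct response."""
    rt_ms = np.asarray(rt_ms, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    prev = correct[:-lag]          # accuracy of the earlier trial
    nxt = rt_ms[lag:]              # RT on the trial `lag` positions later
    return nxt[~prev].mean() - nxt[prev].mean()

rt = [412, 398, 455, 520, 430, 441, 505, 415, 420, 480]   # illustrative RTs (ms)
acc = [1,   1,   0,   1,   1,   0,   1,   1,   1,   0]    # 1 = correct, 0 = error
print(f"PES (lag 1): {post_error_slowing(rt, acc):.1f} ms")
```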

  6. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response

    PubMed Central

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2015-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors. PMID:25674058

  7. An Application of Semi-parametric Estimator with Weighted Matrix of Data Depth in Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Pan, X. G.; Wang, J. Q.; Zhou, H. Y.

    2013-05-01

    A variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth-weighted matrix is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command). The uncertain model error is estimated with the semi-parametric estimator model, and outliers are restrained with the data-depth-weighted matrix. With the model error and outliers restrained, the VCE can be improved and used to estimate the weight matrix for observation data affected by uncertain model errors or outliers. A simulation experiment was carried out under combined space- and ground-based TT&C conditions. The results show that the new VCE, based on model error compensation, can determine rational weights for the multi-source heterogeneous data and restrain outlier data.

  8. Refractive Errors Affect the Vividness of Visual Mental Images

    PubMed Central

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

    The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition where the refracting system of the eye fails to focus objects sharply on the retina. As a consequence refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception. PMID:23755186

  9. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder

    PubMed Central

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths’ performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD. PMID:29075227

  10. Correction of a Technical Error in the Golf Swing: Error Amplification Versus Direct Instruction.

    PubMed

    Milanese, Chiara; Corte, Stefano; Salvetti, Luca; Cavedon, Valentina; Agostini, Tiziano

    2016-01-01

    Performance errors drive motor learning for many tasks. The authors' aim was to determine which of two strategies, method of amplification of error (MAE) or direct instruction (DI), would be more beneficial for error correction during a full golfing swing with a driver. Thirty-four golfers were randomly assigned to one of three training conditions (MAE, DI, and control). Participants were tested in a practice session in which each golfer performed 7 pretraining trials, 6 training-intervention trials, and 7 posttraining trials; and a retention test after 1 week. An optoelectronic motion capture system was used to measure the kinematic parameters of each golfer's performance. Results showed that MAE is an effective strategy for correcting technical errors, leading to a rapid improvement in performance. These findings could have practical implications for sport psychology and physical education because, while practice is obviously necessary for improving learning, the efficacy of the learning process is essential in enhancing learners' motivation and sport enjoyment.

  11. Precise method of compensating radiation-induced errors in a hot-cathode-ionization gauge with correcting electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saeki, Hiroshi, E-mail: saeki@spring8.or.jp; Magome, Tamotsu, E-mail: saeki@spring8.or.jp

    2014-10-06

    To compensate pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This precise method to improve the measurement accuracy can correctly reduce the pressure-measurement errors caused by electrons originating from the external environment, and originating from the primary gauge filament influenced by spatial conditions of the installed vacuum-gauge head. As the result of the simulation experiment to confirm the performance reducing the errors caused by the external environment, the pressure-measurement error using this method was approximately less than several percent in the pressure range from 10⁻⁵ Pa to 10⁻⁸ Pa. After the experiment, to confirm the performance reducing the error caused by spatial conditions, an additional experiment was carried out using a sleeve and showed that the improved function was available.

  12. Refractive errors affect the vividness of visual mental images.

    PubMed

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

    The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition where the refracting system of the eye fails to focus objects sharply on the retina. As a consequence refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception.

  13. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder.

    PubMed

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD.

  14. Neural network-based model reference adaptive control system.

    PubMed

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
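
A minimal sketch of this style of controller, assuming a scalar first-order plant with an unknown smooth nonlinearity, a radial-basis-function network for adaptive compensation, and a sigma-modification update law; the plant, reference model, gains, and basis centers are illustrative rather than the ones analyzed in the paper.

```python
import numpy as np

# Plant: x_dot = a*x + f(x) + u, with the nonlinearity f unknown to the controller.
a = -1.0
f = lambda x: 0.5 * np.sin(2.0 * x)

# Reference model: xm_dot = -am*xm + am*r
am = 2.0

# RBF network approximating f: f_hat(x) = w @ phi(x)
centers = np.linspace(-2.0, 2.0, 11)
width = 0.5
phi = lambda x: np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

gamma, sigma = 5.0, 0.01        # adaptation gain and sigma-modification leakage
dt, T = 1e-3, 10.0
x, xm = 0.0, 0.0
w = np.zeros_like(centers)

for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0            # square-wave reference
    e = x - xm                                      # tracking error
    u = -a * x - w @ phi(x) - am * x + am * r       # cancel known dynamics + adaptive term
    # Sigma-modification update law: the leakage term keeps the weights bounded.
    w += dt * (gamma * e * phi(x) - gamma * sigma * w)
    # Euler integration of the plant and the reference model.
    x += dt * (a * x + f(x) + u)
    xm += dt * (-am * xm + am * r)

print(f"final tracking error |x - xm| = {abs(x - xm):.4f}")
```

The `-gamma*sigma*w` leakage is the sigma modification: it bounds the weights even when the tracking error does not vanish, at the cost of convergence to a small neighborhood of zero rather than to zero itself, mirroring the residual-error statement in the abstract.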

  15. Long-term academic stress increases the late component of error processing: an ERP study.

    PubMed

    Wu, Jianhui; Yuan, Yiran; Duan, Hongxia; Qin, Shaozheng; Buchanan, Tony W; Zhang, Kan; Zhang, Liang

    2014-05-01

    Exposure to long-term stress has a variety of consequences on the brain and cognition. Few studies have examined the influence of long-term stress on event related potential (ERP) indices of error processing. The current study investigated how long-term academic stress modulates the error related negativity (Ne or ERN) and the error positivity (Pe) components of error processing. Forty-one male participants undergoing preparation for a major academic examination and 20 non-exam participants completed a Go-NoGo task while ERP measures were collected. The exam group reported higher perceived stress levels and showed increased Pe amplitude compared with the non-exam group. Participants' rating of the importance of the exam was positively associated with the amplitude of Pe, but these effects were not found for the Ne/ERN. These results suggest that long-term academic stress leads to greater motivational assessment of and higher emotional response to errors. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Feature Migration in Time: Reflection of Selective Attention on Speech Errors

    PubMed Central

    Nozari, Nazbanou; Dell, Gary S.

    2012-01-01

    This paper describes an initial study of the effect of focused attention on phonological speech errors. In three experiments, participants recited four-word tongue-twisters, and focused attention on one (or none) of the words. The attended word was singled out differently in each experiment; participants were under instructions to either avoid errors on the attended word, to stress it, or to say it silently. The experiments showed that all methods of attending to a word decreased errors on that word, while increasing errors on the surrounding words. However, this error increase did not result from a relative increase in phonemic migrations originating from the attended word. This pattern is inconsistent with conceptualizing attention either as higher activation of the attended word or greater inhibition of the unattended words throughout the production of the sequence. Instead, it is consistent with a model which presumes that attention exerts its effect at the time of production of the attended word, without lingering effects on the past or the future. PMID:22268910

  17. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parameter models. Accordingly, window effects on the errors are analyzed and reveal that the commonly used hanning window leads to smaller interpolation error, which can also be significantly eliminated by the cubic spline interpolation method when estimating the FRF from the step response data, and that a window with a smaller front-end value can restrain more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the hanning window, the new dual-cosine window has the equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic order of the transient error from O(N^-2) for the hanning window method to O(N^-4) while only increasing the uncertainty slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high order, small damping, and non-minimum phase system, is employed as the example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the hanning window method for FRF estimation, and compared with the Gans method and LPM method, it has the advantages of simple computation, less time consumption, and short data requirement; the actual data calculation result of the balance FRF is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.
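
Structurally, a dual-cosine window is a cosine-sum window whose discrete Fourier transform is supported only on the bins named above (0, ±1, and ±3). The sketch below builds such a window with illustrative coefficients, chosen here only to give a zero front-end value and non-negativity; they are not the coefficients derived in the paper. The bin support is then verified numerically.

```python
import numpy as np

N = 1024
n = np.arange(N)
# Cosine-sum window whose DFT is non-zero only at bins 0, +/-1 and +/-3.
# Illustrative coefficients (not the paper's): zero front-end value, non-negative.
a0, a1, a3 = 0.60, -0.55, -0.05
w = a0 + a1 * np.cos(2 * np.pi * n / N) + a3 * np.cos(2 * np.pi * 3 * n / N)

W = np.fft.fft(w)
nonzero_bins = np.nonzero(np.abs(W) > 1e-9 * np.abs(W).max())[0]
print(f"front-end value w[0] = {w[0]:.6f}")
print("non-zero DFT bins    =", nonzero_bins)   # -> [0, 1, 3, N-3, N-1]
```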

  18. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    PubMed Central

    Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija

    2018-01-01

    The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers, or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision based heading and translation measurements to include the correct error probability density functions (pdf) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, particle filtering method is developed to fuse all this information, where the weights of each particle are computed based on the specific models derived. The performance of the developed method is tested via two experiments, one at a university’s premises and another in realistic tactical conditions. The results show significant improvement on the horizontal localization when the measurement errors are carefully modelled and their inclusion into the particle filtering implementation correctly realized. PMID:29443918
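
A minimal particle-filter sketch of the fusion idea described above, assuming a one-dimensional pedestrian position propagated by biased IMU-derived step lengths and corrected by vision-derived position fixes with heavy-tailed (Student-t) errors; the motion model, noise parameters, and error pdfs are illustrative stand-ins for the fitted models in the paper.

```python
import numpy as np
from math import lgamma, pi

rng = np.random.default_rng(1)
n_particles, n_steps = 2000, 50

truth = np.cumsum(np.full(n_steps, 0.7))                       # pedestrian advances ~0.7 m/step
imu_steps = 0.7 + 0.05 + rng.normal(0.0, 0.05, n_steps)        # IMU step length, +0.05 m bias
vision_pos = truth + 0.3 * rng.standard_t(df=3, size=n_steps)  # vision fixes, heavy-tailed errors

def t_logpdf(x, df, scale):
    """Log-density of a scaled Student-t distribution."""
    z = x / scale
    return (lgamma((df + 1) / 2) - lgamma(df / 2)
            - 0.5 * np.log(df * pi) - np.log(scale)
            - (df + 1) / 2 * np.log1p(z ** 2 / df))

particles = np.zeros(n_particles)
for k in range(n_steps):
    # Propagate with the IMU-derived step plus process noise.
    particles = particles + imu_steps[k] + rng.normal(0.0, 0.1, n_particles)
    # Weight with the heavy-tailed vision measurement model.
    logw = t_logpdf(vision_pos[k] - particles, df=3, scale=0.3)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Systematic resampling.
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(n_particles)) / n_particles)
    particles = particles[np.minimum(idx, n_particles - 1)]

print(f"truth {truth[-1]:.2f} m,  PF estimate {particles.mean():.2f} m")
```

The weight step is where the non-Gaussian error pdf enters; in a Kalman filter this measurement model would have to be forced into a Gaussian, which is the limitation the abstract points out.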

  19. Demonstrating Starshade Performance as Part of NASA's Technology Development for Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Kasdin, N. Jeremy; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M. W.; Walkemeyer, P. E.; Bach, V. M.; Oakes, E.; Cady, E. J.; Martin, S. R.; Marchen, L. F.; Macintosh, B.; Rudd, R.; Mikula, J. A.; Lynch, D. H.

    2012-01-01

    In this poster we describe the results of our project to design, manufacture, and measure a prototype starshade petal as part of the Technology Development for Exoplanet Missions program. An external occulter is a satellite employing a large screen, or starshade, that flies in formation with a spaceborne telescope to provide the starlight suppression needed for detecting and characterizing exoplanets. Among the advantages of using an occulter are the broad bandwidth available for characterization and the suppression of starlight before it reaches the observatory, greatly relaxing the requirements on the telescope and instrument. In this first two-year phase we focused on the key requirement of manufacturing a precision petal with the precise tolerances needed to meet the overall error budget. These tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We show the results of this analysis and a representative error budget. We also present the final manufactured occulter petal and the metrology on its shape that demonstrates it meets requirements. We show that a space occulter built of petals with the same measured shape would achieve better than 1e-9 contrast. We also show our progress in building and testing sample edges with the sharp radius of curvature needed for limiting solar glint. Finally, we describe our plans for the second TDEM phase.

  20. Comparison of different spatial transformations applied to EEG data: A case study of error processing.

    PubMed

    Cohen, Michael X

    2015-09-01

    The purpose of this paper is to compare the effects of different spatial transformations applied to the same scalp-recorded EEG data. The spatial transformations applied are two referencing schemes (average and linked earlobes), the surface Laplacian, and beamforming (a distributed source localization procedure). EEG data were collected during a speeded reaction time task that provided a comparison of activity between error vs. correct responses. Analyses focused on time-frequency power, frequency band-specific inter-electrode connectivity, and within-subject cross-trial correlations between EEG activity and reaction time. Time-frequency power analyses showed similar patterns of midfrontal delta-theta power for errors compared to correct responses across all spatial transformations. Beamforming additionally revealed error-related anterior and lateral prefrontal beta-band activity. Within-subject brain-behavior correlations showed similar patterns of results across the spatial transformations, with the correlations being the weakest after beamforming. The most striking difference among the spatial transformations was seen in connectivity analyses: linked earlobe reference produced weak inter-site connectivity that was attributable to volume conduction (zero phase lag), while the average reference and Laplacian produced more interpretable connectivity results. Beamforming did not reveal any significant condition modulations of connectivity. Overall, these analyses show that some findings are robust to spatial transformations, while other findings, particularly those involving cross-trial analyses or connectivity, are more sensitive and may depend on the use of appropriate spatial transformations. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Testing the white dwarf mass-radius relation and comparing optical and far-UV spectroscopic results with Gaia DR2, HST and FUSE

    NASA Astrophysics Data System (ADS)

    Joyce, S. R. G.; Barstow, M. A.; Casewell, S. L.; Burleigh, M. R.; Holberg, J. B.; Bond, H. E.

    2018-05-01

    Observational tests of the white dwarf mass-radius relationship have always been limited by the uncertainty in the available distance measurements. Most studies have focused on Balmer line spectroscopy because these spectra can be obtained from ground based observatories, while the Lyman lines are only accessible to space based UV telescopes. We present results using parallax data from Gaia DR2 combined with space based spectroscopy from HST and FUSE covering the Balmer and Lyman lines. We find that our sample supports the theoretical relation, although there is at least one star which is shown to be inconsistent. Comparison of results between Balmer and Lyman line spectra shows they are in agreement when the latest broadening tables are used. We also assess the factors which contribute to the error in the mass-radius calculations and confirm the findings of other studies which show that the spread in results for targets where multiple spectra are available is larger than the statistical error. The uncertainty in the spectroscopically derived log g parameter is now the main source of error rather than the parallax. Finally, we present new results for the radius and spectroscopic mass of Sirius B which agree with the dynamical mass and mass-radius relation within 1σ.

  2. Sustained Attention is Associated with Error Processing Impairment: Evidence from Mental Fatigue Study in Four-Choice Reaction Time Task

    PubMed Central

    Xiao, Yi; Ma, Feng; Lv, Yixuan; Cai, Gui; Teng, Peng; Xu, FengGang; Chen, Shanguang

    2015-01-01

    Attention is important in error processing. Few studies have examined the link between sustained attention and error processing. In this study, we examined how error-related negativity (ERN) of a four-choice reaction time task was reduced in the mental fatigue condition and investigated the role of sustained attention in error processing. Forty-one recruited participants were divided into two groups. In the fatigue experiment group, 20 subjects performed a fatigue experiment and an additional continuous psychomotor vigilance test (PVT) for 1 h. In the normal experiment group, 21 subjects only performed the normal experimental procedures without the PVT test. Fatigue and sustained attention states were assessed with a questionnaire. Event-related potential results showed that ERN (p < 0.005) and peak (p < 0.05) mean amplitudes decreased in the fatigue experiment. ERN amplitudes were significantly associated with the attention and fatigue states in electrodes Fz, FC1, Cz, and FC2. These findings indicated that sustained attention was related to error processing and that decreased attention is likely the cause of error processing impairment. PMID:25756780

  3. Errors Detection by 5- to 8-Year-Olds Listening to a Wrong French Sequence of Number Words: Music before Lyrics?

    ERIC Educational Resources Information Center

    Gauderat-Bagault, Laurence; Lehalle, Henri

    Children, ages 5 to 8 years (n=71), were required to listen to a partly wrong sequence of tape-recorded French number words from 1 to 100 and to detect the errors. Children (from several schools near Montpellier, France) were from preschool, grade 1, and grade 2. Results show that wrong syntactic rules were better detected than omissions, whereas…

  4. On the convergence of local approximations to pseudodifferential operators with applications

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1994-01-01

    We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L(1) error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite time convergence analysis of the Engquist-Majda Pade approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long time behavior. These are based on Laguerre and exponential series.
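
The kernel-based bound mentioned in this abstract is, in essence, a Young-type convolution inequality; a sketch of the statement in generic notation (not necessarily the paper's) is:

```latex
\[
(Bu)(t) = \int_0^{t} k(t-s)\,u(s)\,ds, \qquad
(\tilde{B}u)(t) = \int_0^{t} \tilde{k}(t-s)\,u(s)\,ds,
\]
\[
\bigl\| Bu - \tilde{B}u \bigr\|_{L^\infty(0,T)}
  \;\le\; \bigl\| k - \tilde{k} \bigr\|_{L^1(0,T)}\;
          \bigl\| u \bigr\|_{L^\infty(0,T)} .
\]
```

So the operator-approximation error over a finite time window is controlled by the L1 distance between the exact and approximate convolution kernels, which is the quantity the convergence analysis works with.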

  5. Sliceable transponders for metro-access transmission links

    NASA Astrophysics Data System (ADS)

    Wagner, C.; Madsen, P.; Spolitis, S.; Vegas Olmos, J. J.; Tafur Monroy, I.

    2015-01-01

    This paper presents a solution for upgrading optical access networks by reusing existing electronics or optical equipment: sliceable transponders using signal spectrum slicing and stitching back method after direct detection. This technique allows transmission of wide bandwidth signals from the service provider (OLT - optical line terminal) to the end user (ONU - optical network unit) over an optical distribution network (ODN) via low bandwidth equipment. We show simulation and experimental results for duobinary signaling of 1 Gbit/s and 10 Gbit/s waveforms. The number of slices is adjusted to match the lowest analog bandwidth of used electrical devices and scale from 2 slices to 10 slices. Results of experimental transmission show error free signal recovery by using post forward error correction with 7% overhead.

  6. Theoretical Bounds of Direct Binary Search Halftoning.

    PubMed

    Liao, Jan-Ray

    2015-11-01

    Direct binary search (DBS) produces the images of the best quality among half-toning algorithms. The reason is that it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations try to find the binary states for each pixel to minimize the total squared perceived error. This error energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence has not yet been found. In this paper, we present a proof that shows the bound existed as conjectured under the condition that at least one swap occurs after toggle converges. The theoretical analysis also indicates that a swap with a pixel further away from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm which considers toggle and swap separately, and the swap operations are considered in the order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and can produce half-toned images of the same quality as the previous algorithm.
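
A naive sketch of the toggle step that the abstract describes, assuming a small grayscale patch, a Gaussian stand-in for the human-visual-system filter, and acceptance of any toggle that lowers the total squared perceived error; the swap step and the efficient autocorrelation-based error update used in practice are omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# Gaussian perceptual (HVS) filter -- an illustrative stand-in.
x = np.arange(-4, 5)
g = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 1.5 ** 2))
p = g / g.sum()

def perceived_sq_error(halftone, gray):
    """Total squared error after filtering the error image with the HVS model."""
    e = fftconvolve(halftone - gray, p, mode="same")
    return np.sum(e ** 2)

gray = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))        # simple gradient patch
ht = (rng.random(gray.shape) < gray).astype(float)        # random initial halftone

err = perceived_sq_error(ht, gray)
changed = True
while changed:                       # toggle-only passes repeated to convergence
    changed = False
    for i in range(ht.shape[0]):
        for j in range(ht.shape[1]):
            ht[i, j] = 1.0 - ht[i, j]            # trial toggle
            new_err = perceived_sq_error(ht, gray)
            if new_err < err:
                err = new_err                    # keep the toggle
                changed = True
            else:
                ht[i, j] = 1.0 - ht[i, j]        # revert
print(f"converged perceived squared error: {err:.4f}")
```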

  7. A simple but powerful test of perseverative search in dogs and toddlers.

    PubMed

    Péter, András; Gergely, Anna; Topál, József; Miklósi, Ádám; Pongrácz, Péter

    2015-01-01

    Perseverative (A-not-B) errors during the search of a hidden object were recently described in both dogs and 10-month-old infants. It was found that ostensive cues indicating a communicative intent of the person who hides the object played a major role in eliciting perseverative errors in both species. However, the employed experimental set-up gave rise to several alternative explanations regarding the source of these errors. Here we present a simplified protocol that eliminates the ambiguities present in the original design. Using five consecutive object hiding events to one of two locations in a fixed order ("AABBA"), we tested adult companion dogs and human children (24 months old). The experimenter performed the hiding actions while giving ostensive cues in each trial and moved the target object to the given location in a straight line. Our results show that in the B trials, both 24-month-old children and dogs could not reliably find the hidden object, and their performance in the first B trials was significantly below that of any of the A trials. These results are the first to show that the tendency for perseverative errors in an ostensive-communicative context is a robust phenomenon among 2-year-old children and dogs, and not the by-product of a topographically elaborate hiding event.

  8. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target; the modulation frequency is swept, and frequency values are measured when the transmitted and received signals are in phase, so that the distance can be calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because phase measurement is avoided. However, the actual accuracy of the system is limited since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly processed and installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on the theory of the Jones matrix, and the additional phase retardation of the λ/4 wave plate and PBS and their impact on measuring performance are analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation of ranging accuracy. According to the system design index, element tolerances and an error-correcting method for the system are proposed, a ranging system is built, and a ranging experiment is performed. Experimental results show that with the proposed tolerances, the system can satisfy the accuracy requirement. The present work provides guidance for further research on system design and error distribution.
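
The distance recovery underlying this method can be illustrated with the standard phase-coincidence relation: whenever the transmitted and received modulation are in phase, the round-trip path contains an integer number of modulation wavelengths, so two successive in-phase frequencies satisfy D = c / (2 Δf). The frequencies below are illustrative, not measurements from the paper.

```python
c = 299_792_458.0                    # speed of light (m/s)

# Illustrative successive "in-phase" modulation frequencies from a frequency sweep.
f_k  = 150.0e6                       # Hz
f_k1 = 151.0e6                       # Hz

delta_f = f_k1 - f_k
distance = c / (2.0 * delta_f)       # D = c / (2 * delta_f)
print(f"delta_f = {delta_f/1e6:.3f} MHz  ->  distance D = {distance:.3f} m")
```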

  9. Ranging error analysis of single photon satellite laser altimetry under different terrain conditions

    NASA Astrophysics Data System (ADS)

    Huang, Jiapeng; Li, Guoyuan; Gao, Xiaoming; Wang, Jianmin; Fan, Wenfeng; Zhou, Shihong

    2018-02-01

    Single photon satellite laser altimetry is based on the Geiger mode, which has the characteristics of a small spot, high repetition rate, etc. In this paper, the ranging error formula for sloped terrain is derived and evaluated numerically. The Monte Carlo method is used to simulate measurements over different terrain conditions. The experimental results show that ranging accuracy is not affected by the spot size over flat terrain, but inclined terrain can influence the ranging error dramatically: when the satellite pointing angle is 0.001° and the terrain slope is about 12°, the ranging error can reach 0.5 m, and the accuracy cannot meet the requirement when the slope exceeds 70°. Monte Carlo simulation results show that a single photon laser altimetry satellite with a high repetition rate can improve the ranging accuracy under complex terrain conditions. In order to ensure 25 repeated observations of the same point, and according to the parameters of ICESat-2, we deduce the quantitative relation between the footprint size, footprint spacing, and repetition frequency. The related conclusions can provide a reference for the design and demonstration of a domestic single photon laser altimetry satellite.

  10. Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System

    NASA Astrophysics Data System (ADS)

    Wang, Hongcui; Kawahara, Tatsuya

    CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it still remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or the ASR grammar network. However, this approach quickly runs into a trade-off between the coverage of errors and the increase in perplexity. To solve the problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, to achieve both better coverage of errors and smaller perplexity, resulting in significant improvement in ASR accuracy.

  11. Does a better model yield a better argument? An info-gap analysis

    NASA Astrophysics Data System (ADS)

    Ben-Haim, Yakov

    2017-04-01

    Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.

  12. Altimetry, Orbits and Tides

    NASA Technical Reports Server (NTRS)

    Colombo, O. L.

    1984-01-01

    The nature of the orbit error and its effect on the sea surface heights calculated with satellite altimetry are explained. The elementary concepts of celestial mechanics required to follow a general discussion of the problem are included. Errors in the orbits of satellites with precisely repeating ground tracks (SEASAT, TOPEX, ERS-1, POSEIDON, amongst past and future altimeter satellites) are considered in detail. The theoretical conclusions are illustrated with the numerical results of computer simulations. The nature of the errors in this type of orbit is such that this error can be filtered out by using height differences along repeating (overlapping) passes. This makes them particularly valuable for the study and monitoring of changes in the sea surface, such as tides. Elements of tidal theory, showing how these principles can be combined with those pertinent to the orbit error to make direct maps of the tides using altimetry, are presented.

  13. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study

    PubMed Central

    Hosseinyalamdary, Siavash

    2018-01-01

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
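
As a hedged, conventional stand-in for the idea of estimating the IMU error inside the filter: the sketch below is not the paper's deep Kalman filter but a one-dimensional Kalman filter whose state is augmented with an accelerometer bias, so that the dominant IMU error term is learned from GNSS position updates during integration; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.01, 5000
true_bias = 0.2                                            # m/s^2 accelerometer bias

# Truth: zero acceleration, constant velocity of 1 m/s.
truth_v = np.full(n, 1.0)
truth_p = np.concatenate(([0.0], np.cumsum(truth_v[:-1] * dt)))

acc_meas = 0.0 + true_bias + rng.normal(0, 0.05, n)        # IMU accelerometer readings
gnss = truth_p + rng.normal(0, 1.0, n)                     # GNSS positions (used at 1 Hz below)

# State: [position, velocity, accelerometer bias]
x = np.array([0.0, 0.0, 0.0])
P = np.diag([10.0, 1.0, 1.0])
F = np.array([[1, dt, -0.5 * dt ** 2],
              [0, 1, -dt],
              [0, 0, 1]])
B = np.array([0.5 * dt ** 2, dt, 0.0])
Q = np.diag([1e-6, 1e-4, 1e-8])
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[1.0]])

for k in range(n):
    # Prediction: mechanise the bias-corrupted IMU measurement, subtracting the bias estimate.
    x = F @ x + B * acc_meas[k]
    P = F @ P @ F.T + Q
    if k % 100 == 0:                                       # GNSS update at 1 Hz
        y = gnss[k] - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(3) - K @ H) @ P

print(f"estimated accelerometer bias: {x[2]:.3f} m/s^2 (true {true_bias})")
```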

  14. Path-following in model predictive rollover prevention using front steering and braking

    NASA Astrophysics Data System (ADS)

    Ghazali, Mohammad; Durali, Mohammad; Salarieh, Hassan

    2017-01-01

    In this paper, vehicle path-following in the presence of rollover risk is investigated. Vehicles with a high centre of mass are prone to roll instability, and untripped rollover risk increases for vehicles with a high centre of gravity in high-friction road conditions. Previous research introduces strategies to handle the short-duration rollover condition; in those studies, however, trajectory tracking is affected and has not been thoroughly investigated. This paper therefore focuses on the tracking error that results from rollover prevention. A lower-level model predictive front steering controller is adopted to deal with rollover and tracking error as a priority sequence. A brake control is included in the lower-level controller which directly obeys an upper-level controller (ULC) command. The ULC manages vehicle speed with regard primarily to tracking error. Simulation results show that the proposed control framework maintains roll stability while the tracking error is confined to a predefined error limit.

  15. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  16. An Ensemble Method for Spelling Correction in Consumer Health Questions

    PubMed Central

    Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina

    2015-01-01

    Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29 achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant in spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
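
A minimal sketch of the edit-distance-plus-frequency component of such an ensemble, using a small illustrative vocabulary with made-up corpus frequencies; the contextual-similarity model, word-break handling, and punctuation handling of the full method are omitted.

```python
from functools import lru_cache

# Illustrative vocabulary with corpus frequencies (not from the NLM collection).
VOCAB = {"headache": 900, "stomach": 700, "ache": 400, "medicine": 850,
         "medication": 950, "diabetes": 800, "symptom": 600}

def edit_distance(a, b):
    """Classic Levenshtein distance via memoised recursion."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0: return j
        if j == 0: return i
        return min(d(i - 1, j) + 1,
                   d(i, j - 1) + 1,
                   d(i - 1, j - 1) + (a[i - 1] != b[j - 1]))
    return d(len(a), len(b))

def correct(word, max_dist=2):
    """Return the most frequent vocabulary word within max_dist edits, else the word itself."""
    if word in VOCAB:
        return word
    candidates = [(edit_distance(word, v), -freq, v) for v, freq in VOCAB.items()]
    dist, _neg_freq, best = min(candidates)          # closest first, ties broken by frequency
    return best if dist <= max_dist else word

for w in ["hedache", "medecine", "diabetis", "xyz"]:
    print(f"{w:10s} -> {correct(w)}")
```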

  17. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person,Suzette; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  18. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    PubMed

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  19. Sensitivity of geographic information system outputs to errors in remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.

    1981-01-01

    The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
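
    A minimal Monte Carlo sketch of why aggregation attenuates pixel-level errors (assuming independent per-pixel misclassification and a simple majority-style cell decision; the error rate and cell size below are illustrative, not the study's Landsat parameters):

      import random

      def cell_error_rate(pixel_error=0.15, pixels_per_cell=25, n_cells=100_000):
          # Fraction of cells whose aggregate decision is wrong when each pixel is
          # independently misclassified with probability pixel_error.
          wrong = 0
          for _ in range(n_cells):
              bad = sum(random.random() < pixel_error for _ in range(pixels_per_cell))
              if bad / pixels_per_cell > 0.5:   # cell decision dominated by bad pixels
                  wrong += 1
          return wrong / n_cells

      random.seed(0)
      print(cell_error_rate(pixels_per_cell=1))    # roughly the per-pixel error rate
      print(cell_error_rate(pixels_per_cell=25))   # far smaller after aggregation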

  20. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study.

    PubMed

    Hosseinyalamdary, Siavash

    2018-04-24

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.
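
    The extra modelling step can be sketched on a one-dimensional toy problem: between the predict and update steps, an IMU bias term is re-estimated from recent innovations. The running-mean bias model, the learning rate, and the noise levels below are assumptions for illustration only, not the paper's learned error model.

      import numpy as np

      def deep_like_kalman(z_gnss, u_imu, q=0.01, r=0.25, window=20):
          # 1-D toy filter: predict with IMU increments, learn a slowly varying IMU
          # bias from recent innovations (the added modelling step), update with GNSS.
          x, p, bias = 0.0, 1.0, 0.0
          innovations, estimates = [], []
          for z, u in zip(z_gnss, u_imu):
              x_pred = x + (u - bias)          # prediction with bias-corrected IMU input
              p_pred = p + q
              innovation = z - x_pred
              innovations.append(innovation)
              bias -= 0.1 * np.mean(innovations[-window:])   # crude online bias learning
              k = p_pred / (p_pred + r)        # standard Kalman gain and correction
              x = x_pred + k * innovation
              p = (1 - k) * p_pred
              estimates.append(x)
          return np.array(estimates)

      rng = np.random.default_rng(0)
      truth = np.cumsum(np.ones(200))                   # true position, +1 per step
      u_imu = 1.0 + 0.2 + rng.normal(0, 0.05, 200)      # IMU increments with a +0.2 bias
      z_gnss = truth + rng.normal(0, 0.5, 200)          # noisy GNSS positions
      print(np.mean(np.abs(deep_like_kalman(z_gnss, u_imu) - truth)))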

  1. The price of complexity in financial networks

    NASA Astrophysics Data System (ADS)

    Battiston, Stefano; Caldarelli, Guido; May, Robert M.; Roukny, Tarik; Stiglitz, Joseph E.

    2016-09-01

    Financial institutions form multilayer networks by engaging in contracts with each other and by holding exposures to common assets. As a result, the default probability of one institution depends on the default probability of all of the other institutions in the network. Here, we show how small errors on the knowledge of the network of contracts can lead to large errors in the probability of systemic defaults. From the point of view of financial regulators, our findings show that the complexity of financial networks may decrease the ability to mitigate systemic risk, and thus it may increase the social cost of financial crises.

  2. The price of complexity in financial networks.

    PubMed

    Battiston, Stefano; Caldarelli, Guido; May, Robert M; Roukny, Tarik; Stiglitz, Joseph E

    2016-09-06

    Financial institutions form multilayer networks by engaging in contracts with each other and by holding exposures to common assets. As a result, the default probability of one institution depends on the default probability of all of the other institutions in the network. Here, we show how small errors on the knowledge of the network of contracts can lead to large errors in the probability of systemic defaults. From the point of view of financial regulators, our findings show that the complexity of financial networks may decrease the ability to mitigate systemic risk, and thus it may increase the social cost of financial crises.

  3. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

    In epidemiological studies explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.

  4. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
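
    For reference, the asymptotic decay rate in the symmetric (Chernoff) setting mentioned above is the quantum Chernoff distance between ρ and σ; this is the standard textbook definition, restated here rather than taken from the paper's finite-size bounds:

      \[
        \xi_{\mathrm{QCB}}(\rho,\sigma)
          = -\log \min_{0 \le s \le 1} \operatorname{Tr}\!\bigl(\rho^{s}\sigma^{1-s}\bigr),
        \qquad
        P_{\mathrm{err}}(n) \sim e^{-\,n\,\xi_{\mathrm{QCB}}(\rho,\sigma)}
        \quad (n \text{ copies}).
      \]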

  5. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point clouds resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.
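
    A minimal sketch of converting an entire point cloud to one image in a single pass, rather than one image per point; the grid resolution and the choice of the lowest return per pixel are assumptions for illustration, not the authors' exact feature extraction:

      import numpy as np

      def rasterize_point_cloud(points, cell_size=1.0):
          # Convert an (N, 3) LIDAR point cloud (x, y, z) into one 2-D image whose
          # pixel value is the minimum elevation in each cell (NaN if empty).
          xy_min = points[:, :2].min(axis=0)
          cols = np.floor((points[:, 0] - xy_min[0]) / cell_size).astype(int)
          rows = np.floor((points[:, 1] - xy_min[1]) / cell_size).astype(int)
          image = np.full((rows.max() + 1, cols.max() + 1), np.nan)
          for r, c, z in zip(rows, cols, points[:, 2]):
              if np.isnan(image[r, c]) or z < image[r, c]:
                  image[r, c] = z   # keep the lowest return as a ground proxy
          return image

      points = np.array([[0.2, 0.3, 10.0], [0.8, 0.4, 9.5], [2.5, 1.1, 12.0]])
      print(rasterize_point_cloud(points))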

  6. Influence of precision of emission characteristic parameters on model prediction error of VOCs/formaldehyde from dry building material.

    PubMed

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.

  7. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Lots of researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy in the stage of design and manufacturing, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using the first-order perturbation theory and vector chain method. According to the mapping property of motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy in the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. The sensitivity probabilistic model is established and the global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy in the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy in the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into the issue of nonlinearly constrained optimization with the manufacturing cost minimum being the optimization objective. By utilizing the genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanisms.

  8. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. 2-Channel QMF is also designed with particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, stopband and transition band at quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives the best result in terms of peak reconstruction error and transition band error while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Model-based influences on humans' choices and striatal prediction errors.

    PubMed

    Daw, Nathaniel D; Gershman, Samuel J; Seymour, Ben; Dayan, Peter; Dolan, Raymond J

    2011-03-24

    The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. The cerebellum for jocks and nerds alike.

    PubMed

    Popa, Laurentiu S; Hewitt, Angela L; Ebner, Timothy J

    2014-01-01

    Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains.

  11. Wave aberrations in rhesus monkeys with vision-induced ametropias

    PubMed Central

    Ramamirtham, Ramkumar; Kee, Chea-su; Hung, Li-Fang; Qiao-Grider, Ying; Huang, Juan; Roorda, Austin; Smith, Earl L.

    2007-01-01

    The purpose of this study was to investigate the relationship between refractive errors and high-order aberrations in infant rhesus monkeys. Specifically, we compared the monochromatic wave aberrations measured with a Shack-Hartman wavefront sensor between normal monkeys and monkeys with vision-induced refractive errors. Shortly after birth, both normal monkeys and treated monkeys reared with optically induced defocus or form deprivation showed a decrease in the magnitude of high-order aberrations with age. However, the decrease in aberrations was typically smaller in the treated animals. Thus, at the end of the lens-rearing period, higher than normal amounts of aberrations were observed in treated eyes, both hyperopic and myopic eyes and treated eyes that developed astigmatism, but not spherical ametropias. The total RMS wavefront error increased with the degree of spherical refractive error, but was not correlated with the degree of astigmatism. Both myopic and hyperopic treated eyes showed elevated amounts of coma and trefoil and the degree of trefoil increased with the degree of spherical ametropia. Myopic eyes also exhibited a much higher prevalence of positive spherical aberration than normal or treated hyperopic eyes. Following the onset of unrestricted vision, the amount of high-order aberrations decreased in the treated monkeys that also recovered from the experimentally induced refractive errors. Our results demonstrate that high-order aberrations are influenced by visual experience in young primates and that the increase in high-order aberrations in our treated monkeys appears to be an optical byproduct of the vision-induced alterations in ocular growth that underlie changes in refractive error. The results from our study suggest that the higher amounts of wave aberrations observed in ametropic humans are likely to be a consequence, rather than a cause, of abnormal refractive development. PMID:17825347

  12. The cerebellum for jocks and nerds alike

    PubMed Central

    Popa, Laurentiu S.; Hewitt, Angela L.; Ebner, Timothy J.

    2014-01-01

    Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains. PMID:24987338

  13. A complete representation of uncertainties in layer-counted paleoclimatic archives

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2017-09-01

    Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.

  14. Continuous quantum error correction for non-Markovian decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089

    2007-08-15

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.

  15. In vitro physical, chemical, and biological evaluation of commercially available metal orthodontic brackets.

    PubMed

    Kim, Joo Hyoung; Cha, Jung Yul; Hwang, Chung Ju

    2012-12-01

    This in vitro study was undertaken to evaluate the physical, chemical, and biological properties of commercially available metal orthodontic brackets in South Korea, because national standards for these products are lacking. Four bracket brands were tested for dimensional accuracy (manufacturing errors in angulation and torque), cytotoxicity, composition, elution, and corrosion: Archist (Daeseung Medical), Victory (3M Unitek), Kosaka (Tomy), and Confidence (Shinye Odontology Materials). The tested brackets showed no significant differences in manufacturing errors in angulation, but Confidence brackets showed a significant difference in manufacturing errors in torque. None of the brackets were cytotoxic to mouse fibroblasts. The metal ion components did not show a regular increasing or decreasing trend of elution over time, but the volume of the total eluted metal ions increased: Archist brackets had the maximal Cr elution and Confidence brackets appeared to have the largest volume of total eluted metal ions because of excessive Ni elution. Confidence brackets showed the lowest corrosion resistance during potentiodynamic polarization. The results of this study could potentially be applied in establishing national standards for metal orthodontic brackets and in evaluating commercially available products.

  16. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied on observed and forecasted inflows before a similar first order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Model 1 and 2 gave similar results, and the main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
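
    The first model above can be sketched as: Box-Cox-transform observed and forecasted inflows, form the errors, and fit a first-order autoregressive model. The transform parameter, the synthetic data, and the plain least-squares fit (without the climatic conditioning) are simplifications for illustration:

      import numpy as np

      def box_cox(x, lam=0.3):
          # Box-Cox transform (lam != 0); inflows must be positive.
          return (x ** lam - 1.0) / lam

      def fit_ar1(errors):
          # Least-squares estimate of phi and sigma in e_t = phi * e_{t-1} + eps_t.
          e_prev, e_curr = errors[:-1], errors[1:]
          phi = np.dot(e_prev, e_curr) / np.dot(e_prev, e_prev)
          resid = e_curr - phi * e_prev
          return phi, resid.std(ddof=1)

      rng = np.random.default_rng(1)
      observed = rng.gamma(shape=4.0, scale=10.0, size=500)     # synthetic inflow
      forecast = observed * rng.lognormal(0.0, 0.1, size=500)   # imperfect forecast
      errors = box_cox(observed) - box_cox(forecast)
      phi, sigma = fit_ar1(errors)
      print(f"AR(1) coefficient {phi:.3f}, innovation std {sigma:.3f}")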

  17. Developing Performance Estimates for High Precision Astrometry with TMT

    NASA Astrophysics Data System (ADS)

    Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana

    2013-12-01

    Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences and the data analysis techniques. This paper summarizes our efforts to develop a top down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into 5 categories: Reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, etc. Thus, it is not possible to develop a single error budget that applies to all science cases and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no show stoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.

  18. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1996-01-01

    We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.

  19. Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.

    PubMed

    Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan

    2015-01-01

    Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars.

  20. Spelling Errors of Dyslexic Children in Bosnian Language with Transparent Orthography

    ERIC Educational Resources Information Center

    Duranovic, Mirela

    2017-01-01

    The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in Bosnian language with transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% of phonological errors, 10% of orthographic errors, and 4%…

  1. Optimal algorithm to improve the calculation accuracy of energy deposition for betavoltaic MEMS batteries design

    NASA Astrophysics Data System (ADS)

    Li, Sui-xian; Chen, Haiyang; Sun, Min; Cheng, Zaijun

    2009-11-01

    To improve the accuracy of calculating the energy deposition of electrons traveling in solids, we propose a method we call the optimal subdivision number searching algorithm. Large calculation errors arise when treating the energy deposition of electrons in solids, and we attribute them to the dividing and summing used when evaluating the integral. Based on the results of earlier research, we propose a further subdividing and summing method. For β particles with energies across the entire spectrum, the energy values are restricted to integral multiples of keV and the subdivision number is varied from 1 to 30, yielding a collection of energy-deposition calculation errors. Searching this collection for the minimum error gives the corresponding energy and subdivision number pairs, as well as the optimal subdivision number. The method is applied to four solid materials, Al, Si, Ni and Au, to calculate energy deposition. The results show that the calculation error is reduced by one order of magnitude with the improved algorithm.
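
    A schematic version of the search: for each candidate subdivision number, approximate the deposition integral with that many subintervals and keep the number whose result is closest to a fine-grained reference. The depth-dose profile below is a placeholder, not an actual electron energy-loss model:

      import numpy as np

      def deposition(depth, energy_kev):
          # Placeholder depth-dose profile; stands in for the real energy-loss model.
          peak = 0.02 * energy_kev
          return np.exp(-((depth - peak) / (0.5 * peak + 1e-9)) ** 2)

      def integral(energy_kev, n_sub):
          # Midpoint-rule approximation of deposited energy with n_sub subintervals.
          depth_max = 0.05 * energy_kev
          edges = np.linspace(0.0, depth_max, n_sub + 1)
          mids = 0.5 * (edges[:-1] + edges[1:])
          return np.sum(deposition(mids, energy_kev)) * (depth_max / n_sub)

      def optimal_subdivision(energy_kev, max_sub=30):
          # Search 1..max_sub for the subdivision number closest to a fine reference.
          reference = integral(energy_kev, 10_000)
          errors = [abs(integral(energy_kev, n) - reference) for n in range(1, max_sub + 1)]
          return int(np.argmin(errors)) + 1, errors

      best, errs = optimal_subdivision(energy_kev=50)
      print(best, errs[best - 1])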

  2. Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research

    PubMed Central

    Bakker, Marjan; Wicherts, Jelte M.

    2014-01-01

    Background The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606

  3. Interobserver error involved in independent attempts to measure cusp base areas of Pan M1s

    PubMed Central

    Bailey, Shara E; Pilbrow, Varsha C; Wood, Bernard A

    2004-01-01

    Cusp base areas measured from digitized images increase the amount of detailed quantitative information one can collect from post-canine crown morphology. Although this method is gaining wide usage for taxonomic analyses of extant and extinct hominoids, the techniques for digitizing images and taking measurements differ between researchers. The aim of this study was to investigate interobserver error in order to help assess the reliability of cusp base area measurement within extant and extinct hominoid taxa. Two of the authors measured individual cusp base areas and total cusp base area of 23 maxillary first molars (M1) of Pan. From these, relative cusp base areas were calculated. No statistically significant interobserver differences were found for either absolute or relative cusp base areas. On average the hypocone and paracone showed the least interobserver error (< 1%) whereas the protocone and metacone showed the most (2.6–4.5%). We suggest that the larger measurement error in the metacone/protocone is due primarily to either weakly defined fissure patterns and/or the presence of accessory occlusal features. Overall, levels of interobserver error are similar to those found for intraobserver error. The results of our study suggest that if certain prescribed standards are employed then cusp and crown base areas measured by different individuals can be pooled into a single database. PMID:15447691

  4. Drought Persistence Errors in Global Climate Models

    NASA Astrophysics Data System (ADS)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
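
    The persistence estimate used above, the dry-to-dry transition probability, can be computed directly from a precipitation series once a dry month is defined as a negative anomaly; the anomaly definition and the synthetic series below are illustrative only:

      import numpy as np

      def dry_to_dry_probability(precip):
          # P(dry at t+1 | dry at t), where 'dry' means a negative anomaly
          # relative to the series mean.
          dry = precip < precip.mean()
          dry_now, dry_next = dry[:-1], dry[1:]
          n_dry = dry_now.sum()
          return np.nan if n_dry == 0 else (dry_now & dry_next).sum() / n_dry

      rng = np.random.default_rng(42)
      monthly_precip = rng.gamma(shape=2.0, scale=50.0, size=600)   # 50 years, monthly
      print(f"dry-to-dry transition probability: {dry_to_dry_probability(monthly_precip):.2f}")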

  5. Comparison of survey and photogrammetry methods to position gravity data, Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponce, D.A.; Wu, S.S.C.; Spielman, J.B.

    1985-12-31

    Locations of gravity stations at Yucca Mountain, Nevada, were determined by a survey using an electronic distance-measuring device and by a photogrammetric method. The data from both methods were compared to determine if horizontal and vertical coordinates developed from photogrammetry are sufficiently accurate to position gravity data at the site. The results show that elevations from the photogrammetric data have a mean difference of 0.57 ± 0.70 m when compared with those of the surveyed data. Comparison of the horizontal control shows that the two methods agreed to within 0.01 minute. At a latitude of 45°, an error of 0.01 minute (18 m) corresponds to a gravity anomaly error of 0.015 mGal. Bouguer gravity anomalies are most sensitive to errors in elevation, thus elevation is the determining factor for use of photogrammetric or survey methods to position gravity data. Because gravity station positions are difficult to locate on aerial photographs, photogrammetric positions are not always exactly at the gravity station; therefore, large disagreements may appear when comparing electronic and photogrammetric measurements. A mean photogrammetric elevation error of 0.57 m corresponds to a gravity anomaly error of 0.11 mGal. Errors of 0.11 mGal are too large for high-precision or detailed gravity measurements but acceptable for regional work. 1 ref. 2 figs., 4 tabs.
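
    The quoted conversions from positioning error to gravity anomaly error can be checked with the standard gradients (free-air gradient 0.3086 mGal/m, Bouguer slab term 0.0419ρ mGal/m with ρ = 2.67 g/cm³, and a latitude gradient of roughly 0.81·sin 2φ mGal/km); the figures below are an order-of-magnitude check, not the report's computation:

      import math

      # Elevation error -> Bouguer anomaly error (free-air minus Bouguer slab gradient).
      free_air = 0.3086                      # mGal per metre
      bouguer_slab = 0.0419 * 2.67           # mGal per metre for density 2.67 g/cm^3
      elev_error_m = 0.57
      print(elev_error_m * (free_air - bouguer_slab))     # ~0.11 mGal

      # Latitude error -> anomaly error, worst case near 45 degrees latitude.
      lat_gradient = 0.81e-3 * math.sin(math.radians(2 * 45))   # mGal per metre northward
      north_error_m = 18.0                   # 0.01 arc-minute of latitude
      print(north_error_m * lat_gradient)                 # ~0.015 mGal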

  6. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    PubMed

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  7. Extending High-Order Flux Operators on Spherical Icosahedral Grids and Their Applications in the Framework of a Shallow Water Model

    NASA Astrophysics Data System (ADS)

    Zhang, Yi

    2018-01-01

    This study extends a set of unstructured third/fourth-order flux operators on spherical icosahedral grids from two perspectives. First, the fifth-order and sixth-order flux operators of this kind are further extended, and the nominally second-order to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the standard fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. Even-order operators show higher limiter sensitivity than the odd-order operators. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second-order and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, high-order flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present better results.

  8. Absorbance and fluorometric sensing with capillary wells microplates.

    PubMed

    Tan, Han Yen; Cheong, Brandon Huey-Ping; Neild, Adrian; Liew, Oi Wah; Ng, Tuck Wah

    2010-12-01

    Detection and readout from small volume assays in microplates are a challenge. The capillary wells microplate approach [Ng et al., Appl. Phys. Lett. 93, 174105 (2008)] offers strong advantages in small liquid volume management. An adapted design is described and shown here to be able to detect, in a nonimaging manner, fluorescence and absorbance assays minus the error often associated with meniscus forming at the air-liquid interface. The presence of bubbles in liquid samples residing in microplate wells can cause inaccuracies. Pipetting errors, if not adequately managed, can result in misleading data and wrong interpretations of assay results; particularly in the context of high throughput screening. We show that the adapted design is also able to detect for bubbles and pipetting errors during actual assay runs to ensure accuracy in screening.

  9. A novel measure of reliability in Diffusion Tensor Imaging after data rejections due to subject motion.

    PubMed

    Sairanen, V; Kuusela, L; Sipilä, O; Savolainen, S; Vanhatalo, S

    2017-02-15

    Diffusion Tensor Imaging (DTI) is commonly challenged by subject motion during data acquisition, which often leads to corrupted image data. The currently used procedure in DTI analysis is to correct or completely reject such data before tensor estimation; however, assessing the reliability and accuracy of the estimated tensor in such situations has evaded previous studies. This work aims to define the loss of data accuracy with increasing image rejections, and to define a robust method for assessing reliability of the result at voxel level. We carried out simulations of every possible sub-scheme (N=1,073,567,387) of the Jones30 gradient scheme, followed by confirming the idea with MRI data from four newborn and three adult subjects. We assessed the relative error of the most commonly used tensor estimates for DTI and tractography studies, fractional anisotropy (FA) and the major orientation vector (V1), respectively. The error was estimated using two measures, the widely used electric potential (EP) criteria as well as the rotationally variant condition number (CN). Our results show that CN and EP are comparable in situations with very few rejections, but CN becomes clearly more sensitive to depicting errors when more gradient vectors and images were rejected. The error in FA and V1 was also found to depend on the actual FA level in the given voxel; low actual FA levels were related to high relative errors in the FA and V1 estimates. Finally, the results were confirmed with clinical MRI data. This showed that the errors after rejections are, indeed, inhomogeneous across brain regions. The FA and V1 errors become progressively larger when moving from the thick white matter bundles towards more superficial subcortical structures. Our findings suggest that i) CN is a useful estimator of data reliability at voxel level, and ii) DTI preprocessing with data rejections leads to major challenges when assessing brain tissue with lower FA levels, such as the whole newborn brain, as well as the adult superficial, subcortical areas commonly traced in precise connectivity analyses between cortical regions. Copyright © 2016 Elsevier Inc. All rights reserved.
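
    The condition number criterion above can be computed from the diffusion tensor design matrix of the gradient directions that survive rejection, for example as in this sketch; the direction set below is a toy scheme, not the Jones30 table:

      import numpy as np

      def design_matrix(directions):
          # Tensor design matrix rows: [gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz].
          g = np.asarray(directions, dtype=float)
          g /= np.linalg.norm(g, axis=1, keepdims=True)
          x, y, z = g[:, 0], g[:, 1], g[:, 2]
          return np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z])

      def scheme_condition_number(directions, rejected=()):
          # Condition number of the design matrix after rejecting motion-corrupted volumes.
          keep = [i for i in range(len(directions)) if i not in set(rejected)]
          return np.linalg.cond(design_matrix([directions[i] for i in keep]))

      directions = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
                    (1, 1, 0), (1, 0, 1), (0, 1, 1),
                    (1, -1, 0), (1, 0, -1), (0, 1, -1), (1, 1, 1)]
      print(scheme_condition_number(directions))                      # full scheme
      print(scheme_condition_number(directions, rejected=[3, 5, 9]))  # after rejections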

  10. A procedure for the significance testing of unmodeled errors in GNSS observations

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after corrected with empirical model and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and then simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled especially when they are significant. Therefore, a very first question is how to statistically validate the significance of unmodeled errors. In this research, we will propose a procedure to examine the significance of these unmodeled errors by the combined use of the hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying the time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, the spatiotemporally correlated unmodeled errors are commonly existent in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.

  11. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    PubMed

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

    Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.

  12. Data driven CAN node reliability assessment for manufacturing system

    NASA Astrophysics Data System (ADS)

    Zhang, Leiming; Yuan, Yong; Lei, Yong

    2017-01-01

    The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to inaccessibility of the node information and the complexity of the node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data driven node bus-off time assessment method for CAN network is proposed by directly using network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, the generalized zero inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, the stochastic model is constructed to predict the MTTB of the node. The accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer controlled error injection system. Experiment results show that the MTTB of nodes predicted by the proposed method agree well with observations in the case studies. The proposed data driven node time to bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
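
    A simplified sketch of the final prediction step: given a per-frame transmit error probability for one node, the mean time to bus-off can be estimated by simulating the CAN transmit error counter (+8 per transmit error, -1 per success, bus-off above 255). The plain Bernoulli error model here stands in for the paper's GZIP event model, and the error rate is an arbitrary accelerated value:

      import random

      def simulate_mttb(error_prob, trials=2000, max_frames=10_000_000):
          # Monte Carlo estimate of mean time to bus-off (in frames) for one node,
          # assuming each transmitted frame independently hits an error with
          # probability error_prob.  TEC rules: +8 on error, -1 on success
          # (floored at 0), bus-off once TEC exceeds 255.
          times = []
          for _ in range(trials):
              tec = 0
              for frame in range(1, max_frames + 1):
                  if random.random() < error_prob:
                      tec += 8
                  else:
                      tec = max(0, tec - 1)
                  if tec > 255:
                      times.append(frame)
                      break
          return sum(times) / len(times) if times else float("inf")

      random.seed(0)
      print(simulate_mttb(error_prob=0.2))   # aggressive error injection, short MTTB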

  13. Constrained motion estimation-based error resilient coding for HEVC

    NASA Astrophysics Data System (ADS)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions that are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.

  14. Assessing Error Awareness as a Mediator of the Relationship between Subjective Concerns and Cognitive Performance in Older Adults

    PubMed Central

    Buckley, Rachel F.; Laming, Gemma; Chen, Li Peng Evelyn; Crole, Alice; Hester, Robert

    2016-01-01

    Objectives Subjective concerns of cognitive decline (SCD) often manifest in older adults who exhibit objectively normal cognitive functioning. This subjective-objective discrepancy is counter-intuitive when mounting evidence suggests that subjective concerns relate to future clinical progression to Alzheimer’s disease, and so possess the potential to be a sensitive early behavioural marker of disease. In the current study, we aimed to determine whether individual variability in conscious awareness of errors in daily life might mediate this subjective-objective relationship. Methods 67 cognitively-normal older adults underwent cognitive, SCD and mood tests, and an error awareness task. Results Poorer error awareness was not found to mediate a relationship between SCD and objective performance. Furthermore, non-clinical levels of depressive symptomatology were a primary driving factor of SCD and error awareness, and significantly mediated a relationship between the two. Discussion We were unable to show that poorer error awareness mediates the relationship between SCD and cognitive performance in older adults. Our study does suggest, however, that underlying depressive symptoms influence both poorer error awareness and greater SCD severity. Error awareness is thus not recommended as a proxy for SCD, as reduced levels of error awareness do not seem to be reflected by greater SCD. PMID:27832173

  15. Self-identification and empathy modulate error-related brain activity during the observation of penalty shots between friend and foe

    PubMed Central

    Ganesh, Shanti; van Schie, Hein T.; De Bruijn, Ellen R. A.; Bekkering, Harold

    2009-01-01

    The ability to detect and process errors made by others plays an important role in many social contexts. The capacity to process errors is typically found to rely on sites in the medial frontal cortex. However, it remains to be determined whether responses at these sites are driven primarily by action errors themselves or by the affective consequences normally associated with their commission. Using an experimental paradigm that disentangles action errors and the valence of their affective consequences, we demonstrate that sites in the medial frontal cortex (MFC), including the ventral anterior cingulate cortex (vACC) and pre-supplementary motor area (pre-SMA), respond to action errors independent of the valence of their consequences. The strength of this response was negatively correlated with the empathic concern subscale of the Interpersonal Reactivity Index. We also demonstrate a main effect of self-identification by showing that errors committed by friends and foes elicited significantly different BOLD responses in a separate region of the middle anterior cingulate cortex (mACC). These results suggest that the way we look at others plays a critical role in determining patterns of brain activation during error observation. These findings may have important implications for general theories of error processing. PMID:19015079

  16. Patient motion tracking in the presence of measurement errors.

    PubMed

    Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter

    2009-01-01

    The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcome. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time.
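
    The record does not give the filter equations, but a constant-velocity Kalman filter on one tracked coordinate conveys the idea. The sketch below is a minimal, hypothetical example (the frame rate, noise levels and drifting-marker data are assumptions), not the Johns Hopkins implementation.

    ```python
    import numpy as np

    def track_marker(measurements, dt=0.02, meas_std=0.5, accel_std=2.0):
        """Constant-velocity Kalman filter for one coordinate of a tracked marker.

        measurements : noisy optical-tracker positions (mm), one per frame
        dt           : frame interval (s); meas_std (mm) and accel_std (mm/s^2)
                       are the assumed measurement and process noise levels
        """
        F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition (pos, vel)
        H = np.array([[1.0, 0.0]])                         # only position is measured
        Q = accel_std**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                     [dt**3 / 2, dt**2]])  # process noise
        R = np.array([[meas_std**2]])                      # measurement noise
        x = np.array([[measurements[0]], [0.0]])
        P = np.eye(2) * 10.0
        estimates = []
        for z in measurements:
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update with the new tracker reading
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([[z]]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            estimates.append(x[0, 0])
        return np.array(estimates)

    # Hypothetical drift of 1 mm/s plus 0.5 mm tracker noise.
    t = np.arange(0, 2, 0.02)
    noisy = 1.0 * t + np.random.default_rng(1).normal(0, 0.5, t.size)
    print(track_marker(noisy)[-5:])
    ```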

  17. Spatial serial order processing in schizophrenia.

    PubMed

    Fraser, David; Park, Sohee; Clark, Gina; Yohanna, Daniel; Houk, James C

    2004-10-01

    The aim of this study was to examine serial order processing deficits in 21 schizophrenia patients and 16 age- and education-matched healthy controls. In a spatial serial order working memory task, one to four spatial targets were presented in a randomized sequence. Subjects were required to remember the locations and the order in which the targets were presented. Patients showed a marked deficit in ability to remember the sequences compared with controls. Increasing the number of targets within a sequence resulted in poorer memory performance for both control and schizophrenia subjects, but the effect was much more pronounced in the patients. Targets presented at the end of a long sequence were more vulnerable to memory error in schizophrenia patients. Performance deficits were not attributable to motor errors, but to errors in target choice. The results support the idea that the memory errors seen in schizophrenia patients may be due to saturating the working memory network at relatively low levels of memory load.

  18. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The framework combines a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina), giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly-adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed amount of elements.

  19. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight line difference. In the paper, the flight line difference is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply levelling to the flight-line-difference data rather than directly to the original AEM data. Pseudo tie lines are selected so that they are distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. Furthermore, we obtain the levelling errors of the original AEM data through inverse differencing after spatial interpolation. This levelling method requires neither flown tie lines nor a hand-designed levelling fitting function. Its effectiveness is demonstrated by the levelling results of survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
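
    A minimal sketch of the reconstruction step, under the assumption that the flight-line differences along the pseudo tie lines are stacked into a matrix: the low-order principal components (obtained here via SVD) approximate the line-correlated levelling error, which would then be interpolated and inverse-differenced as described above. The array names and toy data are hypothetical.

    ```python
    import numpy as np

    def estimate_levelling_error(tie_line_diffs, n_components=2):
        """Estimate the levelling error shared by pseudo tie lines.

        tie_line_diffs : array (n_tie_lines, n_samples) of flight-line-difference
                         values sampled along each pseudo tie line
        n_components   : number of low-order principal components kept; these
                         capture the slowly varying, line-correlated levelling error
        """
        X = np.asarray(tie_line_diffs, dtype=float)
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        # Reconstruct with only the leading components.
        low_order = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mean
        return low_order          # levelling-error estimate, same shape as input

    # Hypothetical: 20 pseudo tie lines, 200 samples each, sharing a slow drift.
    rng = np.random.default_rng(0)
    drift = np.linspace(0, 5, 200)                      # correlated levelling error
    data = drift + rng.normal(0, 0.5, (20, 200))        # plus uncorrelated signal/noise
    print(estimate_levelling_error(data).shape)
    ```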

  20. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
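
    Both criteria are simple to compute. The sketch below uses common definitions (variants that remove the signal mean before normalising the PRD also exist) on a hypothetical EEG segment, showing why an RMSE in microvolts maps directly onto clinical noise limits while the same PRD can correspond to very different absolute distortion.

    ```python
    import numpy as np

    def prd_and_rmse(original, reconstructed):
        """Percentage root-mean-square difference (PRD, %) and root-mean-square
        error (same units as the signal, e.g. microvolts) between an EEG segment
        and its reconstruction after lossy compression."""
        x = np.asarray(original, dtype=float)
        y = np.asarray(reconstructed, dtype=float)
        err = x - y
        rmse = np.sqrt(np.mean(err ** 2))
        prd = 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(x ** 2))
        return prd, rmse

    # An RMSE of a few microvolts can be checked directly against clinical
    # noise guidelines; the PRD value alone cannot, because it scales with
    # the amplitude of the underlying EEG.
    x = np.sin(np.linspace(0, 20, 1000)) * 50          # hypothetical ~50 uV signal
    y = x + np.random.default_rng(2).normal(0, 5, x.size)
    print(prd_and_rmse(x, y))
    ```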

  1. The immediate effects of therapeutic keyboard music playing for finger training in adults undergoing hand rehabilitation.

    PubMed

    Zhang, Xiaoying; Liu, Songhuai; Yang, Degang; Du, Liangjie; Wang, Ziyuan

    2016-08-01

    [Purpose] The purpose of this study was to examine the immediate effects of therapeutic keyboard music playing on the finger function of subjects' hands through measurements of the joint position error test, surface electromyography, probe reaction time, and writing time. [Subjects and Methods] Ten subjects were divided randomly into experimental and control groups. The experimental group used therapeutic keyboard music playing and the control group used grip training. All subjects were assessed and evaluated by the joint position error test, surface electromyography, probe reaction time, and writing time. [Results] After accomplishing therapeutic keyboard music playing and grip training, surface electromyography of the two groups showed no significant change, but joint position error test, probe reaction time, and writing time obviously improved. [Conclusion] These results suggest that therapeutic keyboard music playing is an effective and novel treatment for improving joint position error test scores, probe reaction time, and writing time, and it should be promoted widely in clinics.

  2. Real-time orbit estimation for ATS-6 from redundant attitude sensors

    NASA Technical Reports Server (NTRS)

    Englar, T. S., Jr.

    1975-01-01

    A program installed in the ATSOCC on-line computer operates with attitude sensor data to produce a smoothed real-time orbit estimate. This estimate is obtained from a Kalman filter which enables the estimate to be maintained in the absence of T/M data. The results are described of analytical and numerical investigations into the sensitivity of Control Center output to the position errors resulting from the real-time estimation. The results of the numerical investigation, which used several segments of ATS-6 data gathered during the Sensor Data Acquisition run on August 19, 1974, show that the implemented system can achieve absolute position determination with an error of about 100 km, implying pointing errors of less than 0.2 deg in latitude and longitude. This compares very favorably with ATS-6 specifications of approximately 0.5 deg in latitude-longitude.

  3. Central fabrication: Carved positive assessment

    PubMed Central

    Sanders, Joan E; Severance, Michael R; Myers, Timothy R; Ciol, Marcia A

    2015-01-01

    In this research we investigated the degree of error during the carving phase of central fabrication of prosthetic sockets for people with limb amputation. Three different model shapes were ordered from each of ten central fabrication companies. Using an accurate custom mechanical digitizer and alignment algorithm, we digitized the models and then compared the model shapes with the electronic file shapes. Results showed that 24 of the 30 models had volumes larger than the electronic file shapes while 24 had volumes that were smaller. 29 of the 30 models were oversized at the proximal aspect of the tibial tuberosity and undersized at the patellar tendon and popliteal areas. This error would result in a socket that had less tibial tubercle relief than intended in addition to a larger anterior-posterior dimension than desired. Comparison of the model shapes with socket shapes assessed for nine of the companies in a previous study showed that for five of the companies the sockets were relatively undersized over the tibial crest and fibular head. The results indicate that the socket the prosthetist receives will not always fit as planned, and that errors in the carving process are a source of the discrepancies. PMID:21515893

  4. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE PAGES

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-07-14

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
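
    A stripped-down sketch of the regression step only (it omits the locality/clustering stage described above, and the indicators, targets and model settings are hypothetical): a random forest maps cheaply computed error indicators to the surrogate-model error in the QoI, and the prediction is then applied as a correction to the surrogate output.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical training data: each row holds inexpensive "error indicators"
    # produced by the surrogate at one time instance (e.g. residual norms, input
    # parameters); the target is the surrogate-model error in the QoI, obtained
    # by also running the high-fidelity model for that training instance.
    rng = np.random.default_rng(0)
    indicators_train = rng.normal(size=(500, 8))
    qoi_error_train = (0.6 * indicators_train[:, 0]
                       + 0.3 * indicators_train[:, 3] ** 2
                       + rng.normal(scale=0.05, size=500))

    error_model = RandomForestRegressor(n_estimators=200, random_state=0)
    error_model.fit(indicators_train, qoi_error_train)

    # At prediction time, the learned error estimate is added back to the
    # surrogate QoI as a correction (use (1) described in the abstract).
    indicators_new = rng.normal(size=(1, 8))
    surrogate_qoi = 12.3                                  # hypothetical surrogate output
    corrected_qoi = surrogate_qoi + error_model.predict(indicators_new)[0]
    print(corrected_qoi)
    ```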

  5. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.

  6. Applications and error correction for adiabatic quantum optimization

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen

    Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generation of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.

  7. Estimation of sensible and latent heat flux from natural sparse vegetation surfaces using surface renewal

    NASA Astrophysics Data System (ADS)

    Zapata, N.; Martínez-Cob, A.

    2001-12-01

    This paper reports a study undertaken to evaluate the feasibility of the surface renewal method to accurately estimate long-term evaporation from the playa and margins of an endorreic salty lagoon (Gallocanta lagoon, Spain) under semiarid conditions. High-frequency temperature readings were taken for two time lags (r) and three measurement heights (z) in order to get surface renewal sensible heat flux (HSR) values. These values were compared against eddy covariance sensible heat flux (HEC) values for a calibration period (25-30 July 2000). Error analysis statistics (index of agreement, IA; root mean square error, RMSE; and systematic mean square error, MSEs) showed that the agreement between HSR and HEC improved as measurement height decreased and time lag increased. Calibration factors α were obtained for all analyzed cases. The best results were obtained for the z = 0.9 m (r = 0.75 s) case, for which α = 1.0 was observed. In this case, uncertainty was about 10% in terms of relative error (RE). Latent heat flux values were obtained by solving the energy balance equation for both the surface renewal (LESR) and the eddy covariance (LEEC) methods, using HSR and HEC, respectively, and measurements of net radiation and soil heat flux. For the calibration period, error analysis statistics for LESR were quite similar to those for HSR, although errors were mostly at random. LESR uncertainty was less than 9%. Calibration factors were applied to a validation data subset (30 July-4 August 2000) for which meteorological conditions were somewhat different (higher temperatures and wind speed and lower solar and net radiation). Error analysis statistics for both HSR and LESR were quite good for all cases, showing the goodness of the calibration factors. Nevertheless, the results obtained for the z = 0.9 m (r = 0.75 s) case were still the best ones.
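
    As a hypothetical illustration of the calibration step only (not the study's code or data), the sketch below estimates the factor α as a least-squares slope through the origin between HSR and HEC and reports the RMSE and Willmott index of agreement used as error statistics in the abstract.

    ```python
    import numpy as np

    def calibrate_surface_renewal(h_sr, h_ec):
        """Calibration factor alpha and error statistics between surface-renewal
        (h_sr) and eddy-covariance (h_ec) sensible heat flux, both in W m-2."""
        h_sr = np.asarray(h_sr, float)
        h_ec = np.asarray(h_ec, float)
        alpha = np.sum(h_sr * h_ec) / np.sum(h_sr ** 2)   # slope through the origin
        pred = alpha * h_sr
        rmse = np.sqrt(np.mean((pred - h_ec) ** 2))
        # Willmott index of agreement (IA)
        mean_obs = h_ec.mean()
        ia = 1.0 - np.sum((pred - h_ec) ** 2) / np.sum(
            (np.abs(pred - mean_obs) + np.abs(h_ec - mean_obs)) ** 2)
        return alpha, rmse, ia

    # Hypothetical half-hourly fluxes with alpha close to 1, as in the abstract.
    rng = np.random.default_rng(3)
    h_ec = rng.uniform(0, 250, 48)
    h_sr = h_ec + rng.normal(0, 15, 48)
    print(calibrate_surface_renewal(h_sr, h_ec))
    ```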

  8. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
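
    A toy illustration of the two error types on simple test functions (unrelated to the clinical planning system described above): relaxing the stopping tolerance changes the reached objective only slightly for a convex problem, while a non-convex objective reached from different starting points can settle in local minima with different values.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def quad(x):
        """Convex objective with a unique minimum of 0 at x = (1, ..., 1)."""
        return np.sum((x - 1.0) ** 2)

    def bumpy(x):
        """Non-convex 1-D objective with several local minima."""
        return (x[0] ** 2 - 4.0) ** 2 / 16.0 + 0.5 * np.sin(3.0 * x[0]) + 0.5

    # Convergence error: how far from the true optimum the solver stops
    # when the stopping criterion (tol) is relaxed.
    for tol in (1e-10, 1e-4, 1e-2):
        res = minimize(quad, x0=np.zeros(5), method="L-BFGS-B", tol=tol)
        print(f"tol={tol:g}  convergence error={res.fun:.2e}")

    # Local-minima error: different starting points settle in local minima
    # with different objective values.
    for x0 in (-3.0, 0.5, 3.0):
        res = minimize(bumpy, x0=[x0], method="L-BFGS-B")
        print(f"start={x0:+.1f}  local minimum {res.fun:.3f} at x={res.x[0]:+.3f}")
    ```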

  9. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  10. Identification and correction of systematic error in high-throughput sequence data

    PubMed Central

    2011-01-01

    Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
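
    The published classifier (SysCall) uses several features; as a simplified, hypothetical stand-in for one of them, the sketch below flags a candidate site as a likely systematic error when its mismatching reads are strongly biased toward one read orientation, which is not expected for a genuine heterozygous site. It assumes a recent SciPy providing scipy.stats.binomtest.

    ```python
    from scipy.stats import binomtest

    def flag_systematic_error(forward_mismatches, reverse_mismatches, alpha=0.01):
        """Flag a site as a likely systematic error when its non-reference reads
        are heavily biased toward one orientation; a true heterozygous site
        should show mismatches on both forward and reverse reads.

        forward_mismatches / reverse_mismatches : counts of non-reference bases
        observed on forward- and reverse-oriented reads at one position.
        """
        total = forward_mismatches + reverse_mismatches
        if total == 0:
            return False
        test = binomtest(forward_mismatches, total, p=0.5)
        return test.pvalue < alpha

    # 18 forward vs 1 reverse mismatch looks systematic; 10 vs 9 looks like a
    # plausible heterozygous site.
    print(flag_systematic_error(18, 1))   # True
    print(flag_systematic_error(10, 9))   # False
    ```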

  11. Correcting groove error in gratings ruled on a 500-mm ruling engine using interferometric control.

    PubMed

    Mi, Xiaotao; Yu, Haili; Yu, Hongzhu; Zhang, Shanwen; Li, Xiaotian; Yao, Xuefeng; Qi, Xiangdong; Bayinhedhig; Wan, Qiuhua

    2017-07-20

    Groove error is one of the most important factors affecting grating quality and spectral performance. To reduce groove error, we propose a new ruling-tool carriage system based on aerostatic guideways. We design a new blank carriage system with double piezoelectric actuators. We also propose a completely closed-loop servo-control system with a new optical measurement system that can control the position of the diamond relative to the blank. To evaluate our proposed methods, we produced several gratings, including an echelle grating with 79 grooves/mm, a grating with 768 grooves/mm, and a high-density grating with 6000 grooves/mm. The results show that our methods effectively reduce groove error in ruled gratings.

  12. Perception of synchronization errors in haptic and visual communications

    NASA Astrophysics Data System (ADS)

    Kameyama, Seiji; Ishibashi, Yutaka

    2006-10-01

    This paper deals with a system which conveys the haptic sensation experienced by a user to a remote user. In the system, the user controls a haptic interface device with another remote haptic interface device while watching video. Haptic media and video of a real object which the user is touching are transmitted to another user. By subjective assessment, we investigate the allowable range and imperceptible range of synchronization error between haptic media and video. We employ four real objects and ask each subject whether the synchronization error is perceived or not for each object in the assessment. Assessment results show that the synchronization error is more easily perceived when the haptic media are ahead of the video than when the haptic media are behind the video.

  13. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

    Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition, however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.

  14. An error-tuned model for sensorimotor learning

    PubMed Central

    Sadeghi, Mohsen; Wolpert, Daniel M.

    2017-01-01

    Current models of sensorimotor control posit that motor commands are generated by combining multiple modules, which may consist of internal models, motor primitives or motor synergies. The mechanisms which select modules based on task requirements and modify their output during learning are therefore critical to our understanding of sensorimotor control. Here we develop a novel modular architecture for multi-dimensional tasks in which a set of fixed primitives are each able to compensate for errors in a single direction in the task space. The contribution of the primitives to the motor output is determined by both top-down contextual information and bottom-up error information. We implement this model for a task in which subjects learn to manipulate a dynamic object whose orientation can vary. In the model, visual information regarding the context (the orientation of the object) allows the appropriate primitives to be engaged. This top-down module selection is implemented by a Gaussian function tuned for the visual orientation of the object. Second, each module's contribution adapts across trials in proportion to its ability to decrease the current kinematic error. Specifically, adaptation is implemented by cosine tuning of primitives to the current direction of the error, which we show to be theoretically optimal for reducing error. This error-tuned model makes two novel predictions. First, interference should occur between alternating dynamics only when the kinematic errors associated with each oppose one another. In contrast, dynamics which lead to orthogonal errors should not interfere. Second, kinematic errors alone should be sufficient to engage the appropriate modules, even in the absence of contextual information normally provided by vision. We confirm both these predictions experimentally and show that the model can also account for data from previous experiments. Our results suggest that two interacting processes account for module selection during sensorimotor control and learning. PMID:29253869
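
    A compact simulation sketch of the two mechanisms described above, with hypothetical primitives, learning rate and target dynamics: each primitive's contribution is gated by Gaussian tuning to the visual context and adapted in proportion to the cosine of the angle between its preferred direction and the current error direction.

    ```python
    import numpy as np

    def simulate_error_tuned_learning(n_trials=200, context_deg=30.0,
                                      sigma_deg=20.0, rate=0.1):
        """Sketch of a modular learner: fixed primitives, each compensating for
        error in one direction of a 2-D task space, whose contributions are
        weighted by (1) Gaussian tuning to the visual context (object orientation)
        and (2) cosine tuning to the direction of the current kinematic error."""
        prim_dirs = np.deg2rad(np.arange(0, 360, 45))        # 8 primitive directions
        prim_contexts = np.arange(0, 360, 45)                # preferred orientations (deg)
        weights = np.zeros(len(prim_dirs))                   # learned contributions
        true_force = np.array([1.0, 0.5])                    # hypothetical dynamics to learn
        unit_vecs = np.column_stack([np.cos(prim_dirs), np.sin(prim_dirs)])

        # Top-down selection: Gaussian tuning of each primitive to the context.
        ctx_gain = np.exp(-0.5 * ((prim_contexts - context_deg) / sigma_deg) ** 2)

        for _ in range(n_trials):
            output = np.sum((weights * ctx_gain)[:, None] * unit_vecs, axis=0)
            error = true_force - output                      # kinematic error this trial
            err_dir = np.arctan2(error[1], error[0])
            # Bottom-up adaptation: cosine tuning to the error direction.
            weights += rate * np.linalg.norm(error) * ctx_gain * np.cos(prim_dirs - err_dir)
        return weights

    print(np.round(simulate_error_tuned_learning(), 3))
    ```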

  15. Pulsed Airborne Lidar Measurements of C02 Column Absorption

    NASA Technical Reports Server (NTRS)

    Abshire, James B.; Riris, Haris; Allan, Graham R.; Weaver, Clark J.; Mao, Jianping; Sun, Xiaoli; Hasselbrack, William E.; Rodriquez, Michael; Browell, Edward V.

    2011-01-01

    We report on airborne lidar measurements of atmospheric CO2 column density for an approach being developed as a candidate for NASA's ASCENDS mission. It uses a pulsed dual-wavelength lidar measurement based on the integrated path differential absorption (IPDA) technique. We demonstrated the approach using the CO2 measurement from aircraft in July and August 2009 over four locations. The results show clear CO2 line shape and absorption signals, which follow the expected changes with aircraft altitude from 3 to 13 km. The 2009 measurements have been analyzed in detail and the results show approximately 1 ppm random errors for 8-10 km altitudes and approximately 30 s averaging times. Airborne measurements were also made in 2010 with stronger signals, and initial analysis shows approximately 0.3 ppm random errors for 80 s averaging times for measurements at altitudes > 6 km.
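
    For orientation only, the sketch below shows the basic IPDA arithmetic with hypothetical numbers (all values and the weighting-function constant are assumptions, not from the record): the one-way differential absorption optical depth is formed from the ratio of energy-normalised off-line and on-line returns, and dividing by an assumed integrated weighting function gives a column-average CO2 mixing ratio.

    ```python
    import numpy as np

    def daod(p_on, p_off, e_on, e_off):
        """One-way differential absorption optical depth from received pulse
        energies (p_on, p_off) and transmitted pulse-energy monitors (e_on, e_off)
        for the on-line and off-line wavelengths of an IPDA lidar."""
        return 0.5 * np.log((p_off / e_off) / (p_on / e_on))

    # Converting DAOD to a column-average CO2 mixing ratio requires dividing by
    # the integrated weighting function (the pressure/temperature-dependent
    # differential absorption along the path); a hypothetical constant is used here.
    integrated_weighting = 0.00065        # DAOD per ppm, hypothetical value
    measured = daod(p_on=0.60, p_off=1.00, e_on=1.00, e_off=1.00)
    print(f"DAOD = {measured:.4f}  ->  XCO2 ~ {measured / integrated_weighting:.1f} ppm")
    ```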

  16. First- and second-language phonological representations in the mental lexicon.

    PubMed

    Sebastian-Gallés, Núria; Rodríguez-Fornells, Antoni; de Diego-Balaguer, Ruth; Díaz, Begoña

    2006-08-01

    Performance-based studies on the psychological nature of linguistic competence can conceal significant differences in the brain processes that underlie native versus nonnative knowledge of language. Here we report results, which illustrate this point, from the brain activity of very proficient early bilinguals performing a lexical decision task. Two groups of Spanish-Catalan early bilinguals (Spanish-dominant and Catalan-dominant) were asked to decide whether a given form was a Catalan word or not. The nonwords were based on real words, with one vowel changed. In the experimental stimuli, the vowel change involved a Catalan-specific contrast that previous research had shown to be difficult for Spanish natives to perceive. In the control stimuli, the vowel switch involved contrasts common to Spanish and Catalan. The results indicated that the groups of bilinguals did not differ in their behavioral and event-related brain potential measurements for the control stimuli; both groups made very few errors and showed a larger N400 component for control nonwords than for control words. However, significant differences were observed for the experimental stimuli across groups: Specifically, Spanish-dominant bilinguals showed great difficulty in rejecting experimental nonwords. Indeed, these participants not only showed very high error rates for these stimuli, but also did not show an error-related negativity effect in their erroneous nonword decisions. However, both groups of bilinguals showed a larger correct-related negativity when making correct decisions about the experimental nonwords. The results suggest that although some aspects of a second language system may show a remarkable lack of plasticity (like the acquisition of some foreign contrasts), first-language representations seem to be more dynamic in their capacity of adapting and incorporating new information.

  17. Integrated navigation fusion strategy of INS/UWB for indoor carrier attitude angle and position synchronous tracking.

    PubMed

    Fan, Qigao; Wu, Yaheng; Hui, Jing; Wu, Lei; Yu, Zhenzhong; Zhou, Lijuan

    2014-01-01

    In some GPS failure conditions, positioning a mobile target is difficult. This paper proposes a new INS/UWB-based method for synchronous tracking of the attitude angle and position of an indoor carrier. First, the error model of the INS/UWB integrated system is built, including the error equations of INS and UWB, and the combined filtering model of INS/UWB is studied. Simulation results show that the two subsystems are complementary. Second, an integrated navigation data fusion strategy for INS/UWB based on Kalman filtering theory is proposed. Simulation results show that the FAKF method outperforms conventional Kalman filtering. Finally, an indoor experiment platform geared to the needs of a coal mine working environment is established to verify the INS/UWB integrated navigation theory. Static and dynamic positioning results show that the INS/UWB integrated navigation system is stable and real-time, and that its positioning precision meets the requirements of the working conditions and is better than that of either independent subsystem.

  18. Impact of device level faults in a digital avionic processor

    NASA Technical Reports Server (NTRS)

    Suk, Ho Kim

    1989-01-01

    This study describes an experimental analysis of the impact of gate- and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model, as far as error activity or detection within a functional unit is concerned. In so far as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.

  19. Difficulties in tracking the long-term global trend in tropical forest area.

    PubMed

    Grainger, Alan

    2008-01-15

    The long-term trend in tropical forest area receives less scrutiny than the tropical deforestation rate. We show that constructing a reliable trend is difficult and evidence for decline is unclear, within the limits of errors involved in making global estimates. A time series for all tropical forest area, using data from Forest Resources Assessments (FRAs) of the United Nations Food and Agriculture Organization, is dominated by three successively corrected declining trends. Inconsistencies between these trends raise questions about their reliability, especially because differences seem to result as much from errors as from changes in statistical design and use of new data. A second time series for tropical moist forest area shows no apparent decline. The latter may be masked by the errors involved, but a "forest return" effect may also be operating, in which forest regeneration in some areas offsets deforestation (but not biodiversity loss) elsewhere. A better monitoring program is needed to give a more reliable trend. Scientists who use FRA data should check how the accuracy of their findings depends on errors in the data.

  20. Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans

    PubMed Central

    Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude

    2013-01-01

    Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet the spatio-temporal dynamics underlying the neural coding of reward probability remain unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards of different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimuli-reward contingencies. PMID:24302894
