Science.gov

Sample records for absolute prediction error

  1. Relative errors can cue absolute visuomotor mappings.

    PubMed

    van Dam, Loes C J; Ernst, Marc O

    2015-12-01

    When repeatedly switching between two visuomotor mappings, e.g. in a reaching or pointing task, adaptation tends to speed up over time. That is, when the error in the feedback corresponds to a mapping switch, fast adaptation occurs. Yet, what is learned, the relative error or the absolute mappings? When switching between mappings, errors with a size corresponding to the relative difference between the mappings will occur more often than other large errors. Thus, we could learn to correct more for errors with this familiar size (Error Learning). On the other hand, it has been shown that the human visuomotor system can store several absolute visuomotor mappings (Mapping Learning) and can use associated contextual cues to retrieve them. Thus, when contextual information is present, no error feedback is needed to switch between mappings. Using a rapid pointing task, we investigated how these two types of learning may each contribute when repeatedly switching between mappings in the absence of task-irrelevant contextual cues. After training, we examined how participants changed their behaviour when a single error probe indicated either the often-experienced error (Error Learning) or one of the previously experienced absolute mappings (Mapping Learning). Results were consistent with Mapping Learning despite the relative nature of the error information in the feedback. This shows that errors in the feedback can have a double role in visuomotor behaviour: they drive the general adaptation process by making corrections possible on subsequent movements, as well as serve as contextual cues that can signal a learned absolute mapping. PMID:26280315

  2. Error mode prediction.

    PubMed

    Hollnagel, E; Kaarstad, M; Lee, H C

    1999-11-01

    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  3. Surprise beyond prediction error

    PubMed Central

    Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst

    2014-01-01

    Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400

  4. Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.

    ERIC Educational Resources Information Center

    Guth, David

    1990-01-01

    Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their application…
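    The three statistics named in this record can be sketched with their standard motor-behavior definitions (an illustrative example with hypothetical data, not taken from the article):

```python
def constant_error(xs, target):
    """Mean signed error: captures directional bias (over/undershooting)."""
    errs = [x - target for x in xs]
    return sum(errs) / len(errs)

def variable_error(xs, target):
    """Standard deviation of signed errors about their own mean:
    captures response consistency, independent of bias."""
    ce = constant_error(xs, target)
    sq = [((x - target) - ce) ** 2 for x in xs]
    return (sum(sq) / len(xs)) ** 0.5

def absolute_error(xs, target):
    """Mean unsigned error: overall accuracy, mixing bias and consistency."""
    return sum(abs(x - target) for x in xs) / len(xs)

# Five trials aiming at a target 10 m away (hypothetical data):
trials = [10.5, 9.5, 11.0, 10.0, 9.0]
ce = constant_error(trials, 10.0)   # 0.0 -> no systematic bias
ve = variable_error(trials, 10.0)   # spread of the misses
ae = absolute_error(trials, 10.0)   # 0.6 -> average miss of 0.6 m
```

    Note how the three quantify fundamentally different characteristics: these trials show zero bias (CE) yet a nonzero average miss (AE) driven entirely by inconsistency (VE).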

  5. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377
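    The positive/negative prediction-error coding the abstract describes is the teaching signal of simple error-driven learning models. A minimal Rescorla-Wagner-style sketch (names and values are illustrative, not from the paper):

```python
def learn(rewards, alpha=0.2):
    """Update a reward prediction V from a sequence of received rewards;
    return the final prediction and the trajectory of prediction errors
    (the 'dopamine-like' signal delta = r - V)."""
    V = 0.0
    deltas = []
    for r in rewards:
        delta = r - V          # positive: better than predicted
        V += alpha * delta     # prediction moves toward the outcome
        deltas.append(delta)
    return V, deltas

# A reward of 1.0 delivered repeatedly: the error starts large and decays
# toward zero as the reward becomes fully predicted (baseline response).
V, deltas = learn([1.0] * 20)
```

    After training, delivering less than the predicted reward would produce a negative delta, mirroring the depressed dopamine activity described above.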

  6. Sub-nanometer periodic nonlinearity error in absolute distance interferometers.

    PubMed

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can introduce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing and greatly relaxes the polarization requirements on the laser source. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. The main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are thereby eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°. PMID:26026510

  7. On the Error Sources in Absolute Individual Antenna Calibrations

    NASA Astrophysics Data System (ADS)

    Aerts, Wim; Baire, Quentin; Bilich, Andria; Bruyninx, Carine; Legrand, Juliette

    2013-04-01

    …field) multipath errors, both during calibration and later on at the station, absolute sub-millimeter positioning with GPS is not (yet) possible.
    References:
    [1] G. Wübbena, M. Schmitz, G. Boettcher, C. Schumann, "Absolute GNSS Antenna Calibration with a Robot: Repeatability of Phase Variations, Calibration of GLONASS and Determination of Carrier-to-Noise Pattern", International GNSS Service: Analysis Center Workshop, 8-12 May 2006, Darmstadt, Germany.
    [2] P. Zeimetz, H. Kuhlmann, "On the Accuracy of Absolute GNSS Antenna Calibration and the Conception of a New Anechoic Chamber", FIG Working Week 2008, 14-19 June 2008, Stockholm, Sweden.
    [3] P. Zeimetz, H. Kuhlmann, L. Wanninger, V. Frevert, S. Schön and K. Strauch, "Ringversuch 2009", 7th GNSS-Antennen-Workshop, 19-20 March 2009, Dresden, Germany.

  8. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated whether or not they would receive a monetary reward; in a punishment condition, the feedback indicated whether or not they would receive a small shock. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive for stimuli that predicted the omission of a possible punishment than for stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. PMID:27184070

  9. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

    A method is proposed to modify astigmatism errors in the absolute shape reconstruction of an optical flat using the Fourier transform method. When a transmission flat and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by exploiting the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel after the translations, a tilt error exists in the measured differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to modify the astigmatism errors, a rotation measurement is added. Based on the rotational invariance of the form of the Zernike polynomials in a circular domain, the astigmatism terms are calculated by solving polynomial coefficient equations related to the rotational differential data, and the erroneous astigmatism terms are subsequently modified. Computer simulation proves the validity of the proposed method.

  10. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a…
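    The record's central contrast can be seen numerically: squared error weights large residuals more heavily than absolute error does. A small sketch with hypothetical residuals (not the study's data):

```python
def mae(errors):
    """Mean absolute (ABS) error."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root mean squared (SQ) error, on the same scale as MAE."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

residuals = [1.0, -1.0, 1.0, -1.0]          # uniform small errors
residuals_outlier = [0.0, 0.0, 0.0, 4.0]    # same total miss, one large error

m1, r1 = mae(residuals), rmse(residuals)                   # 1.0, 1.0
m2, r2 = mae(residuals_outlier), rmse(residuals_outlier)   # 1.0, 2.0
```

    Both residual sets have the same ABS error, but the SQ-based metric doubles for the outlier-dominated set, illustrating why the two metrics rank models differently.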

  11. Absolute plate velocities from seismic anisotropy: Importance of correlated errors

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Gordon, Richard G.; Kreemer, Corné

    2014-09-01

    The errors in plate motion azimuths inferred from shear wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma⁻¹ (95% confidence limits) right handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ = 19.2°) differs insignificantly from that for continental lithosphere (σ = 21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ = 7.4°) than for continental lithosphere (σ = 14.7°). Two of the slowest-moving plates, Antarctica (vRMS = 4 mm a⁻¹, σ = 29°) and Eurasia (vRMS = 3 mm a⁻¹, σ = 33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈ 5 mm a⁻¹ to result in seismic anisotropy useful for estimating plate motion. The tendency of observed azimuths on the Arabia plate to be counterclockwise of plate motion may provide information about the direction and amplitude of superposed asthenospheric flow or about anisotropy in the lithospheric mantle.

  12. Assessing suturing skills in a self-guided learning setting: absolute symmetry error.

    PubMed

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-12-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be assessed by the trainee, is a feasible assessment tool for self-guided learning of suturing skill. Forty-eight undergraduate medical trainees independently practiced suturing and knot tying skills using a benchtop model. Performance on a pretest, posttest, retention test and a transfer test was assessed using (1) the validated final product analysis (FPA), (2) the surgical efficiency score (SES), a combination of the FPA and hand motion analysis and (3) absolute symmetry error, a new measure that assesses the symmetry of the final product. Absolute symmetry error, along with the other objective assessment tools, detected improvements in performance from pretest to posttest (P < 0.05). A battery of correlation analyses indicated that absolute symmetry error correlates moderately with the FPA and SES. The development of valid, reliable and feasible technical skill assessments is needed to ensure all training centers evaluate trainee performance in a standardized fashion. Measures that do not require the use of experts or computers have potential for widespread use. We suggest that absolute symmetry error is a useful approximation of novices' suturing and knot tying performance. Future research should evaluate whether absolute symmetry error can enhance learning when used as a source of feedback during self-guided practice. PMID:19132540

  13. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  14. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
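    The tolerance-control idea the record summarizes can be sketched with the standard mixed-tolerance acceptance test used by stiff ODE solvers. The function names and the rescaling rule below are generic illustrations, not SMVGEAR II's actual internals:

```python
def step_accepted(y, err_est, rtol, atol):
    """Per-component mixed tolerance test for a candidate integration step:
    accept when |err_i| <= atol_i + rtol * |y_i| for every component."""
    return all(abs(e) <= a + rtol * abs(yi)
               for yi, e, a in zip(y, err_est, atol))

def rescale_atol(y, floor=1e-30, frac=1e-6):
    """Recompute the absolute tolerance from the current solution, in the
    spirit of frequently recalculating atol rather than holding it constant
    for a given chemistry (illustrative, hypothetical rule)."""
    return [max(floor, frac * abs(yi)) for yi in y]

# Species concentrations spanning many orders of magnitude:
y = [1e8, 1e2, 1e-4]
atol = rescale_atol(y)            # roughly [100, 1e-4, 1e-10]
ok = step_accepted(y, [50.0, 1e-5, 1e-11], rtol=1e-3, atol=atol)
```

    Rescaling atol to track the solution keeps the acceptance test meaningful for both abundant and trace species, which is why adaptive tolerances can buy speed with little accuracy loss.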

  15. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
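    The use of grids of different resolution for error control rests on the five-point stencil being second order in the grid spacing h, so halving h cuts the error by about four. A generic sketch of that convergence (not the article's three-grid algorithm):

```python
import math

def five_point_laplacian(u, x, y, h):
    """Five-point finite-difference approximation to the Laplacian at (x, y)."""
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4.0 * u(x, y)) / (h * h)

# Test function with a known Laplacian: for u = sin(x)sin(y), Lap(u) = -2u.
u = lambda x, y: math.sin(x) * math.sin(y)
exact = -2.0 * u(0.7, 0.3)

err_h = abs(five_point_laplacian(u, 0.7, 0.3, 0.10) - exact)
err_h2 = abs(five_point_laplacian(u, 0.7, 0.3, 0.05) - exact)
ratio = err_h / err_h2   # close to 4: the scheme is O(h^2)
```

    Comparing solutions on grids of different resolution in this way yields a computable estimate of the discretization error, which can then be driven below an absolute or relative tolerance.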

  16. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  17. Absolute analytical prediction of photonic crystal guided mode resonance wavelengths

    SciTech Connect

    Hermannsson, Pétur Gordon; Vannahme, Christoph; Smith, Cameron L. C.; Kristensen, Anders

    2014-08-18

    A class of photonic crystal resonant reflectors known as guided mode resonant filters are optical structures that are widely used in the field of refractive index sensing, particularly in biosensing. For the purposes of understanding and design, their behavior has traditionally been modeled numerically with methods such as rigorous coupled wave analysis. Here it is demonstrated how the absolute resonance wavelengths of such structures can be predicted by analytically modeling them as slab waveguides in which the propagation constant is determined by a phase matching condition. The model is experimentally verified to be capable of predicting the absolute resonance wavelengths to an accuracy of within 0.75 nm, as well as resonance wavelength shifts due to changes in cladding index within an accuracy of 0.45 nm across the visible wavelength regime in the case where material dispersion is taken into account. Furthermore, it is demonstrated that the model is valid beyond the limit of low grating modulation, for periodically discontinuous waveguide layers, high refractive index contrasts, and highly dispersive media.

  18. Preliminary error budget for the reflected solar instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Astrophysics Data System (ADS)

    Thome, K.; Gubbels, T.; Barnes, R.

    2011-10-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI-traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables. The instrument suite includes emitted infrared spectrometers, global navigation receivers for radio occultation, and reflected solar spectrometers. The measurements will be acquired for a period of five years and will enable follow-on missions to extend the climate record over the decades needed to understand climate change. This work describes a preliminary error budget for the RS sensor. The RS sensor will retrieve at-sensor reflectance over the spectral range from 320 to 2300 nm with 500-m GIFOV and a 100-km swath width. The current design is based on an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 and 600-2300 nm. Reflectance is obtained from the ratio of measurements of radiance while viewing the earth's surface to measurements of irradiance while viewing the sun. The requirement for the RS instrument is that the reflectance must be traceable to SI standards at an absolute uncertainty <0.3%. The calibration approach to achieve the ambitious 0.3% absolute calibration uncertainty is predicated on a reliance on heritage hardware, reduction of sensor complexity, and adherence to detector-based calibration standards. The design above has been used to develop a preliminary error budget that meets the 0.3% absolute requirement. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and…

  19. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
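    The two metrics the record names can be sketched in their original formulation (commonly attributed to Yu et al., 2006); the record itself concerns generalizing them to datasets containing negative values, which this simple version does not handle. The formulas here are stated from that original work, not from the EPA record:

```python
def nmbf(model, obs):
    """Normalized mean bias factor: symmetric for over- and underprediction."""
    sm, so = sum(model), sum(obs)
    if sm >= so:
        return sm / so - 1.0   # overprediction: (sum M)/(sum O) - 1
    return 1.0 - so / sm       # underprediction: 1 - (sum O)/(sum M)

def nmaef(model, obs):
    """Normalized mean absolute error factor: same switching denominator."""
    sae = sum(abs(m - o) for m, o in zip(model, obs))
    denom = sum(obs) if sum(model) >= sum(obs) else sum(model)
    return sae / denom

# Symmetry check: doubling vs halving gives equal-magnitude, opposite-sign bias.
over = nmbf([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])    # model doubles obs -> +1.0
under = nmbf([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])   # model halves obs -> -1.0
```

    The symmetric interpretation (a factor-of-two overprediction and a factor-of-two underprediction score ±1) is exactly the property that breaks down when sums can be negative, motivating the generalization the record describes.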

  20. Perceptual Inference: A Matter of Predictions and Errors.

    PubMed

    Kok, Peter

    2016-09-12

    A recent study finds that separate populations of neurons in inferotemporal cortex code for perceptual predictions and prediction errors, supporting predictive coding theories of perception. PMID:27623264

  1. Spontaneous prediction error generation in schizophrenia.

    PubMed

    Yamashita, Yuichi; Tani, Jun

    2012-01-01

    Goal-directed human behavior is enabled by hierarchically-organized neural systems that process executive commands associated with higher brain areas in response to sensory and motor signals from lower brain areas. Psychiatric diseases and psychotic conditions are postulated to involve disturbances in these hierarchical network interactions, but the mechanism for how aberrant disease signals are generated in networks, and a systems-level framework linking disease signals to specific psychiatric symptoms, remain undetermined. In this study, we show that neural networks containing schizophrenia-like deficits can spontaneously generate uncompensated error signals with properties that explain psychiatric disease symptoms, including fictive perception, altered sense of self, and unpredictable behavior. To distinguish dysfunction at the behavioral versus network level, we monitored the interactive behavior of a humanoid robot driven by the network. Mild perturbations in network connectivity resulted in the spontaneous appearance of uncompensated prediction errors and altered interactions within the network without external changes in behavior, correlating to the fictive sensations and agency experienced by episodic disease patients. In contrast, more severe deficits resulted in unstable network dynamics resulting in overt changes in behavior similar to those observed in chronic disease patients. These findings demonstrate that prediction error disequilibrium may represent an intrinsic property of schizophrenic brain networks reporting the severity and variability of disease symptoms. Moreover, these results support a systems-level model for psychiatric disease that features the spontaneous generation of maladaptive signals in hierarchical neural networks. PMID:22666398

  2. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently…

  3. Minimum mean absolute error estimation over the class of generalized stack filters

    NASA Astrophysics Data System (ADS)

    Lin, Jean-Hsang; Coyle, Edward J.

    1990-04-01

    A class of sliding window operators called generalized stack filters is developed. This class of filters, which includes all rank order filters, stack filters, and digital morphological filters, is the set of all filters possessing the threshold decomposition architecture and a consistency property called the stacking property. Conditions under which these filters possess the weak superposition property known as threshold decomposition are determined. An algorithm is provided for determining a generalized stack filter which minimizes the mean absolute error (MAE) between the output of the filter and a desired input signal, given noisy observations of that signal. The algorithm is a linear program whose complexity depends on the window width of the filter and the number of threshold levels observed by each of the filters in the superposition architecture. The results show that choosing the generalized stack filter which minimizes the MAE is equivalent to massively parallel threshold-crossing decision making when the decisions are consistent with each other.
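    The threshold decomposition and stacking properties the abstract relies on can be demonstrated concretely: a rank-order filter applied to a multi-level signal equals the sum of the same binary filter applied to each thresholded signal. A small sketch with a window-3 median (illustrative; the paper's linear-programming design step is not shown):

```python
def median3(x):
    """Window-3 median filter with edge replication."""
    padded = [x[0]] + list(x) + [x[-1]]
    return [sorted(padded[i:i + 3])[1] for i in range(len(x))]

def threshold_decompose_median3(x, levels):
    """Threshold the signal at each level, filter each binary signal,
    then stack (sum) the binary outputs back into a multi-level signal."""
    out = [0] * len(x)
    for t in range(1, levels):
        binary = [1 if v >= t else 0 for v in x]
        for i, b in enumerate(median3(binary)):
            out[i] += b
    return out

signal = [0, 3, 1, 2, 3, 0, 2, 1]          # a 4-level signal
direct = median3(signal)                    # filter the levels directly
stacked = threshold_decompose_median3(signal, levels=4)
# The two outputs agree: median filtering commutes with thresholding,
# i.e., the binary decisions at each level 'stack' consistently.
```

    This consistency across threshold levels is the stacking property; the paper's contribution is choosing the binary (Boolean) filters at each level, via a linear program, to minimize the mean absolute error.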

  4. Effective connectivity associated with auditory error detection in musicians with absolute pitch

    PubMed Central

    Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.

    2014-01-01

    It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left to right STG connections is important in the identification of self-voice error and sensory motor integration in AP musicians. We also identify reduced connectivity of left hemisphere PM to STG connections in AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings here also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere. PMID:24634644

  5. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  6. Preliminary Error Budget for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; Gubbels, Timothy; Barnes, Robert

    2011-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to those while viewing the sun, relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration including a link to absolute solar irradiance measurements. The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those in the IPCC Report. A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables, including: 1) Surface temperature and atmospheric temperature profile 2) Atmospheric water vapor profile 3) Far infrared water vapor greenhouse 4) Aerosol properties and anthropogenic aerosol direct radiative forcing 5) Total and spectral solar

  7. Conditional Standard Error of Measurement in Prediction.

    ERIC Educational Resources Information Center

    Woodruff, David

    1990-01-01

    A method of estimating conditional standard error of measurement at specific score/ability levels is described that avoids theoretical problems identified for previous methods. The method focuses on variance of observed scores conditional on a fixed value of an observed parallel measurement, decomposing these variances into true and error parts.…

  8. Theoretical prediction of relative and absolute pKa values of aminopyridines.

    PubMed

    Caballero, N A; Melendez, F J; Muñoz-Caro, C; Niño, A

    2006-11-20

    This work presents a study aimed at the theoretical prediction of pK(a) values of aminopyridines, as a factor responsible for the activity of these compounds as blockers of the voltage-dependent K(+) channels. To cover a large range of pK(a) values, a total of seven substituted pyridines is considered as a calibration set: pyridine, 2-aminopyridine, 3-aminopyridine, 4-aminopyridine, 2-chloropyridine, 3-chloropyridine, and 4-methylpyridine. Using ab initio G1, G2 and G3 extrapolation methods, and the CPCM variant of the Polarizable Continuum Model for solvation, we calculate gas phase and solvation free energies. pK(a) values are obtained from these data using a thermodynamic cycle for describing protonation in aqueous and gas phases. The results show that the relatively inexpensive G1 level of theory is the most accurate at predicting pK(a) values in aminopyridines. The highest standard deviation with respect to the experimental data is 0.69 pK(a) units for absolute value calculations. The difference increases slightly to 0.74 pK(a) units when the pK(a) is computed relative to the pyridine molecule. Considering only compounds at least as basic as pyridine (the values of interest for bioactive aminopyridines) the error falls to 0.10 and 0.12 pK(a) units for the absolute and relative computations, respectively. The technique can be used to predict the effect of electronegative substituents on the pK(a) of 4-AP, the most active aminopyridine considered in this work. Thus, 2-chloro and 3-chloro-4-aminopyridine are taken into account. The results show a decrease of the pK(a), suggesting that these compounds are less active than 4-AP at blocking the K(+) channel. PMID:16844281
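
    The thermodynamic cycle the authors use ultimately reduces to converting an aqueous deprotonation free energy into a pK(a). A minimal sketch of that last conversion step (illustrative only; not the paper's G1/CPCM pipeline, and the reference pK(a) below is an assumed experimental value for pyridine):

    ```python
    import math

    R = 1.987204e-3   # gas constant, kcal/(mol K)
    T = 298.15        # standard temperature, K

    def pka_absolute(dG_aq):
        """Absolute pKa from the aqueous deprotonation free energy
        (kcal/mol) of BH+(aq) -> B(aq) + H+(aq):  pKa = dG / (RT ln 10)."""
        return dG_aq / (R * T * math.log(10))

    def pka_relative(pka_ref, ddG_aq):
        """pKa relative to a reference base (e.g. pyridine): only the
        *difference* in deprotonation free energies is needed, so many
        systematic computational errors cancel."""
        return pka_ref + ddG_aq / (R * T * math.log(10))
    ```

    At 298.15 K, RT ln 10 is about 1.364 kcal/mol, so each 1.364 kcal/mol of deprotonation free energy corresponds to one pK(a) unit; the relative scheme anchors the prediction to an experimental reference, which is why its errors can differ slightly from the absolute scheme, as the abstract reports.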

  9. Error budget for a calibration demonstration system for the reflected solar instrument for the climate absolute radiance and refractivity observatory

    NASA Astrophysics Data System (ADS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-09-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy to allow climate change observations to survive data gaps exist at NIST in the laboratory, but still need demonstration that the advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  10. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy to allow climate change observations to survive data gaps exist at NIST in the laboratory, but still need demonstration that the advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  11. Critical evidence for the prediction error theory in associative learning

    PubMed Central

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an “auto-blocking”, which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  12. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125
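
    The blocking phenomenon at the heart of the prediction error theory can be illustrated with the standard Rescorla-Wagner update (a textbook sketch, not the authors' cricket model): once cue A fully predicts the reward, the prediction error during compound AB training is near zero, so cue B acquires almost no associative strength.

    ```python
    def rescorla_wagner(trials, alpha=0.3, lam=1.0):
        """Associative strengths V updated by the prediction error:
        delta = lambda - sum(V of cues present); V_cue += alpha * delta."""
        V = {}
        for cues, rewarded in trials:
            total = sum(V.get(c, 0.0) for c in cues)
            delta = (lam if rewarded else 0.0) - total  # prediction error
            for c in cues:
                V[c] = V.get(c, 0.0) + alpha * delta
        return V

    # Phase 1: cue A alone is rewarded; Phase 2: compound AB is rewarded.
    trials = [(('A',), True)] * 20 + [(('A', 'B'), True)] * 20
    V = rescorla_wagner(trials)
    # Blocking: A already predicts the reward when AB training begins,
    # so the prediction error is ~0 and B learns almost nothing.
    ```

    The "auto-blocking" experiment in the abstract exploits the same logic: if reward prediction forms during first-phase training even when learning itself is pharmacologically impaired, the near-zero prediction error should block learning in the second phase.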

  13. Dopamine neurons share common response function for reward prediction error

    PubMed Central

    Eshel, Neir; Tian, Ju; Bukwich, Michael; Uchida, Naoshige

    2016-01-01

    Dopamine neurons are thought to signal reward prediction error, or the difference between actual and predicted reward. How dopamine neurons jointly encode this information, however, remains unclear. One possibility is that different neurons specialize in different aspects of prediction error; another is that each neuron calculates prediction error in the same way. We recorded from optogenetically-identified dopamine neurons in the lateral ventral tegmental area (VTA) while mice performed classical conditioning tasks. Our tasks allowed us to determine the full prediction error functions of dopamine neurons and compare them to each other. We found striking homogeneity among individual dopamine neurons: their responses to both unexpected and expected rewards followed the same function, just scaled up or down. As a result, we could describe both individual and population responses using just two parameters. Such uniformity ensures robust information coding, allowing each dopamine neuron to contribute fully to the prediction error signal. PMID:26854803

  14. On the construction of the prediction error covariance matrix

    SciTech Connect

    Waseda, T; Jameson, L; Yaremchuk, M; Mitsudera, H

    2001-02-02

    Implementation of a full Kalman filtering scheme in a large OGCM is unrealistic without simplification, and one generally reduces the degrees of freedom of the system by prescribing the structure of the prediction error. However, reductions are often made without any objective measure of their appropriateness. In this report, we present results from an ongoing effort to best construct the prediction error covariance, capturing the essential ingredients of the system error, which includes both a correlated (global) error and a relatively uncorrelated (local) error. The former will be captured by EOF modes of the model variance, whereas the latter can be detected by wavelet analysis.
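
    A minimal numpy sketch of the construction described, on hypothetical data (the wavelet detection of the local error is not reproduced; a diagonal noise term stands in for it): the leading EOFs of an anomaly ensemble give a low-rank covariance for the correlated part, and the diagonal term represents the uncorrelated part.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical ensemble of model-state anomalies: n_samples x n_state
    n_samples, n_state = 200, 30
    A = rng.standard_normal((n_samples, n_state)) @ \
        rng.standard_normal((n_state, n_state)) / n_state**0.5
    A -= A.mean(axis=0)

    # Leading EOFs (right singular vectors) capture the correlated,
    # global part of the prediction error with few degrees of freedom.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 5  # number of retained EOF modes
    P_global = Vt[:k].T @ np.diag(s[:k]**2 / (n_samples - 1)) @ Vt[:k]

    # A diagonal term stands in for the relatively uncorrelated,
    # local error component.
    sigma_local = 0.1
    P = P_global + sigma_local**2 * np.eye(n_state)
    ```

    The reduced covariance P is symmetric positive definite by construction, which is what a simplified Kalman update requires; the choice of k is the "objective measure of appropriateness" question the report raises.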

  15. Differences between absolute and predicted values of forced expiratory volumes to classify ventilatory impairment in chronic obstructive pulmonary disease.

    PubMed

    Checkley, William; Foreman, Marilyn G; Bhatt, Surya P; Dransfield, Mark T; Han, MeiLan; Hanania, Nicola A; Hansel, Nadia N; Regan, Elizabeth A; Wise, Robert A

    2016-02-01

    The Global Initiative for Chronic Obstructive Lung Disease (GOLD) severity criterion for COPD is used widely in clinical and research settings; however, it requires the use of ethnic- or population-specific reference equations. We propose two alternative severity criteria based on absolute post-bronchodilator FEV1 values (FEV1 and FEV1/height2) that do not depend on reference equations. We compared the accuracy of these classification schemas to those based on % predicted values (GOLD criterion) and Z-scores of post-bronchodilator FEV1 to predict COPD-related functional outcomes or percent emphysema by computerized tomography of the lung. We tested the predictive accuracy of all severity criteria for the 6-minute walk distance (6MWD), St. George's Respiratory Questionnaire (SGRQ), 36-item Short-Form Health Survey physical health component score (SF-36) and the MMRC Dyspnea Score. We used 10-fold cross-validation to estimate average prediction errors and Bonferroni-adjusted t-tests to compare average prediction errors across classification criteria. We analyzed data from 3772 participants with COPD (average age 63 years, 54% male). Severity criteria based on absolute post-bronchodilator FEV1 or FEV1/height2 yielded similar prediction errors for 6MWD, SGRQ, SF-36 physical health component score, and the MMRC Dyspnea Score when compared to the GOLD criterion (all p > 0.34); and, had similar predictive accuracy when compared with the Z-scores criterion, with the exception for 6MWD where post-bronchodilator FEV1 appeared to perform slightly better than Z-scores (p = 0.01). Subgroup analyses did not identify differences across severity criteria by race, sex, or age between absolute values and the GOLD criterion or one based on Z-scores. 
Severity criteria for COPD based on absolute values of post-bronchodilator FEV1 performed as well as criteria based on predicted values when benchmarked against COPD-related functional and structural outcomes, and are simple to use.

  16. Large eddy simulation predictions of absolutely unstable round hot jet

    NASA Astrophysics Data System (ADS)

    Boguslawski, A.; Tyliszczak, A.; Wawrzak, K.

    2016-02-01

    The paper presents a novel view on the absolute instability phenomenon in heated variable density round jets. As known from the literature, the global instability mechanism in low density jets is released when the density ratio is lower than a certain critical value. The existence of the global modes was confirmed by experimental evidence in both hot and air-helium jets. However, some differences in both globally unstable flows were observed concerning, among other things, the level of the critical density ratio. The research is performed using the Large Eddy Simulation (LES) method with a high-order numerical code. An analysis of the LES results revealed that the inlet conditions for the velocity and density distributions at the nozzle exit influence significantly the critical density ratio and the global mode frequency. Two inlet velocity profiles were analyzed, i.e., the hyperbolic tangent and the Blasius profiles. It was shown that using the Blasius velocity profile and the uniform density distribution led to a significantly better agreement with the universal scaling law for global mode frequency.

  17. Visuomotor adaptation needs a validation of prediction error by feedback error

    PubMed Central

    Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle

    2014-01-01

    The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In “terminal feedback error” condition, viewing of their hand by subjects was allowed only at movement end, simultaneously with viewing of the target. In “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As far as subjects remained unaware of the optical deviation and self-assigned pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly

  18. Predicting Error Bars for QSAR Models

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
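
    The error bars discussed in the abstract come from the predictive variance of the Gaussian Process model. A generic scikit-learn sketch on synthetic data (illustrative only; not the authors' log D7 model, descriptors, or kernel choice):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)

    # Synthetic stand-in for (descriptor, measured property) pairs
    X = rng.uniform(-3, 3, size=(40, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

    # RBF kernel plus a WhiteKernel term to model measurement noise
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                   normalize_y=True)
    gpr.fit(X, y)

    X_test = np.linspace(-3, 3, 25).reshape(-1, 1)
    # return_std=True yields a per-sample predictive standard deviation,
    # i.e. an individual error bar for each predicted compound.
    mean, std = gpr.predict(X_test, return_std=True)
    ```

    The practical appeal, as the study argues, is that each prediction carries its own uncertainty estimate, so low-confidence predictions can be flagged for measurement rather than trusted blindly.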

  19. Predicting Error Bars for QSAR Models

    SciTech Connect

    Schroeter, Timon; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Mueller, Klaus-Robert

    2007-09-18

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from recent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.

  20. Scaling prediction errors to reward variability benefits error-driven learning in humans

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease “adapters'” accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. PMID:26180123
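
    The scaling idea can be sketched with a delta rule whose prediction error is divided by the reward standard deviation (an illustrative model, not the authors' fitted one): the effective learning rate then shrinks as variability grows, so accuracy in estimating the mean of the reward distribution stays similar across narrow and wide distributions.

    ```python
    import random

    def learn_mean(rewards, alpha=0.1, sigma=None):
        """Delta-rule estimate of the mean reward. When sigma is given,
        the prediction error is rescaled by the reward standard deviation
        before the update, mimicking adaptive prediction error scaling."""
        v = 0.0
        for r in rewards:
            delta = r - v          # prediction error
            if sigma is not None:
                delta /= sigma     # scale error to reward variability
            v += alpha * delta
        return v

    random.seed(0)
    narrow = [random.gauss(10.0, 1.0) for _ in range(5000)]
    wide   = [random.gauss(10.0, 5.0) for _ in range(5000)]

    v_narrow = learn_mean(narrow, sigma=1.0)
    v_wide   = learn_mean(wide,  sigma=5.0)
    # Both estimates settle near the true mean of 10 despite the
    # fivefold difference in reward variability.
    ```

    As in the abstract's warning about exaggerated scaling, dividing by a value much larger than the true standard deviation would slow learning so much that the estimate fails to track the mean within the same number of trials.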

  1. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire R.; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
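
    The core contrast in this abstract, omitting versus including error correlations in the weight matrix, can be sketched with ordinary versus generalized least squares on synthetic correlated noise (illustrative only; not the authors' reactive transport model):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 50

    # Simple linear model: y = b0 + b1 * x, true parameters [1.0, 2.0]
    X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
    beta_true = np.array([1.0, 2.0])

    # Correlated observation errors with an exponential covariance
    rho = 0.8
    C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    L = np.linalg.cholesky(C)
    y = X @ beta_true + 0.1 * (L @ rng.standard_normal(n))

    # OLS omits the correlations (identity weight matrix)
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

    # GLS weights by the inverse error covariance, W = C^{-1}
    Ci = np.linalg.inv(C)
    beta_gls = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)

    # Parameter covariance under GLS: (X^T C^{-1} X)^{-1}
    cov_gls = np.linalg.inv(X.T @ Ci @ X)
    ```

    Both estimators are unbiased here, but their variances differ; the abstract's analytical result concerns exactly how that variance difference depends on the error correlation and the sensitivity ratio.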

  2. Determination and Modeling of Error Densities in Ephemeris Prediction

    SciTech Connect

    Jones, J.P.; Beckerman, M.

    1999-02-07

    The authors determined error densities of ephemeris predictions for 14 LEO satellites. The empirical distributions are not inconsistent with the hypothesis of a Gaussian distribution. The growth rate of radial errors is most highly correlated with eccentricity (|r| = 0.63, α < 0.05). The growth rate of along-track errors is most highly correlated with the decay rate of the semimajor axis (|r| = 0.97; α < 0.01).

  3. Encoding of Sensory Prediction Errors in the Human Cerebellum

    PubMed Central

    Schlerf, John; Ivry, Richard B.; Diedrichsen, Jörn

    2015-01-01

    A central tenet of motor neuroscience is that the cerebellum learns from sensory prediction errors. Surprisingly, neuroimaging studies have not revealed definitive signatures of error processing in the cerebellum. Furthermore, neurophysiologic studies suggest an asymmetry, such that the cerebellum may encode errors arising from unexpected sensory events, but not errors reflecting the omission of expected stimuli. We conducted an imaging study to compare the cerebellar response to these two types of errors. Participants made fast out-and-back reaching movements, aiming either for an object that delivered a force pulse if intersected or for a gap between two objects, either of which delivered a force pulse if intersected. Errors (missing the target) could therefore be signaled either through the presence or absence of a force pulse. In an initial analysis, the cerebellar BOLD response was smaller on trials with errors compared with trials without errors. However, we also observed an error-related decrease in heart rate. After correcting for variation in heart rate, increased activation during error trials was observed in the hand area of lobules V and VI. This effect was similar for the two error types. The results provide evidence for the encoding of errors resulting from either the unexpected presence or unexpected absence of sensory stimulation in the human cerebellum. PMID:22492047

  4. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    SciTech Connect

    Gustafson, William I.; Yu, Shaocai

    2012-10-23

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations of these metrics are only valid for datasets with positive means. This paper presents a methodology to use and interpret the metrics with datasets that have negative means. The updated formulations give identical results compared to the original formulations for the case of positive means, so researchers are encouraged to use the updated formulations going forward without introducing ambiguity.
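
    For reference, a sketch of the original positive-mean formulations of the two metrics (the paper's generalized forms for negative means are not reproduced here). Their symmetry is the selling point: a factor-of-two overestimate and a factor-of-two underestimate give values of the same magnitude.

    ```python
    def nmbf(model, obs):
        """Normalized mean bias factor, positive-mean form:
        M/O - 1 for overestimates, 1 - O/M for underestimates,
        where M and O are the model and observed means."""
        M = sum(model) / len(model)
        O = sum(obs) / len(obs)
        return M / O - 1.0 if M >= O else 1.0 - O / M

    def nmaef(model, obs):
        """Normalized mean absolute error factor, positive-mean form:
        the summed absolute error normalized by the smaller of the
        summed model or observed values."""
        sae = sum(abs(m - o) for m, o in zip(model, obs))
        return sae / sum(obs) if sum(model) >= sum(obs) else sae / sum(model)

    obs   = [1.0, 1.0, 1.0, 1.0]
    over  = [2.0, 2.0, 2.0, 2.0]   # factor-of-2 overestimate
    under = [0.5, 0.5, 0.5, 0.5]   # factor-of-2 underestimate
    ```

    With these inputs the overestimate gives NMBF = +1 and the underestimate NMBF = -1, while NMAEF is 1 in both cases, i.e. identical interpretations for over- and underprediction, which is exactly the property the generalization preserves for datasets with negative means.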

  5. Differential neural mechanisms for early and late prediction error detection.

    PubMed

    Malekshahi, Rahim; Seth, Anil; Papanikolaou, Amalia; Mathews, Zenon; Birbaumer, Niels; Verschure, Paul F M J; Caria, Andrea

    2016-01-01

    Emerging evidence indicates that prediction, instantiated at different perceptual levels, facilitates visual processing and enables prompt and appropriate reactions. Until now, the mechanisms underlying the effect of predictive coding at different stages of visual processing have remained unclear. Here, we aimed to investigate early and late processing of spatial prediction violation by performing combined recordings of saccadic eye movements and fast event-related fMRI during a continuous visual detection task. Psychophysical reverse correlation analysis revealed that the degree of mismatch between current perceptual input and prior expectations is mainly processed at a late rather than an early stage, which is instead responsible for fast but general prediction error detection. Furthermore, our results suggest that conscious late detection of deviant stimuli is elicited by the assessment of the prediction error's extent more than by the prediction error per se. Functional MRI and functional connectivity data analyses indicated that higher-level brain systems interactions modulate conscious detection of prediction error through top-down processes for the analysis of its representational content, and possibly regulate subsequent adaptation of predictive models. Overall, our experimental paradigm allowed to dissect explicit from implicit behavioral and neural responses to deviant stimuli in terms of their reliance on predictive models. PMID:27079423

  6. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors

    PubMed Central

    Murphy, Peter R.; van Moort, Marianne L.; Nieuwenhuis, Sander

    2016-01-01

    Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a precursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES. PMID:27010472

  7. Curiosity and reward: Valence predicts choice and information prediction errors enhance learning.

    PubMed

    Marvin, Caroline B; Shohamy, Daphna

    2016-03-01

    Curiosity drives many of our daily pursuits and interactions; yet, we know surprisingly little about how it works. Here, we harness an idea implied in many conceptualizations of curiosity: that information has value in and of itself. Reframing curiosity as the motivation to obtain reward--where the reward is information--allows one to leverage major advances in theoretical and computational mechanisms of reward-motivated learning. We provide new evidence supporting 2 predictions that emerge from this framework. First, we find an asymmetric effect of positive versus negative information, with positive information enhancing both curiosity and long-term memory for information. Second, we find that it is not the absolute value of information that drives learning but, rather, the gap between the reward expected and reward received, an "information prediction error." These results support the idea that information functions as a reward, much like money or food, guiding choices and driving learning in systematic ways. PMID:26783880

  8. A Dual Role for Prediction Error in Associative Learning

    PubMed Central

    Friston, Karl J.; Daw, Nathaniel D.; McIntosh, Anthony R.; Stephan, Klaas E.

    2009-01-01

    Confronted with a rich sensory environment, the brain must learn statistical regularities across sensory domains to construct causal models of the world. Here, we used functional magnetic resonance imaging and dynamic causal modeling (DCM) to furnish neurophysiological evidence that statistical associations are learnt, even when task-irrelevant. Subjects performed an audio-visual target-detection task while being exposed to distractor stimuli. Unknown to them, auditory distractors predicted the presence or absence of subsequent visual distractors. We modeled incidental learning of these associations using a Rescorla–Wagner (RW) model. Activity in primary visual cortex and putamen reflected learning-dependent surprise: these areas responded progressively more to unpredicted, and progressively less to predicted visual stimuli. Critically, this prediction-error response was observed even when the absence of a visual stimulus was surprising. We investigated the underlying mechanism by embedding the RW model into a DCM to show that auditory to visual connectivity changed significantly over time as a function of prediction error. Thus, consistent with predictive coding models of perception, associative learning is mediated by prediction-error dependent changes in connectivity. These results posit a dual role for prediction-error in encoding surprise and driving associative plasticity. PMID:18820290
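    The incidental learning above was modeled with the Rescorla-Wagner (RW) delta rule, in which associative strength is updated in proportion to the prediction error. A minimal sketch of that update (the learning rate and trial sequence below are illustrative assumptions, not the study's fitted values):

```python
# Minimal Rescorla-Wagner model: associative strength V is updated in
# proportion to the prediction error (outcome minus prediction).
def rescorla_wagner(outcomes, alpha=0.2):
    """Return a list of (V, prediction_error) pairs across trials."""
    V = 0.0
    history = []
    for r in outcomes:          # r = 1 if the visual stimulus appeared, else 0
        delta = r - V           # prediction error ("surprise")
        V += alpha * delta      # error-driven update of associative strength
        history.append((V, delta))
    return history

# A cue that reliably predicts the stimulus: errors shrink trial by trial.
traj = rescorla_wagner([1, 1, 1, 1, 1, 1])
print([round(d, 3) for _, d in traj])  # → [1.0, 0.8, 0.64, 0.512, 0.41, 0.328]
```

    With a reliably predictive cue, the prediction error, and hence the modeled surprise response, decays across trials, mirroring the progressively weaker responses to predicted stimuli reported above.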

  9. Differential neural mechanisms for early and late prediction error detection

    PubMed Central

    Malekshahi, Rahim; Seth, Anil; Papanikolaou, Amalia; Mathews, Zenon; Birbaumer, Niels; Verschure, Paul F. M. J.; Caria, Andrea

    2016-01-01

    Emerging evidence indicates that prediction, instantiated at different perceptual levels, facilitates visual processing and enables prompt and appropriate reactions. The mechanisms underlying predictive coding at different stages of visual processing, however, remain unclear. Here, we investigated early and late processing of spatial prediction violation by performing combined recordings of saccadic eye movements and fast event-related fMRI during a continuous visual detection task. Psychophysical reverse correlation analysis revealed that the degree of mismatch between the current perceptual input and prior expectations is mainly processed at a late rather than an early stage; the early stage is instead responsible for fast but general prediction error detection. Furthermore, our results suggest that conscious late detection of deviant stimuli is elicited by assessment of the prediction error's extent more than by the prediction error per se. Functional MRI and functional connectivity analyses indicated that interactions among higher-level brain systems modulate conscious detection of prediction error through top-down processes that analyze its representational content, and possibly regulate subsequent adaptation of predictive models. Overall, our experimental paradigm allowed us to dissect explicit from implicit behavioral and neural responses to deviant stimuli in terms of their reliance on predictive models. PMID:27079423

  10. Phase errors and predicted spectral performance of a prototype undulator

    SciTech Connect

    Dejus, R.J.; Vasserman, I.; Moog, E.R.; Gluskin, E.

    1994-08-01

    A prototype undulator has been used to study different magnetic end-configurations and shimming techniques for straightening the beam trajectory. Field distributions obtained from Hall probe measurements were analyzed in terms of trajectory, phase errors, and on-axis brightness in order to correlate predicted spectral intensity with the calculated phase errors. Two device configurations were analyzed. One configuration had a full-strength first magnet at each end, with the next-to-last pole recessed to make the trajectory through the middle of the undulator parallel to the undulator axis. In the second configuration, the first permanent magnet at each end was replaced by a half-strength magnet to reduce the trajectory displacement, the next-to-last pole was adjusted accordingly, and shims were added to straighten the trajectory. Random magnetic field errors can cause trajectory deviations that affect the optimum angle for viewing the emitted radiation, so care must be taken to select the appropriate angle when calculating the phase errors. This angle may be calculated from the average trajectory angle evaluated at the locations of the poles. For the second configuration, we find an rms phase error of less than 3° and predict 87% of the ideal value of the on-axis brightness for the third harmonic. We have also analyzed the gap dependence of the phase errors and spectral brightness and found that the rms phase error remains small at all gap settings.
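    A commonly quoted first-order rule of thumb relates rms phase error to the loss of on-axis brightness at harmonic n: I/I0 ≈ exp(-(n·σ)²), with σ in radians. The sketch below applies it; this simple estimate is an assumption for illustration and does not reproduce the 87% figure above, which was derived from the measured field data:

```python
import math

def brightness_reduction(rms_phase_error_deg, harmonic):
    """First-order estimate of relative on-axis brightness at a given
    harmonic for a given rms phase error (degrees), I/I0 = exp(-(n*sigma)^2)."""
    sigma = math.radians(rms_phase_error_deg)
    return math.exp(-(harmonic * sigma) ** 2)

# 3 degrees rms phase error, third harmonic
print(round(brightness_reduction(3.0, 3), 3))  # → 0.976
```

    The rule of thumb is typically an upper-bound estimate; detailed trajectory analysis, as in the paper, can predict larger reductions.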

  11. Using absolute and relative reasoning in the prediction of the potential metabolism of xenobiotics.

    PubMed

    Button, William G; Judson, Philip N; Long, Anthony; Vessey, Jonathan D

    2003-01-01

    To be useful, a system that predicts the metabolic fate of a chemical should predict the more likely metabolites rather than every possibility. Reasoning can be used to prioritize biotransformations, but a real biochemical domain is complex and cannot be fully defined in terms of the likelihood of events. This paper describes the combined use of two models for reasoning under uncertainty in a working system, METEOR: one model handles absolute reasoning and the second relative reasoning. PMID:14502469

  12. Arithmetic and local circuitry underlying dopamine prediction errors

    PubMed Central

    Eshel, Neir; Bukwich, Michael; Rao, Vinod; Hemmelder, Vivian; Tian, Ju; Uchida, Naoshige

    2015-01-01

    Dopamine neurons are thought to facilitate learning by comparing actual and expected reward [1,2]. Despite two decades of investigation, little is known about how this comparison is made. To determine how dopamine neurons calculate prediction error, we combined optogenetic manipulations with extracellular recordings in the ventral tegmental area (VTA) while mice engaged in classical conditioning. By manipulating the temporal expectation of reward, we demonstrate that dopamine neurons perform subtraction, a computation that is ideal for reinforcement learning but rarely observed in the brain. Furthermore, selectively exciting and inhibiting neighbouring GABA neurons in the VTA reveals that these neurons are a source of subtraction: they inhibit dopamine neurons when reward is expected, causally contributing to prediction error calculations. Finally, bilaterally stimulating VTA GABA neurons dramatically reduces anticipatory licking to conditioned odours, consistent with an important role for these neurons in reinforcement learning. Together, our results uncover the arithmetic and local circuitry underlying dopamine prediction errors. PMID:26322583
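    The subtractive computation reported here can be contrasted with a divisive alternative in a toy calculation (the reward and expectation values are invented for illustration):

```python
def subtractive_pe(actual, expected):
    return actual - expected        # reported computation: pure subtraction

def divisive_pe(actual, expected):
    return actual / expected        # alternative: divisive normalization

# With a fixed reward of 2.0 and growing expectation, subtraction yields a
# linearly decreasing error, whereas division shrinks it nonlinearly.
for expectation in (0.5, 1.0, 1.5):
    print(subtractive_pe(2.0, expectation),
          round(divisive_pe(2.0, expectation), 2))
```

    Distinguishing these two forms experimentally is exactly what varying the temporal expectation of reward allows.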

  13. Neural correlates of error prediction in a complex motor task

    PubMed Central

    Maurer, Lisa Katharina; Maurer, Heiko; Müller, Hermann

    2015-01-01

    The goal of the study was to quantify error prediction processes via neural correlates in the electroencephalogram (EEG). Access to such a neural signal will allow us to gain insights into functional and temporal aspects of error perception in the course of learning. We focused on the error negativity (Ne) or error-related negativity (ERN) as a candidate index for the prediction processes. We used a virtual goal-oriented throwing task in which participants used a lever to throw a virtual ball displayed on a computer monitor, with the goal of hitting a virtual target as often as possible. After one day of practice with 400 trials, participants performed another 400 trials on a second day with EEG measurement. After error trials (i.e., when the ball missed the target), we found a sharp negative deflection in the EEG peaking 250 ms after ball release (mean amplitude: t = −2.5, df = 20, p = 0.02) and a second, broader negative deflection following the first, extending from about 300 ms after release until unambiguous visual knowledge of results (KR; the ball hitting or passing the target; mean amplitude: t = −7.5, df = 20, p < 0.001). Based on the shape and timing of the two deflections, we assume that the first represents a predictive Ne/ERN (prediction based on efferent commands and proprioceptive feedback), while the second may have arisen from action monitoring. PMID:26300754

  14. Prediction of absolute infrared intensities for the fundamental vibrations of H2O2

    NASA Technical Reports Server (NTRS)

    Rogers, J. D.; Hillman, J. J.

    1981-01-01

    Absolute infrared intensities are predicted for the vibrational bands of gas-phase H2O2 by the use of a hydrogen atomic polar tensor transferred from the hydroxyl hydrogen atom of CH3OH. These predicted intensities are compared with intensities predicted by the use of a hydrogen atomic polar tensor transferred from H2O. The predicted relative intensities agree well with published spectra of gas-phase H2O2, and the predicted absolute intensities are expected to be accurate to within at least a factor of two. Among the vibrational degrees of freedom, the antisymmetric O-H bending mode nu(6) is found to be the strongest with a calculated intensity of 60.5 km/mole. The torsional band, a consequence of hindered rotation, is found to be the most intense fundamental with a predicted intensity of 120 km/mole. These results are compared with the recent absolute intensity determinations for the nu(6) band.

  15. Error prediction for probes guided by means of fixtures

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, J. Michael

    2012-02-01

    Probe guides are surgical fixtures that are rigidly attached to bone anchors in order to place a probe at a target with high accuracy (RMS error < 1 mm). Applications include needle biopsy, the placement of electrodes for deep-brain stimulation (DBS), spine surgery, and cochlear implant surgery. Targeting is based on pre-operative images, but targeting errors can arise from three sources: (1) anchor localization error, (2) guide fabrication error, and (3) external forces and torques. A well-established theory exists for the statistical prediction of target registration error (TRE) when targeting is accomplished by means of tracked probes, but no such TRE theory is available for fixtured probe guides. This paper provides that theory and shows that all three error sources can be accommodated in a remarkably simple extension of existing theory. Both the guide and the bone with attached anchors are modeled as objects with rigid sections and elastic sections, the latter of which are described by stiffness matrices. By relating minimization of elastic energy for guide attachment to minimization of fiducial registration error for point registration, it is shown that the expression for targeting error for the guide is identical to that for weighted rigid point registration if the weighting matrices are properly derived from stiffness matrices and the covariance matrices for fiducial localization are augmented with offsets in the anchor positions. An example of the application of the theory is provided for ear surgery.

  16. Principal components analysis of reward prediction errors in a reinforcement learning task.

    PubMed

    Sambrook, Thomas D; Goslin, Jeremy

    2016-01-01

    Models of reinforcement learning represent reward and punishment in terms of reward prediction errors (RPEs), quantitative signed terms describing the degree to which outcomes are better than expected (positive RPEs) or worse (negative RPEs). An electrophysiological component known as the feedback related negativity (FRN) occurs at frontocentral sites 240-340 ms after feedback on whether a reward or punishment is obtained, and has been claimed to neurally encode an RPE. An outstanding question, however, is whether the FRN is sensitive to the size of both positive and negative RPEs. Previous attempts to answer this question have examined the simple effects of RPE size for positive and negative RPEs separately. However, this methodology can be compromised by overlap from components coding for unsigned prediction error size, or "salience", which are sensitive to the absolute size of a prediction error but not its valence. In our study, positive and negative RPEs were parametrically modulated using both reward likelihood and magnitude, with principal components analysis used to separate out overlying components. This revealed a single RPE-encoding component responsive to the size of positive RPEs, peaking at ~330 ms and occupying the delta frequency band. Other components responsive to unsigned prediction error size were found, but no component sensitive to negative RPE size. PMID:26196667
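    The distinction drawn above between a signed RPE and unsigned "salience" amounts to taking the raw difference versus its absolute value. A minimal illustration (the outcome values are invented):

```python
def signed_rpe(outcome, expected):
    return outcome - expected       # positive if better than expected

def salience(outcome, expected):
    return abs(outcome - expected)  # unsigned: size of surprise, valence-blind

# An unexpected reward and an equally unexpected punishment have opposite
# signed RPEs but identical salience, which is why overlapping components
# can confound simple-effects analyses of RPE size.
print(signed_rpe(1.0, 0.5), signed_rpe(0.0, 0.5))  # → 0.5 -0.5
print(salience(1.0, 0.5), salience(0.0, 0.5))      # → 0.5 0.5
```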

  17. Small error dynamics and the predictability of atmospheric flows

    NASA Technical Reports Server (NTRS)

    Farrell, Brian F.

    1990-01-01

    In this paper, linear small-error theory is applied to the study of weather predictability. A simple baroclinic shear model and a barotropic channel model with a localized jet are used as examples. It is shown that the increase in error on synoptic forecast time scales is controlled by rapidly growing perturbations that are not of normal mode form. Unpredictable regimes are not necessarily associated with larger exponential growth rates than relatively more predictable regimes. Model problems illustrating baroclinic and barotropic dynamics suggest that asymptotic measures of divergence in phase space, while applicable in the limit of infinite time, may not be appropriate over the time intervals addressed by present synoptic forecasts.

  18. A Causal Link Between Prediction Errors, Dopamine Neurons and Learning

    PubMed Central

    Steinberg, Elizabeth E.; Keiflin, Ronald; Boivin, Josiah R.; Witten, Ilana B.; Deisseroth, Karl; Janak, Patricia H.

    2013-01-01

    Situations where rewards are unexpectedly obtained or withheld represent opportunities for new learning. Often, this learning includes identifying cues that predict reward availability. Unexpected rewards strongly activate midbrain dopamine neurons. This phasic signal is proposed to support learning about antecedent cues by signaling discrepancies between actual and expected outcomes, termed a reward prediction error. However, it is unknown whether dopamine neuron prediction error signaling and cue-reward learning are causally linked. To test this hypothesis, we manipulated dopamine neuron activity in rats in two behavioral procedures, associative blocking and extinction, that illustrate the essential function of prediction errors in learning. We observed that optogenetic activation of dopamine neurons concurrent with reward delivery, mimicking a prediction error, was sufficient to cause long-lasting increases in cue-elicited reward-seeking behavior. Our findings establish a causal role for temporally-precise dopamine neuron signaling in cue-reward learning, bridging a critical gap between experimental evidence and influential theoretical frameworks. PMID:23708143

  19. A causal link between prediction errors, dopamine neurons and learning.

    PubMed

    Steinberg, Elizabeth E; Keiflin, Ronald; Boivin, Josiah R; Witten, Ilana B; Deisseroth, Karl; Janak, Patricia H

    2013-07-01

    Situations in which rewards are unexpectedly obtained or withheld represent opportunities for new learning. Often, this learning includes identifying cues that predict reward availability. Unexpected rewards strongly activate midbrain dopamine neurons. This phasic signal is proposed to support learning about antecedent cues by signaling discrepancies between actual and expected outcomes, termed a reward prediction error. However, it is unknown whether dopamine neuron prediction error signaling and cue-reward learning are causally linked. To test this hypothesis, we manipulated dopamine neuron activity in rats in two behavioral procedures, associative blocking and extinction, that illustrate the essential function of prediction errors in learning. We observed that optogenetic activation of dopamine neurons concurrent with reward delivery, mimicking a prediction error, was sufficient to cause long-lasting increases in cue-elicited reward-seeking behavior. Our findings establish a causal role for temporally precise dopamine neuron signaling in cue-reward learning, bridging a critical gap between experimental evidence and influential theoretical frameworks. PMID:23708143

  20. IR signature prediction errors for skin-heated aerial targets

    NASA Astrophysics Data System (ADS)

    McGlynn, John D.; Auerbach, Steven P.

    1997-06-01

    The infrared signature of an aircraft is generally calculated as the sum of multiple components. These components are, typically: the aerodynamic skin heating, reflected solar and upwelling and downwelling radiation, engine hot parts, and exhaust gas emissions. For most airframes, the latter two components overwhelmingly dominate the IR signature. However, for small targets--such as small fighters and cruise missiles, particularly targets with masked hot parts, emissivity control, and suppressed plumes--aerodynamic heating is the dominant term. This term is determined by the speed of the target, the sea-level air temperature, and the adiabatic lapse rate of the atmosphere, as a function of altitude. Simulations which use AFGL atmospheric codes (LOWTRAN and MODTRAN)--such as SPIRITS--to predict skin heating, may have an intrinsic error in the predicted skin heating component, due to the fixed number of discrete sea-level air temperatures implicit in the atmospheric models. Whenever the assumed background temperature deviates from the implicit model atmosphere sea-level temperature, there will be a measurable error. This error becomes significant in magnitude when trying to model the signatures of small, dim targets dominated by skin heating. This study quantifies the predicted signature errors and suggests simulation implementations which can minimize these errors.

  1. How Prediction Errors Shape Perception, Attention, and Motivation

    PubMed Central

    den Ouden, Hanneke E. M.; Kok, Peter; de Lange, Floris P.

    2012-01-01

    Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while encoding of PE is a common computation across brain regions, the content and function of these error signals can be very different and are determined by the afferent and efferent connections within the neural circuitry in which they arise. PMID:23248610

  2. Periodicity characterization of orbital prediction error and Poisson series fitting

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Zong; Chen, Lei; Tang, Guo-Jin

    2012-09-01

    Publicly available Two-Line Element sets (TLEs) contain no associated error or accuracy information, so historical-data-based methods are a feasible choice for objects for which only TLE data are available. Most current TLE error analysis methods use polynomial fitting, which cannot represent periodic characteristics. This paper presents a methodology for periodicity characterization and Poisson series fitting of orbital prediction error based on historical orbital data. As an error-fitting function, the Poisson series can describe the variation of error with respect to propagation duration and the on-orbit position of objects. The Poisson coefficient matrices for each error component are fitted using the least squares method. The effects of polynomial terms, trigonometric terms, and mixed terms of the Poisson series are discussed. Substituting the time difference and mean anomaly into the Poisson series, one can obtain the error information at a specific time. Four satellites (Cosmos-2251, GPS-62, SLOSHSAT, and TelStar-10) from four orbital types (LEO, MEO, HEO, and GEO, respectively) were selected as examples to demonstrate and validate the method. The results indicate that periodic characteristics exist in all three error components for all four objects, especially for HEO and MEO. Periodicity characterization and Poisson series fitting can improve the accuracy of orbit covariance information. The Poisson series is a general form for describing orbital prediction error; the commonly used polynomial fitting is a special case of Poisson series fitting. The Poisson coefficient matrices can be obtained before close approach analysis. The method does not require any knowledge of how the state vectors are generated, so it can handle not only TLE data but also other orbit models and elements.
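    A least-squares fit of a Poisson series, polynomial terms in propagation duration multiplied by trigonometric terms in mean anomaly, can be sketched as below. The series orders, the synthetic error data, and the variable names are assumptions for illustration, not the paper's actual configuration:

```python
import numpy as np

def poisson_design_matrix(t, M, poly_order=2, trig_order=2):
    """Design matrix for a Poisson series in propagation duration t and
    mean anomaly M: columns are t^i, t^i*cos(jM), and t^i*sin(jM)."""
    cols = []
    for i in range(poly_order + 1):
        cols.append(t ** i)
        for j in range(1, trig_order + 1):
            cols.append(t ** i * np.cos(j * M))
            cols.append(t ** i * np.sin(j * M))
    return np.column_stack(cols)

# Synthetic "historical" error samples with a known periodic structure.
rng = np.random.default_rng(0)
t = rng.uniform(0, 3, 200)              # propagation duration, days
M = rng.uniform(0, 2 * np.pi, 200)      # mean anomaly, rad
err = 0.1 + 0.05 * t + 0.3 * t * np.cos(M) + 0.01 * rng.standard_normal(200)

# Fit the Poisson coefficients by least squares and evaluate the residual.
A = poisson_design_matrix(t, M)
coeffs, *_ = np.linalg.lstsq(A, err, rcond=None)
pred = A @ coeffs
print(round(float(np.sqrt(np.mean((pred - err) ** 2))), 3))  # residual rms
```

    Because the mixed term t*cos(M) is in the basis, the fit recovers the periodic error structure that a pure polynomial model would miss.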

  3. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area-factor is automatically included. Thus our model improves on the spectral CCA scheme of Barnett and Preisendorfer; the improvements include (1) the use of the area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data, and the US National Centers for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the following December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments, and the correlations are close to or greater than 0.4 in 29 years, indicating excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
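    The memorandum derives optimal ensemble weights from each member's mean square error; the standard inverse-MSE weighting below is a simplified sketch of that idea, not necessarily the exact formula used:

```python
def inverse_mse_weights(mse_list):
    """Ensemble weights inversely proportional to each member's mean
    square error, normalized to sum to one."""
    inv = [1.0 / m for m in mse_list]
    total = sum(inv)
    return [w / total for w in inv]

# Three forecasts with increasing error receive decreasing weight.
print([round(w, 3) for w in inverse_mse_weights([0.5, 1.0, 2.0])])
# → [0.571, 0.286, 0.143]
```

    The ensemble mean built with such weights favors the historically more accurate members while still pooling information from all of them.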

  4. Predicting AIDS-related events using CD4 percentage or CD4 absolute counts

    PubMed Central

    Pirzada, Yasmin; Khuder, Sadik; Donabedian, Haig

    2006-01-01

    Background The extent of immunosuppression and the probability of developing an AIDS-related complication in HIV-infected people is usually measured by the absolute number of CD4-positive T-cells. The percentage of CD4-positive cells is a more easily measured and less variable number. We analyzed sequential CD4 and CD8 numbers, percentages, and ratios in 218 of our HIV-infected patients to determine the most reliable predictor of an AIDS-related event. Results The CD4 percentage was an unsurpassed predictor of the occurrence of AIDS-related events when all subsets of patients were considered. The CD4 absolute count was the next most reliable, followed by the ratio of CD4/CD8 percentages. The advantage of the CD4 percentage over the CD4 absolute count persisted even after the introduction of highly effective HIV therapy. Conclusion The CD4 percentage is unsurpassed as a parameter for predicting the onset of HIV-related diseases. The extra time and expense of measuring the CD4 absolute count may be unnecessary. PMID:16916461

  5. Dopamine neurons encode errors in predicting movement trigger occurrence.

    PubMed

    Pasquereau, Benjamin; Turner, Robert S

    2015-02-15

    The capacity to anticipate the timing of events in a dynamic environment allows us to optimize the processes necessary for perceiving, attending to, and responding to them. Such anticipation requires neuronal mechanisms that track the passage of time and use this representation, combined with prior experience, to estimate the likelihood that an event will occur (i.e., the event's "hazard rate"). Although hazard-like ramps in activity have been observed in several cortical areas in preparation for movement, it remains unclear how such time-dependent probabilities are estimated to optimize response performance. We studied the spiking activity of dopamine neurons in the substantia nigra pars compacta of monkeys during an arm-reaching task for which the foreperiod preceding the "go" signal varied randomly along a uniform distribution. After extended training, the monkeys' reaction times correlated inversely with foreperiod duration, reflecting a progressive anticipation of the go signal according to its hazard rate. Many dopamine neurons modulated their firing rates as predicted by a succession of hazard-related prediction errors. First, as time passed during the foreperiod, slowly decreasing anticipatory activity tracked the elapsed time as if encoding negative prediction errors. Then, when the go signal appeared, a phasic response encoded the temporal unpredictability of the event, consistent with a positive prediction error. Neither the anticipatory nor the phasic signals were affected by the anticipated magnitudes of future reward or effort, or by parameters of the subsequent movement. These results are consistent with the notion that dopamine neurons encode hazard-related prediction errors independently of other information. PMID:25411459
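    For a foreperiod drawn uniformly from an interval, the hazard rate h(t) = f(t)/(1 - F(t)) reduces to 1/(t_max - t), so the conditional probability of the go signal rises sharply as the interval elapses. A minimal sketch (the window bounds are illustrative, not the task's actual values):

```python
def uniform_hazard(t, t_min=1.0, t_max=3.0):
    """Hazard rate h(t) = f(t) / (1 - F(t)) for a foreperiod drawn
    uniformly from [t_min, t_max]; this simplifies to 1 / (t_max - t)."""
    if not (t_min <= t < t_max):
        raise ValueError("t must lie within the foreperiod window")
    return 1.0 / (t_max - t)

# Hazard grows as the foreperiod elapses, matching the faster reaction
# times observed at longer foreperiods.
for t in (1.0, 2.0, 2.5, 2.9):
    print(t, round(uniform_hazard(t), 2))
```

    The slowly decreasing anticipatory activity and the go-signal response described above both track functions of this hazard.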

  6. The energetics of error-growth and the predictability analysis in precipitation prediction

    NASA Astrophysics Data System (ADS)

    Luo, Yu; Zhang, Lifeng; Zhang, Yun

    2012-02-01

    Sensitivity simulations were conducted in AREM (the Advanced Regional Eta-coordinate numerical heavy-rain prediction Model) for a torrential precipitation event over South China in June 2008 to investigate the effect of initial uncertainty on precipitation predictability. The strong initial-condition sensitivity of the precipitation prediction can be attributed to the upscale evolution of error growth, but different modes of error growth are observed in the lower and upper layers. Compared with the lower level, significant error growth in the upper layer appears over both the convective area and the high-level jet stream, indicating that error growth depends both on moist convection, due to convective instability, and on wind shear, associated with dynamic instability. Since a heavy-rainfall process can be described as a series of energy conversions, it is revealed that the advection term and latent heating serve as significant energy sources. Moreover, the dominant source terms of error-energy growth are nonlinear advection (ADVT) and differences in latent heating (DLHT), with the latter largely responsible for the rapid error growth in the initial stage. In this sense, the occurrence of precipitation and the growth of error share the same energy source, which bears on the inherent predictability of heavy rainfall. In addition, a decomposition of ADVT indicates that flow-dependent error growth is closely related to atmospheric instability; thus a system growing from an unstable flow regime has its own intrinsic predictability limit.

  7. Error-related negativity predicts reinforcement learning and conflict biases.

    PubMed

    Frank, Michael J; Woroch, Brion S; Curran, Tim

    2005-08-18

    The error-related negativity (ERN) is an electrophysiological marker thought to reflect changes in dopamine when participants make errors in cognitive tasks. Our computational model further predicts that larger ERNs should be associated with better learning to avoid maladaptive responses. Here we show that participants who avoided negative events had larger ERNs than those who were biased to learn more from positive outcomes. We also tested for effects of response conflict on ERN magnitude. While there was no overall effect of conflict, positive learners had larger ERNs when having to choose among two good options (win/win decisions) compared with two bad options (lose/lose decisions), whereas negative learners exhibited the opposite pattern. These results demonstrate that the ERN predicts the degree to which participants are biased to learn more from their mistakes than their correct choices and clarify the extent to which it indexes decision conflict. PMID:16102533

  8. Multiscale Reactive Molecular Dynamics for Absolute pKa Predictions and Amino Acid Deprotonation.

    PubMed

    Nelson, J Gard; Peng, Yuxing; Silverstein, Daniel W; Swanson, Jessica M J

    2014-07-01

    Accurately calculating a weak acid's pKa from simulations remains a challenging task. We report a multiscale theoretical approach to calculate the free energy profile for acid ionization, resulting in accurate absolute pKa values in addition to insights into the underlying mechanism. Importantly, our approach minimizes empiricism by mapping electronic structure data (QM/MM forces) into a reactive molecular dynamics model capable of extensive sampling. Consequently, the bulk property of interest (the absolute pKa) is the natural consequence of the model, not a parameter used to fit it. This approach is applied to create reactive models of aspartic and glutamic acids. We show that these models predict the correct pKa values and provide ample statistics to probe the molecular mechanism of dissociation. This analysis shows changes in the solvation structure and Zundel-dominated transitions between the protonated acid, contact ion pair, and bulk solvated excess proton. PMID:25061442
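    The bridge from a computed deprotonation free energy to an absolute pKa is the standard thermodynamic relation pKa = ΔG/(RT ln 10). A minimal conversion (the example free energy is illustrative, not the paper's computed value):

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)

def pka_from_free_energy(delta_g_kj_mol, temperature_k=298.15):
    """Convert a deprotonation free energy (kJ/mol) to a pKa via
    pKa = dG / (R * T * ln 10)."""
    return delta_g_kj_mol / (R * temperature_k * math.log(10))

# A free energy of ~22 kJ/mol corresponds to a pKa near aspartic acid's
# experimental value of ~3.9.
print(round(pka_from_free_energy(22.0), 2))  # → 3.85
```

    This is why an error of only a few kJ/mol in the simulated free energy profile shifts the predicted pKa by a full unit, making the reported accuracy demanding.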

  9. The Representation of Prediction Error in Auditory Cortex.

    PubMed

    Rubin, Jonathan; Ulanovsky, Nachum; Nelken, Israel; Tishby, Naftali

    2016-08-01

    To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of 'oddball' sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only few details of that sequence. PMID:27490251

  10. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  11. The Representation of Prediction Error in Auditory Cortex

    PubMed Central

    Rubin, Jonathan; Ulanovsky, Nachum; Tishby, Naftali

    2016-01-01

    To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only few details of that sequence. PMID:27490251

  12. Impaired Neural Response to Negative Prediction Errors in Cocaine Addiction

    PubMed Central

    Parvaz, Muhammad A.; Konova, Anna B.; Proudfit, Greg H.; Dunning, Jonathan P.; Malaker, Pias; Moeller, Scott J.; Maloney, Tom; Alia-Klein, Nelly

    2015-01-01

    Learning can be guided by unexpected success or failure, signaled via dopaminergic positive reward prediction error (+RPE) and negative reward-prediction error (−RPE) signals, respectively. Despite conflicting empirical evidence, RPE signaling is thought to be impaired in drug addiction. To resolve this outstanding question, we studied as a measure of RPE the feedback negativity (FN) that is sensitive to both reward and the violation of expectation. We examined FN in 25 healthy controls; 25 individuals with cocaine-use disorder (CUD) who tested positive for cocaine on the study day (CUD+), indicating cocaine use within the past 72 h; and in 25 individuals with CUD who tested negative for cocaine (CUD−). EEG was acquired while the participants performed a gambling task predicting whether they would win or lose money on each trial given three known win probabilities (25, 50, or 75%). FN was scored for the period in each trial when the actual outcome (win or loss) was revealed. A significant interaction between prediction, outcome, and group revealed that controls showed increased FN to unpredicted compared with predicted wins (i.e., intact +RPE) and decreased FN to unpredicted compared with predicted losses (i.e., intact −RPE). However, neither CUD subgroup showed FN modulation to loss (i.e., impaired −RPE), and unlike CUD+ individuals, CUD− individuals also did not show FN modulation to win (i.e., impaired +RPE). Thus, using FN, the current study directly documents −RPE deficits in CUD individuals. The mechanisms underlying −RPE signaling impairments in addiction may contribute to the disadvantageous nature of excessive drug use, which can persist despite repeated unfavorable life experiences (e.g., frequent incarcerations). PMID:25653348

  13. Impaired neural response to negative prediction errors in cocaine addiction.

    PubMed

    Parvaz, Muhammad A; Konova, Anna B; Proudfit, Greg H; Dunning, Jonathan P; Malaker, Pias; Moeller, Scott J; Maloney, Tom; Alia-Klein, Nelly; Goldstein, Rita Z

    2015-02-01

    Learning can be guided by unexpected success or failure, signaled via dopaminergic positive reward prediction error (+RPE) and negative reward-prediction error (-RPE) signals, respectively. Despite conflicting empirical evidence, RPE signaling is thought to be impaired in drug addiction. To resolve this outstanding question, we studied as a measure of RPE the feedback negativity (FN) that is sensitive to both reward and the violation of expectation. We examined FN in 25 healthy controls; 25 individuals with cocaine-use disorder (CUD) who tested positive for cocaine on the study day (CUD+), indicating cocaine use within the past 72 h; and in 25 individuals with CUD who tested negative for cocaine (CUD-). EEG was acquired while the participants performed a gambling task predicting whether they would win or lose money on each trial given three known win probabilities (25, 50, or 75%). FN was scored for the period in each trial when the actual outcome (win or loss) was revealed. A significant interaction between prediction, outcome, and group revealed that controls showed increased FN to unpredicted compared with predicted wins (i.e., intact +RPE) and decreased FN to unpredicted compared with predicted losses (i.e., intact -RPE). However, neither CUD subgroup showed FN modulation to loss (i.e., impaired -RPE), and unlike CUD+ individuals, CUD- individuals also did not show FN modulation to win (i.e., impaired +RPE). Thus, using FN, the current study directly documents -RPE deficits in CUD individuals. The mechanisms underlying -RPE signaling impairments in addiction may contribute to the disadvantageous nature of excessive drug use, which can persist despite repeated unfavorable life experiences (e.g., frequent incarcerations). PMID:25653348

  14. No Absolutism Here: Harm Predicts Moral Judgment 30× Better Than Disgust. Commentary on Scott, Inbar, & Rozin (2016).

    PubMed

    Gray, Kurt; Schein, Chelsea

    2016-05-01

    Moral absolutism is the idea that people's moral judgments are insensitive to considerations of harm. Scott, Inbar, and Rozin (2016, this issue) claim that most moral opponents to genetically modified organisms are absolutely opposed, motivated by disgust and not harm. Yet there is no evidence for moral absolutism in their data. Perceived risk/harm is the most significant predictor of moral judgments for "absolutists," accounting for 30 times more variance than disgust. Reanalyses suggest that disgust is not even a significant predictor of the moral judgments of absolutists once accounting for perceived harm and anger. Instead of revealing actual moral absolutism, Scott et al. find only empty absolutism: hypothetical, forecasted, self-reported moral absolutism. Strikingly, the moral judgments of so-called absolutists are somewhat more sensitive to consequentialist concerns than those of nonabsolutists. Mediation reanalyses reveal that moral judgments are most proximally predicted by harm and not disgust, consistent with dyadic morality. PMID:27217244

  15. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  16. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    PubMed Central

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that neural mechanisms of the left hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  17. Time-series modeling and prediction of global monthly absolute temperature for environmental decision making

    NASA Astrophysics Data System (ADS)

    Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun

    2013-03-01

    A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
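
    The deterministic stage of a DSC-style decomposition (polynomial trend plus Fourier seasonal terms, with a SARIMA model left to absorb the residual) can be sketched on synthetic monthly data. This is not the authors' actual model; the series, polynomial degree, and harmonic choice below are illustrative assumptions:

```python
import numpy as np

# Synthetic "monthly absolute temperature" series: slow trend + annual cycle + noise.
rng = np.random.default_rng(0)
n = 240                                   # 20 years of monthly data
t = np.arange(n)
series = 14.0 + 0.002 * t + 2.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, n)

# Deterministic design matrix: quadratic trend + one annual Fourier pair.
X = np.column_stack([
    np.ones(n), t, t**2,
    np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12),
])
coef, *_ = np.linalg.lstsq(X, series, rcond=None)
deterministic = X @ coef
residual = series - deterministic         # this residual would feed the SARIMA stage

print(residual.std())                     # far smaller than the raw series spread
```

    In the full DSC framework the residual would then be modeled stochastically (e.g., with a seasonal ARIMA fit from a time-series library) rather than discarded.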

  18. QUANTIFIERS UNDONE: REVERSING PREDICTABLE SPEECH ERRORS IN COMPREHENSION

    PubMed Central

    Frazier, Lyn; Clifton, Charles

    2015-01-01

    Speakers predictably make errors during spontaneous speech. Listeners may identify such errors and repair the input, or their analysis of the input, accordingly. Two written questionnaire studies investigated error compensation mechanisms in sentences with doubled quantifiers such as Many students often turn in their assignments late. Results show a considerable number of undoubled interpretations for all items tested (though fewer for sentences containing doubled negation than for sentences containing many-often, every-always or few-seldom.) This evidence shows that the compositional form-meaning pairing supplied by the grammar is not the only systematic mapping between form and meaning. Implicit knowledge of the workings of the performance systems provides an additional mechanism for pairing sentence form and meaning. Alternate accounts of the data based on either a concord interpretation or an emphatic interpretation of the doubled quantifier don’t explain why listeners fail to apprehend the ‘extra meaning’ added by the potentially redundant material only in limited circumstances. PMID:26478637

  19. Deep and beautiful. The reward prediction error hypothesis of dopamine.

    PubMed

    Colombo, Matteo

    2014-03-01

    According to the reward-prediction error hypothesis (RPEH) of dopamine, the phasic activity of dopaminergic neurons in the midbrain signals a discrepancy between the predicted and currently experienced reward of a particular event. It can be claimed that this hypothesis is deep, elegant and beautiful, representing one of the largest successes of computational neuroscience. This paper examines this claim, making two contributions to existing literature. First, it draws a comprehensive historical account of the main steps that led to the formulation and subsequent success of the RPEH. Second, in light of this historical account, it explains in which sense the RPEH is explanatory and under which conditions it can be justifiably deemed deeper than the incentive salience hypothesis of dopamine, which is arguably the most prominent contemporary alternative to the RPEH. PMID:24252364
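
    The discrepancy the RPEH refers to is standardly formalized as a temporal-difference error, delta = r + gamma*V(s') - V(s). A minimal sketch (learning rate, discount, and the single cue-reward episode are illustrative, not taken from the paper):

```python
# Temporal-difference sketch of the reward prediction error (RPE):
# as a cue's value is learned, the error at reward delivery shrinks,
# mirroring the decline of phasic dopamine responses to predicted rewards.

def td_error(v_state, v_next, reward, gamma=0.9):
    return reward + gamma * v_next - v_state

value_cue = 0.0            # learned value of a reward-predicting cue
errors = []
for _ in range(100):       # repeated cue -> reward pairings
    delta = td_error(value_cue, 0.0, reward=1.0)  # reward, then terminal state
    value_cue += 0.1 * delta                      # simple delta-rule update
    errors.append(delta)

print(errors[0], round(errors[-1], 4))  # first PE is maximal; late PEs near zero
```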

  20. Dopamine restores reward prediction errors in old age

    PubMed Central

    Chowdhury, Rumana; Guitart-Masip, Marc; Lambert, Christian; Dayan, Peter; Huys, Quentin; Düzel, Emrah; Dolan, Raymond J

    2013-01-01

    Senescence affects the ability to utilize information about the likelihood of rewards for optimal decision-making. In a human functional magnetic resonance imaging (fMRI) study, we show that healthy older adults have an abnormal signature of expected value resulting in an incomplete reward prediction error signal in the nucleus accumbens, a brain region receiving rich input projections from substantia nigra/ventral tegmental area (SN/VTA) dopaminergic neurons. Structural connectivity between SN/VTA and striatum measured with diffusion tensor imaging (DTI) was tightly coupled to inter-individual differences in the expression of this expected reward value signal. The dopamine precursor levodopa (L-DOPA) increased the task-based learning rate and task performance in some older adults to a level shown by young adults. Critically this drug-effect was linked to restoration of a canonical neural reward prediction error. Thus we identify a neurochemical signature underlying abnormal reward processing in older adults and show this can be modulated by L-DOPA. PMID:23525044

  1. Chasing probabilities - Signaling negative and positive prediction errors across domains.

    PubMed

    Meder, David; Madsen, Kristoffer H; Hulme, Oliver; Siebner, Hartwig R

    2016-07-01

    Adaptive actions build on internal probabilistic models of possible outcomes that are tuned according to the errors of their predictions when experiencing an actual outcome. Prediction errors (PEs) inform choice behavior across a diversity of outcome domains and dimensions, yet neuroimaging studies have so far only investigated such signals in singular experimental contexts. It is thus unclear whether the neuroanatomical distribution of PE encoding reported previously pertains to computational features that are invariant with respect to outcome valence, sensory domain, or some combination of the two. We acquired functional MRI data while volunteers performed four probabilistic reversal learning tasks which differed in terms of outcome valence (reward-seeking versus punishment-avoidance) and domain (abstract symbols versus facial expressions) of outcomes. We found that ventral striatum and frontopolar cortex coded increasingly positive PEs, whereas dorsal anterior cingulate cortex (dACC) traced increasingly negative PEs, irrespective of the outcome dimension. Individual reversal behavior was unaffected by context manipulations and was predicted by activity in dACC and right inferior frontal gyrus (IFG). The stronger the response to negative PEs in these areas, the lower was the tendency to reverse choice behavior in response to negative events, suggesting that these regions enforce a rule-based strategy across outcome dimensions. Outcome valence influenced PE-related activity in left amygdala, IFG, and dorsomedial prefrontal cortex, where activity selectively scaled with increasingly positive PEs in the reward-seeking but not punishment-avoidance context, irrespective of sensory domain. Left amygdala displayed an additional influence of sensory domain. In the context of avoiding punishment, amygdala activity increased with increasingly negative PEs, but only for facial stimuli, indicating an integration of outcome valence and sensory domain during probabilistic

  2. Error estimation for CFD aeroheating prediction under rarefied flow condition

    NASA Astrophysics Data System (ADS)

    Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

    2014-12-01

    Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method for a rarefied flow in transitional regime, theoretical derivation is conducted and a dimensionless parameter ε is proposed by approximately analyzing the ratio of the second order term to first order term in the heat flux expression in Burnett equation. DSMC simulation for hypersonic flow over a cylinder in transitional regime is performed to test the performance of parameter ε, compared with two other parameters, Knρ and Ma·Knρ.

  3. Representation of aversive prediction errors in the human periaqueductal gray

    PubMed Central

    Roy, Mathieu; Shohamy, Daphna; Daw, Nathaniel; Jepma, Marieke; Wimmer, Elliott; Wager, Tor D.

    2014-01-01

    Pain is a primary driver of learning and motivated action. It is also a target of learning, as nociceptive brain responses are shaped by learning processes. We combined an instrumental pain avoidance task with an axiomatic approach to assessing fMRI signals related to prediction errors (PEs), which drive reinforcement-based learning. We found that pain PEs were encoded in the periaqueductal gray (PAG), an important structure for pain control and learning in animal models. Axiomatic tests combined with dynamic causal modeling suggested that ventromedial prefrontal cortex, supported by putamen, provides an expected value-related input to the PAG, which then conveys PE signals to prefrontal regions important for behavioral regulation, including orbitofrontal, anterior mid-cingulate, and dorsomedial prefrontal cortices. Thus, pain-related learning involves distinct neural circuitry, with implications for behavior and pain dynamics. PMID:25282614

  4. Representation of aversive prediction errors in the human periaqueductal gray.

    PubMed

    Roy, Mathieu; Shohamy, Daphna; Daw, Nathaniel; Jepma, Marieke; Wimmer, G Elliott; Wager, Tor D

    2014-11-01

    Pain is a primary driver of learning and motivated action. It is also a target of learning, as nociceptive brain responses are shaped by learning processes. We combined an instrumental pain avoidance task with an axiomatic approach to assessing fMRI signals related to prediction errors (PEs), which drive reinforcement-based learning. We found that pain PEs were encoded in the periaqueductal gray (PAG), a structure important for pain control and learning in animal models. Axiomatic tests combined with dynamic causal modeling suggested that ventromedial prefrontal cortex, supported by putamen, provides an expected value-related input to the PAG, which then conveys PE signals to prefrontal regions important for behavioral regulation, including orbitofrontal, anterior mid-cingulate and dorsomedial prefrontal cortices. Thus, pain-related learning involves distinct neural circuitry, with implications for behavior and pain dynamics. PMID:25282614

  5. Temporal prediction errors modulate task-switching performance.

    PubMed

    Limongi, Roberto; Silva, Angélica M; Góngora-Costa, Begoña

    2015-01-01

    We have previously shown that temporal prediction errors (PEs, the differences between the expected and the actual stimulus' onset times) modulate the effective connectivity between the anterior cingulate cortex and the right anterior insular cortex (rAI), causing the activity of the rAI to decrease. The activity of the rAI is associated with efficient performance under uncertainty (e.g., changing a prepared behavior when a change demand is not expected), which leads to the hypothesis that temporal PEs might disrupt behavior-change performance under uncertainty. This hypothesis has not been tested at a behavioral level. In this work, we evaluated this hypothesis within the context of task switching and concurrent temporal predictions. Our participants performed temporal predictions while observing one moving ball striking a stationary ball which bounced off with a variable temporal gap. Simultaneously, they performed a simple color comparison task. In some trials, a change signal made the participants change their behaviors. Performance accuracy decreased as a function of both the temporal PE and the delay. Explaining these results without appealing to ad hoc concepts such as "executive control" is a challenge for cognitive neuroscience. We provide a predictive coding explanation. We hypothesize that exteroceptive and proprioceptive minimization of PEs would converge in a fronto-basal ganglia network which would include the rAI. Both temporal gaps (or uncertainty) and temporal PEs would drive and modulate this network respectively. Whereas the temporal gaps would drive the activity of the rAI, the temporal PEs would modulate the endogenous excitatory connections of the fronto-striatal network. We conclude that in the context of perceptual uncertainty, the system is not able to minimize perceptual PE, causing the ongoing behavior to finalize and, in consequence, disrupting task switching. PMID:26379568

  6. Temporal prediction errors modulate task-switching performance

    PubMed Central

    Limongi, Roberto; Silva, Angélica M.; Góngora-Costa, Begoña

    2015-01-01

    We have previously shown that temporal prediction errors (PEs, the differences between the expected and the actual stimulus’ onset times) modulate the effective connectivity between the anterior cingulate cortex and the right anterior insular cortex (rAI), causing the activity of the rAI to decrease. The activity of the rAI is associated with efficient performance under uncertainty (e.g., changing a prepared behavior when a change demand is not expected), which leads to the hypothesis that temporal PEs might disrupt behavior-change performance under uncertainty. This hypothesis has not been tested at a behavioral level. In this work, we evaluated this hypothesis within the context of task switching and concurrent temporal predictions. Our participants performed temporal predictions while observing one moving ball striking a stationary ball which bounced off with a variable temporal gap. Simultaneously, they performed a simple color comparison task. In some trials, a change signal made the participants change their behaviors. Performance accuracy decreased as a function of both the temporal PE and the delay. Explaining these results without appealing to ad hoc concepts such as “executive control” is a challenge for cognitive neuroscience. We provide a predictive coding explanation. We hypothesize that exteroceptive and proprioceptive minimization of PEs would converge in a fronto-basal ganglia network which would include the rAI. Both temporal gaps (or uncertainty) and temporal PEs would drive and modulate this network respectively. Whereas the temporal gaps would drive the activity of the rAI, the temporal PEs would modulate the endogenous excitatory connections of the fronto-striatal network. We conclude that in the context of perceptual uncertainty, the system is not able to minimize perceptual PE, causing the ongoing behavior to finalize and, in consequence, disrupting task switching. PMID:26379568

  7. External Validation of the Garvan Nomograms for Predicting Absolute Fracture Risk: The Tromsø Study

    PubMed Central

    Ahmed, Luai A.; Nguyen, Nguyen D.; Bjørnerem, Åshild; Joakimsen, Ragnar M.; Jørgensen, Lone; Størmer, Jan; Bliuc, Dana; Center, Jacqueline R.; Eisman, John A.; Nguyen, Tuan V.; Emaus, Nina

    2014-01-01

    Background Absolute risk estimation is a preferred approach for assessing fracture risk and treatment decision making. This study aimed to evaluate and validate the predictive performance of the Garvan Fracture Risk Calculator in a Norwegian cohort. Methods The analysis included 1637 women and 1355 men aged 60+ years from the Tromsø study. All incident fragility fractures between 2001 and 2009 were registered. The predicted probabilities of non-vertebral osteoporotic and hip fractures were determined using models with and without BMD. The discrimination and calibration of the models were assessed. Reclassification analysis was used to compare the models' performance. Results The incidence of osteoporotic and hip fracture was 31.5 and 8.6 per 1000 population in women, respectively; in men the corresponding incidence was 12.2 and 5.1. The predicted 5-year and 10-year probability of fractures was consistently higher in the fracture group than the non-fracture group for all models. The 10-year predicted probabilities of hip fracture in those with fracture was 2.8 (women) to 3.1 times (men) higher than those without fracture. There was a close agreement between predicted and observed risk in both sexes and up to the fifth quintile. Among those in the highest quintile of risk, the models over-estimated the risk of fracture. Models with BMD performed better than models with body weight in correct classification of risk in individuals with and without fracture. The overall net decrease in reclassification of the model with weight compared to the model with BMD was 10.6% (p = 0.008) in women and 17.2% (p = 0.001) in men for osteoporotic fractures, and 13.3% (p = 0.07) in women and 17.5% (p = 0.09) in men for hip fracture. Conclusions The Garvan Fracture Risk Calculator is valid and clinically useful in identifying individuals at high risk of fracture. The models with BMD performed better than those with body weight in fracture risk prediction. PMID:25255221

  8. Perceptual learning of degraded speech by minimizing prediction error.

    PubMed

    Sohoglu, Ediz; Davis, Matthew H

    2016-03-22

    Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech. PMID:26957596
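
    The single mechanism invoked at the end of this abstract, minimization of prediction error, can be illustrated with a toy precision-weighted settling scheme. This is not the authors' simulation; the function, variable names, and constants are invented for illustration:

```python
# Toy predictive-coding sketch: one latent estimate x is nudged by
# precision-weighted errors from a prior (e.g., knowledge from matching
# text) and from the degraded sensory input, until both errors are minimized.

def settle(prior, obs, pi_prior=1.0, pi_obs=1.0, lr=0.1, steps=200):
    x = prior
    for _ in range(steps):
        e_prior = prior - x          # top-down prediction error
        e_obs = obs - x              # bottom-up (sensory) prediction error
        x += lr * (pi_prior * e_prior + pi_obs * e_obs)
    return x

# With a reliable prior, the percept settles nearer the prediction,
# leaving a smaller residual sensory error than without the prior.
x_with = settle(prior=1.0, obs=0.4, pi_prior=2.0)
x_without = settle(prior=0.0, obs=0.4, pi_prior=0.0)
print(round(x_with, 3), round(x_without, 3))  # -> 0.8 0.4
```

    The fixed point is the precision-weighted average of prior and input, which is why stronger prior knowledge pulls the percept toward the expected content.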

  9. Mini-review: Prediction errors, attention and associative learning.

    PubMed

    Holland, Peter C; Schiffino, Felipe L

    2016-05-01

    Most modern theories of associative learning emphasize a critical role for prediction error (PE, the difference between received and expected events). One class of theories, exemplified by the Rescorla-Wagner (1972) model, asserts that PE determines the effectiveness of the reinforcer or unconditioned stimulus (US): surprising reinforcers are more effective than expected ones. A second class, represented by the Pearce-Hall (1980) model, argues that PE determines the associability of conditioned stimuli (CSs), the rate at which they may enter into new learning: the surprising delivery or omission of a reinforcer enhances subsequent processing of the CSs that were present when PE was induced. In this mini-review we describe evidence, mostly from our laboratory, for PE-induced changes in the associability of both CSs and USs, and the brain systems involved in the coding, storage and retrieval of these altered associability values. This evidence favors a number of modifications to behavioral models of how PE influences event processing, and suggests the involvement of widespread brain systems in animals' responses to PE. PMID:26948122
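The contrast between the two classes of theory can be made concrete with a minimal sketch (the parameter values and the hybrid running-average form of the Pearce-Hall associability update are illustrative assumptions, not commitments of either original model):

```python
def rescorla_wagner_update(V, lam, alpha=0.3, beta=1.0):
    """Rescorla-Wagner (1972): a fixed learning rate; the prediction error
    (surprise) gates how effective the US is on the current trial."""
    pe = lam - V                                     # received minus expected
    return V + alpha * beta * pe

def pearce_hall_update(V, alpha, lam, S=0.5, gamma=0.5):
    """Pearce-Hall (1980), hybrid form: |PE| sets the associability (alpha) of
    the CS for subsequent trials, so surprising outcomes speed later learning.
    (For simplicity the value update here also uses the signed PE.)"""
    pe = lam - V
    V_new = V + S * alpha * pe                       # learning scaled by associability
    alpha_new = gamma * abs(pe) + (1 - gamma) * alpha  # running average of surprise
    return V_new, alpha_new
```

As the reinforcer becomes expected, PE shrinks: in the Rescorla-Wagner scheme this directly slows learning (reduced US effectiveness), while in the Pearce-Hall scheme it lowers alpha, so the now-predictable CS commands less processing on later trials.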

  10. Seasonal prediction of Indian summer monsoon rainfall in NCEP CFSv2: forecast and predictability error

    NASA Astrophysics Data System (ADS)

    Pokhrel, Samir; Saha, Subodh Kumar; Dhakate, Ashish; Rahman, Hasibur; Chaudhari, Hemantkumar S.; Salunke, Kiran; Hazra, Anupam; Sujith, K.; Sikka, D. R.

    2016-04-01

A detailed analysis of the sensitivity of Indian summer monsoon simulations to the initial condition is carried out using retrospective forecasts from the latest version of the Climate Forecast System, version 2 (CFSv2). This study focuses primarily on the tropical Indian and Pacific Ocean basins, with special emphasis on the Indian land region. The simulated seasonal means and interannual standard deviations of rainfall, upper- and lower-level atmospheric circulation and sea surface temperature (SST) become more skillful as the forecast lead time decreases (from a 5-month to a 0-month lead, i.e., L5-L0). In general, spatial correlation increases and bias decreases as the forecast lead time decreases. This is further substantiated by values averaged over the selected study regions in the Indian and Pacific Ocean basins. The tendency of model bias to grow (shrink) with increasing (decreasing) forecast lead time also indicates the dynamical drift of the model. The large-scale lower-level circulation (850 hPa) shows enhancement of anomalous westerlies (easterlies) over the tropical Indian Ocean (western Pacific Ocean), which indicates the enhancement of model error with the decrease in lead time. In the upper-level circulation (200 hPa), biases in both the tropical easterly jet and the subtropical westerly jet tend to decrease as the lead time decreases. Despite the enhanced prediction skill, the mean SST bias appears insensitive to the initialization. All these biases are significant, and together they make CFSv2 vulnerable to seasonal uncertainties at all lead times. Overall, the zero-month lead (L0) has the best skill; however, for Indian summer monsoon rainfall (ISMR), the 3-month lead (L3) has the maximum ISMR prediction skill. This holds across different independent datasets, wherein the maximum skill scores are 0.64, 0.42 and 0.57 with respect to the Global Precipitation Climatology Project

  11. A model for predicting individuals' absolute risk of esophageal adenocarcinoma: Moving toward tailored screening and prevention.

    PubMed

    Xie, Shao-Hua; Lagergren, Jesper

    2016-06-15

Esophageal adenocarcinoma (EAC) is characterized by rapidly increasing incidence and poor prognosis, stressing the need for preventive and early detection strategies. We used data from a nationwide population-based case-control study, which included 189 incident cases of EAC and 820 age- and sex-matched control participants, from 1995 through 1997 in Sweden. We developed risk prediction models based on unconditional logistic regression. Candidate predictors included established and readily identifiable risk factors for EAC. The performance of the model was assessed by the area under the receiver operating characteristic curve (AUC) with cross-validation. The final model could explain 94% of all case patients with EAC (94% population attributable risk) and included terms for gastro-esophageal reflux symptoms or use of antireflux medication, body mass index (BMI), tobacco smoking, duration of living with a partner, previous diagnoses of esophagitis and diaphragmatic hernia and previous surgery for esophagitis, diaphragmatic hernia or severe reflux or gastric or duodenal ulcer. The AUC was 0.84 (95% confidence interval [CI] 0.81-0.87) and slightly lower after cross-validation. A simpler model, based only on reflux symptoms or use of antireflux medication, BMI and tobacco smoking could explain 91% of the case patients with EAC and had an AUC of 0.82 (95% CI 0.78-0.85). These EAC prediction models showed good discriminative accuracy, but need to be validated in other populations. These models have the potential for future use in identifying individuals with high absolute risk of EAC in the population, who may be considered for endoscopic screening and targeted prevention. PMID:26756848

  12. Predicting errors from patterns of event-related potentials preceding an overt response.

    PubMed

    Bode, Stefan; Stahl, Jutta

    2014-12-01

Everyday actions often require fast and efficient error detection and error correction. For this, the brain has to accumulate evidence for errors as soon as it becomes available. This study used multivariate pattern classification techniques for event-related potentials to track the accumulation of error-related brain activity before an overt response was made. Upcoming errors in a digit-flanker task could be predicted after the initiation of an erroneous motor response, ~90 ms before response execution. Channels over motor and parieto-occipital cortices were most important for error prediction, suggesting ongoing perceptual analyses and comparisons of initiated and appropriate motor programmes. Lower response force on error trials as compared to correct trials was observed, which indicates that this early error information was used for attempts to correct for errors before the overt response was made. In summary, our results suggest an early, automatic accumulation of error-related information, providing input for fast correction processes. PMID:25450163

  13. A Predictive Approach to Eliminating Errors in Software Code

    NASA Technical Reports Server (NTRS)

    2006-01-01

NASA's Metrics Data Program Data Repository is a database that stores problem, product, and metrics data. The primary goal of this data repository is to provide project data to the software community. In doing so, the Metrics Data Program collects artifacts from a large NASA dataset, generates metrics on the artifacts, and then generates reports that are made available to the public at no cost. The data that are made available to general users have been sanitized and authorized for publication through the Metrics Data Program Web site by officials representing the projects from which the data originated. The data repository is operated by NASA's Independent Verification and Validation (IV&V) Facility, which is located in Fairmont, West Virginia, a high-tech hub for emerging innovation in the Mountain State. The IV&V Facility was founded in 1993, under the NASA Office of Safety and Mission Assurance, as a direct result of recommendations made by the National Research Council and the Report of the Presidential Commission on the Space Shuttle Challenger Accident. Today, under the direction of Goddard Space Flight Center, the IV&V Facility continues its mission to provide the highest achievable levels of safety and cost-effectiveness for mission-critical software. By extending its data to public users, the facility has helped improve the safety, reliability, and quality of complex software systems throughout private industry and other government agencies. Integrated Software Metrics, Inc., is one of the organizations that has benefited from studying the metrics data. As a result, the company has evolved into a leading developer of innovative software-error prediction tools that help organizations deliver better software, on time and on budget.

  14. Seismic trace interpolation with nonstationary prediction-error filters

    NASA Astrophysics Data System (ADS)

    Crawley, Sean Edan

Theory predicts that time and space domain prediction-error filters (PEFs) may be used to interpolate aliased signals. I explore the utility of the theory, applying PEF-based interpolation to aliased seismic field data, to dealias it without lowpass filtering by inserting new traces between those originally recorded. But before theoretical potential is realized on 3-D field data, some practical aspects must be addressed. Most importantly, while PEF theory assumes stationarity, seismic data are not stationary. We can divide the data into assumed-stationary patches, as is often done in other interpolation algorithms. We interpolate with PEFs in patches, and get near-perfect results in those parts of the data where events are mostly local plane waves, lying along straight lines. However, we find that the results are unimpressive where the data are noticeably curved. As an alternative to assumed-stationary patches, I calculate PEFs everywhere in the data, and force filters which are calculated at adjacent coordinates in data space to be similar to each other. The result is a set of smoothly-varying PEFs, which we call adaptive or nonstationary. The coefficients of the adaptive PEFs constitute a large model space. Using SEP's helical coordinate, we precondition the filter calculation problem so that it converges in manageable time. To address the difficult problem of curved events not fitting the plane wave model, we can control the degree of smoothness in the filters as a function of direction in data coordinates. To get statistically robust filter estimates, we want to maximize the area in data space over which we estimate a filter, while still approximately honoring stationarity. The local dip spectrum on a CMP gather is nearly constant in a region which is elongated in the radial direction, so I estimate PEFs that are smooth along radial lines but which may vary quickly with radial angle. In principle that addresses the curvature issue, and I find it performs well.

  15. Tropical Cyclone Intensity Forecast Error Predictions and Their Applications

    NASA Astrophysics Data System (ADS)

    Bhatia, Kieran T.

    This dissertation aims to improve tropical cyclone (TC) intensity forecasts by exploring the connection between intensity forecast error and parameters representing initial condition uncertainty, atmospheric flow stability, TC strength, and the large-scale environment surrounding a TC. After assessing which of these parameters have robust relationships with error, a set of predictors are selected to develop a priori estimates of intensity forecast accuracy for Atlantic basin TCs. The applications of these forecasts are then discussed, including a multimodel ensemble that unequally weights different intensity models according to the situation. The ultimate goal is to produce skillful forecasts of TC intensity error and use their output to enhance intensity forecasts.

  16. Classifying and Predicting Errors of Inpatient Medication Reconciliation

    PubMed Central

    Pippins, Jennifer R.; Gandhi, Tejal K.; Hamann, Claus; Ndumele, Chima D.; Labonville, Stephanie A.; Diedrichsen, Ellen K.; Carty, Marcy G.; Karson, Andrew S.; Bhan, Ishir; Coley, Christopher M.; Liang, Catherine L.; Turchin, Alexander; McCarthy, Patricia C.

    2008-01-01

    Background Failure to reconcile medications across transitions in care is an important source of potential harm to patients. Little is known about the predictors of unintentional medication discrepancies and how, when, and where they occur. Objective To determine the reasons, timing, and predictors of potentially harmful medication discrepancies. Design Prospective observational study. Patients Admitted general medical patients. Measurements Study pharmacists took gold-standard medication histories and compared them with medical teams’ medication histories, admission and discharge orders. Blinded teams of physicians adjudicated all unexplained discrepancies using a modification of an existing typology. The main outcome was the number of potentially harmful unintentional medication discrepancies per patient (potential adverse drug events or PADEs). Results Among 180 patients, 2066 medication discrepancies were identified, and 257 (12%) were unintentional and had potential for harm (1.4 per patient). Of these, 186 (72%) were due to errors taking the preadmission medication history, while 68 (26%) were due to errors reconciling the medication history with discharge orders. Most PADEs occurred at discharge (75%). In multivariable analyses, low patient understanding of preadmission medications, number of medication changes from preadmission to discharge, and medication history taken by an intern were associated with PADEs. Conclusions Unintentional medication discrepancies are common and more often due to errors taking an accurate medication history than errors reconciling this history with patient orders. Focusing on accurate medication histories, on potential medication errors at discharge, and on identifying high-risk patients for more intensive interventions may improve medication safety during and after hospitalization. PMID:18563493

  17. Some Results on Mean Square Error for Factor Score Prediction

    ERIC Educational Resources Information Center

    Krijnen, Wim P.

    2006-01-01

For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_ρ = Θ^(1/2) Λ_ρ′ Ψ_ρ^(…

  18. Synergies in Astrometry: Predicting Navigational Error of Visual Binary Stars

    NASA Astrophysics Data System (ADS)

    Gessner Stewart, Susan

    2015-08-01

Celestial navigation can employ a number of bright stars that are in binary systems. Often these are unresolved, appearing as a single, center-of-light object. A number of them, however, are wide systems, which could introduce a margin of error into the navigation solution if not handled properly. To illustrate the importance of good orbital solutions for binary systems, as well as good astrometry in general, the relationship between the center-of-light versus the individual catalog position of celestial bodies and the error in terrestrial position derived via celestial navigation is demonstrated. From the list of navigational binary stars, fourteen binary systems with at least 3.0 arcseconds apparent separation are explored. Maximum navigational error is estimated under the assumption that the bright star in the pair is observed at maximum separation, but the center-of-light is employed in the navigational solution. The relationships between navigational error and separation, orbital period, and observer's latitude are discussed.
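The scale of the effect follows from the classic rule of thumb that one arcminute of altitude error corresponds to about one nautical mile of position error. A back-of-envelope sketch (the equal-brightness, center-of-light-at-midpoint assumption is mine, not the paper's):

```python
NM_PER_ARCMIN = 1.0   # rule of thumb: 1 arcminute of altitude error ~ 1 nautical mile

def center_of_light_error_nm(separation_arcsec, col_fraction=0.5):
    """Worst-case fix error (nautical miles) from sighting one component of a
    binary but reducing the sight with the center-of-light position, at maximum
    apparent separation. col_fraction is the offset of the center of light from
    the bright star as a fraction of the separation (0.5 assumes an
    equal-brightness pair)."""
    return (separation_arcsec * col_fraction / 60.0) * NM_PER_ARCMIN

err_nm = center_of_light_error_nm(3.0)   # the paper's 3.0-arcsecond threshold
err_m = err_nm * 1852.0                  # nautical miles -> metres
```

For a 3.0-arcsecond pair this gives 0.025 nmi, roughly 46 m: small for ocean navigation, but a systematic offset that a good orbital solution removes entirely.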

  19. A case study of the error growth and predictability of a Meiyu frontal heavy precipitation event

    NASA Astrophysics Data System (ADS)

    Luo, Yu; Zhang, Lifeng

    2011-08-01

    The Advanced Regional Eta-coordinate Model (AREM) is used to explore the predictability of a heavy rainfall event along the Meiyu front in China during 3-4 July 2003. Based on the sensitivity of precipitation prediction to initial data sources and initial uncertainties in different variables, the evolution of error growth and the associated mechanism are described and discussed in detail in this paper. The results indicate that the smaller-amplitude initial error presents a faster growth rate and its growth is characterized by a transition from localized growth to widespread expansion error. Such modality of the error growth is closely related to the evolvement of the precipitation episode, and consequently remarkable forecast divergence is found near the rainband, indicating that the rainfall area is a sensitive region for error growth. The initial error in the rainband contributes significantly to the forecast divergence, and its amplification and propagation are largely determined by the initial moisture distribution. The moisture condition also affects the error growth on smaller scales and the subsequent upscale error cascade. In addition, the error growth defined by an energy norm reveals that large error energy collocates well with the strong latent heating, implying that the occurrence of precipitation and error growth share the same energy source—the latent heat. This may impose an intrinsic predictability limit on the prediction of heavy precipitation.

  20. Prediction Error Associated with the Perceptual Segmentation of Naturalistic Events

    ERIC Educational Resources Information Center

    Zacks, Jeffrey M.; Kurby, Christopher A.; Eisenberg, Michelle L.; Haroutunian, Nayiri

    2011-01-01

    Predicting the near future is important for survival and plays a central role in theories of perception, language processing, and learning. Prediction failures may be particularly important for initiating the updating of perceptual and memory systems and, thus, for the subjective experience of events. Here, we asked observers to make predictions…

  1. Error propagation for velocity and shear stress prediction using 2D models for environmental management

    NASA Astrophysics Data System (ADS)

    Pasternack, Gregory B.; Gilbert, Andrew T.; Wheaton, Joseph M.; Buckland, Evan M.

    2006-08-01

Resource managers, scientists, government regulators, and stakeholders are considering sophisticated numerical models for managing complex environmental problems. In this study, observations from a river-rehabilitation experiment involving gravel augmentation and spawning habitat enhancement were used to assess sources and magnitudes of error in depth, velocity, and shear velocity predictions made at the 1-m scale with a commercial two-dimensional (depth-averaged) model. Error in 2D model depth prediction averaged 21%. This error was attributable to topographic survey resolution, which at 1 point per 1.14 m², was inadequate to resolve small humps and depressions influencing point measurements. Error in 2D model velocity prediction averaged 29%. More than half of this error was attributable to depth prediction error. Despite depth and velocity error, 56% of tested 2D model predictions of shear velocity were within the 95% confidence limit of the best field-based estimation method. Ninety percent of the error in shear velocity prediction was explained by velocity prediction error. Multiple field-based estimates of shear velocity differed by up to 160%, so the lower error for the 2D model's predictions suggests such models are at least as accurate as field measurement. 2D models enable detailed, spatially distributed estimates compared to the small number measurable in a field campaign of comparable cost. They also can be used for design evaluation. Although such numerical models are limited to channel types adhering to model assumptions and yield predictions only accurate to ~20-30%, they can provide a useful tool for river-rehabilitation design and assessment, including spatially diverse habitat heterogeneity as well as for pre- and post-project appraisal.

  2. Error criteria for cross validation in the context of chaotic time series prediction

    NASA Astrophysics Data System (ADS)

    Lim, Teck Por; Puthusserypady, Sadasivan

    2006-03-01

    The prediction of a chaotic time series over a long horizon is commonly done by iterating one-step-ahead prediction. Prediction can be implemented using machine learning methods, such as radial basis function networks. Typically, cross validation is used to select prediction models based on mean squared error. The bias-variance dilemma dictates that there is an inevitable tradeoff between bias and variance. However, invariants of chaotic systems are unchanged by linear transformations; thus, the bias component may be irrelevant to model selection in the context of chaotic time series prediction. Hence, the use of error variance for model selection, instead of mean squared error, is examined. Clipping is introduced, as a simple way to stabilize iterated predictions. It is shown that using the error variance for model selection, in combination with clipping, may result in better models.
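Both ideas, error variance as an offset-invariant selection criterion and clipping to stabilize iterated predictions, can be sketched on the logistic map (the map, the constant-bias model, and all constants are my illustrative assumptions, not the paper's experiments):

```python
import numpy as np

def logistic(x, r=4.0):
    """The chaotic logistic map, a stand-in for the true dynamics."""
    return r * x * (1.0 - x)

def biased_model(x):
    """A predictor that is exact up to a constant offset: it preserves the
    dynamics (and hence the invariants) despite a nonzero bias."""
    return logistic(x) + 0.05

# One-step prediction errors on a short trajectory:
xs = [0.3]
for _ in range(200):
    xs.append(logistic(xs[-1]))
xs = np.array(xs)
errors = biased_model(xs[:-1]) - xs[1:]

mse = np.mean(errors ** 2)   # includes the bias^2 term -> penalizes the offset
err_var = np.var(errors)     # invariant to the constant offset

# Clipping: keep iterated predictions inside the observed data range so the
# trajectory cannot escape the attractor's support and diverge.
lo, hi = xs.min(), xs.max()
x, iterated = float(xs[0]), []
for _ in range(50):
    x = float(np.clip(biased_model(x), lo, hi))
    iterated.append(x)
```

The biased-but-faithful model has an MSE inflated by the bias² term while its error variance is essentially zero, which is the sense in which variance may be the better selection criterion here; without the clip, the constant offset eventually pushes the iterated trajectory off [0, 1], where the map diverges.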

  3. Role of parameter errors in the spring predictability barrier for ENSO events in the Zebiak-Cane model

    NASA Astrophysics Data System (ADS)

    Yu, Liang; Mu, Mu; Yu, Yanshan

    2014-05-01

The impact of both initial and parameter errors on the spring predictability barrier (SPB) is investigated using the Zebiak-Cane model (ZC model). Previous studies have shown that initial errors contribute more to the SPB than parameter errors in the ZC model. Although parameter errors themselves are less important, there is a possibility that nonlinear interactions can occur between the two types of errors, leading to larger prediction errors compared with those induced by initial errors alone. In this case, the impact of parameter errors cannot be overlooked. In the present paper, the optimal combination of these two types of errors [i.e., conditional nonlinear optimal perturbation (CNOP) errors] is calculated to investigate whether this optimal error combination may cause a more notable SPB phenomenon than that caused by initial errors alone. Using the CNOP approach, the CNOP errors and CNOP-I errors (optimal errors when only initial errors are considered) are calculated and then three aspects of error growth are compared: (1) the tendency of the seasonal error growth; (2) the prediction error of the sea surface temperature anomaly; and (3) the pattern of error growth. All three aspects show that the CNOP errors do not cause a more significant SPB than the CNOP-I errors. Therefore, this result suggests that we could improve the prediction of El Niño during spring simply by focusing on reducing the initial errors in this model.

  4. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

The objective of this paper was to design a human bio-signal prediction system that decreases prediction error using a two-state-mapping time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction. However, such models still leave a residual error between the real value and the prediction. We therefore designed a two-state neural network model that compensates for this residual error, which could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were satisfied by the two-state-mapping time series prediction model; in particular, for small-sample time series it was more accurate than the standard MLP model.
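The two-stage idea, a second model trained on the residual error of the first, can be sketched with least-squares fits standing in for the paper's back-propagation networks (the quadratic toy data and feature choices are my assumptions):

```python
import numpy as np

def design(x, degree):
    """Polynomial design matrix [1, x, ..., x^degree]."""
    return np.vstack([x ** d for d in range(degree + 1)]).T

def fit_ls(Phi, y):
    """Least-squares fit, a simple stand-in for a trained BP network."""
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 200)
y = 2.0 * x + x ** 2 + rng.normal(0.0, 0.01, x.size)   # toy "bio-signal"

# Stage 1: a deliberately limited (linear) predictor.
w1 = fit_ls(design(x, 1), y)
pred1 = design(x, 1) @ w1

# Stage 2: a second model trained on stage 1's residual error, with richer
# (quadratic) features so it can capture structure the first model missed.
w2 = fit_ls(design(x, 2), y - pred1)
pred = pred1 + design(x, 2) @ w2        # compensated prediction

rmse1 = float(np.sqrt(np.mean((y - pred1) ** 2)))
rmse2 = float(np.sqrt(np.mean((y - pred) ** 2)))
```

The compensated prediction's RMSE drops to roughly the noise floor, the same qualitative effect the paper reports for its two-state BP model.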

  5. Disrupted prediction-error signal in psychosis: evidence for an associative account of delusions

    PubMed Central

    Corlett, P. R.; Murray, G. K.; Honey, G. D.; Aitken, M. R. F.; Shanks, D. R.; Robbins, T.W.; Bullmore, E.T.; Dickinson, A.; Fletcher, P. C.

    2012-01-01

    Delusions are maladaptive beliefs about the world. Based upon experimental evidence that prediction error—a mismatch between expectancy and outcome—drives belief formation, this study examined the possibility that delusions form because of disrupted prediction-error processing. We used fMRI to determine prediction-error-related brain responses in 12 healthy subjects and 12 individuals (7 males) with delusional beliefs. Frontal cortex responses in the patient group were suggestive of disrupted prediction-error processing. Furthermore, across subjects, the extent of disruption was significantly related to an individual’s propensity to delusion formation. Our results support a neurobiological theory of delusion formation that implicates aberrant prediction-error signalling, disrupted attentional allocation and associative learning in the formation of delusional beliefs. PMID:17690132

  6. Comparison of Transmission Error Predictions with Noise Measurements for Several Spur and Helical Gears

    NASA Technical Reports Server (NTRS)

    Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.

    1994-01-01

    Measured sound power data from eight different spur, single and double helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data was taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Results of both test data and transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.

  7. The Effect of Retrospective Sampling on Estimates of Prediction Error for Multifactor Dimensionality Reduction

    PubMed Central

    Winham, Stacey J.; Motsinger-Reif, Alison A.

    2010-01-01

    SUMMARY The standard in genetic association studies of complex diseases is replication and validation of positive results, with an emphasis on assessing the predictive value of associations. In response to this need, a number of analytical approaches have been developed to identify predictive models that account for complex genetic etiologies. Multifactor Dimensionality Reduction (MDR) is a commonly used, highly successful method designed to evaluate potential gene-gene interactions. MDR relies on classification error in a cross-validation framework to rank and evaluate potentially predictive models. Previous work has demonstrated the high power of MDR, but has not considered the accuracy and variance of the MDR prediction error estimate. Currently, we evaluate the bias and variance of the MDR error estimate as both a retrospective and prospective estimator and show that MDR can both underestimate and overestimate error. We argue that a prospective error estimate is necessary if MDR models are used for prediction, and propose a bootstrap resampling estimate, integrating population prevalence, to accurately estimate prospective error. We demonstrate that this bootstrap estimate is preferable for prediction to the error estimate currently produced by MDR. While demonstrated with MDR, the proposed estimation is applicable to all data-mining methods that use similar estimates. PMID:20560921
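The core of the proposed correction, reweighting a retrospective (case-control) error estimate to the population prevalence via bootstrap resampling, can be sketched generically (this illustrates the idea, not the MDR-specific procedure; names and constants are mine):

```python
import numpy as np

def prospective_error(y_true, y_pred, prevalence, n_boot=2000, seed=0):
    """Bootstrap estimate of prospective error from retrospective data:
    resample case and control errors so the case fraction matches the
    population prevalence, then average the misclassification rate."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    miss = y_pred != y_true
    case_miss, ctrl_miss = miss[y_true == 1], miss[y_true == 0]
    n = miss.size
    n_case = int(round(prevalence * n))
    rates = []
    for _ in range(n_boot):
        cases = rng.choice(case_miss, size=n_case, replace=True)
        ctrls = rng.choice(ctrl_miss, size=n - n_case, replace=True)
        rates.append(np.concatenate([cases, ctrls]).mean())
    return float(np.mean(rates))

# A 50/50 case-control sample where the classifier misses 20% of cases:
y_true = np.array([1] * 100 + [0] * 100)
y_pred = y_true.copy()
y_pred[:20] = 0                                   # 20 missed cases; controls all correct
retrospective = float((y_pred != y_true).mean())  # naive estimate from the sample
prospective = prospective_error(y_true, y_pred, prevalence=0.1)
```

With a 10% population prevalence, the retrospective estimate (0.10) overstates the error a deployed predictor would see (about 0.02), which is why a prospective estimate matters when models are used for prediction rather than association testing.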

  8. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors on an airborne separation assistance system under increasing traffic density. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind-prediction errors of up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in the separation distance buffer.

  9. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    SciTech Connect

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

Understanding the soft error vulnerability of supercomputer applications is critical as these systems are using ever larger numbers of devices that have decreasing feature sizes and, thus, increasing frequency of soft errors. As many large scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.

  10. Adaptive Prediction Error Coding in the Human Midbrain and Striatum Facilitates Behavioral Adaptation and Learning Efficiency.

    PubMed

    Diederen, Kelly M J; Spencer, Tom; Vestergaard, Martin D; Fletcher, Paul C; Schultz, Wolfram

    2016-06-01

    Effective error-driven learning benefits from scaling of prediction errors to reward variability. Such behavioral adaptation may be facilitated by neurons coding prediction errors relative to the standard deviation (SD) of reward distributions. To investigate this hypothesis, we required participants to predict the magnitude of upcoming reward drawn from distributions with different SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. In line with the notion of adaptive coding, BOLD response slopes in the Substantia Nigra/Ventral Tegmental Area (SN/VTA) and ventral striatum were steeper for prediction errors occurring in distributions with smaller SDs. SN/VTA adaptation was not instantaneous but developed across trials. Adaptive prediction error coding was paralleled by behavioral adaptation, as reflected by SD-dependent changes in learning rate. Crucially, increased SN/VTA and ventral striatal adaptation was related to improved task performance. These results suggest that adaptive coding facilitates behavioral adaptation and supports efficient learning. PMID:27181060

  11. Intermediate time error growth and predictability: tropics versus mid-latitudes

    NASA Astrophysics Data System (ADS)

    Straus, David M.; Paolino, Dan

    2009-10-01

    The evolution of identical twin errors from an atmospheric general circulation model is studied in the linear range (small errors) through intermediate times and the approach to saturation. Between forecast day 1 and 7, the normalized error variance in the tropics is similar to that at higher latitudes. After that, tropical errors grow more slowly. The predictability time τ taken for tropical errors to reach half their saturation values is larger than that for mid-latitudes, especially for the planetary waves, thus implying greater potential predictability in the tropics. The discrepancy between mid-latitude and tropical τ is more pronounced at 850 hPa than at 200 hPa, is largest for the planetary waves, and is more pronounced for errors arising from wave phase differences (than from wave amplitude differences). The spectra of the error in 200 hPa zonal wind show that for forecast times up to about 5 d, the tropical error peaks at much shorter scales than the mid-latitude errors, but that subsequently tropical and mid-latitude error spectra look increasingly similar. The difference between upper and lower level tropical τ may be due to the greater influence of mid-latitudes at the upper levels.
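The predictability time τ used in this abstract can be illustrated with an idealized logistic error-growth curve (hypothetical growth rates, not the paper's model): τ is the time at which error variance reaches half its saturation value, so slower tropical error growth yields a larger τ.

```python
import math

def error_variance(t, e0, k, e_sat=1.0):
    """Logistic growth of error variance toward saturation (an idealization)."""
    return e_sat / (1.0 + (e_sat / e0 - 1.0) * math.exp(-k * t))

def predictability_time(e0, k, e_sat=1.0):
    """Time tau at which error variance reaches half its saturation value,
    solved analytically from the logistic curve above."""
    return math.log(e_sat / e0 - 1.0) / k

# Hypothetical growth rates: slower tropical error growth gives larger tau.
tau_midlat = predictability_time(e0=0.01, k=0.6)
tau_tropics = predictability_time(e0=0.01, k=0.35)
print(round(tau_midlat, 1), round(tau_tropics, 1))
```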

  12. Similarities in error processing establish a link between saccade prediction at baseline and adaptation performance

    PubMed Central

    Shelhamer, Mark

    2014-01-01

    Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. PMID:24598520

  13. Artificial neural network implementation of a near-ideal error prediction controller

    NASA Technical Reports Server (NTRS)

    Mcvey, Eugene S.; Taylor, Lynore Denise

    1992-01-01

    A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited (i.e., not random), near-ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controller developments include pattern recognition and fast-time simulation, which are applicable to the analysis of linear sampled-data systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using neural networks. Neural networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. In using this type of error prediction, it is assumed that the system error
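The template-matching scheme described above can be sketched minimally: store (input template, learned error response) pairs, match an observed input to its nearest template, and retrieve the stored predicted error. The templates and signals below are hypothetical toy data, not the paper's system:

```python
def nearest_template(signal, templates):
    """Pick the stored input class whose template is closest to the observed
    signal (sum of squared differences), i.e. crude template matching."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: ssd(signal, templates[name][0]))

# Hypothetical learned pairs: input template -> learned error response.
templates = {
    "step": ([0, 1, 1, 1], [0.3, 0.1, 0.0, 0.0]),
    "ramp": ([0, 1, 2, 3], [0.5, 0.4, 0.3, 0.2]),
}
observed = [0, 1, 2, 2.8]            # a noisy ramp-like input
match = nearest_template(observed, templates)
predicted_error = templates[match][1]  # drive the system with this signal
print(match, predicted_error)
```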

  14. Dopamine reward prediction-error signalling: a two-component response.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Environmental stimuli and objects, including rewards, are often processed sequentially in the brain. Recent work suggests that the phasic dopamine reward prediction-error response follows a similar sequential pattern. An initial brief, unselective and highly sensitive increase in activity unspecifically detects a wide range of environmental stimuli, then quickly evolves into the main response component, which reflects subjective reward value and utility. This temporal evolution allows the dopamine reward prediction-error signal to optimally combine speed and accuracy. PMID:26865020

  15. Chain pooling to minimize prediction error in subset regression. [Monte Carlo studies using population models

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1974-01-01

    Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
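The core Monte Carlo comparison, fitting a full and a reduced model to noisy observations and scoring both against the known population values, can be sketched as follows. The sketch omits the paper's chain-pooling deletion rule and simply deletes the quadratic term of a population model that is truly linear:

```python
import random

def lstsq(X, y):
    """Solve the normal equations (X'X) b = (X'y) by Gaussian elimination."""
    n = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    c = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    for i in range(n):                       # forward elimination with pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p], c[i], c[p] = A[p], A[i], c[p], c[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
            c[r] -= f * c[i]
    coef = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        coef[i] = (c[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def mspe(coef, xs, true_vals):
    """Mean squared prediction error against the known population values."""
    return sum((sum(c * x ** p for p, c in enumerate(coef)) - t) ** 2
               for x, t in zip(xs, true_vals)) / len(xs)

rng = random.Random(42)
xs = [i / 10 for i in range(-10, 11)]
population = [2 + 3 * x for x in xs]                 # true response surface
obs = [t + rng.gauss(0, 0.5) for t in population]    # one simulated experiment
full = lstsq([[1.0, x, x * x] for x in xs], obs)     # full quadratic model
reduced = lstsq([[1.0, x] for x in xs], obs)         # x^2 term deleted
print(mspe(reduced, xs, population) <= mspe(full, xs, population))  # True
```

Because the true surface lies in the reduced model's subspace, the extra fitted quadratic term can only add noise, so the reduced model's prediction error never exceeds the full model's here.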

  16. Long-Period Ground Motion Prediction Equations for Relative, Pseudo-Relative and Absolute Velocity Response Spectra in Japan

    NASA Astrophysics Data System (ADS)

    Dhakal, Y. P.; Kunugi, T.; Suzuki, W.; Aoi, S.

    2014-12-01

    Many empirical ground motion prediction equations (GMPEs), also known as attenuation relations, have been developed for absolute acceleration or pseudo-relative velocity response spectra. For small damping, pseudo and absolute acceleration response spectra are nearly identical and hence interchangeable. It is generally known that the relative and pseudo-relative velocity response spectra differ considerably at very short or very long periods, and the two are often considered similar at intermediate periods. However, observations show that the period range at which the two spectra become comparable differs from site to site. Also, the relationship of the above two types of velocity response spectra with absolute velocity response spectra is not well discussed in the literature. The absolute velocity response spectra are the peak values of time histories obtained by adding the ground velocities to relative velocity response time histories at individual natural periods. There exist many tall buildings on large, deep sedimentary basins such as the Kanto basin, and the number of such buildings is growing. Recently, the Japan Meteorological Agency (JMA) has proposed four classes of long-period ground motion intensity (http://www.data.jma.go.jp/svd/eew/data/ltpgm/) based on absolute velocity response spectra, which correlate with the difficulty of movement of people in tall buildings. As researchers use various types of response spectra for long-period ground motions, it is important to understand the relationships between them to take appropriate measures for disaster prevention applications. In this paper, we therefore obtain and discuss empirical attenuation relationships using the same functional forms for the three types of velocity response spectra computed from observed strong motion records from moderate to large earthquakes, in relation to JMA magnitude, hypocentral distance, sediment depth, and AVS30 as predictor variables at periods between

  17. Aircraft noise-induced awakenings are more reasonably predicted from relative than from absolute sound exposure levels.

    PubMed

    Fidell, Sanford; Tabachnick, Barbara; Mestre, Vincent; Fidell, Linda

    2013-11-01

    Assessment of aircraft noise-induced sleep disturbance is problematic for several reasons. Current assessment methods are based on sparse evidence and limited understandings; predictions of awakening prevalence rates based on indoor absolute sound exposure levels (SELs) fail to account for appreciable amounts of variance in dosage-response relationships and are not freely generalizable from airport to airport; and predicted awakening rates do not differ significantly from zero over a wide range of SELs. Even in conjunction with additional predictors, such as time of night and assumed individual differences in "sensitivity to awakening," nominally SEL-based predictions of awakening rates remain of limited utility and are easily misapplied and misinterpreted. Probabilities of awakening are more closely related to SELs scaled in units of standard deviates of local distributions of aircraft SELs, than to absolute sound levels. Self-selection of residential populations for tolerance of nighttime noise and habituation to airport noise environments offer more parsimonious and useful explanations for differences in awakening rates at disparate airports than assumed individual differences in sensitivity to awakening. PMID:24180775
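The paper's key idea, scaling SELs in standard deviates of the local SEL distribution rather than using absolute levels, can be sketched with a toy logistic dose-response model (all coefficients and SEL samples below are hypothetical):

```python
import math
import statistics

def awakening_prob(sel, local_sels, b0=-5.0, b1=0.8):
    """Logistic dose-response on the SEL expressed in standard deviates of
    the *local* distribution of aircraft SELs (hypothetical coefficients)."""
    z = (sel - statistics.mean(local_sels)) / statistics.stdev(local_sels)
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * z)))

quiet_airport = [55, 58, 60, 62, 65]    # dB SEL samples, hypothetical
noisy_airport = [70, 73, 75, 77, 80]
# The same absolute 75 dB event is routine at the noisy airport but an
# extreme outlier at the quiet one, so it predicts more awakenings there:
print(round(awakening_prob(75, quiet_airport), 3),
      round(awakening_prob(75, noisy_airport), 3))
```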

  18. An MEG signature corresponding to an axiomatic model of reward prediction error.

    PubMed

    Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J

    2012-01-01

    Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and the prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, an MEG signature of prediction error, which emerged approximately 320 ms after an outcome and was expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data. PMID:21726648

  19. Prediction error in reinforcement learning: a meta-analysis of neuroimaging studies.

    PubMed

    Garrison, Jane; Erdeniz, Burak; Done, John

    2013-08-01

    Activation likelihood estimation (ALE) meta-analyses were used to examine the neural correlates of prediction error in reinforcement learning. The findings are interpreted in the light of current computational models of learning and action selection. In this context, particular consideration is given to the comparison of activation patterns from studies using instrumental and Pavlovian conditioning, and where reinforcement involved rewarding or punishing feedback. The striatum was the key brain area encoding for prediction error, with activity encompassing dorsal and ventral regions for instrumental and Pavlovian reinforcement alike, a finding which challenges the functional separation of the striatum into a dorsal 'actor' and a ventral 'critic'. Prediction error activity was further observed in diverse areas of predominantly anterior cerebral cortex including medial prefrontal cortex and anterior cingulate cortex. Distinct patterns of prediction error activity were found for studies using rewarding and aversive reinforcers; reward prediction errors were observed primarily in the striatum while aversive prediction errors were found more widely including insula and habenula. PMID:23567522

  20. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  1. Space Weather Prediction Error Bounding for Real-Time Ionospheric Threat Adaptation of GNSS Augmentation Systems

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yoon, M.; Lee, J.

    2014-12-01

    Current Global Navigation Satellite Systems (GNSS) augmentation systems attempt to consider all possible ionospheric events in their correction computations of worst-case errors. This conservatism can be mitigated by subdividing anomalous conditions and using different values of ionospheric threat-model bounds for each class. A new concept of 'real-time ionospheric threat adaptation', which adjusts the threat model in real time instead of always using the same 'worst-case' model, was introduced in our previous research. The concept utilizes predicted values of space weather indices to determine the corresponding threat model, based on the pre-defined worst-case threat as a function of space weather indices. Since space weather prediction is unreliable, prediction errors need to be bounded to the level of integrity required by the system being supported. Our previous research bounded prediction error using the disturbance storm time (Dst) index. The distribution of Dst prediction error over 15 years of data was bounded by applying 'inflated-probability-density-function (pdf) Gaussian bounding'. Since the error distribution has thick, non-Gaussian tails, statistical distributions that properly describe heavy tails with less conservatism must be investigated for system performance. This paper suggests two potential approaches for improving space weather prediction error bounding. First, we suggest using different statistical models when fitting the error distribution, such as the Laplacian distribution, which has fat tails, and the folded Gaussian cumulative distribution function (cdf). The second approach is to bound the error distribution by segregating data based on the overall level of solar activity. Bounding errors using only solar-minimum-period data will involve less uncertainty, and it may allow the use of the 'solar cycle prediction' provided by NASA when implementing real-time threat adaptation. 
Lastly
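The bounding comparison suggested above can be sketched on a synthetic heavy-tailed error sample (a Gaussian mixture standing in for Dst prediction errors; all parameters hypothetical): a Gaussian with the sample SD badly underestimates the tail, while a Laplacian with a sufficiently fat scale bounds it.

```python
import math
import random

def gaussian_sf(x, sigma):
    """Gaussian two-sided tail probability P(|e| > x), zero-mean error."""
    return math.erfc(x / (sigma * math.sqrt(2.0)))

def laplace_sf(x, b):
    """Laplacian two-sided tail probability P(|e| > x), scale b."""
    return math.exp(-x / b)

rng = random.Random(7)
# Hypothetical heavy-tailed prediction errors: Gaussian core + rare outliers.
errors = [rng.gauss(0, 1) if rng.random() < 0.95 else rng.gauss(0, 5)
          for _ in range(20000)]
threshold = 8.0
empirical = sum(abs(e) > threshold for e in errors) / len(errors)
print(empirical,
      gaussian_sf(threshold, sigma=1.48),   # sample-SD Gaussian: far too thin
      laplace_sf(threshold, b=1.8))         # fat-tailed Laplacian: bounds it
```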

  2. A Bayesian approach to improved calibration and prediction of groundwater models with structural error

    NASA Astrophysics Data System (ADS)

    Xu, Tianfang; Valocchi, Albert J.

    2015-11-01

    Numerical groundwater flow and solute transport models are usually subject to model structural error due to simplification and/or misrepresentation of the real system, which raises questions regarding the suitability of conventional least squares regression-based (LSR) calibration. We present a new framework that explicitly describes the model structural error statistically in an inductive, data-driven way. We adopt a fully Bayesian approach that integrates Gaussian process error models into the calibration, prediction, and uncertainty analysis of groundwater flow models. We test the usefulness of the fully Bayesian approach with a synthetic case study of the impact of pumping on surface-ground water interaction. We illustrate through this example that the Bayesian parameter posterior distributions differ significantly from parameters estimated by conventional LSR, which does not account for model structural error. For the latter method, parameter compensation for model structural error leads to biased, overconfident prediction under changing pumping conditions. In contrast, integrating Gaussian process error models significantly reduces predictive bias and leads to prediction intervals that are more consistent with validation data. Finally, we carry out a generalized LSR recalibration step to assimilate the Bayesian prediction while preserving mass conservation and other physical constraints, using a full error covariance matrix obtained from Bayesian results. The recalibrated model achieved lower predictive bias than the model calibrated using conventional LSR. The results highlight the importance of explicit treatment of model structural error, especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification.
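The residual-correction idea, learning the structural error from historical model-to-measurement misfit and adding it back to the simulator, can be sketched with a deterministic stand-in for the Gaussian process error model (a toy simulator and linear interpolation of residuals, not the paper's Bayesian machinery):

```python
import math

def simulator(x):
    """Hypothetical calibrated model with structural error: it misses a
    smooth sin(x) component of the real system."""
    return 2.0 * x

def truth(x):
    """The real system: the simulator plus a smooth structural error term."""
    return 2.0 * x + math.sin(x)

xs_cal = [0.5 * i for i in range(9)]                # calibration inputs 0..4
res = [truth(x) - simulator(x) for x in xs_cal]     # historical misfit

def corrected(x):
    """Simulator plus an interpolated residual correction -- a crude,
    deterministic stand-in for a Gaussian-process error model."""
    i = min(int(x / 0.5), len(xs_cal) - 2)
    w = (x - xs_cal[i]) / 0.5
    return simulator(x) + (1 - w) * res[i] + w * res[i + 1]

# At held-out validation inputs, the corrected model is far less biased:
for x in (1.25, 2.75, 3.25):
    print(x, round(abs(truth(x) - simulator(x)), 3),
             round(abs(truth(x) - corrected(x)), 3))
```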

  3. Mismatch negativity encoding of prediction errors predicts S-ketamine-induced cognitive impairments.

    PubMed

    Schmidt, André; Bachmann, Rosilla; Kometer, Michael; Csomor, Philipp A; Stephan, Klaas E; Seifritz, Erich; Vollenweider, Franz X

    2012-03-01

    Psychotomimetics like the N-methyl-D-aspartate receptor (NMDAR) antagonist ketamine and the 5-hydroxytryptamine2A receptor (5-HT(2A)R) agonist psilocybin induce psychotic symptoms in healthy volunteers that resemble those of schizophrenia. Recent theories of psychosis posit that aberrant encoding of prediction errors (PE) may underlie the expression of psychotic symptoms. This study used a roving mismatch negativity (MMN) paradigm to investigate whether the encoding of PE is affected by pharmacological manipulation of NMDAR or 5-HT(2A)R, and whether the encoding of PE under placebo can be used to predict drug-induced symptoms. Using a double-blind within-subject placebo-controlled design, S-ketamine and psilocybin, respectively, were administered to two groups of healthy subjects. Psychological alterations were assessed using a revised version of the Altered States of Consciousness (ASC-R) questionnaire. As an index of PE, we computed changes in MMN amplitudes as a function of the number of preceding standards (MMN memory trace effect) during a roving paradigm. S-ketamine, but not psilocybin, disrupted PE processing as expressed by a frontally disrupted MMN memory trace effect. Although both drugs produced positive-like symptoms, the extent of PE processing under placebo only correlated significantly with the severity of cognitive impairments induced by S-ketamine. Our results suggest that the NMDAR, but not the 5-HT(2A)R system, is implicated in PE processing during the MMN paradigm, and that aberrant PE signaling may contribute to the formation of cognitive impairments. The assessment of the MMN memory trace in schizophrenia may allow detecting early phases of the illness and might also serve to assess the efficacy of novel pharmacological treatments, in particular of cognitive impairments. PMID:22030715

  4. Fronto-temporal white matter connectivity predicts reversal learning errors

    PubMed Central

    Alm, Kylie H.; Rolheiser, Tyler; Mohamed, Feroze B.; Olson, Ingrid R.

    2015-01-01

    Each day, we make hundreds of decisions. In some instances, these decisions are guided by our innate needs; in other instances they are guided by memory. Probabilistic reversal learning tasks exemplify the close relationship between decision making and memory, as subjects are exposed to repeated pairings of a stimulus choice with a reward or punishment outcome. After stimulus–outcome associations have been learned, the associated reward contingencies are reversed, and participants are not immediately aware of this reversal. Individual differences in the tendency to choose the previously rewarded stimulus reveal differences in the tendency to make poorly considered, inflexible choices. Lesion studies have strongly linked reversal learning performance to the functioning of the orbitofrontal cortex, the hippocampus, and in some instances, the amygdala. Here, we asked whether individual differences in the microstructure of the uncinate fasciculus, a white matter tract that connects anterior and medial temporal lobe regions to the orbitofrontal cortex, predict reversal learning performance. Diffusion tensor imaging and behavioral paradigms were used to examine this relationship in 33 healthy young adults. The results of tractography revealed a significant negative relationship between reversal learning performance and uncinate axial diffusivity, but no such relationship was demonstrated in a control tract, the inferior longitudinal fasciculus. Our findings suggest that the uncinate might serve to integrate associations stored in the anterior and medial temporal lobes with expectations about expected value based on feedback history, computed in the orbitofrontal cortex. PMID:26150776

  5. Glutamatergic Model Psychoses: Prediction Error, Learning, and Inference

    PubMed Central

    Corlett, Philip R; Honey, Garry D; Krystal, John H; Fletcher, Paul C

    2011-01-01

    Modulating glutamatergic neurotransmission induces alterations in conscious experience that mimic the symptoms of early psychotic illness. We review studies that use intravenous administration of ketamine, focusing on interindividual variability in the profundity of the ketamine experience. We consider this individual variability within a hypothetical model of brain and cognitive function centered upon learning and inference. Within this model, the brain, its neural systems, and even single neurons specify expectations about their inputs and respond to violations of those expectations with new learning that renders future inputs more predictable. We argue that ketamine temporarily deranges this ability by perturbing both the ways in which prior expectations are specified and the ways in which expectancy violations are signaled. We suggest that the former effect is predominantly mediated by NMDA blockade and the latter by augmented and inappropriate feedforward glutamatergic signaling. We suggest that the observed interindividual variability emerges from individual differences in the neural circuits that normally underpin the learning and inference processes described. The exact source of that variability is uncertain, although it is likely to arise not only from genetic variation but also from subjects' previous experiences and prior learning. Furthermore, we argue that chronic, unlike acute, NMDA blockade alters the specification of expectancies more profoundly and permanently. Scrutinizing individual differences in the effects of acute and chronic ketamine administration in the context of the Bayesian brain model may generate new insights about the symptoms of psychosis, their underlying cognitive processes, and their neurocircuitry. PMID:20861831

  6. The Human Bathtub: Safety and Risk Predictions Including the Dynamic Probability of Operator Errors

    SciTech Connect

    Duffey, Romney B.; Saull, John W.

    2006-07-01

    Reactor safety and risk are dominated by the major potential contribution of human error in the design, operation, control, management, regulation and maintenance of the plant, and hence to all accidents. Given the possibility of accidents and errors, we now need to determine the outcome (error) probability, or the chance of failure. Conventionally, reliability engineering is associated with the failure rate of components, systems, or mechanisms, not of human beings interacting with a technological system. The probability of failure requires prior knowledge of the total number of outcomes, which for any predictive purposes we do not know or have. Analysis of failure rates due to human error and of the rate of learning allows a new determination of the dynamic human error rate in technological systems, consistent with and derived from the available world data. The basis for the analysis is the 'learning hypothesis': humans learn from experience, and consequently the accumulated experience defines the failure rate. A new 'best' equation has been derived for the human error, outcome or failure rate, which allows for calculation and prediction of the probability of human error. We also provide comparisons to the empirical Weibull parameter fitting used by conventional reliability engineering and probabilistic safety analysis methods. These new analyses show that arbitrary Weibull fitting parameters and typical empirical hazard function techniques cannot be used to predict the dynamics of human errors and outcomes in the presence of learning. Comparisons of these new insights show agreement with human error data from the world's commercial airlines, the two shuttle failures, and from nuclear plant operator actions and transient control behavior observed in transients in both plants and simulators. 
The results demonstrate that the human error probability (HEP) is dynamic, and that it may be predicted using the learning hypothesis and the minimum
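The learning hypothesis, a failure rate that decays exponentially with accumulated experience toward an irreducible minimum, can be sketched directly (the functional form is a common idealization and the parameter values are hypothetical, not the paper's fit):

```python
import math

def error_rate(experience, lam0=1e-3, lam_min=5e-6, k=20000.0):
    """Learning-hypothesis failure rate: declines exponentially with
    accumulated experience toward an irreducible minimum rate
    (hypothetical parameter values, arbitrary experience units)."""
    return lam_min + (lam0 - lam_min) * math.exp(-experience / k)

# The rate falls steeply at first, then flattens at the minimum rate,
# tracing the front edge of the "bathtub" curve:
for n in (0, 20000, 100000, 1000000):
    print(n, f"{error_rate(n):.2e}")
```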

  7. The impact of experimental measurement errors on long-term viscoelastic predictions. [of structural materials

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1986-01-01

    The impact of errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
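The sensitivity to the power-law parameter n can be illustrated with a generic power-law compliance D(t) = D0 + D1·t^n (hypothetical parameters, not the paper's material data): a small error in n is negligible over a short test window but dominates at service-life times.

```python
def compliance(t, d0=1.0, d1=0.1, n=0.2):
    """Power-law viscoelastic compliance D(t) = D0 + D1 * t**n
    (hypothetical parameter values, arbitrary units)."""
    return d0 + d1 * t ** n

t_short, t_long = 1e2, 1e8          # short-term test window vs service life
for dn in (0.0, 0.02):              # a 10% error in the exponent n
    print(dn, round(compliance(t_short, n=0.2 + dn), 3),
              round(compliance(t_long, n=0.2 + dn), 3))
```

The exponent error enters through t^n, so its effect grows without bound as t increases, which is why long-term predictions are most sensitive to n.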

  8. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption of traditional regression-based calibration and uncertainty analysis. On the other hand, bias induced by model structure error can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model, which accounts for model structure error, into the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of the parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates, due to parameter compensation, as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error, especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.

  9. Predicting Absolute Risk of Type 2 Diabetes Using Age and Waist Circumference Values in an Aboriginal Australian Community

    PubMed Central

    2015-01-01

    Objectives To predict, in an Australian Aboriginal community, the 10-year absolute risk of type 2 diabetes associated with waist circumference and age at baseline examination. Method A sample of 803 diabetes-free adults (82.3% of the age-eligible population) from baseline data collected from 1992 to 1998 were followed up for up to 20 years, until 2012. The Cox proportional hazards model was used to estimate the effects of waist circumference and other risk factors, including age, smoking and alcohol consumption status, for males and females on prediction of type 2 diabetes, identified through subsequent hospitalisation data during the follow-up period. The Weibull regression model was used to calculate absolute risk estimates of type 2 diabetes with waist circumference and age as predictors. Results Of 803 participants, 110 were recorded as having developed type 2 diabetes in subsequent hospitalisations over a follow-up of 12633.4 person-years. Waist circumference was strongly associated with subsequent diagnosis of type 2 diabetes, with P<0.0001 for both genders, and remained statistically significant after adjusting for confounding factors. Hazard ratios of type 2 diabetes associated with a 1 standard deviation increase in waist circumference were 1.7 (95%CI 1.3 to 2.2) for males and 2.1 (95%CI 1.7 to 2.6) for females. At 45 years of age with a baseline waist circumference of 100 cm, a male had an absolute diabetes risk of 10.9%, while a female had a 14.3% risk of the disease. Conclusions The constructed model predicts the 10-year absolute diabetes risk in an Aboriginal Australian community. It is simple and easily understood and will help identify individuals at risk of diabetes in relation to waist circumference values. Our findings on the relationship between waist circumference and diabetes by gender will be useful for clinical consultation, public health education and establishing WC cut-off points for Aboriginal Australians. PMID:25876058
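The Weibull absolute-risk calculation can be sketched as follows. The coefficients below are illustrative placeholders tuned only so that the abstract's example case reproduces the quoted ~10.9% (male) and ~14.3% (female) risks; they are not the paper's fitted model:

```python
import math

def ten_year_risk(waist_cm, age, female):
    """Absolute 10-year type 2 diabetes risk from a Weibull survival model:
    risk = 1 - exp(-(lam * t)**shape * exp(linear predictor)).
    All coefficients are hypothetical placeholders, not the paper's fit."""
    shape, lam = 1.5, 0.0237
    lp = 0.0442 * (waist_cm - 100) + 0.03 * (age - 45) + 0.29 * female
    return 1.0 - math.exp(-((lam * 10.0) ** shape) * math.exp(lp))

print(round(100 * ten_year_risk(100, 45, female=False), 1))  # 10.9
print(round(100 * ten_year_risk(100, 45, female=True), 1))   # 14.3
```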

  10. Elevated absolute monocyte count predicts unfavorable outcomes in patients with angioimmunoblastic T-cell lymphoma.

    PubMed

    Yang, Yu-Qiong; Liang, Jin-Hua; Wu, Jia-Zhu; Wang, Li; Qu, Xiao-Yan; Cao, Lei; Zhao, Xiao-Li; Huang, Dong-Ping; Fan, Lei; Li, Jian-Yong; Xu, Wei

    2016-03-01

    This study investigated the prognostic significance of the absolute monocyte count (AMC) in peripheral blood in patients with newly diagnosed angioimmunoblastic T-cell lymphoma (AITL). AMC was assessed in 73 therapy-naive patients with AITL at 2 institutions during 2008-2015; higher AMC was observed in patients with >1 extranodal site, bone marrow involvement, high lactate dehydrogenase level, EBV infection, no response to treatment, and high IPI, PIT and PIAI scores. The best AMC cut-off level at diagnosis was 0.8×10(9)/L, and the 3-year overall survival (OS) was 64% for patients in the low AMC group (≤0.8×10(9)/L) compared with 10% in the high AMC group (>0.8×10(9)/L) (P<0.001). Multivariate analysis showed that elevated AMC remained an adverse prognostic parameter. Our results suggest that AMC is an independent prognostic parameter for OS in patients with AITL, and that AMC >0.8×10(9)/L can routinely be used to identify high-risk patients with unfavorable survival. PMID:26764222

  11. Medial–Frontal Stimulation Enhances Learning in Schizophrenia by Restoring Prediction Error Signaling

    PubMed Central

    Reinhart, Robert M.G.; Zhu, Julia

    2015-01-01

    Posterror learning, associated with medial–frontal cortical recruitment in healthy subjects, is compromised in neuropsychiatric disorders. Here we report novel evidence for the mechanisms underlying learning dysfunctions in schizophrenia. We show that, by noninvasively passing direct current through human medial–frontal cortex, we could enhance the event-related potential related to learning from mistakes (i.e., the error-related negativity), a putative index of prediction error signaling in the brain. Following this causal manipulation of brain activity, the patients learned a new task at a rate that was indistinguishable from healthy individuals. Moreover, the severity of delusions interacted with the efficacy of the stimulation to improve learning. Our results demonstrate a causal link between disrupted prediction error signaling and inefficient learning in schizophrenia. These findings also demonstrate the feasibility of nonpharmacological interventions to address cognitive deficits in neuropsychiatric disorders. SIGNIFICANCE STATEMENT When there is a difference between what we expect to happen and what we actually experience, our brains generate a prediction error signal, so that we can map stimuli to responses and predict outcomes accurately. Theories of schizophrenia implicate abnormal prediction error signaling in the cognitive deficits of the disorder. Here, we combine noninvasive brain stimulation with large-scale electrophysiological recordings to establish a causal link between faulty prediction error signaling and learning deficits in schizophrenia. We show that it is possible to improve learning rate, as well as the neural signature of prediction error signaling, in patients to a level quantitatively indistinguishable from that of healthy subjects. The results provide mechanistic insight into schizophrenia pathophysiology and suggest a future therapy for this condition. PMID:26338333

  12. Cortical delta activity reflects reward prediction error and related behavioral adjustments, but at different times.

    PubMed

    Cavanagh, James F

    2015-04-15

    Recent work has suggested that reward prediction errors elicit a positive voltage deflection in the scalp-recorded electroencephalogram (EEG), an event sometimes termed a reward positivity. However, a strong test of this proposed relationship remains to be defined. Other important questions also remain unaddressed, such as the role of the reward positivity in predicting future behavioral adjustments that maximize reward. To answer these questions, a three-armed bandit task was used to investigate the role of positive prediction errors during trial-by-trial exploration and task-set based exploitation. The feedback-locked reward positivity was characterized by delta band activities, and these related EEG features scaled with the degree of a computationally derived positive prediction error. However, these phenomena were also dissociated: the computational model predicted exploitative action selection and related response time speeding, whereas the feedback-locked EEG features did not. Compellingly, delta band dynamics time-locked to the subsequent bandit (the P3) successfully predicted these behaviors. These bandit-locked findings included an enhanced parietal-to-motor-cortex delta phase lag that correlated with the degree of response time speeding, suggesting a mechanistic role for delta band activities in motivating action selection. This dissociation in feedback- vs. bandit-locked EEG signals is interpreted as a differentiation between hierarchically distinct types of prediction error, yielding novel predictions about these dissociable delta band phenomena during reinforcement learning and decision making. PMID:25676913

  13. Drivers of coupled model ENSO error dynamics and the spring predictability barrier

    NASA Astrophysics Data System (ADS)

    Larson, Sarah M.; Kirtman, Ben P.

    2016-07-01

    Despite recent improvements in ENSO simulations, ENSO predictions ultimately remain limited by error growth and model inadequacies. Determining the accompanying dynamical processes that drive the growth of certain types of errors may help the community better recognize which error sources provide an intrinsic limit to predictability. This study applies a dynamical analysis to previously developed CCSM4 error ensemble experiments that have been used to model noise-driven error growth. Analysis reveals that ENSO-independent error growth is instigated via a coupled instability mechanism. Daily error fields indicate that persistent stochastic zonal wind stress perturbations (τx′) near the equatorial dateline activate the coupled instability, first driving local SST and anomalous zonal current changes that then induce upwelling anomalies and a clear thermocline response. In particular, March presents a window of opportunity for stochastic τx′ to impose a lasting influence on the evolution of eastern Pacific SST through December, suggesting that stochastic τx′ is an important contributor to the spring predictability barrier. Stochastic winds occurring in other months only temporarily affect eastern Pacific SST for 2-3 months. Comparison of a control simulation with an ENSO cycle and the ENSO-independent error ensemble experiments reveals that once the instability is initiated, the subsequent error growth is modulated via an ENSO-like mechanism, namely the seasonal strength of the Bjerknes feedback. Furthermore, unlike ENSO events that exhibit growth through the fall, the growth of ENSO-independent SST errors terminates once the seasonal strength of the Bjerknes feedback weakens in fall. Results imply that the heat content supplied by the subsurface precursor preceding the onset of an ENSO event is paramount to maintaining the growth of the instability (or event) through fall.

  14. High Capacity Reversible Watermarking for Audio by Histogram Shifting and Predicted Error Expansion

    PubMed Central

    Wang, Fei; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Current reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio, a large amount of auxiliary embedded location information, and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control capability. PMID:25097883
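    As a rough illustration of the prediction-error-expansion step alone (without the differential-evolution-optimized predictor, histogram shifting, or location map of the actual scheme), here is a minimal sketch using a simple left-neighbour predictor; the sample values and bits are toy data:

```python
def embed_bit(sample, prediction, bit):
    # Prediction error expansion: the error e = sample - prediction is
    # expanded to 2*e + bit, which makes the embedding reversible.
    error = sample - prediction
    return prediction + 2 * error + bit

def extract_bit(stego_sample, prediction):
    # Recover the hidden bit (LSB of the expanded error) and the original.
    expanded = stego_sample - prediction
    bit = expanded & 1
    original = prediction + (expanded >> 1)
    return bit, original

# Toy data: predict each sample by its left neighbour; the first sample
# is left unmodified so extraction can bootstrap from it.
audio = [100, 102, 101, 105]
bits = [1, 0, 1]
stego = [audio[0]]
for i, b in enumerate(bits, start=1):
    stego.append(embed_bit(audio[i], audio[i - 1], b))

recovered_bits, recovered = [], [stego[0]]
for i in range(1, len(stego)):
    b, sample = extract_bit(stego[i], recovered[i - 1])
    recovered_bits.append(b)
    recovered.append(sample)
```

Because the error is doubled, expanded samples can leave the valid amplitude range; in the real scheme the location map and the capacity-dependent modification threshold handle exactly those cases.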

  15. Usefulness of the Sum Absolute QRST Integral to Predict Outcomes in Patients Receiving Cardiac Resynchronization Therapy.

    PubMed

    Jacobsson, Jonatan; Borgquist, Rasmus; Reitan, Christian; Ghafoori, Elyar; Chatterjee, Neal A; Kabir, Muammar; Platonov, Pyotr G; Carlson, Jonas; Singh, Jagmeet P; Tereshchenko, Larisa G

    2016-08-01

    Cardiac resynchronization therapy (CRT) reduces mortality and morbidity in selected patients with heart failure (HF), but up to 1/3 of patients are nonresponders. The sum absolute QRST integral (SAI QRST) recently showed association with mechanical response to CRT. However, it is unknown whether SAI QRST is associated with all-cause mortality and HF hospitalizations in patients undergoing CRT. The study population included 496 patients undergoing CRT (mean age 69 ± 10 years, 84% men, 65% left bundle branch block [LBBB], left ventricular ejection fraction 23 ± 6%, 63% ischemic cardiomyopathy). The preimplant digital 12-lead electrocardiogram was transformed into an orthogonal XYZ electrocardiogram. SAI QRST was measured as the arithmetic sum of areas under the QRST curve on the XYZ leads and was dichotomized at the median value (302 mV ms). All-cause mortality served as the primary end point. A composite of 2-year all-cause mortality, heart transplant, and HF hospitalization was the secondary end point. Cox regression models were adjusted for known predictors of CRT response. Patients with low preimplant SAI QRST had an increased risk of both the primary (hazard ratio [HR] 1.8, 95% CI 1.01 to 3.2) and secondary (HR 1.6, 95% CI 1.1 to 2.2) end points after multivariate adjustment. SAI QRST was associated with the secondary outcome in subgroups of patients with LBBB (HR 2.1, 95% CI 1.5 to 3.0) and with non-LBBB (HR 1.7, 95% CI 1.0 to 2.6). In patients undergoing CRT, preimplant SAI QRST <302 mV ms was associated with an increased risk of all-cause mortality and HF hospitalization. After validation in another prospective cohort, SAI QRST may help to refine selection of CRT recipients. PMID:27265674
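    The SAI QRST measurement described above (arithmetic sum of areas under the QRST curve on the XYZ leads) can be sketched as below. This is one plausible reading that integrates the absolute voltage with the trapezoidal rule, not the authors' exact implementation:

```python
def sai_qrst(xyz_leads_mv, dt_ms):
    # Sum absolute QRST integral in mV*ms: for each of the X, Y, Z leads,
    # integrate |voltage| over the QRST interval (trapezoidal rule) and sum.
    total = 0.0
    for lead in xyz_leads_mv:
        for v0, v1 in zip(lead, lead[1:]):
            total += 0.5 * (abs(v0) + abs(v1)) * dt_ms
    return total

# Toy example: three flat 1 mV leads sampled every 2 ms over 3 samples.
value = sai_qrst([[1.0, 1.0, 1.0]] * 3, dt_ms=2.0)
low_sai = value <= 302.0  # study's median cutoff; low SAI marked higher risk
```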

  16. Quantifying the Effect of Lidar Turbulence Error on Wind Power Prediction

    SciTech Connect

    Newman, Jennifer F.; Clifton, Andrew

    2016-01-01

    Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, and the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST.
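    The "standard binning method" named above is, in essence, a lookup of mean measured power per wind-speed bin. A minimal sketch follows; the bin width and data are illustrative, and the study's models additionally fold in turbulence measurements:

```python
def fit_binned_power_curve(wind_speeds, powers, bin_width=0.5):
    # Average measured power within each wind-speed bin (in the spirit of
    # IEC 61400-12-1 power-curve binning).
    bins = {}
    for ws, p in zip(wind_speeds, powers):
        key = int(ws / bin_width)
        bins.setdefault(key, []).append(p)
    return {k: sum(v) / len(v) for k, v in bins.items()}, bin_width

def predict_power(curve, wind_speed):
    # Look up the mean power of the bin containing wind_speed.
    bins, bin_width = curve
    return bins.get(int(wind_speed / bin_width))  # None for empty bins

curve = fit_binned_power_curve([5.1, 5.2, 6.0], [100.0, 110.0, 200.0])
```

A turbulence measurement error then propagates into prediction error whenever it shifts an observation into a different bin or biases a bin's mean.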

  17. Evidence that conditioned avoidance responses are reinforced by positive prediction errors signaled by tonic striatal dopamine.

    PubMed

    Dombrowski, Patricia A; Maia, Tiago V; Boschen, Suelen L; Bortolanza, Mariza; Wendler, Etieli; Schwarting, Rainer K W; Brandão, Marcus Lira; Winn, Philip; Blaha, Charles D; Da Cunha, Claudio

    2013-03-15

    We conducted an experiment in which the hedonia, salience, and prediction error hypotheses predicted different patterns of dopamine (DA) release in the striatum during learning of conditioned avoidance responses (CARs). The data strongly favor the prediction error hypothesis, which predicts that during learning of the 2-way active avoidance CAR task, positive prediction errors generated when rats do not receive an anticipated footshock (an outcome better than expected) cause DA release that reinforces the instrumental avoidance action. In vivo microdialysis in the rat striatum showed that extracellular DA concentration increased during early CAR learning and decreased throughout training, returning to baseline once the response was well learned. In addition, avoidance learning was proportional to the degree of DA release. Critically, exposure of rats to the same stimuli in an unpredictable, unavoidable, and inescapable manner did not produce alterations from baseline DA levels, as predicted by the prediction error hypothesis but not by the hedonic or salience hypotheses. In addition, rats with a partial lesion of substantia nigra DA neurons, which did not show increased DA levels during learning, failed to learn this task. These data provide clear and unambiguous evidence that it was the positive prediction error, and not hedonia or salience, that increased the tonic level of striatal DA and reinforced learning of the instrumental avoidance response. PMID:22771418

  18. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  19. Cognitive Strategies Regulate Fictive, but not Reward Prediction Error Signals in a Sequential Investment Task

    PubMed Central

    Gu, Xiaosi; Kirk, Ulrich; Lohrenz, Terry M; Montague, P Read

    2014-01-01

    Computational models of reward processing suggest that foregone or fictive outcomes serve as important information sources for learning and augment those generated by experienced rewards (e.g. reward prediction errors). An outstanding question is how these learning signals interact with top-down cognitive influences, such as cognitive reappraisal strategies. Using a sequential investment task and functional magnetic resonance imaging, we show that the reappraisal strategy selectively attenuates the influence of fictive, but not reward prediction error signals on investment behavior; this behavioral effect is accompanied by changes in neural activity and connectivity in the anterior insular cortex, a brain region thought to integrate subjective feelings with high-order cognition. Furthermore, individuals differ in the extent to which their behaviors are driven by fictive errors versus reward prediction errors, and the reappraisal strategy interacts with such individual differences, a finding also accompanied by distinct underlying neural mechanisms. These findings suggest that the variable interaction of cognitive strategies with two important classes of computational learning signals (fictive errors, reward prediction errors) represents one contributing substrate for the variable capacity of individuals to control their behavior based on foregone rewards. These findings also expose important possibilities for understanding the lack of control in addiction based on possibly foregone rewarding outcomes. Hum Brain Mapp 35:3738–3749, 2014. PMID:24382784

  20. Midbrain dopamine neurons compute inferred and cached value prediction errors in a common framework

    PubMed Central

    Sadacca, Brian F; Jones, Joshua L; Schoenbaum, Geoffrey

    2016-01-01

    Midbrain dopamine neurons have been proposed to signal reward prediction errors as defined in temporal difference (TD) learning algorithms. While these models have been extremely powerful in interpreting dopamine activity, they typically do not use value derived through inference in computing errors. This is important because much real world behavior – and thus many opportunities for error-driven learning – is based on such predictions. Here, we show that error-signaling rat dopamine neurons respond to the inferred, model-based value of cues that have not been paired with reward and do so in the same framework as they track the putative cached value of cues previously paired with reward. This suggests that dopamine neurons access a wider variety of information than contemplated by standard TD models and that, while their firing conforms to predictions of TD models in some cases, they may not be restricted to signaling errors from TD predictions. DOI: http://dx.doi.org/10.7554/eLife.13665.001 PMID:26949249

  1. Predicting Human Error in Air Traffic Control Decision Support Tools and Free Flight Concepts

    NASA Technical Reports Server (NTRS)

    Mogford, Richard; Kopardekar, Parimal

    2001-01-01

    The document is a set of briefing slides summarizing the work the Advanced Air Transportation Technologies (AATT) Project is doing on predicting air traffic controller and airline pilot human error when using new decision support software tools and when involved in testing new air traffic control concepts. Previous work in this area is reviewed as well as research being done jointly with the FAA. Plans for error prediction work in the AATT Project are discussed. The audience is human factors researchers and aviation psychologists from government and industry.

  2. Predictive error detection in pianists: a combined ERP and motion capture study

    PubMed Central

    Maidhof, Clemens; Pitkäniemi, Anni; Tervaniemi, Mari

    2013-01-01

    Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported a pre-error negativity occurring approximately 70–100 ms before the erroneous keystroke was executed and audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed at comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories, defined as the moment a finger makes initial contact with the key surface, that is, the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an upcoming error. PMID

  3. Quantitative vapor-phase IR intensities and DFT computations to predict absolute IR spectra based on molecular structure: I. Alkanes

    NASA Astrophysics Data System (ADS)

    Williams, Stephen D.; Johnson, Timothy J.; Sharpe, Steven W.; Yavelak, Veronica; Oates, R. P.; Brauer, Carolyn S.

    2013-11-01

    Recently recorded quantitative IR spectra of a variety of gas-phase alkanes are shown to have integrated intensities, in both the C–H stretching and C–H bending regions, that depend linearly on molecular size, i.e. the number of C–H bonds. This result is well predicted from CH4 to C15H32 by density functional theory (DFT) computations of IR spectra using Becke's three-parameter functional (B3LYP/6-31+G(d,p)). Using the experimental data, a simple model predicting the absolute IR band intensities of alkanes based only on structural formula is proposed: for the C–H stretching band envelope centered near 2930 cm-1 this is given by I (km/mol) = (34±1)×n_CH − (41±23), where n_CH is the number of C–H bonds in the alkane. The linearity is explained in terms of coordinated motion of methylene groups rather than the summed intensities of autonomous -CH2- units. The effect of alkyl chain length on the intensity of a C–H bending mode is explored and interpreted in terms of conformer distribution. The relative intensity contribution of a methyl mode compared to the total C–H stretch intensity is shown to be linear in the number of methyl groups in the alkane, and can be used to predict quantitative spectra a priori based on structure alone.
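    The abstract's empirical linear model can be applied directly from a structural formula. The sketch below uses the central values of the quoted fit, (34±1)×n_CH − (41±23), and ignores the uncertainties:

```python
def ch_stretch_intensity_km_per_mol(n_ch_bonds):
    # Empirical fit quoted in the abstract for the integrated C-H stretch
    # band intensity of an alkane: (34 +/- 1) * n_CH - (41 +/- 23) km/mol.
    # Central values only.
    return 34.0 * n_ch_bonds - 41.0

# Methane (CH4) has 4 C-H bonds; n-pentane (C5H12) has 12.
methane = ch_stretch_intensity_km_per_mol(4)
pentane = ch_stretch_intensity_km_per_mol(12)
```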

  4. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  5. Integrated uncertainty assessment of discharge predictions with a statistical error model

    NASA Astrophysics Data System (ADS)

    Honti, M.; Stamm, C.; Reichert, P.

    2013-08-01

    A proper uncertainty assessment of rainfall-runoff predictions has always been an important objective for modelers. Several sources of uncertainty have been identified, but their representation has been limited to complicated mechanistic error propagation frameworks. The typical statistical error models used in modeling practice still build on outdated and invalidated assumptions, such as the independence and homoscedasticity of model residuals, and thus result in wrong uncertainty estimates. The primary reason for the popularity of the traditional faulty methods is the enormous computational requirement of full Bayesian error propagation frameworks. We introduce a statistical error model that can account for the effect of various uncertainty sources present in conceptual rainfall-runoff modeling studies and at the same time has limited computational demand. We split the model residuals into three components: a random noise term and two bias processes with different response characteristics. The effects of input uncertainty are simulated with a stochastic linearized rainfall-runoff model. While the description of model bias with Bayesian statistics cannot directly help to improve on the model's deficiencies, it is still beneficial for obtaining realistic estimates of the overall predictive uncertainty and for ranking the importance of different uncertainty sources. This feature is particularly important if the error sources cannot be addressed individually, but it is also relevant for the description of remaining bias when input and structural errors are considered explicitly.

  6. Surprise signals in the supplementary eye field: rectified prediction errors drive exploration-exploitation transitions.

    PubMed

    Kawaguchi, Norihiko; Sakamoto, Kazuhiro; Saito, Naohiro; Furusawa, Yoshito; Tanji, Jun; Aoki, Masashi; Mushiake, Hajime

    2015-02-01

    Visual search is coordinated adaptively by monitoring and predicting the environment. The supplementary eye field (SEF) plays a role in oculomotor control and outcome evaluation. However, it is not clear whether the SEF is involved in adjusting behavioral modes based on preceding feedback. We hypothesized that the SEF drives exploration-exploitation transitions by generating "surprise signals" or rectified prediction errors, which reflect differences between predicted and actual outcomes. To test this hypothesis, we introduced an oculomotor two-target search task in which monkeys were required to find two valid targets among four identical stimuli. After they detected the valid targets, they exploited their knowledge of target locations to obtain a reward by choosing the two valid targets alternately. Behavioral analysis revealed two distinct types of oculomotor search patterns: exploration and exploitation. We found that two types of SEF neurons represented the surprise signals. The error-surprise neurons showed enhanced activity when the monkey received the first error feedback after the target pair change, and this activity was followed by an exploratory oculomotor search pattern. The correct-surprise neurons showed enhanced activity when the monkey received the first correct feedback after an error trial, and this increased activity was followed by an exploitative, fixed-type search pattern. Our findings suggest that error-surprise neurons are involved in the transition from exploitation to exploration and that correct-surprise neurons are involved in the transition from exploration to exploitation. PMID:25411455

  7. Driving Errors in Parkinson’s Disease: Moving Closer to Predicting On-Road Outcomes

    PubMed Central

    Brumback, Babette; Monahan, Miriam; Malaty, Irene I.; Rodriguez, Ramon L.; Okun, Michael S.; McFarland, Nikolaus R.

    2014-01-01

    Age-related medical conditions such as Parkinson’s disease (PD) compromise driver fitness. Results from previous studies are unclear on the specific driving errors that underlie passing or failing an on-road assessment. In this study, we determined the between-group differences and quantified the on-road driving errors that predicted pass or fail on-road outcomes in 101 drivers with PD (mean age = 69.38 ± 7.43) and 138 healthy control (HC) drivers (mean age = 71.76 ± 5.08). Participants with PD had minor differences in demographics and driving habits and history, but made more and different driving errors than HC participants. Drivers with PD failed the on-road test to a greater extent than HC drivers (41% vs. 9%), χ2(1) = 35.54, HC N = 138, PD N = 99, p < .001. The driving errors predicting on-road pass or fail outcomes (95% confidence interval, Nagelkerke R2 = .771) were made in visual scanning, signaling, vehicle positioning, speeding (mainly underspeeding, t(61) = 7.004, p < .001), and total errors. Although it is difficult to predict on-road outcomes, this study provides a foundation for doing so. PMID:24367958

  8. A New Local Modelling Approach Based on Predicted Errors for Near-Infrared Spectral Analysis

    PubMed Central

    Chang, Haitao; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu

    2016-01-01

    Over the last decade, near-infrared spectroscopy, together with chemometric models, has been widely employed as an analytical tool in several industries. However, most chemical processes or analytes are multivariate and nonlinear in nature. To address this problem, a local errors regression method is presented in this paper to build an accurate calibration model, in which a calibration subset is selected by a new similarity criterion that takes into account the full information of the spectra, the chemical property, and the predicted errors. After selection of the calibration subset, partial least squares regression is applied to build the calibration model. The performance of the proposed method is demonstrated on a near-infrared spectroscopy dataset of pharmaceutical tablets. Compared with other local strategies using different similarity criteria, the proposed local errors regression yields a significant improvement in both prediction ability and calculation speed. PMID:27446631
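    The core local-modelling idea above (select a calibration subset similar to the query, then fit a model on it) can be sketched in a heavily simplified form. This sketch keeps only the neighbour-selection step and substitutes a distance-weighted average for the paper's local PLS model and its richer similarity criterion; all data are toy values:

```python
def local_predict(query, cal_spectra, cal_values, k=3):
    # Pick the k calibration spectra closest to the query (Euclidean
    # distance) and return a distance-weighted average of their property
    # values. The paper instead fits a local PLS model on the subset and
    # uses a similarity criterion that also includes the chemical property
    # and predicted errors.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    neighbours = sorted(zip(cal_spectra, cal_values),
                        key=lambda sv: dist(query, sv[0]))[:k]
    weights = [1.0 / (dist(query, s) + 1e-9) for s, _ in neighbours]
    return sum(w * v for w, (_, v) in zip(weights, neighbours)) / sum(weights)

# Toy "spectra" (two wavelengths) whose property tracks overall intensity.
estimate = local_predict([0.1, 0.1],
                         [[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]],
                         [0.0, 1.0, 10.0], k=2)
```

The point of the local strategy is visible even here: the distant calibration sample contributes nothing, so a nonlinear global relationship is approximated piecewise by nearby samples.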

  9. Prediction Error Demarcates the Transition from Retrieval, to Reconsolidation, to New Learning

    ERIC Educational Resources Information Center

    Sevenster, Dieuwke; Beckers, Tom; Kindt, Merel

    2014-01-01

    Although disrupting reconsolidation is promising in targeting emotional memories, the conditions under which memory becomes labile are still unclear. The current study showed that post-retrieval changes in expectancy as an index for prediction error may serve as a read-out for the underlying processes engaged by memory reactivation. Minor…

  10. Toward a reliable decomposition of predictive uncertainty in hydrological modeling: Characterizing rainfall errors using conditional simulation

    NASA Astrophysics Data System (ADS)

    Renard, Benjamin; Kavetski, Dmitri; Leblois, Etienne; Thyer, Mark; Kuczera, George; Franks, Stewart W.

    2011-11-01

    This study explores the decomposition of predictive uncertainty in hydrological modeling into its contributing sources. This is pursued by developing data-based probability models describing uncertainties in rainfall and runoff data and incorporating them into the Bayesian total error analysis methodology (BATEA). A case study based on the Yzeron catchment (France) and the conceptual rainfall-runoff model GR4J is presented. It exploits a calibration period where dense rain gauge data are available to characterize the uncertainty in the catchment average rainfall using geostatistical conditional simulation. The inclusion of information about rainfall and runoff data uncertainties overcomes ill-posedness problems and enables simultaneous estimation of forcing and structural errors as part of the Bayesian inference. This yields more reliable predictions than approaches that ignore or lump different sources of uncertainty in a simplistic way (e.g., standard least squares). It is shown that independently derived data quality estimates are needed to decompose the total uncertainty in the runoff predictions into the individual contributions of rainfall, runoff, and structural errors. In this case study, the total predictive uncertainty appears dominated by structural errors. Although further research is needed to interpret and verify this decomposition, it can provide strategic guidance for investments in environmental data collection and/or modeling improvement. More generally, this study demonstrates the power of the Bayesian paradigm to improve the reliability of environmental modeling using independent estimates of sampling and instrumental data uncertainties.

  11. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  12. Practical guidance on representing the heteroscedasticity of residual errors of hydrological predictions

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George

    2016-04-01

    Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches across a range of catchments, hydrological models and performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
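
The transformational approaches named above can be sketched directly. The forms below follow the standard definitions (Box-Cox with lambda = 0 reducing to the log transform; the log-sinh transform of Wang et al.); the parameter values are illustrative, not fitted.

```python
import numpy as np

# The model is fitted to z = g(q) so that residuals in z-space are
# closer to homoscedastic than residuals in runoff (q) space.

def box_cox(q, lam):
    """Box-Cox transform; lam = 0 reduces to the log transform."""
    q = np.asarray(q, dtype=float)
    if lam == 0:
        return np.log(q)
    return (q**lam - 1.0) / lam

def log_sinh(q, a, b):
    """log-sinh transform (illustrative parameters a, b)."""
    return np.log(np.sinh(a + b * np.asarray(q, dtype=float))) / b

def inverse_tf(q):
    """Inverse transform, which emphasises low flows."""
    return 1.0 / np.asarray(q, dtype=float)

q = np.array([0.5, 1.0, 10.0, 100.0])   # runoff values
print(box_cox(q, 0.0))                  # identical to np.log(q)
print(box_cox(q, 0.2))
print(log_sinh(q, 0.1, 0.01))
print(inverse_tf(q))
```

In practice the choice among these (and whether to fit lambda, a, b jointly with the hydrological parameters) is exactly what the study evaluates.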

  13. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    SciTech Connect

    Voisin, Sophie; Pinto, Frank M; Morin-Ducote, Garnetta; Hudson, Kathy; Tourassi, Georgia

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC = 0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists' gaze behavior and image content.

  14. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    SciTech Connect

    Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank; Morin-Ducote, Garnetta; Hudson, Kathleen B.

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists' gaze behavior and image content.

  15. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between
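
The core analysis in this record is a Pearson correlation between passing rates and dose-metric errors. A minimal recreation of that computation, using synthetic stand-in numbers (not the study's data), illustrates why statistically independent quantities yield the weak correlations reported:

```python
import numpy as np

# Synthetic stand-ins: 96 induced-error data sets, as in the study,
# but with passing rates and dose errors drawn independently.
rng = np.random.default_rng(1)
passing_rate = rng.uniform(88.0, 100.0, size=96)   # % at 3%/3 mm
dose_error = rng.normal(0.0, 2.0, size=96)         # e.g. CTV D95 error, %

# Pearson's r between the QA metric and the clinical dose error
r = np.corrcoef(passing_rate, dose_error)[0, 1]
print(r)   # small |r|: the QA metric carries little predictive power
```

A high passing rate paired with a large dose error in such a scatter is exactly the "false negative" the authors describe.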

  16. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  17. The influence of actual and apparent geoid error on ocean analysis and prediction

    NASA Technical Reports Server (NTRS)

    Thompson, J. D.

    1984-01-01

    The radar altimeter is the only satellite remote sensor with a proven capability for synoptically measuring an integral property of the dynamic ocean on a near global, all weather basis. Because acquisition of global, in situ ocean data with space/time resolution adequate to describe dynamically important ocean features is practically impossible, any attempt to develop a global ocean monitoring and forecasting system will rely heavily on altimetric data for initialization and updating. Maximizing useful information from the altimeter while minimizing error sources and developing methods for assimilating altimeter data into dynamical models are, therefore, vital areas for research and development. The limits imposed on ocean prediction by errors in the geoid or apparent errors associated with ground track variations near strong geopotential gradients are examined.

  18. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    NASA Astrophysics Data System (ADS)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD  =  1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
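
The training setup described above can be sketched with a toy stand-in: plan-derived leaf features predicting delivered-minus-planned offsets. A linear least-squares model replaces the paper's machine learning techniques, and the feature names, units, and the synthetic "DynaLog minus DICOM-RT" offsets are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
position = rng.uniform(-10.0, 10.0, n)   # leaf position, cm (assumed)
velocity = rng.uniform(0.0, 2.5, n)      # leaf speed, cm/s (assumed)
toward_iso = rng.integers(0, 2, n)       # 1 if moving toward isocenter

# Synthetic delivered-minus-planned offsets: lag grows with leaf speed
offset = 0.04 * velocity - 0.01 * toward_iso + rng.normal(0, 0.005, n)

# Linear model as a stand-in for the paper's trained predictor
X = np.column_stack([position, velocity, toward_iso, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, offset, rcond=None)
pred = X @ coef

rmse_model = np.sqrt(np.mean((offset - pred) ** 2))
rmse_plan = np.sqrt(np.mean(offset ** 2))   # error if we trust the plan
print(rmse_model, rmse_plan)   # predicted positions beat planned ones
```

This mirrors the paper's key finding at a toy scale: predicted positions are systematically closer to delivered positions than planned positions are.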

  19. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors.

    PubMed

    Carlson, Joel N K; Park, Jong Min; Park, So-Yeon; Park, Jong In; Choi, Yunseok; Ye, Sung-Joon

    2016-03-21

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD  =  1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be

  20. Adaptive prediction of human eye pupil position and effects on wavefront errors

    NASA Astrophysics Data System (ADS)

    Garcia-Rissmann, Aurea; Kulcsár, Caroline; Raynaud, Henri-François; El Mrabet, Yamina; Sahin, Betul; Lamory, Barbara

    2011-03-01

    The effects of pupil motion on retinal imaging are studied in this paper. Involuntary eye or head movements are always present in the imaging procedure, decreasing the output quality and preventing more detailed diagnostics. When the image acquisition is performed using an adaptive optics (AO) system, substantial gain is foreseen if pupil motion is accounted for. This can be achieved using a pupil tracker such as the one developed by Imagine Eyes®, which provides pupil position measurements at an 80 Hz sampling rate. In any AO loop, there is inevitably a delay between the wavefront measurement and the correction applied to the deformable mirror, meaning that an optimal compensation requires prediction. We investigate several ways of predicting pupil movement, either by retaining the last value given by the pupil tracker, which is close to the optimal solution in the case of a pure random walk, or by performing position prediction with auto-regressive (AR) models whose parameters are updated in real time. We show that a small improvement in prediction with respect to predicting with the latest measured value is obtained through adaptive AR modeling. We evaluate the wavefront errors obtained by computing the root mean square of the difference between a wavefront displaced by the assumed true position and the predicted one, as seen by the imaging system. The results confirm that pupil movements have to be compensated in order to minimize wavefront errors.
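
The comparison in this record, last-value prediction versus AR prediction, can be sketched as follows. The pupil trajectory, noise level, and batch least-squares AR(2) fit are illustrative assumptions (the paper updates the AR parameters in real time).

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500) / 80.0                       # 80 Hz sampling
# Synthetic pupil position: slow oscillation plus measurement noise
pos = 0.3 * np.sin(2 * np.pi * 1.5 * t) + 0.005 * rng.normal(size=t.size)

# AR(2) model: pos[k] ~ a1*pos[k-1] + a2*pos[k-2], fitted by least squares
A = np.column_stack([pos[1:-1], pos[:-2]])
a, *_ = np.linalg.lstsq(A, pos[2:], rcond=None)

ar_pred = A @ a                  # one-step-ahead AR prediction
last_value_pred = pos[1:-1]      # simply hold the last sample

rmse_ar = np.sqrt(np.mean((pos[2:] - ar_pred) ** 2))
rmse_last = np.sqrt(np.mean((pos[2:] - last_value_pred) ** 2))
print(rmse_ar, rmse_last)        # AR improves on the last-value predictor
```

For drift-like (random-walk) motion the two predictors converge, which is why the paper reports only a small improvement from AR modeling.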

  1. Influence of Precision of Emission Characteristic Parameters on Model Prediction Error of VOCs/Formaldehyde from Dry Building Material

    PubMed Central

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C. PMID:24312497

  2. Dosimetric impact of geometric errors due to respiratory motion prediction on dynamic multileaf collimator-based four-dimensional radiation delivery

    SciTech Connect

    Vedam, S.; Docef, A.; Fix, M.; Murphy, M.; Keall, P.

    2005-06-15

    The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV energies to provide planned dose distributions. Respiratory motion data was obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between predicted and actual positions at each diaphragm position. Distributions of geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with distributions for the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose difference and distance-to-agreement analysis was employed to quantify results. Based on our data, the
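
The central operation in this record, convolving a planned dose distribution with the distribution of geometric prediction errors, can be sketched in one dimension. The flat-field profile, Gaussian error distribution, and sigma value below are illustrative assumptions, not the study's data.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 501)               # position, cm (0.02 cm grid)
dose = ((x > -2.0) & (x < 2.0)).astype(float) # idealized planned profile

# Assumed geometric-error distribution for some finite response time
sigma = 0.3                                    # cm
kern = np.exp(-0.5 * (x / sigma) ** 2)
kern /= kern.sum()                             # normalize to unit area

# Convolved ("delivered") dose: field edges blur, total dose is preserved
blurred = np.convolve(dose, kern, mode="same")
print(blurred.max(), dose.max())               # peak is never raised
```

Larger response times widen the error distribution, which broadens the penumbra; this is the mechanism behind the dose-difference and distance-to-agreement results the authors quantify.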

  3. Investigation and prediction of image placement errors in extreme ultraviolet lithography masks

    NASA Astrophysics Data System (ADS)

    Zheng, Liang

    2010-11-01

    According to the latest ITRS Roadmap, extreme ultraviolet lithography (EUVL) is expected to be one of the principal carriers for IC production at sub-45 nm technology nodes. One of the most challenging tasks in realizing EUVL is the fabrication of the EUVL mask, in which the most important issue is the control of image placement errors. In this paper, the EUVL mask fabrication process was analyzed, and image placement errors due to the fabrication process were investigated and predicted. A theoretical analysis was conducted to analytically benchmark the EUVL mask fabrication process. A line-and-space pattern (with pattern coverage of 50%) was employed in the theoretical analysis as an example. The theoretical deduction revealed that this 50% coverage pattern produces the same global response as a uniformly stressed thin film with half of the stress-thickness product of the patterned lines. Finite element (FE) models were established to simulate the EUVL mask fabrication process. In FE simulations, a new equivalent modeling technique was developed to predict the global distortions of the mask and the local distortions of the pattern features. Results indicate that for the EUVL mask with this line-and-space pattern (50% pattern coverage), the maximum image placement error is only about 10 nm, which is largely due to the application of a flat electrostatic chuck in both e-beam mounting and exposure chucking. Nonuniformities of either the mask or the electrostatic chuck will add to the final image placement errors of the EUVL mask.

  4. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
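
The iterative two-stage idea, calibrate, estimate the total-error covariance from the residuals, then recalibrate using that covariance, can be sketched with an AR(1) error structure. The toy linear-trend "model", the true autocorrelation of 0.8, and the use of prewhitening (an equivalent way to apply the AR(1) covariance in generalized least squares) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
t = np.arange(n, dtype=float)

# Truth: linear trend plus AR(1)-correlated total error (model + measurement)
e = np.zeros(n)
for k in range(1, n):
    e[k] = 0.8 * e[k - 1] + rng.normal(0.0, 0.5)
y = 1.0 + 0.02 * t + e

X = np.column_stack([np.ones(n), t])
beta = np.linalg.lstsq(X, y, rcond=None)[0]     # stage 1: ordinary LS
for _ in range(5):                               # iterate the two stages
    r = y - X @ beta
    rho = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)   # AR(1) from residuals
    # Stage 2: GLS via prewhitening; (y_k - rho*y_{k-1}) has uncorrelated errors
    yw = y[1:] - rho * y[:-1]
    Xw = X[1:] - rho * X[:-1]
    beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]

print(rho, beta[1])   # recovers the error autocorrelation and the trend slope
```

With the correlation accounted for, likelihood-based criteria stop over-rewarding the best-fitting model, which is the mechanism behind the resolved 100%-weight problem.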

  5. Effects of modeling errors on trajectory predictions in air traffic control automation

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda

    1996-01-01

    Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performances and weather conditions. In contrast, actual trajectories are flown in real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed trajectories and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors by developing a linear time varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.

  6. Reduced Striatal Responses to Reward Prediction Errors in Older Compared with Younger Adults

    PubMed Central

    Schuck, Nicolas W.; Nystrom, Leigh E.; Cohen, Jonathan D.

    2013-01-01

    We examined whether older adults differ from younger adults in how they learn from rewarding and aversive outcomes. Human participants were asked to either learn to choose actions that lead to monetary reward or learn to avoid actions that lead to monetary losses. To examine age differences in the neurophysiological mechanisms of learning, we applied a combination of computational modeling and fMRI. Behavioral results showed age-related impairments in learning from reward but not in learning from monetary losses. Consistent with these results, we observed age-related reductions in BOLD activity during learning from reward in the ventromedial PFC. Furthermore, the model-based fMRI analysis revealed a reduced responsivity of the ventral striatum to reward prediction errors during learning in older than younger adults. This age-related reduction in striatal sensitivity to reward prediction errors may result from a decline in phasic dopaminergic learning signals in the elderly. PMID:23761885
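
The striatal reward prediction error modeled in studies like this one is the temporal-difference (TD) error, delta = r + gamma*V(s') - V(s). A minimal sketch (learning rate, discount factor, and the single cue-reward contingency are illustrative) shows how the signal starts large and decays as the value estimate converges:

```python
# Minimal TD-learning sketch of the reward prediction error (RPE)
gamma = 0.9     # discount factor
alpha = 0.1     # learning rate
V = {"cue": 0.0, "end": 0.0}   # value estimates per state

rpes = []
for trial in range(100):
    # The cue state is reliably followed by a terminal reward of 1
    delta = 1.0 + gamma * V["end"] - V["cue"]   # reward prediction error
    V["cue"] += alpha * delta                   # value update
    rpes.append(delta)

print(rpes[0], rpes[-1])   # RPE starts at 1.0 and decays toward 0
```

In the study's model-based fMRI analysis, it is the trial-by-trial delta series that is regressed against striatal BOLD activity; a reduced striatal response corresponds to a scaled-down neural correlate of this signal, not a change in the computation itself.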

  7. Integrating a calibrated groundwater flow model with error-correcting data-driven models to improve predictions

    NASA Astrophysics Data System (ADS)

    Demissie, Yonas K.; Valocchi, Albert J.; Minsker, Barbara S.; Bailey, Barbara A.

    2009-01-01

    Physically-based groundwater models (PBMs), such as MODFLOW, contain numerous parameters which are usually estimated using statistically-based methods, which assume that the underlying error is white noise. However, because of the practical difficulties of representing all the natural subsurface complexity, numerical simulations are often prone to large uncertainties that can result in both random and systematic model error. The systematic errors can be attributed to conceptual, parameter, and measurement uncertainty, and most often it can be difficult to determine their physical cause. In this paper, we have developed a framework to handle systematic error in physically-based groundwater flow model applications that uses error-correcting data-driven models (DDMs) in a complementary fashion. The data-driven models are separately developed to predict the MODFLOW head prediction errors, which were subsequently used to update the head predictions at existing and proposed observation wells. The framework is evaluated using a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study includes structural, parameter, and measurement uncertainties. In terms of bias and prediction uncertainty range, the complementary modeling framework has shown substantial improvements (up to 64% reduction in RMSE and prediction error ranges) over the original MODFLOW model, in both the calibration and the verification periods. Moreover, the spatial and temporal correlations of the prediction errors are significantly reduced, thus resulting in reduced local biases and structures in the model prediction errors.
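
The complementary framework, train a data-driven model on the physical model's prediction errors and add its output back as a correction, can be sketched at toy scale. The biased linear "physical model", the recharge predictor, and the linear error model below are all illustrative stand-ins for MODFLOW and the paper's DDMs.

```python
import numpy as np

rng = np.random.default_rng(0)
recharge = rng.uniform(0.0, 1.0, 200)                  # auxiliary predictor
true_head = 10.0 + 2.0 * recharge + rng.normal(0, 0.05, 200)
pbm_head = 10.0 + 1.4 * recharge                       # systematically biased PBM

# Data-driven error model, trained on the calibration period (first half)
err = true_head[:100] - pbm_head[:100]
A = np.column_stack([recharge[:100], np.ones(100)])
coef, *_ = np.linalg.lstsq(A, err, rcond=None)

# Verification period: corrected prediction = PBM + predicted error
A_v = np.column_stack([recharge[100:], np.ones(100)])
corrected = pbm_head[100:] + A_v @ coef

rmse_pbm = np.sqrt(np.mean((true_head[100:] - pbm_head[100:]) ** 2))
rmse_cor = np.sqrt(np.mean((true_head[100:] - corrected) ** 2))
print(rmse_pbm, rmse_cor)   # the correction removes the systematic bias
```

The split into calibration and verification halves mirrors the paper's evaluation: the gain must hold on data the error model never saw.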

  8. Irregular variations of Sea Level Anomaly data and their impact on prediction errors of these data

    NASA Astrophysics Data System (ADS)

    Zbylut-Górska, Maria; Kosek, Wieslaw; Niedzielski, Tomasz; Popiński, Waldemar; Wnęk, Agnieszka

    2015-04-01

    The movement of water around the oceans, driven by density differences and wind, plays a significant role in the variation of sea surface heights, which is now observed by satellite altimetry. Weekly SLA data, provided courtesy of the AVISO (Archiving, Validation and Interpretation of Satellite Oceanographic data) service, were analyzed to detect their irregular variations using two time-frequency methods: the Fourier Transform Band Pass Filter with Hilbert Transform (FTBPF+HT) and Complex Demodulation with the Fourier Transform Low Pass Filter (CD+FTLPF). Using these two methods it is possible to compute time-variable amplitudes and phases of oscillations as a function of geographic location. Global ocean maps of the standard deviations of amplitude differences, and of products of amplitude and phase differences, for the annual oscillation and shorter-period oscillations with frequencies that are integer multiples of the annual frequency, were computed to show the ocean areas with the greatest irregular variations. Such irregular amplitude and phase variations of the oscillations are the main causes of the SLA prediction errors. The predictions of the SLA time series were computed by a combination of the polynomial-harmonic model with autoregressive prediction in the frame of the PROGNOCEAN prediction service at the University of Wroclaw. The maps of these standard deviations are very similar to the maps of the mean prediction errors for the SLA data two weeks into the future. Thus, it is possible that the broadband annual oscillation is the main cause of the increase of the SLA data prediction errors.
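
The deterministic part of the forecast described above, a polynomial-harmonic model fitted to weekly SLA samples and extrapolated, can be sketched as follows. The synthetic series, the annual and semi-annual terms, and the 20-week horizon are illustrative assumptions (the PROGNOCEAN service additionally predicts the residuals with an autoregressive model).

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 520) / 52.0                     # weekly samples, in years
# Synthetic SLA: trend + annual + semi-annual oscillation + noise (m)
sla = (0.002 * t + 0.06 * np.sin(2 * np.pi * t)
       + 0.02 * np.sin(4 * np.pi * t + 0.5) + rng.normal(0, 0.01, t.size))

def design(tt):
    """Polynomial-harmonic basis: linear trend + annual + semi-annual."""
    return np.column_stack([
        np.ones_like(tt), tt,
        np.sin(2 * np.pi * tt), np.cos(2 * np.pi * tt),   # annual
        np.sin(4 * np.pi * tt), np.cos(4 * np.pi * tt),   # semi-annual
    ])

n_fit = 500                                       # fit, then extrapolate
coef, *_ = np.linalg.lstsq(design(t[:n_fit]), sla[:n_fit], rcond=None)
forecast = design(t[n_fit:]) @ coef               # 20-week extrapolation

rmse = np.sqrt(np.mean((sla[n_fit:] - forecast) ** 2))
print(rmse)   # near the noise level for this regular synthetic series
```

When the annual amplitude or phase drifts irregularly, as the record describes, this fixed-coefficient extrapolation degrades, which is exactly how irregular variations map into prediction error.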

  9. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This targeted review is placed within the context of a broader review of computational models of pilot cognition and performance, including aspects such as models of situation awareness and pilot-automation interaction. Particular emphasis is placed on the degree to which such models have been validated against empirical pilot data, and on the relevance of the modeling and validation efforts to NextGen technology and procedures.

  10. Updating memories--the role of prediction errors in memory reconsolidation.

    PubMed

    Exton-McGuinness, Marc T J; Lee, Jonathan L C; Reichelt, Amy C

    2015-02-01

    Memories are not static imprints of past experience, but rather are dynamic entities which enable us to predict outcomes of future situations and inform appropriate behaviours. In order to maintain the relevance of existing memories to our daily lives, memories can be updated with new information via a process of reconsolidation. In this review we describe recent experimental advances in the reconsolidation of both appetitive and aversive memory, and explore the neuronal mechanisms that underpin the conditions under which reconsolidation will occur. We propose that a prediction error signal, originating from dopaminergic midbrain neurons, is necessary for destabilisation and subsequent reconsolidation of a memory. PMID:25453746

  11. Belief about nicotine selectively modulates value and reward prediction error signals in smokers.

    PubMed

    Gu, Xiaosi; Lohrenz, Terry; Salas, Ramiro; Baldwin, Philip R; Soltani, Alireza; Kirk, Ulrich; Cinciripini, Paul M; Montague, P Read

    2015-02-24

    Little is known about how prior beliefs impact biophysically described processes in the presence of neuroactive drugs, which presents a profound challenge to the understanding of the mechanisms and treatments of addiction. We engineered smokers' prior beliefs about the presence of nicotine in a cigarette smoked before a functional magnetic resonance imaging session where subjects carried out a sequential choice task. Using a model-based approach, we show that smokers' beliefs about nicotine specifically modulated learning signals (value and reward prediction error) defined by a computational model of mesolimbic dopamine systems. Belief of "no nicotine in cigarette" (compared with "nicotine in cigarette") strongly diminished neural responses in the striatum to value and reward prediction errors and reduced the impact of both on smokers' choices. These effects of belief could not be explained by global changes in visual attention and were specific to value and reward prediction errors. Thus, by modulating the expression of computationally explicit signals important for valuation and choice, beliefs can override the physical presence of a potent neuroactive compound like nicotine. These selective effects of belief demonstrate that belief can modulate model-based parameters important for learning. The implications of these findings may be far ranging because belief-dependent effects on learning signals could impact a host of other behaviors in addiction as well as in other mental health problems. PMID:25605923

  12. Testosterone and reward prediction-errors in healthy men and men with schizophrenia.

    PubMed

    Morris, R W; Purves-Tyson, T D; Weickert, C Shannon; Rothmond, D; Lenroot, R; Weickert, T W

    2015-11-01

    Sex hormones impact reward processing, which is dysfunctional in schizophrenia; however, the degree to which testosterone levels relate to reward-related brain activity in healthy men and the extent to which this relationship may be altered in men with schizophrenia has not been determined. We used functional magnetic resonance imaging (fMRI) to measure neural responses in the striatum during reward prediction-errors and hormone assays to measure testosterone and prolactin in serum. To determine if testosterone can have a direct effect on dopamine neurons, we also localized and measured androgen receptors in human midbrain with immunohistochemistry and quantitative PCR. We found correlations between testosterone and prediction-error related activity in the ventral striatum of healthy men, but not in men with schizophrenia, such that testosterone increased the size of positive and negative prediction-error related activity in a valence-specific manner. We also identified midbrain dopamine neurons that were androgen receptor immunoreactive, and found that androgen receptor (AR) mRNA was positively correlated with tyrosine hydroxylase (TH) mRNA in human male substantia nigra. The results suggest that sex steroid receptors can potentially influence midbrain dopamine biosynthesis, and higher levels of serum testosterone are linked to better discrimination of motivationally-relevant signals in the ventral striatum, putatively by modulation of the dopamine biosynthesis pathway via AR ligand binding. However, the normal relationship between serum testosterone and ventral striatum activity during reward learning appears to be disrupted in schizophrenia. PMID:26232868

  13. Large area aggregation and mean-squared prediction error estimation for LACIE yield and production forecasts. [wheat

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Feiveson, A. H. (Principal Investigator)

    1979-01-01

    Aggregation formulas are given for estimating production of a crop type for a zone, a region, and a country, and methods for estimating yield prediction errors at the three levels are described. A procedure is included for obtaining a combined yield prediction and its mean-squared error estimate for a mixed wheat pseudozone.
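A minimal sketch of this kind of zone-to-region aggregation, assuming independent zone-level yield errors and purely illustrative numbers (the actual LACIE formulas are more involved):

```python
import numpy as np

# Hypothetical zone-level yield predictions (t/ha), their prediction-error
# variances, and harvested areas (ha); all numbers are illustrative.
yields = np.array([2.1, 1.8, 2.5])
var_yield = np.array([0.04, 0.09, 0.05])
area = np.array([1.0e5, 2.0e5, 1.5e5])

# Zone -> region aggregation: production is the area-weighted sum of yields.
production = float(np.sum(area * yields))

# Assuming independent zone errors, the MSE of the aggregate adds in quadrature.
mse_production = float(np.sum(area ** 2 * var_yield))
se_production = mse_production ** 0.5

print(production, se_production)
```

Correlated zone errors would add covariance terms to `mse_production`; the independence assumption here is only for brevity.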

  14. Early adversity disrupts the adult use of aversive prediction errors to reduce fear in uncertainty

    PubMed Central

    Wright, Kristina M.; DiLeo, Alyssa; McDannald, Michael A.

    2015-01-01

    Early life adversity increases anxiety in adult rodents and primates, and increases the risk of developing post-traumatic stress disorder (PTSD) in humans. We hypothesized that early adversity impairs the use of learning signals (negative, aversive prediction errors) to reduce fear in uncertainty. To test this hypothesis, we gave adolescent rats a battery of adverse experiences and then assessed adult performance in probabilistic Pavlovian fear conditioning and fear extinction. Rats were confronted with three cues associated with different probabilities of foot shock: one cue never predicted shock, another predicted shock with uncertainty, and a final cue always predicted shock. Control rats initially acquired fear to all cues, but rapidly reduced fear to the non-predictive and uncertain cues. Early adversity rats were slower to reduce fear to the non-predictive cue and never fully reduced fear to the uncertain cue. In extinction, all cues were presented in the absence of shock. Fear to the uncertain cue in discrimination, but not early adversity itself, predicted the reduction of fear in extinction. These results demonstrate that early adversity impairs the use of negative aversive prediction errors to reduce fear, especially in situations of uncertainty. PMID:26379520

  15. Effects of rapid eye movement sleep deprivation on fear extinction recall and prediction error signaling.

    PubMed

    Spoormaker, Victor I; Schröter, Manuel S; Andrade, Kátia C; Dresler, Martin; Kiem, Sara A; Goya-Maldonado, Roberto; Wetter, Thomas C; Holsboer, Florian; Sämann, Philipp G; Czisch, Michael

    2012-10-01

    In temporal difference learning accounts of classical conditioning, a theoretical error signal shifts from the time of outcome delivery to the onset of the conditioned stimulus. Omission of an expected outcome results in a negative prediction error signal, which is the initial step towards successful extinction and may therefore be relevant for fear extinction recall. As studies in rodents have observed a bidirectional relationship between fear extinction and rapid eye movement (REM) sleep, we aimed to test the hypothesis that REM sleep deprivation impairs recall of fear extinction through altered prediction error signaling in humans. In a three-day design with polysomnographically controlled REM sleep deprivation, 18 young, healthy subjects performed fear conditioning, extinction, and recall of extinction tasks with visual stimuli and mild electrical shocks during combined functional magnetic resonance imaging (fMRI) and skin conductance response (SCR) measurements. Compared to the control group, the REM sleep deprivation group had increased SCR scores to a previously extinguished stimulus in early recall-of-extinction trials, which was associated with an altered fMRI time-course in the left middle temporal gyrus. Post-hoc contrasts corrected for measures of NREM sleep variability also revealed between-group differences primarily in the temporal lobe. Our results demonstrate altered prediction error signaling during recall of fear extinction after REM sleep deprivation, which may further our understanding of anxiety disorders in which disturbed sleep and impaired fear extinction learning coincide. Moreover, our findings are indicative of REM sleep related plasticity in regions that also show increased activity during REM sleep. PMID:21826762
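The temporal-difference behavior summarized in the first two sentences, the error signal migrating from outcome delivery to stimulus onset, and turning negative on omission, can be reproduced with a small TD(0) simulation. This toy setup (a 5-step trial, learning rate 0.3) is an assumption for illustration, not the study's task.

```python
import numpy as np

# Minimal TD(0) sketch of classical conditioning: time steps 0..4 span one
# trial, with the conditioned stimulus at t=0 and the outcome at t=4.
T, alpha, gamma = 5, 0.3, 1.0
V = np.zeros(T + 1)          # V[T] is the terminal (post-outcome) state value

def run_trial(V, outcome_delivered=True):
    """One trial; returns the prediction error delta at each time step."""
    deltas = np.zeros(T)
    for t in range(T):
        r = 1.0 if (outcome_delivered and t == T - 1) else 0.0
        deltas[t] = r + gamma * V[t + 1] - V[t]
        V[t] += alpha * deltas[t]
    return deltas

first = run_trial(V)                     # error sits at outcome time
for _ in range(200):
    run_trial(V)                         # training
late = run_trial(V)                      # error at outcome has vanished...
omission = run_trial(V, outcome_delivered=False)  # ...and omission drives it negative

# V[0], the learned value at CS onset, is the signal an unexpected CS
# presentation would now trigger.
print(first[-1], late[-1], V[0], omission[-1])
```

On the first trial the entire error (1.0) occurs at outcome time; after training it is near zero there while the CS-onset value approaches 1, and omitting the outcome produces a large negative prediction error, the signal hypothesized to drive extinction.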

  16. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  17. Putting reward in art: A tentative prediction error account of visual art

    PubMed Central

    Van de Cruys, Sander; Wagemans, Johan

    2011-01-01

    The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations. PMID:23145260

  18. Central difference predictive filter for attitude determination with low precision sensors and model errors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Chen, Xiaoqian; Misra, Arun K.

    2014-12-01

    Attitude determination is one of the key technologies of the Attitude Determination and Control System (ADCS) of a satellite. However, serious model errors may exist that affect the estimation accuracy of the ADCS, especially for a small satellite with low precision sensors. In this paper, a central difference predictive filter (CDPF) is proposed for attitude determination of small satellites with model errors and low precision sensors. The new filter extends the traditional predictive filter (PF) by introducing Stirling's polynomial interpolation formula. It is shown that the proposed filter estimates the system states with higher accuracy than the traditional PF. Since the unscented Kalman filter (UKF) has also been used in the ADCS of small satellites with low precision sensors, the UKF is employed alongside the traditional PF to evaluate the performance of the proposed filter. Numerical simulations show that the proposed CDPF is more effective and robust in dealing with model errors and low precision sensors than either the UKF or the traditional PF.

  19. Error prediction of LiF-TLD used for gamma dose measurement for BNCT.

    PubMed

    Liu, H M; Liu, Y H

    2011-12-01

    To predict the influence of neutrons on LiF-TLDs of various ⁶LiF concentrations, the Monte Carlo code MCNP was adopted to simulate the energy deposition in a TLD chip with dimensions of 3.2×3.2×0.9 mm. Assuming that the TL response is proportional to the energy deposited, the percentage error of a LiF-TLD used for gamma dose measurement in mixed (n, γ) fields can be written as %Error = (Rn/Rγ)×100%, where Rn and Rγ are the TL responses resulting from neutrons and gammas, respectively. Taking as an example the water phantom irradiated with the BNCT facility at the Tsing Hua Open-pool Reactor (THOR), where the ⁶LiF concentration of TLD-700 is 0.007%, the neutron flux is ~1×10⁹ n/cm²/s, the neutron energy is ~4×10⁻⁷ MeV (the cadmium cut-off energy), and the gamma dose rate is ~3 Gy/h, the percentage error is predicted to be 38%. PMID:21489808

  20. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Seoane, Fernando; Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar; Ward, Leigh C

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopy (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at population level was close to that of both BIS methods; however, comparing the Mean Absolute Percentage Error between the single-frequency prediction equations and the BIS methods yielded a significant difference, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in accuracy is suggested to be insufficient to warrant clinical use of BIS where the most accurate predictions of TBW are required, for example when assessing fluid overload in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489

  2. Prior-predictive value from fast-growth simulations: Error analysis and bias estimation

    NASA Astrophysics Data System (ADS)

    Favaro, Alberto; Nickelsen, Daniel; Barykina, Elena; Engel, Andreas

    2015-01-01

    Variants of fluctuation theorems recently discovered in the statistical mechanics of nonequilibrium processes may be used for the efficient determination of high-dimensional integrals as typically occurring in Bayesian data analysis. In particular for multimodal distributions, Monte Carlo procedures not relying on perfect equilibration are advantageous. We provide a comprehensive statistical error analysis for the determination of the prior-predictive value (the evidence) in a Bayes problem, building on a variant of the Jarzynski equation. Special care is devoted to the characterization of the bias intrinsic to the method and statistical errors arising from exponential averages. We also discuss the determination of averages over multimodal posterior distributions with the help of a consequence of the Crooks relation. All our findings are verified by extensive numerical simulations of two model systems with bimodal likelihoods.
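The quantity at stake, the prior-predictive value (evidence) Z = E_prior[L(θ)], and the Jensen-type bias of logarithms of sample averages can be illustrated with a toy Monte Carlo example. This is an assumed 1-D Gaussian model estimated by simple prior sampling, not the paper's fast-growth (Jarzynski-type) method; it only shows what is being estimated and why log-averages are biased.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed model: prior N(0,1) over theta, Gaussian likelihood N(x=1 | theta, 1).
# The evidence Z has the closed form N(x; 0, sqrt(2)), so the estimate is checkable.
x = 1.0
Z_exact = np.exp(-x ** 2 / 4) / np.sqrt(4 * np.pi)

theta = rng.normal(0.0, 1.0, size=100_000)          # draws from the prior
L = np.exp(-(x - theta) ** 2 / 2) / np.sqrt(2 * np.pi)
Z_hat = float(L.mean())

# Jensen's inequality: E[log Z_hat] <= log Z, so the log of such an estimator
# is biased low -- the kind of bias the paper characterizes for exponential
# averages over fast-growth work values.
print(Z_exact, Z_hat)
```

In the fast-growth setting, `L` would be replaced by exp(-W) over nonequilibrium trajectories, where the heavy-tailed distribution of W makes both the variance and the bias far more severe than in this benign Gaussian case.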

  3. Reassessing Domain Architecture Evolution of Metazoan Proteins: Major Impact of Gene Prediction Errors

    PubMed Central

    Nagy, Alinda; Szláma, György; Szarka, Eszter; Trexler, Mária; Bányai, László; Patthy, László

    2011-01-01

    In view of the fact that the appearance of novel protein domain architectures (DAs) is closely associated with biological innovations, there is a growing interest in genome-scale reconstruction of the evolutionary history of the domain architectures of multidomain proteins. In such analyses, however, it is usually ignored that a significant proportion of the Metazoan sequences analyzed are mispredicted, and that this may seriously affect the validity of the conclusions. To estimate the contribution of gene prediction errors to differences in the DAs of predicted proteins, we used the high-quality, manually curated UniProtKB/Swiss-Prot database as a reference. For genome-scale analysis of the domain architectures of predicted proteins we focused on RefSeq, EnsEMBL and NCBI's GNOMON predicted sequences of Metazoan species with completely sequenced genomes. Comparison of the DAs of UniProtKB/Swiss-Prot sequences of worm, fly, zebrafish, frog, chick, mouse, rat and orangutan with those of human Swiss-Prot entries identified relatively few cases where orthologs had different DAs, although the percentage with different DAs increased with evolutionary distance. In contrast, comparison of the DAs of human, orangutan, rat, mouse, chicken, frog, zebrafish, worm and fly RefSeq, EnsEMBL and NCBI's GNOMON predicted protein sequences with those of the corresponding/orthologous human Swiss-Prot entries identified a significantly higher proportion of domain architecture differences than the comparison of Swiss-Prot entries. Analysis of RefSeq, EnsEMBL and NCBI's GNOMON predicted protein sequences with DAs different from those of their Swiss-Prot orthologs confirmed that the higher rate of domain architecture differences is due to errors in gene prediction, the majority of which could be corrected with our FixPred protocol. We have also demonstrated that contamination of databases with incomplete, abnormal or mispredicted sequences introduces a bias in DA

  5. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions

    PubMed Central

    Potter, Gail E.; Smieszek, Timo; Sailer, Kerstin

    2015-01-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0–5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models. PMID:26634122

  6. Individual differences and the neural representations of reward expectation and reward prediction error

    PubMed Central

    2007-01-01

    Reward expectation and reward prediction errors are thought to be critical for dynamic adjustments in decision-making and reward-seeking behavior, but little is known about their representation in the brain during uncertainty and risk-taking. Furthermore, little is known about what role individual differences might play in such reinforcement processes. In this study, it is shown that behavioral and neural responses during a decision-making task can be characterized by a computational reinforcement learning model, and that individual differences in the model's learning parameters are critical for elucidating these processes. In the fMRI experiment, subjects chose between high- and low-risk rewards. A computational reinforcement learning model computed the expected values and prediction errors that each subject might experience on each trial. These outputs predicted subjects' trial-to-trial choice strategies and neural activity in several limbic and prefrontal regions during the task. Individual differences in estimated reinforcement learning parameters proved critical for characterizing these processes, because models that incorporated individual learning parameters explained significantly more variance in the fMRI data than did a model using fixed learning parameters. These findings suggest that the brain engages a reinforcement learning process during risk-taking and that individual differences play a crucial role in modeling this process. PMID:17710118
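The kind of model described here, trial-by-trial expected values and prediction errors generated by a reinforcement learner with subject-specific parameters, can be sketched as follows. The two-option bandit task, the reward probabilities, and the specific parameter values are illustrative assumptions, not the study's paradigm.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(alpha, beta, n_trials=300, p_reward=(0.7, 0.3)):
    """Q-learner with individual learning rate alpha and softmax inverse
    temperature beta; returns final values, choices, and trialwise RPEs."""
    Q = np.zeros(2)
    rpes = np.zeros(n_trials)
    choices = np.zeros(n_trials, dtype=int)
    for t in range(n_trials):
        # Softmax over two options reduces to a logistic choice rule.
        p_1 = 1.0 / (1.0 + np.exp(-beta * (Q[1] - Q[0])))
        c = int(rng.random() < p_1)
        r = float(rng.random() < p_reward[c])
        rpes[t] = r - Q[c]               # reward prediction error
        Q[c] += alpha * rpes[t]          # value update scaled by alpha
        choices[t] = c
    return Q, choices, rpes

# Two "subjects" with different learning rates produce different value
# trajectories and prediction-error sequences on the same task -- the
# individual differences that the fMRI regressors are built from.
Q_fast, _, rpe_fast = simulate(alpha=0.5, beta=3.0)
Q_slow, _, rpe_slow = simulate(alpha=0.05, beta=3.0)
print(Q_fast, Q_slow)
```

In a model-based fMRI analysis, each subject's `rpes` sequence (under their own fitted `alpha` and `beta`) would serve as a parametric regressor, which is why fixed group-level parameters explain less variance.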

  7. Hierarchy of prediction errors for auditory events in human temporal and frontal cortex.

    PubMed

    Dürschmid, Stefan; Edwards, Erik; Reichert, Christoph; Dewar, Callum; Hinrichs, Hermann; Heinze, Hans-Jochen; Kirsch, Heidi E; Dalal, Sarang S; Deouell, Leon Y; Knight, Robert T

    2016-06-14

    Predictive coding theories posit that neural networks learn statistical regularities in the environment for comparison with actual outcomes, signaling a prediction error (PE) when sensory deviation occurs. PE studies in audition have capitalized on low-frequency event-related potentials (LF-ERPs), such as the mismatch negativity. However, local cortical activity is well-indexed by higher-frequency bands [high-γ band (Hγ): 80-150 Hz]. We compared patterns of human Hγ and LF-ERPs in deviance detection using electrocorticographic recordings from subdural electrodes over frontal and temporal cortices. Patients listened to trains of task-irrelevant tones in two conditions differing in the predictability of a deviation from repetitive background stimuli (fully predictable vs. unpredictable deviants). We found deviance-related responses in both frequency bands over lateral temporal and inferior frontal cortex, with an earlier latency for Hγ than for LF-ERPs. Critically, frontal Hγ activity but not LF-ERPs discriminated between fully predictable and unpredictable changes, with frontal cortex sensitive to unpredictable events. The results highlight the role of frontal cortex and Hγ activity in deviance detection and PE generation. PMID:27247381

  8. Ab initio based thermal property predictions at a low cost: An error analysis

    NASA Astrophysics Data System (ADS)

    Lejaeghere, Kurt; Jaeken, Jan; Van Speybroeck, Veronique; Cottenier, Stefaan

    2014-01-01

    Ab initio calculations do not yet straightforwardly yield the thermal properties of a material. Considerable computational effort is required, for example, to predict the volumetric thermal expansion coefficient αV or the melting temperature Tm from first principles. An alternative is to use semiempirical approaches, which relate the experimental values to first-principles predictors via fits or approximate models. Before applying such methods, however, it is of paramount importance to be aware of the expected errors. We therefore quantify these errors at the density-functional theory level, using the Perdew-Burke-Ernzerhof functional, for several semiempirical approximations of αV and Tm, and compare them to the errors from fully ab initio methods, which are computationally more intensive. We base our conclusions on a benchmark set of 71 ground-state elemental crystals. For the thermal expansion coefficient, simple quasiharmonic theory, in combination with different approximations to the Grüneisen parameter, provides overall accuracy similar to exhaustive first-principles phonon calculations. For the melting temperature, expensive ab initio molecular-dynamics simulations still outperform semiempirical methods.
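The quasiharmonic Grüneisen route mentioned in the abstract amounts to evaluating the standard relation αV = γ·C_V / (B·V_m). A minimal sketch with round illustrative values for copper (all numbers assumed, for orientation only; the paper's benchmark uses DFT-derived predictors):

```python
# Semiempirical quasiharmonic estimate of the volumetric thermal expansion
# coefficient via the Grueneisen relation  alpha_V = gamma * C_V / (B * V_m).
gamma = 2.0          # Grueneisen parameter (dimensionless, assumed)
C_V = 24.0           # heat capacity, J/(mol K) (~Dulong-Petit at high T)
B = 140e9            # bulk modulus, Pa (illustrative value for Cu)
V_m = 7.1e-6         # molar volume, m^3/mol (illustrative value for Cu)

alpha_V = gamma * C_V / (B * V_m)   # in 1/K
print(alpha_V)
```

This lands near 5×10⁻⁵ K⁻¹, the right order of magnitude for copper, which is the level of accuracy such semiempirical shortcuts typically offer; the paper's point is to quantify exactly how large these errors are across 71 elements.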

  9. Valence-separated representation of reward prediction error in feedback-related negativity and positivity.

    PubMed

    Bai, Yu; Katahira, Kentaro; Ohira, Hideki

    2015-02-11

    Feedback-related negativity (FRN) is an event-related brain potential (ERP) component elicited by errors and negative outcomes. Previous studies proposed that the FRN reflects the activity of a general error-processing system that incorporates reward prediction error (RPE). However, other studies reported inconsistent results on this issue: namely, that the FRN reflects only the valence of feedback and that the magnitude of the RPE is reflected by another ERP component, the P300. The present study focused on the relationship between the FRN amplitude and the RPE. ERPs were recorded while participants performed a reversal learning task, and a computational model was used to estimate trial-by-trial RPEs, which we correlated with the ERPs. The results indicated that the FRN and the P300 reflect the magnitude of the RPE on negative and positive outcomes, respectively. In addition, the correlation between the RPE and the P300 amplitude was stronger than that between the RPE and the FRN amplitude. These differences may explain the inconsistent results of previous studies: the asymmetry in the correlations might make it difficult to detect an effect of RPE magnitude on the FRN, making it appear that the FRN reflects only the valence of feedback. PMID:25634316

  10. Climbing fibers encode a temporal-difference prediction error during cerebellar learning in mice

    PubMed Central

    Ohmae, Shogo; Medina, Javier F.

    2016-01-01

    Climbing fiber inputs to Purkinje cells are thought to play a teaching role by generating the instructive signals that drive cerebellar learning. To investigate how these instructive signals are encoded, we recorded the activity of individual climbing fibers during cerebellar-dependent eyeblink conditioning in mice. Our findings show that climbing fibers signal both the unexpected delivery and the unexpected omission of the periocular airpuff that serves as the instructive signal for eyeblink conditioning. In addition, we report the surprising discovery that climbing fibers activated by periocular airpuffs also respond to stimuli from other sensory modalities, if those stimuli are novel or if they predict that the periocular airpuff is about to be presented. This pattern of climbing fiber activity is strikingly similar to the responses of dopamine neurons during reinforcement learning, which have been shown to encode a particular type of instructive signal known as a temporal difference prediction error. PMID:26551541
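    The temporal-difference prediction error the authors invoke has a standard form; the sketch below is a generic illustration of that signal type (parameter values are assumptions), showing why unexpected delivery and unexpected omission produce errors of opposite sign.

```python
# Generic sketch of a temporal-difference (TD) prediction error, the
# signal type the abstract attributes to climbing fibers:
#   delta = r + gamma * V(s') - V(s)
def td_error(reward, v_current, v_next, gamma=0.9):
    """gamma is an assumed discount factor; values are illustrative."""
    return reward + gamma * v_next - v_current

print(td_error(reward=1.0, v_current=0.0, v_next=0.0))  # unexpected delivery → 1.0
print(td_error(reward=0.0, v_current=1.0, v_next=0.0))  # unexpected omission → -1.0
```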

  11. Measured and predicted root-mean-square errors in square and triangular antenna mesh facets

    NASA Technical Reports Server (NTRS)

    Fichter, W. B.

    1989-01-01

    Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold-plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot-knit wire mesh antennas.
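    The comparison metric used here, the rms difference between measured and predicted deflections, can be computed as follows (a minimal sketch; the arrays of sample deflections are hypothetical):

```python
import math

# Minimal sketch: root-mean-square difference between measured and
# predicted deflections sampled at the same facet locations.
def rms_difference(measured, predicted):
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)

# Two deflection profiles agreeing everywhere except one point:
print(rms_difference([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```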

  12. A Method for Selecting between Fisher's Linear Classification Functions and Least Absolute Deviation in Predictive Discriminant Analysis.

    ERIC Educational Resources Information Center

    Meshbane, Alice; Morris, John D.

    A method for comparing the cross-validated classification accuracy of Fisher's linear classification functions (FLCFs) and the least absolute deviation is presented under varying data conditions for the two-group classification problem. With this method, separate-group as well as total-sample proportions of correct classifications can be compared…

  13. Greater externalizing personality traits predict less error-related insula and anterior cingulate cortex activity in acutely abstinent cigarette smokers

    PubMed Central

    Carroll, Allison J.; Sutherland, Matthew T.; Salmeron, Betty Jo; Ross, Thomas J.; Stein, Elliot A.

    2014-01-01

    Attenuated activity in performance-monitoring brain regions following erroneous actions may contribute to the repetition of maladaptive behaviors such as continued drug use. Externalizing is a broad personality construct characterized by deficient impulse control, vulnerability to addiction, and reduced neurobiological indices of error processing. The insula and dorsal anterior cingulate cortex (dACC) are regions critically linked with error processing as well as the perpetuation of cigarette smoking. As such, we examined the interrelations between externalizing tendencies, erroneous task performance, and error-related insula and dACC activity in overnight-deprived smokers (n=24) and nonsmokers (n=20). Participants completed a self-report measure assessing externalizing tendencies (Externalizing Spectrum Inventory) and a speeded Flanker task during fMRI scanning. We observed that higher externalizing tendencies correlated with the occurrence of more performance errors among smokers but not nonsmokers. Suggesting a neurobiological contribution to such sub-optimal performance among smokers, higher externalizing also predicted less recruitment of the right insula and dACC following error commission. Critically, this error-related activity fully mediated the relationship between externalizing traits and error rates. That is, higher externalizing scores predicted less error-related right insula and dACC activity and, in turn, less error-related activity predicted more errors. Relating such regional activity with a clinically-relevant construct, less error-related right insula and dACC responses correlated with higher tobacco craving during abstinence. Given that inadequate error-related neuronal responses may contribute to continued drug use despite negative consequences, these results suggest that externalizing tendencies and/or compromised error processing among subsets of smokers may be relevant factors for smoking cessation success. PMID:24354662

  14. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    NASA Astrophysics Data System (ADS)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long-term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased, and often too optimistic estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with an application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with a high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  15. Electrophysiological correlates of self-specific prediction errors in the human brain.

    PubMed

    Sel, Alejandra; Harding, Rachel; Tsakiris, Manos

    2016-01-15

    Recognising one's self, vs. others, is a key component of self-awareness, crucial for social interactions. Here we investigated whether processing self-face and self-body images can be explained by the brain's prediction of sensory events, based on regularities in the given context. We measured evoked cortical responses while participants observed alternating sequences of self-face or other-face images (experiment 1) and self-body or other-body images (experiment 2), which were embedded in an identity-irrelevant task. In experiment 1, the expected sequences were violated by deviant morphed images, which contained 33%, 66% or 100% of the self-face when the other's face was expected (and vice versa). In experiment 2, the anticipated sequences were violated by deviant images of the self when the other's image was expected (and vice versa), or by two deviant images composed of pictures of the self-face attached to the other's body, or the other's face attached to the self-body. This manipulation allowed control of the prediction error associated with the self or the other's image. Deviant self-images (but not deviant images of the other) elicited a visual mismatch response (vMMR)--a cortical index of violations of regularity. This was source localised to face and body related visual, sensorimotor and limbic areas and had amplitude proportional to the amount of deviance from the self-image. We provide novel evidence that self-processing can be described by the brain's prediction error system, which accounts for self-bias in visual processing. These findings are discussed in the light of recent predictive coding models of self-processing. PMID:26455899

  16. Unsigned value prediction-error modulates the motor system in absence of choice.

    PubMed

    Vassena, Eliana; Cobbaert, Stephanie; Andres, Michael; Fias, Wim; Verguts, Tom

    2015-11-15

    Human actions are driven by the pursuit of goals, especially when achieving these goals entails a reward. Accordingly, recent work showed that anticipating a reward in a motor task influences the motor system, boosting motor excitability and increasing overall readiness. Attaining a reward typically requires some mental or physical effort. Recent neuroimaging evidence suggested that both reward expectation and effort requirements are encoded by a partially overlapping brain network. Moreover, reward and effort information are combined in an integrative value signal. However, whether and how mental effort is integrated with reward at the motor level during task preparation remains unclear. To address these issues, we implemented a mental effort task where reward expectation and effort requirements were manipulated. During task preparation, TMS was delivered on the motor cortex and motor-evoked potentials (MEPs) were recorded on the right hand muscles to probe motor excitability. The results showed an interaction of effort and reward in modulating the motor system, reflecting an unsigned value prediction-error signal. Crucially, this was observed in the motor system in absence of a value-based decision or value-driven action selection. This suggests a high-level cognitive factor such as unsigned value prediction-error can modulate the motor system. Interestingly, effort-related motor excitability was also modulated by individual differences in tendency to engage in (and enjoy) mental effort, as measured by the Need for Cognition questionnaire, underlining a role of subjective effort experience in value-driven preparation for action. PMID:26254588

  17. Delusions and prediction error: clarifying the roles of behavioural and brain responses

    PubMed Central

    Corlett, Philip Robert; Fletcher, Paul Charles

    2015-01-01

    Griffiths and colleagues provided a clear and thoughtful review of the prediction error model of delusion formation [Cognitive Neuropsychiatry, 2014 April 4 (Epub ahead of print)]. As well as reviewing the central ideas and concluding that the existing evidence base is broadly supportive of the model, they provide a detailed critique of some of the experiments that we have performed to study it. Though they conclude that the shortcomings that they identify in these experiments do not fundamentally challenge the prediction error model, we nevertheless respond to these criticisms. We begin by providing a more detailed outline of the model itself as there are certain important aspects of it that were not covered in their review. We then respond to their specific criticisms of the empirical evidence. We defend the neuroimaging contrasts that we used to explore this model of psychosis arguing that, while any single contrast entails some ambiguity, our assumptions have been justified by our extensive background work before and since. PMID:25559871

  18. Episodic Memory Encoding Interferes with Reward Learning and Decreases Striatal Prediction Errors

    PubMed Central

    Braun, Erin Kendall; Daw, Nathaniel D.

    2014-01-01

    Learning is essential for adaptive decision making. The striatum and its dopaminergic inputs are known to support incremental reward-based learning, while the hippocampus is known to support encoding of single events (episodic memory). Although traditionally studied separately, in even simple experiences, these two types of learning are likely to co-occur and may interact. Here we sought to understand the nature of this interaction by examining how incremental reward learning is related to concurrent episodic memory encoding. During the experiment, human participants made choices between two options (colored squares), each associated with a drifting probability of reward, with the goal of earning as much money as possible. Incidental, trial-unique object pictures, unrelated to the choice, were overlaid on each option. The next day, participants were given a surprise memory test for these pictures. We found that better episodic memory was related to a decreased influence of recent reward experience on choice, both within and across participants. fMRI analyses further revealed that during learning the canonical striatal reward prediction error signal was significantly weaker when episodic memory was stronger. This decrease in reward prediction error signals in the striatum was associated with enhanced functional connectivity between the hippocampus and striatum at the time of choice. Our results suggest a mechanism by which memory encoding may compete for striatal processing and provide insight into how interactions between different forms of learning guide reward-based decision making. PMID:25378157

  19. Analyzing the prediction error of large scale Vis-NIR spectroscopic models

    NASA Astrophysics Data System (ADS)

    Stevens, Antoine; Nocita, Marco; Montanarella, Luca; van Wesemael, Bas

    2013-04-01

    Based on the LUCAS soil spectral library (~ 20,000 samples distributed over 23 EU countries), we developed multivariate calibration models (model trees) for estimating the SOC content from the visible and near infrared reflectance (Vis-NIR) spectra. The root mean square error of validation of these models ranged from 4 to 15 g C kg-1. The prediction accuracy is usually negatively related to sample heterogeneity in a given library, so that large-scale databases typically demonstrate lower prediction accuracy than local-scale studies. This is inherent to the empirical nature of the approach, which cannot accommodate well the changing and scale-dependent relationship between Vis-NIR spectra and soil properties. In our study, we analyzed the effect of key soil properties and environmental covariates (land cover) on the SOC prediction accuracy of the spectroscopic models. It is shown that mineralogy as well as soil texture have large impacts on prediction accuracy and that pedogenetic factors that are easily obtainable if the samples are geo-referenced can be used as input in the spectroscopic models to improve model accuracies.

  20. Influence of the prediction error of the first eye undergoing cataract surgery on the refractive outcome of the fellow eye

    PubMed Central

    Gorodezky, Ludmilla; Mazinani, Babac AE; Plange, Niklas; Walter, Peter; Wenzel, Martin; Roessler, Gernot

    2014-01-01

    Introduction In addition to measurement errors, individual anatomical conditions could be made responsible for unexpected prediction errors in the determination of the correct intraocular lens power for cataract surgery. Obviously, such anatomical conditions might be relevant for both eyes. The purpose of this study was to evaluate whether the postoperative refractive error of the first eye has to be taken into account for the biometry of the second. Methods In this retrospective study, we included 670 eyes of 335 patients who underwent phacoemulsification and implantation of a foldable intraocular lens in both eyes. According to the SRK/T formula, the postoperative refractive error of each eye was determined and compared with its fellow eye. Results Of 670 eyes, 622 showed a postoperative refractive error within ±1.0 D (93%), whereas the prediction error was 0.5 D or less in 491 eyes (73%). The postoperative difference between both eyes was within 0.5 D in 71% and within 1.0 D in 93% of the eyes. Comparing the prediction error of an eye and its fellow eye, the error of the fellow eye was about half the value of the other. Conclusion Our results imply that substitution of half of the prediction error of the first eye into the calculation of the second eye may be useful to reduce the prediction error in the second eye. However, prospective studies should be initiated to demonstrate an improved accuracy for the second eye’s intraocular lens power calculation by partial adjustment. PMID:25382967
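    The partial-adjustment rule suggested in the conclusion can be sketched in one line; the function name, sign convention, and example values below are hypothetical illustrations, not taken from the study.

```python
# Hedged sketch of the suggested partial adjustment: shift the second
# eye's predicted refraction by half of the first eye's observed
# prediction error (all values in dioptres).
def adjust_second_eye(predicted_second, error_first, fraction=0.5):
    """fraction=0.5 follows the authors' suggestion of using half the error."""
    return predicted_second - fraction * error_first

# E.g. first eye ended 0.50 D more myopic than predicted:
print(adjust_second_eye(predicted_second=-0.25, error_first=0.50))  # → -0.5
```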

  1. A two-dimensional matrix correction for off-axis portal dose prediction errors

    SciTech Connect

    Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.

    2013-05-15

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As in
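    The core idea of a per-pixel correction matrix can be illustrated as below. This is a simplified sketch, not the commercial system's algorithm: it derives the correction as the ratio of a measured to a predicted calibration image and applies it elementwise, with made-up 2x2 values standing in for full detector images.

```python
import numpy as np

# Illustrative 2D matrix correction: per-pixel factors derived from one
# measured/predicted image pair, then applied to predicted images.
measured = np.array([[1.00, 0.95],
                     [0.90, 0.85]])
predicted = np.array([[1.00, 1.05],
                      [1.00, 1.00]])

correction = measured / predicted   # per-pixel correction factors
corrected = predicted * correction  # corrected prediction (recovers 'measured' here)
```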

  2. Prediction error and somatosensory insula activation in women recovered from anorexia nervosa

    PubMed Central

    Frank, Guido K.W.; Collier, Shaleise; Shott, Megan E.; O’Reilly, Randall C.

    2016-01-01

    Background Previous research in patients with anorexia nervosa showed heightened brain response during a taste reward conditioning task and heightened sensitivity to rewarding and punishing stimuli. Here we tested the hypothesis that individuals recovered from anorexia nervosa would also experience greater brain activation during this task as well as higher sensitivity to salient stimuli than controls. Methods Women recovered from restricting-type anorexia nervosa and healthy control women underwent fMRI during application of a prediction error taste reward learning paradigm. Results Twenty-four women recovered from anorexia nervosa (mean age 30.3 ± 8.1 yr) and 24 control women (mean age 27.4 ± 6.3 yr) took part in this study. The recovered anorexia nervosa group showed greater left posterior insula activation for the prediction error model analysis than the control group (family-wise error– and small volume–corrected p < 0.05). A group × condition analysis found greater posterior insula response in women recovered from anorexia nervosa than controls for unexpected stimulus omission, but not for unexpected receipt. Sensitivity to punishment was elevated in women recovered from anorexia nervosa. Limitations This was a cross-sectional study, and the sample size was modest. Conclusion Anorexia nervosa after recovery is associated with heightened prediction error–related brain response in the posterior insula as well as greater response to unexpected reward stimulus omission. This finding, together with behaviourally increased sensitivity to punishment, could indicate that individuals recovered from anorexia nervosa are particularly responsive to punishment. The posterior insula processes somatosensory stimuli, including unexpected bodily states, and greater response could indicate altered perception or integration of unexpected or maybe unwanted bodily feelings. Whether those findings develop during the ill state or whether they are biological traits requires

  3. Testing alternative uses of electromagnetic data to reduce the prediction error of groundwater models

    NASA Astrophysics Data System (ADS)

    Kruse Christensen, Nikolaj; Christensen, Steen; Ferre, Ty Paul A.

    2016-05-01

    Despite the increasing use of geophysics, it is often unclear how and when the integration of geophysical data and models can best improve the construction and predictive capability of groundwater models. This paper uses a newly developed HYdrogeophysical TEst-Bench (HYTEB) that is a collection of geological, groundwater and geophysical modeling and inversion software to demonstrate alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits with typical hydraulic conductivities and electrical resistivities covering impermeable bedrock with low resistivity (clay). The synthetic 3-D reference system is designed so that there is a perfect relationship between hydraulic conductivity and electrical resistivity. For this system it is investigated to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by (in most cases) geophysics-based regularization. For the studied system and inversion approaches it is found that resistivities estimated by sequential hydrogeophysical inversion (SHI) or joint hydrogeophysical inversion (JHI) should be used with caution as estimators of hydraulic conductivity or as regularization means for subsequent hydrological inversion. The limited groundwater model improvement obtained by using the geophysical data probably mainly arises from the way these data are used here: the alternative inversion approaches propagate geophysical estimation errors into the hydrologic model parameters. It was expected that JHI would compensate for this, but the hydrologic data were apparently insufficient to secure such compensation.
With respect to reducing model prediction error, it depends on the type

  4. Harsh Parenting and Fearfulness in Toddlerhood Interact to Predict Amplitudes of Preschool Error-Related Negativity

    PubMed Central

    Brooker, Rebecca J.; Buss, Kristin A.

    2014-01-01

    Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children’s risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with exaggerated error-related negativity (ERN), an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems. PMID:24721466

  5. Triangle network motifs predict complexes by complementing high-error interactomes with structural information

    PubMed Central

    Andreopoulos, Bill; Winter, Christof; Labudde, Dirk; Schroeder, Michael

    2009-01-01

    Background Many high-throughput studies produce protein-protein interaction networks (PPINs) with many errors and missing information. Even for genome-wide approaches, there is often a low overlap between PPINs produced by different studies. Second-level neighbors separated by two protein-protein interactions (PPIs) were previously used for predicting protein function and finding complexes in high-error PPINs. We retrieve second-level neighbors in PPINs, and complement these with structural domain-domain interactions (SDDIs) representing binding evidence on proteins, forming PPI-SDDI-PPI triangles. Results We find low overlap between PPINs, SDDIs and known complexes, all well below 10%. We evaluate the overlap of PPI-SDDI-PPI triangles with known complexes from the Munich Information center for Protein Sequences (MIPS). PPI-SDDI-PPI triangles have ~20 times higher overlap with MIPS complexes than using second-level neighbors in PPINs without SDDIs. The biological interpretation for triangles is that a SDDI causes two proteins to be observed with common interaction partners in high-throughput experiments. The relatively few SDDIs overlapping with PPINs are part of highly connected SDDI components, and are more likely to be detected in experimental studies. We demonstrate the utility of PPI-SDDI-PPI triangles by reconstructing myosin-actin processes in the nucleus, cytoplasm, and cytoskeleton, which were not obvious in the original PPIN. Using other complementary datatypes in place of SDDIs to form triangles, such as PubMed co-occurrences or threading information, results in a similar ability to find protein complexes. Conclusion Given high-error PPINs with missing information, triangles of mixed datatypes are a promising direction for finding protein complexes. Integrating PPINs with SDDIs improves finding complexes. Structural SDDIs partially explain the high functional similarity of second-level neighbors in PPINs. We estimate that relatively little structural

  6. Predicting the geographic distribution of a species from presence-only data subject to detection errors

    USGS Publications Warehouse

    Dorazio, Robert M.

    2012-01-01

    Several models have been developed to predict the geographic distribution of a species by combining measurements of covariates of occurrence at locations where the species is known to be present with measurements of the same covariates at other locations where species occurrence status (presence or absence) is unknown. In the absence of species detection errors, spatial point-process models and binary-regression models for case-augmented surveys provide consistent estimators of a species’ geographic distribution without prior knowledge of species prevalence. In addition, these regression models can be modified to produce estimators of species abundance that are asymptotically equivalent to those of the spatial point-process models. However, if species presence locations are subject to detection errors, neither class of models provides a consistent estimator of covariate effects unless the covariates of species abundance are distinct and independently distributed from the covariates of species detection probability. These analytical results are illustrated using simulation studies of data sets that contain a wide range of presence-only sample sizes. Analyses of presence-only data of three avian species observed in a survey of landbirds in western Montana and northern Idaho are compared with site-occupancy analyses of detections and nondetections of these species.

  7. Background error statistics for aerosol variables from WRF/Chem predictions in Southern California

    NASA Astrophysics Data System (ADS)

    Zang, Zengliang; Hao, Zilong; Pan, Xiaobin; Li, Zhijin; Chen, Dan; Zhang, Li; Li, Qinbin

    2015-05-01

    Background error covariance (BEC) is crucial in data assimilation. This paper addresses the multivariate BEC associated with black carbon, organic carbon, nitrates, sulfates, and other constituents of aerosol species. These aerosol species are modeled and predicted using the Model for Simulating Aerosol Interactions and Chemistry scheme (MOSAIC) in the Weather Research and Forecasting/Chemistry (WRF/Chem) model at a resolution of 4 km in Southern California. The BEC is estimated from the differences between the 36-hour and 12-hour forecasts using the NMC method. The results indicated that the maximum background error standard deviation is associated with nitrate and is larger than that of black carbon, organic carbon, and sulfate. The horizontal and vertical scale of the correlation of nitrate is much smaller than that of other species. A significant cross-correlation is found between the species of black carbon and organic carbon. The cross-correlations between nitrate and other variables are relatively smaller and exhibit a relatively smaller length scale. Single observation data assimilation experiments are performed to illustrate the effect of the BEC on analysis increments.
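    The NMC method as described, estimating background error statistics from differences between 36-hour and 12-hour forecasts valid at the same time, reduces to a sample covariance over forecast-difference fields. The sketch below is a hedged illustration with random stand-in data; array shapes and the function name are assumptions.

```python
import numpy as np

# Hedged sketch of the NMC method: background error covariance (BEC)
# estimated from 36-h minus 12-h forecast differences over many cases.
def nmc_background_error(f36, f12):
    """f36, f12: arrays of shape (n_cases, n_gridpoints), forecasts valid
    at the same times. Returns the sample covariance of their differences."""
    d = f36 - f12                       # forecast-difference samples
    d = d - d.mean(axis=0)              # remove the mean difference
    return d.T @ d / (d.shape[0] - 1)   # sample covariance matrix

rng = np.random.default_rng(0)
cov = nmc_background_error(rng.normal(size=(100, 3)), rng.normal(size=(100, 3)))
```

    The diagonal of `cov` gives the background error variances per grid point (e.g. per species and level); the off-diagonal entries carry the spatial and cross-variable correlations discussed in the abstract.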

  8. Discrete coding of stimulus value, reward expectation, and reward prediction error in the dorsal striatum.

    PubMed

    Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N; Iijima, Toshio; Tsutsui, Ken-Ichiro

    2015-11-01

    To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. PMID:26378201

  9. The modulation of savouring by prediction error and its effects on choice.

    PubMed

    Iigaya, Kiyohito; Story, Giles W; Kurth-Nelson, Zeb; Dolan, Raymond J; Dayan, Peter

    2016-01-01

    When people anticipate uncertain future outcomes, they often prefer to know their fate in advance. Inspired by an idea in behavioral economics that the anticipation of rewards is itself attractive, we hypothesized that this preference for advance information arises because reward prediction errors carried by such information can boost the level of anticipation. We designed new empirical behavioral studies to test this proposal, and confirmed that subjects preferred advance reward information more strongly when they had to wait longer for rewards. We formulated our proposal in a reinforcement-learning model and showed that it could account for a wide range of existing neuronal and behavioral data, without appealing to ambiguous notions such as an explicit value for information. We suggest that such boosted anticipation significantly drives risk-seeking behaviors, most pertinently in gambling. PMID:27101365
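The hypothesis can be illustrated with a toy calculation: an advance cue carries a reward prediction error that boosts the rate at which anticipation accrues, so the value of being informed grows with the delay to reward. The function below is a sketch of that logic under invented parameter names, not the authors' fitted reinforcement-learning model.

```python
def anticipation_value(p_reward, delay, boost, discount=0.95):
    """Toy version of the boosted-anticipation idea: savouring accrues
    each timestep until reward delivery, and an advance cue's positive
    reward prediction error (1 - p_reward on a revealed win) multiplies
    the savouring rate. Returns the extra value of being informed."""
    rpe = 1.0 - p_reward  # positive surprise when the cue reveals a win
    informed = sum((discount ** t) * (1.0 + boost * rpe) * p_reward
                   for t in range(delay))
    uninformed = sum((discount ** t) * p_reward for t in range(delay))
    return informed - uninformed
```

With any positive boost, the information bonus grows with delay, matching the reported stronger preference for advance information at longer waits.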

  10. Prediction and standard error estimation for a finite universe total when a stratum is not sampled

    SciTech Connect

    Wright, T.

    1994-01-01

    In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
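The prediction idea can be sketched as follows, with invented data structures and a simple growth-ratio predictor standing in for Wright's detailed estimator: sampled strata are expanded as usual, and the unsampled stratum's total is predicted from its earlier-occasion total scaled by the change observed in the sampled strata.

```python
def predict_universe_total(sampled_strata, unsampled_prior_totals):
    """Sketch of prediction for an unsampled stratum (not Wright's exact
    estimator): (1) expand current sample means in sampled strata;
    (2) predict unsampled strata from earlier-occasion totals scaled by
    a growth ratio estimated from the sampled strata.

    sampled_strata: list of dicts with keys 'N' (stratum size),
      'sample' (current-occasion values), 'prior_total' (earlier total).
    unsampled_prior_totals: earlier-occasion totals of unsampled strata.
    """
    est_sampled = sum(s['N'] * (sum(s['sample']) / len(s['sample']))
                      for s in sampled_strata)
    prior_sampled = sum(s['prior_total'] for s in sampled_strata)
    growth = est_sampled / prior_sampled  # change ratio between occasions
    return est_sampled + growth * sum(unsampled_prior_totals)
```

The standard error of such a predictor must account for both sampling variability and the prediction-model error, which is the focus of the paper.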

  11. The modulation of savouring by prediction error and its effects on choice

    PubMed Central

    Iigaya, Kiyohito; Story, Giles W; Kurth-Nelson, Zeb; Dolan, Raymond J; Dayan, Peter

    2016-01-01

    When people anticipate uncertain future outcomes, they often prefer to know their fate in advance. Inspired by an idea in behavioral economics that the anticipation of rewards is itself attractive, we hypothesized that this preference for advance information arises because reward prediction errors carried by such information can boost the level of anticipation. We designed new empirical behavioral studies to test this proposal, and confirmed that subjects preferred advance reward information more strongly when they had to wait longer for rewards. We formulated our proposal in a reinforcement-learning model and showed that it could account for a wide range of existing neuronal and behavioral data, without appealing to ambiguous notions such as an explicit value for information. We suggest that such boosted anticipation significantly drives risk-seeking behaviors, most pertinently in gambling. DOI: http://dx.doi.org/10.7554/eLife.13747.001 PMID:27101365

  12. Observing others stay or switch - How social prediction errors are integrated into reward reversal learning.

    PubMed

    Ihssen, Niklas; Mussweiler, Thomas; Linden, David E J

    2016-08-01

    Reward properties of stimuli can undergo sudden changes, and the detection of these 'reversals' is often made difficult by the probabilistic nature of rewards/punishments. Here we tested whether and how humans use social information (someone else's choices) to overcome uncertainty during reversal learning. We show a substantial social influence during reversal learning, which was modulated by the type of observed behavior. Participants frequently followed observed conservative choices (no switches after punishment) made by the (fictitious) other player but ignored impulsive choices (switches), even though the experiment was set up so that both types of response behavior would be similarly beneficial/detrimental (Study 1). Computational modeling showed that participants integrated the observed choices as a 'social prediction error' instead of ignoring or blindly following the other player. Modeling also confirmed higher learning rates for 'conservative' versus 'impulsive' social prediction errors. Importantly, this 'conservative bias' was boosted by interpersonal similarity, which in conjunction with the lack of effects observed in a non-social control experiment (Study 2) confirmed its social nature. A third study suggested that relative weighting of observed impulsive responses increased with increased volatility (frequency of reversals). Finally, simulations showed that in the present paradigm integrating social and reward information was not necessarily more adaptive to maximize earnings than learning from reward alone. Moreover, integrating social information increased accuracy only when conservative and impulsive choices were weighted similarly during learning. These findings suggest that to guide decisions in choice contexts that involve reward reversals humans utilize social cues conforming with their preconceptions more strongly than cues conflicting with them, especially when the other is similar. PMID:27128170
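The modeling scheme described above can be caricatured with a Rescorla-Wagner-style update in which reward feedback and the other player's observed choice each generate a prediction error, with separate learning rates for conservative (stay) versus impulsive (switch) observed behaviour. This is an illustrative sketch, not the authors' fitted model; the value-update form and parameter names are invented.

```python
def update_values(values, choice, reward, obs_prev, obs_now, alphas):
    """One trial of a sketch model: a reward prediction error updates the
    value of one's own choice; a 'social prediction error' nudges the
    value of the option the other player chose, weighted by whether the
    observed behaviour was a stay ('conservative') or a switch
    ('impulsive')."""
    v = dict(values)
    # standard reward prediction error on own choice
    v[choice] += alphas['reward'] * (reward - v[choice])
    # social prediction error: the observed choice endorses that option
    kind = 'conservative' if obs_now == obs_prev else 'impulsive'
    v[obs_now] += alphas[kind] * (1.0 - v[obs_now])
    return v
```

Setting `alphas['conservative'] > alphas['impulsive']` reproduces the reported bias toward following conservative observed choices.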

  13. Quantifying uncertainty for predictions with model error in non-Gaussian systems with intermittency

    NASA Astrophysics Data System (ADS)

    Branicki, Michal; Majda, Andrew J.

    2012-09-01

    This paper discusses a range of important mathematical issues arising in applications of a newly emerging stochastic-statistical framework for quantifying and mitigating uncertainties associated with prediction of partially observed and imperfectly modelled complex turbulent dynamical systems. The need for such a framework is particularly severe in climate science where the true climate system is vastly more complicated than any conceivable model; however, applications in other areas, such as neural networks and materials science, are just as important. The mathematical tools employed here rely on empirical information theory and fluctuation-dissipation theorems (FDTs) and it is shown that they seamlessly combine into a concise systematic framework for measuring and optimizing consistency and sensitivity of imperfect models. Here, we utilize a simple statistically exactly solvable ‘perfect’ system with intermittent hidden instabilities and with time-periodic features to address a number of important issues encountered in prediction of much more complex dynamical systems. These problems include the role and mitigation of model error due to coarse-graining, moment closure approximations, and the memory of initial conditions in producing short, medium and long-range predictions. Importantly, based on a suite of increasingly complex imperfect models of the perfect test system, we show that the predictive skill of the imperfect models and their sensitivity to external perturbations is improved by ensuring their consistency on the statistical attractor (i.e. the climate) with the perfect system. Furthermore, the discussed link between climate fidelity and sensitivity via the FDT opens up an enticing prospect of developing techniques for improving imperfect model sensitivity based on specific tests carried out in the training phase of the unperturbed statistical equilibrium/climate.
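For Gaussian statistics, the information-theoretic model-error measure used in this framework has a closed form: the relative entropy of the perfect system's statistics with respect to the imperfect model's statistics splits into a "signal" (mean) term and a "dispersion" (variance) term. A one-dimensional sketch:

```python
import math

def gaussian_relative_entropy(mu_p, var_p, mu_m, var_m):
    """Relative entropy (KL divergence) P(perfect) || M(model) for 1-D
    Gaussian statistics. The first term measures lack of information in
    the model mean (signal), the second in the model variance
    (dispersion); both vanish only when the statistics agree."""
    signal = 0.5 * (mu_p - mu_m) ** 2 / var_m
    dispersion = 0.5 * (var_p / var_m - 1.0 - math.log(var_p / var_m))
    return signal + dispersion
```

Minimizing this quantity over a model class is what "ensuring consistency on the statistical attractor" amounts to in practice.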

  14. The fate of memory: Reconsolidation and the case of Prediction Error.

    PubMed

    Fernández, Rodrigo S; Boccia, Mariano M; Pedreira, María E

    2016-09-01

    The ability to make predictions based on stored information is a general coding strategy. A Prediction-Error (PE) is a mismatch between expected and current events. It was proposed as the process by which memories are acquired. But, our memories like ourselves are subject to change. Thus, an acquired memory can become active and update its content or strength by a labilization-reconsolidation process. Within the reconsolidation framework, PE drives the updating of consolidated memories. Moreover, memory features, such as strength and age, are crucial boundary conditions that limit the initiation of the reconsolidation process. In order to disentangle these boundary conditions, we review the role of surprise, classical models of conditioning, and their neural correlates. Several forms of PE were found to be capable of inducing memory labilization-reconsolidation. Notably, many of the PE findings mirror those of memory-reconsolidation, suggesting a strong link between these signals and memory process. Altogether, the aim of the present work is to integrate a psychological and neuroscientific analysis of PE into a general framework for memory-reconsolidation. PMID:27287939

  15. Speech intelligibility index predictions for young and old listeners in automobile noise: Can the index be improved by incorporating factors other than absolute threshold?

    NASA Astrophysics Data System (ADS)

    Saweikis, Meghan; Surprenant, Aimée M.; Davies, Patricia; Gallant, Don

    2003-10-01

    While young and old subjects with comparable audiograms tend to perform comparably on speech recognition tasks in quiet environments, the older subjects have more difficulty than the younger subjects with recognition tasks in degraded listening conditions. This suggests that factors other than an absolute threshold may account for some of the difficulty older listeners have on recognition tasks in noisy environments. Many metrics, including the Speech Intelligibility Index (SII), used to measure speech intelligibility, consider only an absolute threshold when accounting for age-related hearing loss. Therefore these metrics tend to overestimate the performance of elderly listeners in noisy environments [Tobias et al., J. Acoust. Soc. Am. 83, 859-895 (1988)]. The present studies examine the predictive capabilities of the SII in an environment with automobile noise present. This is of interest because people's evaluation of automobile interior sound is closely linked to their ability to carry on conversations with their fellow passengers. The four studies examine whether, for subjects with age-related hearing loss, the accuracy of the SII can be improved by incorporating factors other than an absolute threshold into the model. [Work supported by Ford Motor Company.]

  16. Absolute Zero

    NASA Astrophysics Data System (ADS)

    Donnelly, Russell J.; Sheibley, D.; Belloni, M.; Stamper-Kurn, D.; Vinen, W. F.

    2006-12-01

    Absolute Zero is a two-hour PBS special attempting to bring to the general public some of the advances made in 400 years of thermodynamics. It is based on the book “Absolute Zero and the Conquest of Cold” by Tom Shachtman. Absolute Zero will call long-overdue attention to the remarkable strides that have been made in low-temperature physics, a field that has produced 27 Nobel Prizes. It will explore the ongoing interplay between science and technology through historical examples including refrigerators, ice machines, frozen foods, liquid oxygen and nitrogen as well as much colder fluids such as liquid hydrogen and liquid helium. A website has been established to promote the series: www.absolutezerocampaign.org. It contains information on the series, aimed primarily at students at the middle school level. There is a wealth of material here and we hope interested teachers will draw their students’ attention to this website and its substantial contents, which have been carefully vetted for accuracy.

  17. Neural correlates of sensory prediction errors in monkeys: evidence for internal models of voluntary self-motion in the cerebellum.

    PubMed

    Cullen, Kathleen E; Brooks, Jessica X

    2015-02-01

    During self-motion, the vestibular system makes essential contributions to postural stability and self-motion perception. To ensure accurate perception and motor control, it is critical to distinguish between vestibular sensory inputs that are the result of externally applied motion (exafference) and that are the result of our own actions (reafference). Indeed, although the vestibular sensors encode vestibular afference and reafference with equal fidelity, neurons at the first central stage of sensory processing selectively encode vestibular exafference. The mechanism underlying this reafferent suppression compares the brain's motor-based expectation of sensory feedback with the actual sensory consequences of voluntary self-motion, effectively computing the sensory prediction error (i.e., exafference). It is generally thought that sensory prediction errors are computed in the cerebellum, yet it has been challenging to explicitly demonstrate this. We have recently addressed this question and found that deep cerebellar nuclei neurons explicitly encode sensory prediction errors during self-motion. Importantly, in everyday life, sensory prediction errors occur in response to changes in the effector or world (muscle strength, load, etc.), as well as in response to externally applied sensory stimulation. Accordingly, we hypothesize that altering the relationship between motor commands and the actual movement parameters will result in the updating in the cerebellum-based computation of exafference. If our hypothesis is correct, under these conditions, neuronal responses should initially be increased--consistent with a sudden increase in the sensory prediction error. Then, over time, as the internal model is updated, response modulation should decrease in parallel with a reduction in sensory prediction error, until vestibular reafference is again suppressed. The finding that the internal model predicting the sensory consequences of motor commands adapts for new
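The hypothesized updating can be caricatured with a one-parameter forward model: after a change in the motor-to-movement relationship, sensory prediction errors are initially large and then decay as the internal model adapts, at which point reafference is again suppressed. This is an illustrative sketch, not a model from the paper.

```python
def adapt_forward_model(commands, feedbacks, gain=1.0, lr=0.1):
    """Sketch of reafference suppression with adaptation: the sensory
    prediction error is actual feedback minus the forward model's
    prediction from the motor command; an LMS-style update then adjusts
    the model's gain, so errors shrink over repeated movements."""
    errors = []
    for u, y in zip(commands, feedbacks):
        e = y - gain * u          # sensory prediction error (exafference estimate)
        errors.append(e)
        gain += lr * e * u        # update the internal model
    return errors, gain
```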

  18. Revised Absolute Configuration of Sibiricumin A: Substituent Effects in Simplified Model Structures Used for Quantum Mechanical Predictions of Chiroptical Properties.

    PubMed

    Zhao, Dan; Li, Zheng-Qiu; Cao, Fei; Liang, Miao-Miao; Pittman, Charles U; Zhu, Hua-Jie; Li, Li; Yu, Shi-Shan

    2016-08-01

    This study discusses the choice of different simplified models used in computations of electronic circular dichroism (ECD) spectra and other chiroptical characteristics used to determine the absolute configuration (AC) of the complex natural product sibiricumin A. Sections of molecules containing one chiral center with one near an aromatic group have large effects on the ECD spectra. Conversely, when the phenyl group is present on a substituent without a nonstereogenic center, removal of this section will have little effect on ECD spectra. However, these nonstereogenic-center-containing sections have large effects on calculated optical rotations (OR) values since the OR value is more sensitive to the geometries of sections in a molecule. In this study, the wrong AC of sibiricumin A was reassigned as (7R,8S,1'R,7'R,8'S)-. Chirality 28:612-617, 2016. © 2016 Wiley Periodicals, Inc. PMID:27428019

  19. EFFECT OF MEASUREMENT ERRORS ON PREDICTED COSMOLOGICAL CONSTRAINTS FROM SHEAR PEAK STATISTICS WITH LARGE SYNOPTIC SURVEY TELESCOPE

    SciTech Connect

    Bard, D.; Chang, C.; Kahn, S. M.; Gilmore, K.; Marshall, S.; Kratochvil, J. M.; Huffenberger, K. M.; May, M.; AlSayyad, Y.; Connolly, A.; Gibson, R. R.; Jones, L.; Krughoff, S.; Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Lorenz, S.; Haiman, Z.; Jernigan, J. G.; and others

    2013-09-01

    We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST Image Simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.

  20. Nonlinear forcing singular vector -type tendency errors of the Zebiak-Cane model and its effect on ENSO predictability

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2014-05-01

    Within the framework of the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to explore the constant tendency error that has the largest effect on prediction uncertainties for El Niño events. The results showed that only one NFSV exists for each prediction of the predetermined model El Niño events. The NFSVs often present large-scale zonal dipolar structures and are insensitive to the intensities of El Niño events, but are dependent on the prediction period. In particular, the NFSVs associated with predictions crossing through the growth phase of El Niño tend to exhibit a zonal dipolar pattern with positive anomalies in the equatorial central-western Pacific and negative anomalies in the equatorial eastern Pacific (denoted "type-1 NFSVs"). Meanwhile, those associated with predictions through the decaying phase of El Niño are inclined to present another zonal dipolar pattern (denoted "type-2 NFSVs"), which is almost opposite to the type-1 NFSVs. The FSVs, i.e. the linear counterparts of the NFSVs, can also be classified into two types, which are of almost the same signs as type-1 and type-2 NFSVs, and which we similarly denoted "type-1 FSVs" and "type-2 FSVs", respectively. We found that both type-1 FSVs and type-1 NFSVs often cause negative prediction errors for the Niño-3 SSTA of El Niño events, while type-2 FSVs and type-2 NFSVs usually yield positive prediction errors. However, due to the effect of nonlinearities, the NFSVs usually have the western pole of the zonal dipolar pattern much farther west, covering a much broader region. Correspondingly, the NFSVs cause much larger prediction errors than the FSVs and prove much more applicable for describing the optimal tendency errors in the nonlinear Zebiak-Cane model. Our results also show that the nonlinearities have a suppression effect on the growth of the prediction errors caused by the FSVs. Furthermore

  1. The neural correlates of negative prediction error signaling in human fear conditioning.

    PubMed

    Spoormaker, V I; Andrade, K C; Schröter, M S; Sturm, A; Goya-Maldonado, R; Sämann, P G; Czisch, M

    2011-02-01

    In a temporal difference (TD) learning approach to classical conditioning, a prediction error (PE) signal shifts from outcome deliverance to the onset of the conditioned stimulus. Omission of an expected outcome results in a negative PE signal, which is the initial step towards successful extinction. In order to visualize negative PE signaling during fear conditioning, we employed combined functional magnetic resonance (fMRI) and skin conductance response (SCR) measurements in a conditioning task with visual stimuli and mild electrical shocks. Positive PE signaling was associated with increased activation in the bilateral insula, supplementary motor area, brainstem, and visual cortices. Negative PE signaling was associated with increased activation in the ventromedial and dorsolateral prefrontal cortices, the left lateral orbital gyrus, the middle temporal gyri, angular gyri, and visual cortices. The involvement of the ventromedial prefrontal and orbitofrontal cortex in extinction learning has been well documented, and this study provides evidence for the notion that these regions are already involved in negative PE signaling during fear conditioning. PMID:20869454
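The TD prediction-error signal referred to above takes the standard form delta_t = r_t + gamma * V(s_{t+1}) - V(s_t); omission of an expected outcome makes the last term dominate and yields a negative delta. A generic sketch (not the study's exact fMRI regressor):

```python
def td_errors(values, rewards, gamma=0.9):
    """Temporal-difference prediction errors. values[t] is V(s_t);
    rewards[t] is the reward received on leaving state t; the state
    after the last one is terminal with V = 0."""
    deltas = []
    for t in range(len(rewards)):
        v_next = values[t + 1] if t + 1 < len(values) else 0.0
        deltas.append(rewards[t] + gamma * v_next - values[t])
    return deltas
```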

  2. On the improvement of neural cryptography using erroneous transmitted information with error prediction.

    PubMed

    Allam, Ahmed M; Abbas, Hazem M

    2010-12-01

    Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits) and the key between the two communicating parties is eventually represented in the final learned weights, when the two networks are said to be synchronized. Security of neural synchronization is put at risk if an attacker is capable of synchronizing with any of the two parties during the training process. Therefore, diminishing the probability of such a threat improves the reliability of exchanging the output bits through a public channel. The synchronization with feedback algorithm is one of the existing algorithms that enhances the security of neural cryptography. This paper proposes three new algorithms to enhance the mutual learning process. They mainly depend on disrupting the attacker confidence in the exchanged outputs and input patterns during training. The first algorithm is called "Do not Trust My Partner" (DTMP), which relies on one party sending erroneous output bits, with the other party being capable of predicting and correcting this error. The second algorithm is called "Synchronization with Common Secret Feedback" (SCSFB), where inputs are kept partially secret and the attacker has to train its network on input patterns that are different from the training sets used by the communicating parties. The third algorithm is a hybrid technique combining the features of the DTMP and SCSFB. The proposed approaches are shown to outperform the synchronization with feedback algorithm in the time needed for the parties to synchronize. PMID:20937580
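The mutual-learning machinery underlying these schemes is the tree parity machine: each hidden unit outputs the sign of its weighted input sum, the network output is the product of those signs, and weights move only when the parties' outputs agree. The sketch below shows the output computation and one Hebbian update; details such as the learning-rule variant and the zero-sign convention vary across implementations.

```python
def tpm_output(weights, inputs):
    """Tree parity machine forward pass. weights, inputs: K lists of N
    ints each (inputs are +/-1). Returns (tau, sigma): the network
    output and the hidden-unit signs."""
    sigma = []
    for w_k, x_k in zip(weights, inputs):
        s = sum(w * x for w, x in zip(w_k, x_k))
        sigma.append(1 if s > 0 else -1)  # map 0 to -1 by convention
    tau = 1
    for s in sigma:
        tau *= s
    return tau, sigma

def hebbian_update(weights, inputs, tau, L=3):
    """One mutual-learning step after both parties output tau: each
    hidden unit whose sign matched tau moves its weights toward the
    input, clipped to the range [-L, L]."""
    _, sigma = tpm_output(weights, inputs)
    for w_k, x_k, s in zip(weights, inputs, sigma):
        if s == tau:
            for i in range(len(w_k)):
                w_k[i] = max(-L, min(L, w_k[i] + x_k[i] * tau))
    return weights
```

The DTMP idea above perturbs exactly this exchange: one party occasionally flips the transmitted tau, and only its partner can predict and undo the flip.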

  3. Prediction errors to emotional expressions: the roles of the amygdala in social referencing.

    PubMed

    Meffert, Harma; Brislin, Sarah J; White, Stuart F; Blair, James R

    2015-04-01

    Social referencing paradigms in humans and observational learning paradigms in animals suggest that emotional expressions are important for communicating valence. It has been proposed that these expressions initiate stimulus-reinforcement learning. Relatively little is known about the role of emotional expressions in reinforcement learning, particularly in the context of social referencing. In this study, we examined object valence learning in the context of a social referencing paradigm. Participants viewed objects and faces that turned toward the objects and displayed a fearful, happy or neutral reaction to them, while judging the gender of these faces. Notably, amygdala activation was larger when the expressions following an object were less expected. Moreover, when asked, participants were both more likely to want to approach, and showed stronger amygdala responses to, objects associated with happy relative to objects associated with fearful expressions. This suggests that the amygdala plays two roles in social referencing: (i) initiating learning regarding the valence of an object as a function of prediction errors to expressions displayed toward this object and (ii) orchestrating an emotional response to the object when value judgments are being made regarding this object. PMID:24939872

  4. Scaling of Perceptual Errors Can Predict the Shape of Neural Tuning Curves

    NASA Astrophysics Data System (ADS)

    Shouval, Harel Z.; Agarwal, Animesh; Gavornik, Jeffrey P.

    2013-04-01

    Weber’s law, first characterized in the 19th century, states that errors estimating the magnitude of perceptual stimuli scale linearly with stimulus intensity. This linear relationship is found in most sensory modalities, generalizes to temporal interval estimation, and even applies to some abstract variables. Despite its generality and long experimental history, the neural basis of Weber’s law remains unknown. This work presents a simple theory explaining the conditions under which Weber’s law can result from neural variability and predicts that the tuning curves of neural populations which adhere to Weber’s law will have a log-power form with parameters that depend on spike-count statistics. The prevalence of Weber’s law suggests that it might be optimal in some sense. We examine this possibility, using variational calculus, and show that Weber’s law is optimal only when observed real-world variables exhibit power-law statistics with a specific exponent. Our theory explains how physiology gives rise to the behaviorally characterized Weber’s law and may represent a general governing principle relating perception to neural activity.
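The behavioural law itself is easy to reproduce with multiplicative ("scalar") noise: if internal noise scales with the represented magnitude, the standard deviation of estimates grows linearly with the magnitude, i.e. a constant Weber fraction. This sketch illustrates the law the theory sets out to explain, not the tuning-curve derivation.

```python
import random

def estimation_error_sd(magnitude, trials=2000, weber_fraction=0.1, seed=0):
    """Simulate magnitude estimates under multiplicative Gaussian noise
    and return the standard deviation of the estimates. Under scalar
    variability the SD is proportional to the magnitude."""
    rng = random.Random(seed)
    estimates = [magnitude * (1.0 + rng.gauss(0.0, weber_fraction))
                 for _ in range(trials)]
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / (trials - 1)
    return var ** 0.5
```

Quadrupling the magnitude quadruples the error SD, so the error-to-magnitude ratio (the Weber fraction) stays constant.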

  5. [Prediction of spatial distribution of forest carbon storage in Heilongjiang Province using spatial error model].

    PubMed

    Liu, Chang; Li, Feng-Ri; Zhen, Zhen

    2014-10-01

    Based on the data from the Chinese National Forest Inventory (CNFI) and Key Ecological Benefit Forest Monitoring plots (5075 in total) in Heilongjiang Province in 2010 and concurrent meteorological data from 59 meteorological stations located in Heilongjiang, Jilin and Inner Mongolia, this paper established a spatial error model (SEM) in GeoDA using carbon storage as the dependent variable and several independent variables, including diameter of living trees (DBH), number of trees per hectare (TPH), elevation (Elev), slope (Slope), and the product of precipitation and temperature (Rain_Temp). Global Moran's I was computed to describe overall spatial autocorrelations of model results at different spatial scales. Local Moran's I was calculated at the optimal bandwidth (25 km) to present the spatial distribution of residuals. Intra-block spatial variances were computed to explain spatial heterogeneity of residuals. Finally, a spatial distribution map of carbon storage in Heilongjiang was visualized based on predictions. The results showed that the distribution of forest carbon storage in Heilongjiang had a spatial effect and was significantly influenced by stand, topographic and meteorological factors, especially average DBH. SEM could handle the spatial autocorrelation and heterogeneity well. There were significant spatial differences in the distribution of forest carbon storage. The carbon storage was mainly distributed in Zhangguangcai Mountain, Xiao Xing'an Mountain and Da Xing'an Mountain where dense forests existed, and rarely distributed in the Songnen Plains, while Wanda Mountain had moderate-level carbon storage. PMID:25796882
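Global Moran's I, used above to check spatial autocorrelation of the model results, can be computed directly from a spatial weight matrix; a minimal sketch:

```python
def global_morans_i(values, weights):
    """Global Moran's I. values: list of n observations; weights: n x n
    spatial weight matrix with weights[i][i] = 0. Positive values mean
    similar observations cluster in space; negative values mean
    neighbours tend to differ."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)
```

A spatial error model then moves this residual autocorrelation into an explicit spatially lagged error term instead of leaving it to violate OLS assumptions.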

  6. Prediction of DVH parameter changes due to setup errors for breast cancer treatment based on 2D portal dosimetry

    SciTech Connect

    Nijsten, S. M. J. J. G.; Elmpt, W. J. C. van; Mijnheer, B. J.; Minken, A. W. H.; Persoon, L. C. G. G.; Lambin, P.; Dekker, A. L. A. J.

    2009-01-15

    Electronic portal imaging devices (EPIDs) are increasingly used for portal dosimetry applications. In our department, EPIDs are clinically used for two-dimensional (2D) transit dosimetry. Predicted and measured portal dose images are compared to detect dose delivery errors caused for instance by setup errors or organ motion. The aim of this work is to develop a model to predict dose-volume histogram (DVH) changes due to setup errors during breast cancer treatment using 2D transit dosimetry. First, correlations between DVH parameter changes and 2D gamma parameters are investigated for different simulated setup errors, which are described by a binomial logistic regression model. The model calculates the probability that a DVH parameter changes more than a specific tolerance level and uses several gamma evaluation parameters for the planning target volume (PTV) projection in the EPID plane as input. Second, the predictive model is applied to clinically measured portal images. Predicted DVH parameter changes are compared to calculated DVH parameter changes using the measured setup error resulting from a dosimetric registration procedure. Statistical accuracy is investigated by using receiver operating characteristic (ROC) curves and values for the area under the curve (AUC), sensitivity, specificity, positive and negative predictive values. Changes in the mean PTV dose larger than 5%, and changes in V90 and V95 larger than 10% are accurately predicted based on a set of 2D gamma parameters. Most pronounced changes in the three DVH parameters are found for setup errors in the lateral-medial direction. AUC, sensitivity, specificity, and negative predictive values were between 85% and 100% while the positive predictive values were lower but still higher than 54%. Clinical predictive value is decreased due to the occurrence of patient rotations or breast deformations during treatment, but the overall reliability of the predictive model remains high. Based on our
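A binomial logistic regression of the kind described above maps gamma-evaluation parameters to the probability that a DVH parameter changes beyond tolerance. The weights and feature values below are invented for illustration, not the fitted clinical model.

```python
import math

def p_dvh_change(gamma_params, weights, bias):
    """Logistic-regression sketch: linear combination of 2D gamma
    evaluation parameters (e.g. mean gamma, fraction of points with
    gamma > 1 in the PTV projection) passed through a sigmoid to give
    P(DVH parameter change exceeds tolerance)."""
    z = bias + sum(w * g for w, g in zip(weights, gamma_params))
    return 1.0 / (1.0 + math.exp(-z))
```

Thresholding this probability is what produces the sensitivity/specificity trade-off summarized by the ROC curves in the abstract.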

  7. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
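The core sampling step, Latin Hypercube Sampling, can be sketched as follows (a generic implementation, not REPTool's code): each input variable's range is cut into equal-probability strata, one draw is taken per stratum, and the stratum order is shuffled independently per variable so the sample covers every marginal stratum exactly once.

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Latin Hypercube Sample on [0, 1)^n_vars: for each variable, one
    uniform draw per stratum of width 1/n_samples, independently
    shuffled, then transposed into a list of sample points."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return list(map(list, zip(*columns)))

def propagate(model, lhs_points):
    """Push each sampled input vector through a (user-supplied) model to
    get a distribution of outputs, from which output uncertainty and
    relative variance contributions can be estimated."""
    return [model(*x) for x in lhs_points]
```

In REPTool's setting, the sampled quantities would be the per-raster and per-coefficient errors the user specifies, and the model is the geospatial processing operation.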

  8. Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System

    NASA Astrophysics Data System (ADS)

    Wang, Hongcui; Kawahara, Tatsuya

    CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or to the ASR grammar network. However, this approach quickly runs into a trade-off between error coverage and increased perplexity. To solve this problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, that achieves both better coverage of errors and smaller perplexity, resulting in significant improvement in ASR accuracy.
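The tree-growing criterion can be sketched with a single information-gain split; the features here are hypothetical numeric codes for learner/context attributes, not the paper's feature set.

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_split(rows, labels):
    """Pick the (feature index, threshold) with the highest information
    gain, the greedy criterion a decision tree uses to decide which
    attributes best predict whether an error pattern will occur."""
    base = entropy(labels)
    best = (None, None, 0.0)
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            n = len(labels)
            gain = base - (len(left) / n * entropy(left)
                           + len(right) / n * entropy(right))
            if gain > best[2]:
                best = (f, t, gain)
    return best
```

Recursing on each side of the chosen split grows the tree; only error patterns the tree predicts as likely are then added to the grammar network, keeping perplexity low.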

  9. The Vertical Error Characteristics of GOES-derived Winds: Description and Impact on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the errors of the GOES winds range from approximately 3 to 10 m/s and generally increase with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A

  10. Preschool Speech Error Patterns Predict Articulation and Phonological Awareness Outcomes in Children with Histories of Speech Sound Disorders

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2013-01-01

    Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…

  11. Computer program to minimize prediction error in models from experiments with 16 hypercube points and 0 to 6 center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1982-01-01

    A previous report described a backward deletion procedure of model selection that was optimized for minimum prediction error and that used a multiparameter combination of the F-distribution and an order-statistics distribution due to Cochran. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.

  12. Critical evaluation of parameter consistency and predictive uncertainty in hydrological modeling: A case study using Bayesian total error analysis

    NASA Astrophysics Data System (ADS)

    Thyer, Mark; Renard, Benjamin; Kavetski, Dmitri; Kuczera, George; Franks, Stewart William; Srikanthan, Sri

    2009-12-01

    The lack of a robust framework for quantifying the parametric and predictive uncertainty of conceptual rainfall-runoff (CRR) models remains a key challenge in hydrology. The Bayesian total error analysis (BATEA) methodology provides a comprehensive framework to hypothesize, infer, and evaluate probability models describing input, output, and model structural error. This paper assesses the ability of BATEA and standard calibration approaches (standard least squares (SLS) and weighted least squares (WLS)) to address two key requirements of uncertainty assessment: (1) reliable quantification of predictive uncertainty and (2) reliable estimation of parameter uncertainty. The case study presents a challenging calibration of the lumped GR4J model to a catchment with ephemeral responses and large rainfall gradients. Postcalibration diagnostics, including checks of predictive distributions using quantile-quantile analysis, suggest that while still far from perfect, BATEA satisfied its assumed probability models better than SLS and WLS. In addition, WLS/SLS parameter estimates were highly dependent on the selected rain gauge and calibration period. This will obscure potential relationships between CRR parameters and catchment attributes and prevent the development of meaningful regional relationships. Conversely, BATEA provided consistent, albeit more uncertain, parameter estimates and thus overcomes one of the obstacles to parameter regionalization. However, significant departures from the calibration assumptions remained even in BATEA, e.g., systematic overestimation of predictive uncertainty, especially in validation. This is likely due to the inferred rainfall errors compensating for simplified treatment of model structural error.
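
    The quantile-quantile checks of predictive distributions mentioned above are commonly built on the probability integral transform (PIT): if predictive uncertainty is reliable, observed values fall at uniformly distributed quantiles of their predictive distributions. A minimal sketch assuming Gaussian predictive distributions (the function names are ours, not BATEA's):

```python
from statistics import NormalDist

def pit_values(obs, pred_means, pred_sds):
    """Probability integral transform: the quantile of each observation
    under its (here Gaussian) predictive distribution. Reliable
    predictive uncertainty yields approximately uniform PIT values."""
    return [NormalDist(m, s).cdf(y) for y, m, s in zip(obs, pred_means, pred_sds)]

def qq_deviation(pits):
    """Largest gap between sorted PIT values and the uniform quantiles
    they are plotted against in a quantile-quantile check."""
    n = len(pits)
    return max(abs(p - (i + 0.5) / n) for i, p in enumerate(sorted(pits)))
```

    A large qq_deviation (sorted PIT values far from the 1:1 line) signals miscalibration, such as the systematic overestimation of predictive uncertainty noted above.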

  13. High post-treatment absolute monocyte count predicted hepatocellular carcinoma risk in HCV patients who failed peginterferon/ribavirin therapy.

    PubMed

    Chen, Tsung-Ming; Lin, Chun-Che; Huang, Pi-Teh; Wen, Chen-Fan

    2016-06-01

    Several studies have investigated the association between host inflammatory response and cancer. This study was conducted to test the hypothesis that peripheral absolute monocyte counts (AMC) could impart an increased risk of hepatocellular carcinoma (HCC) development in hepatitis C virus (HCV)-infected patients after a failed peginterferon/ribavirin (PR) combination therapy. A total of 723 chronic HCV-infected patients were treated with PR, of whom 183 (25.3 %) did not achieve a sustained virological response (non-SVR). Post-treatment AMC values were measured at 6 months after the end of PR treatment. Fifteen (2.8 %) of 540 patients with an SVR developed HCC during a median follow-up period of 41.4 months, and 14 (7.7 %) of 183 non-SVR patients developed HCC during a median follow-up of 36.8 months (log rank test for SVR vs. non-SVR, P = 0.002). Cox regression analysis revealed that post-treatment AFP level (HR 1.070; 95 % CI = 1.024-1.119, P = 0.003) and post-treatment aspartate aminotransferase (AST)-to-platelet ratio index (APRI) ≥0.5 (HR 4.401; 95 % CI = 1.463-13.233, P = 0.008) were independent variables associated with HCC development for SVR patients. For non-SVR patients, diabetes (HR 5.750; 95 % CI = 1.387-23.841, P = 0.016), post-treatment AMC ≥370 mm⁻³ (HR 5.805; 95 % CI = 1.268-26.573, P = 0.023), and post-treatment APRI ≥1.5 (HR 10.905; 95 % CI = 2.493-47.697, P = 0.002) were independent risk factors associated with HCC. In conclusion, post-treatment AMC has a role in prognostication of HCC development in HCV-infected patients who failed to achieve an SVR after PR combination therapy. PMID:26662957

  14. Beyond reward prediction errors: the role of dopamine in movement kinematics

    PubMed Central

    Barter, Joseph W.; Li, Suellen; Lu, Dongye; Bartholomew, Ryan A.; Rossi, Mark A.; Shoemaker, Charles T.; Salas-Meza, Daniel; Gaidis, Erin; Yin, Henry H.

    2015-01-01

    We recorded activity of dopamine (DA) neurons in the substantia nigra pars compacta in unrestrained mice while monitoring their movements with video tracking. Our approach allows an unbiased examination of the continuous relationship between single unit activity and behavior. Although DA neurons show characteristic burst firing following cue or reward presentation, as previously reported, their activity can be explained by the representation of actual movement kinematics. Unlike neighboring pars reticulata GABAergic output neurons, which can represent vector components of position, DA neurons represent vector components of velocity or acceleration. We found neurons related to movements in four directions—up, down, left, right. For horizontal movements, there is significant lateralization of neurons: the left nigra contains more rightward neurons, whereas the right nigra contains more leftward neurons. The relationship between DA activity and movement kinematics was found on both appetitive trials using sucrose and aversive trials using air puff, showing that these neurons belong to a velocity control circuit that can be used for any number of purposes, whether to seek reward or to avoid harm. In support of this conclusion, mimicry of the phasic activation of DA neurons with selective optogenetic stimulation could also generate movements. Contrary to the popular hypothesis that DA neurons encode reward prediction errors, our results suggest that nigrostriatal DA plays an essential role in controlling the kinematics of voluntary movements. We hypothesize that DA signaling implements gain adjustment for adaptive transition control, and describe a new model of the basal ganglia (BG) in which DA functions to adjust the gain of the transition controller. This model has significant implications for our understanding of movement disorders implicating DA and the BG. PMID:26074791

  15. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, and conservatively accurate solutions may be computed. Computable error estimates offer the possibility of minimizing computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.

  16. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both the PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using the real plant data and the comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches. PMID:21233046
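
    The minimum-entropy criterion, choosing model parameters so that the modeling-error distribution is as sharply concentrated as possible, can be sketched with a simple histogram entropy estimate (a rough stand-in for the kernel-based estimators typically used; the function names are hypothetical):

```python
import math

def error_entropy(errors, bins=10):
    """Histogram estimate of the Shannon entropy of a modeling-error
    sample; a sharper, more concentrated error PDF gives lower entropy."""
    lo, hi = min(errors), max(errors)
    width = (hi - lo) / bins or 1.0      # degenerate sample -> one bin
    counts = [0] * bins
    for e in errors:
        counts[min(int((e - lo) / width), bins - 1)] += 1
    n = len(errors)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def select_min_entropy(candidates, residuals_of):
    """Pick the candidate model parameter whose residuals have the
    lowest estimated error entropy."""
    return min(candidates, key=lambda p: error_entropy(residuals_of(p)))
```

    A parameter whose residuals cluster tightly around a single value scores near zero entropy and is preferred over one whose residuals spread widely.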

  17. Predicting pilot-error incidents of US airline pilots using logistic regression.

    PubMed

    McFadden, K L

    1997-06-01

    In a population of 70,164 airline pilots obtained from the Federal Aviation Administration, 475 males and 22 females had pilot-error incidents in the years 1986-1992. A simple chi-squared test revealed that female pilots employed by major airlines had a significantly greater likelihood of pilot-error incidents than their male colleagues. In order to control for age, experience (total flying hours), risk exposure (recent flying hours) and employer (major/non-major airline) simultaneously, the author built a model of male pilot-error incidents using logistic regression. The regression analysis indicated that youth, inexperience and non-major airline employer were independent contributors to the increased risk of pilot-error incidents. The results also provide further support to the literature that pilot performance does not differ significantly between male and female airline pilots. PMID:9414359
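
    The kind of model used in such an analysis can be sketched with a plain gradient-descent logistic regression; the toy data below (scaled recent flying hours versus incident outcome) are invented for illustration and are not drawn from the FAA population:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-descent logistic regression; each row of X starts
    with a constant 1 for the intercept. Returns the coefficients."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Invented toy data: fewer scaled flying hours -> incident (1), more -> none (0).
X = [[1.0, h] for h in (0.1, 0.2, 0.3, 0.8, 0.9, 1.0)]
y = [1, 1, 1, 0, 0, 0]
w = fit_logistic(X, y)
```

    A negative fitted coefficient on the hours column corresponds to the reported pattern that inexperience independently raises incident risk.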

  18. Error metrics for predicting discrimination of original and spectrally altered musical instrument sounds

    NASA Astrophysics Data System (ADS)

    Beauchamp, James W.; Horner, Andrew

    2003-10-01

    The correspondence of various error metrics to human discrimination data was investigated. Time-varying harmonic amplitude data were obtained from spectral analysis of eight musical instrument sounds (bassoon, clarinet, flute, horn, oboe, saxophone, trumpet, and violin). The data were altered using fixed random multipliers on the harmonic amplitudes, and the sounds were additively resynthesized with estimated average spectral errors ranging from 1% to 50%. Listeners attempted to discriminate the randomly altered sounds from reference sounds resynthesized from the original data. Then, various error metrics were used to calculate the spectral differences between the original and altered sounds, and the R2 correspondence between the error metrics and the discrimination data was measured. A relative-amplitude spectral error metric gave the best correspondence to average subject discrimination data, capturing over 90% of the variation relative to a fourth-order regression curve, although other formulas gave similar results. Error metrics which used a small number of representative analysis frames gave results which compared favorably to using all frames of the analysis.
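
    One plausible form of a relative-amplitude spectral error metric (the paper's exact normalization may differ) is the frame-wise RMS harmonic-amplitude difference normalized by the reference frame's RMS amplitude, averaged over analysis frames:

```python
import math

def relative_spectral_error(ref_frames, test_frames):
    """Frame-wise RMS harmonic-amplitude difference, normalized by the
    reference frame's RMS amplitude and averaged over frames."""
    total = 0.0
    for ref, alt in zip(ref_frames, test_frames):
        num = math.sqrt(sum((r - a) ** 2 for r, a in zip(ref, alt)))
        den = math.sqrt(sum(r * r for r in ref)) or 1.0
        total += num / den
    return total / len(ref_frames)

# Two frames of harmonic amplitudes; the altered version scales each by 10%.
ref = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
scaled = [[1.1 * a for a in frame] for frame in ref]
err = relative_spectral_error(ref, scaled)
```

    Under this form, scaling every harmonic by 10% scores 0.1, i.e. a 10% average spectral error on the percentage scale quoted above.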

  19. A nuclear plant accident diagnosis method to support prediction of errors of commission

    SciTech Connect

    Chang, Y. H. J.; Coyne, K.; Mosleh, A.

    2006-07-01

    The identification and mitigation of operator errors of commission (EOCs) continue to be a major focus of nuclear plant human reliability research. Current Human Reliability Analysis (HRA) methods for predicting EOCs generally rely on the availability of operating procedures or extensive use of expert judgment. Consequently, an analysis for EOCs cannot easily be performed for actions that may be taken outside the scope of the operating procedures. Additionally, current HRA techniques rarely capture an operator's 'creative' problem-solving behavior. However, a nuclear plant operator knowledge base developed for the use with the IDAC (Information, Decision, and Action in Crew context) cognitive model shows potential for addressing these limitations. This operator knowledge base currently includes an event-symptom diagnosis matrix for a pressurized water reactor (PWR) nuclear plant. The diagnosis matrix defines a probabilistic relationship between observed symptoms and plant events that models the operator's heuristic process for classifying a plant state. Observed symptoms are obtained from a dynamic thermal-hydraulic plant model and can be modified to account for the limitations of human perception and cognition. A fuzzy-logic inference technique is used to calculate the operator's confidence, or degree of belief, that a given plant event has occurred based on the observed symptoms. An event diagnosis can be categorized as either: (a) a generalized flow imbalance of basic thermal-hydraulic properties (e.g., a mass or energy flow imbalance in the reactor coolant system), or (b) a specific event type, such as a steam generator tube rupture or a reactor trip. When an operator is presented with incomplete or contradictory information, this diagnosis approach provides a means to identify situations where an operator might be misled to perform unsafe actions based on an incorrect diagnosis. 
This knowledge base model could also support identification of potential EOCs when

  20. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.

  1. Kamin Blocking Is Associated with Reduced Medial-Frontal Gyrus Activation: Implications for Prediction Error Abnormality in Schizophrenia

    PubMed Central

    Cross, Benjamin; Corcoran, Rhiannon

    2012-01-01

    The following study used 3-T functional magnetic resonance imaging (fMRI) to investigate the neural signature of Kamin blocking. Kamin blocking is an associative learning phenomenon seen where prior association of a stimulus (A) with an outcome blocks subsequent learning to an added stimulus (B) when both stimuli are later presented together (AB) with the same outcome. While there are a number of theoretical explanations of Kamin blocking, it is widely considered to exemplify the use of prediction error in learning, where learning occurs in proportion to the difference between expectation and outcome. In Kamin blocking, as stimulus A fully predicts the outcome, no prediction error is generated by the addition of stimulus B to form the compound stimulus AB, hence learning about it is “blocked”. Kamin blocking is disrupted in people with schizophrenia, their relatives, and healthy individuals with high psychometrically-defined schizotypy. This disruption supports suggestions that abnormal prediction error is a core deficit that can help to explain the symptoms of schizophrenia. The present study tested 9 healthy volunteers on an fMRI adaptation of Oades' “mouse in the house task”, the only task measuring Kamin blocking that shows disruption in schizophrenia patients and that has been independently replicated. Participants' Kamin blocking scores were found to correlate inversely with Kamin-blocking-related activation within the prefrontal cortex, specifically the medial frontal gyrus. The medial frontal gyrus has been associated with the psychological construct of uncertainty, which we suggest is consistent with the disrupted Kamin blocking demonstrated in people with schizophrenia. These data suggest that the medial frontal gyrus merits further investigation as a potential locus of reduced Kamin blocking and abnormal prediction error in schizophrenia. PMID:23028415

  2. Design of a predictive targeting error simulator for MRI-guided prostate biopsy

    NASA Astrophysics Data System (ADS)

    Avni, Shachar; Vikal, Siddharth; Fichtinger, Gabor

    2010-02-01

    Multi-parametric MRI is a new imaging modality superior in quality to ultrasound (US), which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes is a simple alternative to side-step the challenges involved with deformable registration. However, segmentation errors inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial purpose of validating this procedure, we introduce a fully interactive and customizable simulator which determines the resulting targeting errors of simulated registrations between prostate volumes, given user-provided parameters for organ deformation, segmentation, and targeting. We present the workflow executed by the simulator in detail and discuss the parameters involved. We also present a segmentation error introduction algorithm, based on polar curves and natural cubic spline interpolation, which introduces statistically realistic contouring errors. One simulation, including all I/O and preparation for rendering, takes approximately 1 minute and 40 seconds to complete on a system with 3 GB of RAM and four Intel Core 2 Quad CPUs, each with a speed of 2.40 GHz. Preliminary results of our simulation suggest the maximum tolerable segmentation error, given the presence of a 5.0 mm wide small tumor, is between 4 and 5 mm. We intend to validate these results via clinical trials as part of our ongoing work.

  3. Prediction error and accuracy of intraocular lens power calculation in pediatric patient comparing SRK II and Pediatric IOL Calculator

    PubMed Central

    2010-01-01

    Background Despite the growing number of intraocular lens power calculation formulas, there is no evidence that these formulas have good predictive accuracy in pediatric patients, whose eyes are still undergoing rapid growth and refractive changes. This study is intended to compare the prediction error and the predictive accuracy of intraocular lens power calculation in pediatric patients at 3 months after cataract surgery with primary implantation of an intraocular lens, using SRK II versus Pediatric IOL Calculator for pediatric intraocular lens calculation. Pediatric IOL Calculator is a modification of SRK II using the Holladay algorithm. This program attempts to predict the refraction of a pseudophakic child as he grows, using a Holladay algorithm model. This model is based on refraction measurements of pediatric aphakic eyes. Pediatric IOL Calculator uses computer software for intraocular lens calculation. Methods This comparative study consists of 31 eyes (24 patients) that successfully underwent cataract surgery and intraocular lens implantation. All patients were 12 years old and below (range: 4 months to 12 years old). Patients were randomized into 2 groups (SRK II group and Pediatric IOL Calculator group) using an envelope sampling procedure. Intraocular lens power calculations were made using either SRK II or Pediatric IOL Calculator for pediatric intraocular lens calculation, based on the technique selected for each patient. Thirteen patients were assigned to the SRK II group and another 11 patients to the Pediatric IOL Calculator group. For the SRK II group, the predicted postoperative refraction is based on the patient's axial length and is aimed at emmetropia at the time of surgery. For the Pediatric IOL Calculator group, however, the predicted postoperative refraction is aimed at an emmetropic spherical equivalent at age 2 years. The postoperative refractive outcome was taken as the spherical equivalent of the refraction at the 3-month postoperative follow-up.
The

  4. Exploring the Fundamental Dynamics of Error-Based Motor Learning Using a Stationary Predictive-Saccade Task

    PubMed Central

    Wong, Aaron L.; Shelhamer, Mark

    2011-01-01

    The maintenance of movement accuracy uses prior performance errors to correct future motor plans; this motor-learning process ensures that movements remain quick and accurate. The control of predictive saccades, in which anticipatory movements are made to future targets before visual stimulus information becomes available, serves as an ideal paradigm to analyze how the motor system utilizes prior errors to drive movements to a desired goal. Predictive saccades constitute a stationary process (the mean and, to a rough approximation, the variability of the data do not vary over time, unlike in a typical motor adaptation paradigm). This enables us to study inter-trial correlations, both on a trial-by-trial basis and across long blocks of trials. Saccade errors are found to be corrected on a trial-by-trial basis in a direction-specific manner (the next saccade made in the same direction will reflect a correction for errors made on the current saccade). Additionally, there is evidence for a second, modulating process that exhibits long memory. That is, performance information, as measured via inter-trial correlations, is strongly retained across a large number of saccades (about 100 trials). Together, this evidence indicates that the dynamics of motor learning exhibit complexities that must be carefully considered, as they cannot be fully described with current state-space (ARMA) modeling efforts. PMID:21966462
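
    The trial-by-trial, error-correcting dynamics described above are often idealized as a first-order process; the toy simulation below (with assumed gain and noise values, not fitted to the study's data) shows how the correction gain shapes inter-trial correlations, leaving aside the separate long-memory component:

```python
import random

def simulate_errors(n, gain, noise_sd, seed=0):
    """First-order error-correction model: each trial corrects a fraction
    `gain` of the previous trial's error, so e[t] = (1 - gain) * e[t-1]
    plus fresh planning noise."""
    rng = random.Random(seed)
    errors = [rng.gauss(0.0, noise_sd)]
    for _ in range(n - 1):
        errors.append((1.0 - gain) * errors[-1] + rng.gauss(0.0, noise_sd))
    return errors

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation, the simplest inter-trial statistic."""
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

weak = lag1_autocorr(simulate_errors(5000, gain=0.2, noise_sd=1.0))
strong = lag1_autocorr(simulate_errors(5000, gain=0.9, noise_sd=1.0))
```

    Weak correction leaves errors strongly correlated across trials, while near-complete correction drives the inter-trial correlation toward zero; long-memory effects like those reported above would appear as correlations persisting far beyond lag 1.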

  5. When What You See Isn’t What You Get: Alcohol Cues, Alcohol Administration, Prediction Error, and Human Striatal Dopamine

    PubMed Central

    Yoder, Karmen K.; Morris, Evan D.; Constantinescu, Cristian C.; Cheng, Tee-Ean; Normandin, Marc D.; O’Connor, Sean J.; Kareken, David A.

    2010-01-01

    Background The mesolimbic dopamine (DA) system is implicated in the development and maintenance of alcohol drinking; however, the exact mechanisms by which DA regulates human alcohol consumption are unclear. This study assessed the distinct effects of alcohol-related cues and alcohol administration on striatal DA release in healthy humans. Methods Subjects underwent 3 PET scans with [11C]raclopride (RAC). Subjects were informed that they would receive either an IV Ringer’s lactate infusion or an alcohol (EtOH) infusion during scanning, with naturalistic visual and olfactory cues indicating which infusion would occur. Scans were acquired in the following sequence: (1) Baseline Scan: Neutral cues predicting a Ringer’s lactate infusion, (2) CUES Scan: Alcohol-related cues predicting alcohol infusion in a Ringer’s lactate solution, but with alcohol infusion after scanning to isolate the effects of cues, and (3) EtOH Scan: Neutral cues predicting Ringer’s, but with alcohol infusion during scanning (to isolate the effects of alcohol without confounding expectation or craving). Results Relative to baseline, striatal DA concentration decreased during CUES, but increased during EtOH. Conclusion While the results appear inconsistent with some animal experiments showing dopaminergic responses to alcohol’s conditioned cues, they can be understood in the context of the hypothesized role of the striatum in reward prediction error, and of animal studies showing that midbrain dopamine neurons decrease and increase firing rates during negative and positive prediction errors, respectively. We believe that our data are the first in humans to demonstrate such changes in striatal DA during reward prediction error. PMID:18976347

  6. The neurobiology of schizotypy: Fronto-striatal prediction error signal correlates with delusion-like beliefs in healthy people

    PubMed Central

    Corlett, P.R.; Fletcher, P.C.

    2012-01-01

    Healthy people sometimes report experiences and beliefs that are strikingly similar to the symptoms of psychosis in their bizarreness and the apparent lack of evidence supporting them. An important question is whether this represents merely a superficial resemblance or whether there is a genuine and deep similarity indicating, as some have suggested, a continuum between odd but healthy beliefs and the symptoms of psychotic illness. We sought to shed light on this question by determining whether the neural marker for prediction error, previously shown to be altered in early psychosis, is comparably altered in healthy individuals reporting schizotypal experiences and beliefs. We showed that non-clinical schizotypal experiences were significantly correlated with aberrant frontal and striatal prediction error signal. This correlation related to the distress associated with the beliefs. Given our previous observations that patients with first episode psychosis show altered neural responses to prediction error and that this alteration, in turn, relates to the severity of their delusional ideation, our results provide novel evidence in support of the view that schizotypy relates to psychosis at more than just a superficial descriptive level. However, the picture is a complex one in which the experiences, though associated with altered striatal responding, may provoke distress but may nonetheless be explained away, while an additional alteration in frontal cortical responding may allow the beliefs to become more delusion-like: intrusive and distressing. PMID:23079501

  7. Similarities between optimal precursors for ENSO events and optimally growing initial errors in El Niño predictions

    NASA Astrophysics Data System (ADS)

    Mu, Mu; Yu, Yanshan; Xu, Hui; Gong, Tingting

    2014-02-01

    With the Zebiak-Cane model, the relationship between the optimal precursors (OPR) for triggering El Niño/Southern Oscillation (ENSO) events and the optimally growing initial errors (OGE) responsible for uncertainty in El Niño predictions is investigated using an approach based on the conditional nonlinear optimal perturbation. The computed OPR for El Niño events possesses a sea surface temperature anomaly (SSTA) dipole over the equatorial central and eastern Pacific, plus positive thermocline depth anomalies in the entire equatorial Pacific. Based on the El Niño events triggered by the obtained OPRs, the OGE that cause the largest prediction errors are computed. It is found that the OPR and OGE share great similarities in terms of the localization and spatial structure of the SSTA dipole pattern over the central and eastern Pacific and the relatively uniform thermocline depth anomalies in the equatorial Pacific. The resemblances are possibly caused by the same mechanism of the Bjerknes positive feedback. This implies that if additional observation instruments are deployed for targeted observations with limited coverage, they should preferentially be deployed in the equatorial central and eastern Pacific, which has been determined as the sensitive area for ENSO prediction, to better detect the early signals for ENSO events and reduce the initial errors so as to improve the forecast skill.

  8. Real-time prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and error analysis

    NASA Astrophysics Data System (ADS)

    BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.

    2013-09-01

    The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
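
    The FTLE fields behind these LCS computations come from the gradient of the flow map; a minimal sketch on a toy saddle flow standing in for the forecast wind field, for which the exact FTLE is 1:

```python
import math

def flow_map(x, y, T, steps=1000):
    """Euler-integrate the toy saddle flow dx/dt = x, dy/dt = -y;
    its exact flow map is (x * e**T, y * e**-T)."""
    dt = T / steps
    for _ in range(steps):
        x, y = x + dt * x, y - dt * y
    return x, y

def ftle(x, y, T, h=1e-4):
    """Finite-time Lyapunov exponent from a central-difference flow-map
    gradient: FTLE = ln(sqrt(lambda_max(C))) / |T|, with C = J^T J the
    Cauchy-Green tensor (diagonal here, since this flow is decoupled)."""
    xp, _ = flow_map(x + h, y, T)
    xm, _ = flow_map(x - h, y, T)
    _, yp = flow_map(x, y + h, T)
    _, ym = flow_map(x, y - h, T)
    dxx = (xp - xm) / (2 * h)            # dPhi_x / dx
    dyy = (yp - ym) / (2 * h)            # dPhi_y / dy
    lam_max = max(dxx * dxx, dyy * dyy)  # eigenvalues of diagonal C
    return math.log(math.sqrt(lam_max)) / abs(T)

exponent = ftle(1.0, 1.0, 2.0)
```

    Ridges of this field computed over a grid of initial conditions mark the LCSs; in the forecast setting, the wind-field errors discussed above enter through flow_map and propagate into the ridge positions.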

  9. Individual Differences in Working Memory Capacity Predict Action Monitoring and the Error-Related Negativity

    ERIC Educational Resources Information Center

    Miller, A. Eve; Watson, Jason M.; Strayer, David L.

    2012-01-01

    Neuroscience suggests that the anterior cingulate cortex (ACC) is responsible for conflict monitoring and the detection of errors in cognitive tasks, thereby contributing to the implementation of attentional control. Though individual differences in frontally mediated goal maintenance have clearly been shown to influence outward behavior in…

  10. Prediction of absolute risk of fragility fracture at 10 years in a Spanish population: validation of the WHO FRAX ™ tool in Spain

    PubMed Central

    2011-01-01

    Background Age-related bone loss is asymptomatic, and the morbidity of osteoporosis is secondary to the fractures that occur. Common sites of fracture include the spine, hip, forearm and proximal humerus. Fractures at the hip incur the greatest morbidity and mortality and give rise to the highest direct costs for health services. Their incidence increases exponentially with age. Independently of changes in population demography, the age- and sex-specific incidence of osteoporotic fractures appears to be increasing in developing and developed countries. This could mean more than a doubling of the expected burden of osteoporotic fractures in the next 50 years. Methods/Design To assess the predictive power of the WHO FRAX™ tool to identify the subjects with the highest absolute risk of fragility fracture at 10 years in a Spanish population, a predictive validation study of the tool will be carried out. For this purpose, the participants recruited by 1999 will be assessed. These were referred to the scan-DXA department from primary healthcare centres and from non-hospital and hospital consultations. Study population: Patients attended in the national health services, integrated into the FRIDEX cohort, with at least one Dual-energy X-ray absorptiometry (DXA) measurement and one extensive questionnaire related to fracture risk factors. Measurements: At baseline, bone mineral density measurement using DXA, clinical fracture risk factors questionnaire, dietary calcium intake assessment, history of previous fractures, and related drugs. Follow-up by telephone interview to ascertain fragility fractures over the 10 years, with verification in electronic medical records, and to record the number of falls in the last year. The absolute risk of fracture will be estimated using the FRAX™ tool from the official web site. Discussion For more than 10 years, numerous publications have recognised the importance of other risk factors for new osteoporotic fractures in addition to low BMD. The extension of a

  11. General Approach to First-Order Error Prediction in Rigid Point Registration

    PubMed Central

    Fitzpatrick, J. Michael

    2015-01-01

    A general approach to the first-order analysis of error in rigid point registration is presented that accommodates fiducial localization error (FLE) that may be inhomogeneous (varying from point to point) and anisotropic (varying with direction) and also accommodates arbitrary weighting that may also be inhomogeneous and anisotropic. Covariances are derived for target registration error (TRE) and for weighted fiducial registration error (FRE) in terms of covariances of FLE, culminating in a simple implementation that encompasses all combinations of weightings and anisotropy. Furthermore, it is shown that for ideal weighting, in which the weighting matrix for each fiducial equals the inverse of the square root of the cross covariance of its two-space FLE, fluctuations of FRE and TRE are mutually independent. These results are validated by comparison with previously published expressions and by simulation. Furthermore, simulations for randomly generated fiducial positions and FLEs are presented that show that correlation is negligible (correlation coefficient < 0.1 in the exact case) for both ideal and uniform weighting (i.e., no weighting), the latter of which is employed in commercial surgical guidance systems. From these results we conclude that for these weighting schemes, while valid expressions exist relating the covariance of FRE to the covariance of TRE, there are no measures of the goodness of fit of the fiducials for a given registration that give to first order any information about the fluctuation of TRE from its expected value and none that give useful information in the exact case. Therefore, as estimators of registration accuracy, such measures should be approached with extreme caution both by the purveyors of guidance systems and by the practitioners who use them. PMID:21075718
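The weak FRE-TRE coupling reported above can be illustrated with a small Monte Carlo sketch. This is a toy setup, not the paper's implementation: homogeneous, isotropic FLE with uniform (no) weighting, and the fiducial layout, noise level, and trial count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rigid_register(src, dst):
    """Least-squares rigid fit (rotation + translation) via the Kabsch/SVD method."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

fiducials = rng.uniform(-50, 50, size=(6, 3))   # true fiducial positions (mm)
target = np.array([0.0, 0.0, 100.0])            # target point away from the fiducials
sigma_fle = 0.5                                  # isotropic FLE standard deviation (mm)

fre_vals, tre_vals = [], []
for _ in range(2000):
    measured = fiducials + rng.normal(0.0, sigma_fle, fiducials.shape)
    R, t = rigid_register(fiducials, measured)
    fitted = fiducials @ R.T + t
    # FRE: rms residual at the fiducials; TRE: error propagated to the target
    fre_vals.append(np.sqrt(np.mean(np.sum((fitted - measured) ** 2, axis=1))))
    tre_vals.append(np.linalg.norm(R @ target + t - target))

corr = np.corrcoef(fre_vals, tre_vals)[0, 1]    # expected to be near zero
```

Across trials, the per-registration FRE carries essentially no information about the TRE, in line with the paper's conclusion for uniform weighting.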

  12. Ensemble prediction for nowcasting with a convection-permitting model - II: forecast error statistics

    NASA Astrophysics Data System (ADS)

    Bannister, R. N.; Migliorini, S.; Dixon, M. A. G.

    2011-05-01

    A 24-member ensemble of 1-h high-resolution forecasts over the Southern United Kingdom is used to study short-range forecast error statistics. The initial conditions are found from perturbations from an ensemble transform Kalman filter. Forecasts from this system are assumed to lie within the bounds of forecast error of an operational forecast system. Although noisy, this system is capable of producing physically reasonable statistics which are analysed and compared to statistics implied from a variational assimilation system. The variances for temperature errors for instance show structures that reflect convective activity. Some variables, notably potential temperature and specific humidity perturbations, have autocorrelation functions that deviate from 3-D isotropy at the convective-scale (horizontal scales less than 10 km). Other variables, notably the velocity potential for horizontal divergence perturbations, maintain 3-D isotropy at all scales. Geostrophic and hydrostatic balances are studied by examining correlations between terms in the divergence and vertical momentum equations respectively. Both balances are found to decay as the horizontal scale decreases. It is estimated that geostrophic balance becomes less important at scales smaller than 75 km, and hydrostatic balance becomes less important at scales smaller than 35 km, although more work is required to validate these findings. The implications of these results for high-resolution data assimilation are discussed.
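The kind of ensemble-derived error statistics described here (per-gridpoint variances and spatial autocorrelations of perturbations) can be sketched on a toy 1-D field. The member count matches the paper's 24, but the grid, base field, and correlation length are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 24-member ensemble of a 1-D temperature-like field on a 2 km grid
n_members, n_points, dx_km = 24, 256, 2.0
base = np.sin(np.linspace(0.0, 4.0 * np.pi, n_points))   # "truth"-like signal
# Member perturbations with ~10 km correlation length, mimicking convective-scale error
kernel = np.exp(-0.5 * (np.arange(-30, 31) * dx_km / 10.0) ** 2)
kernel /= kernel.sum()
white = rng.normal(size=(n_members, n_points))
perts = np.array([np.convolve(w, kernel, mode="same") for w in white])
ensemble = base + perts

variance = ensemble.var(axis=0, ddof=1)       # forecast-error variance per grid point
anom = ensemble - ensemble.mean(axis=0)       # perturbations about the ensemble mean

def spatial_autocorr(anom, lag_points):
    """Ensemble- and space-averaged autocorrelation at a given grid-point lag."""
    a, b = anom[:, :-lag_points], anom[:, lag_points:]
    return np.mean(a * b) / np.sqrt(np.mean(a * a) * np.mean(b * b))

r_10km = spatial_autocorr(anom, int(10 / dx_km))   # short separation: high correlation
r_50km = spatial_autocorr(anom, int(50 / dx_km))   # long separation: near zero
```

The autocorrelation decays with separation, which is the kind of scale-dependent structure (e.g., loss of isotropy below ~10 km) that the study diagnoses from its real ensemble.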

  13. Prediction Errors in Learning Drug Response from Gene Expression Data – Influence of Labeling, Sample Size, and Machine Learning Algorithm

    PubMed Central

    Bayer, Immanuel; Groth, Philip; Schneckener, Sebastian

    2013-01-01

    Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collections, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e. response could be predicted for some compounds but not for all. The choice of sample collection plays a major role towards lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment. PMID:23894636

  14. Does absolute brain size really predict self-control? Hand-tracking training improves performance on the A-not-B task.

    PubMed

    Jelbert, S A; Taylor, A H; Gray, R D

    2016-02-01

    Large-scale, comparative cognition studies are set to revolutionize the way we investigate and understand the evolution of intelligence. However, the conclusions reached by such work have a key limitation: the cognitive tests themselves. If factors other than cognition can systematically affect the performance of a subset of animals on these tests, we risk drawing the wrong conclusions about how intelligence evolves. Here, we examined whether this is the case for the A-not-B task, recently used by MacLean and co-workers to study self-control among 36 different species. Non-primates performed poorly on this task; possibly because they have difficulty tracking the movements of a human demonstrator, and not because they lack self-control. To test this, we assessed the performance of New Caledonian crows on the A-not-B task before and after two types of training. New Caledonian crows trained to track rewards moved by a human demonstrator were more likely to pass the A-not-B test than birds trained on an unrelated choice task involving inhibitory control. Our findings demonstrate that overlooked task demands can affect performance on a cognitive task, and so bring into question MacLean's conclusion that absolute brain size best predicts self-control. PMID:26843555

  15. An initial state perturbation experiment with the GISS model. [random error effects on numerical weather prediction models

    NASA Technical Reports Server (NTRS)

    Spar, J.; Notario, J. J.; Quirk, W. J.

    1978-01-01

    Monthly mean global forecasts for January 1975 have been computed with the Goddard Institute for Space Studies model from four slightly different sets of initial conditions - a 'control' state and three random perturbations thereof - to simulate the effects of initial state uncertainty on forecast quality. Differences among the forecasts are examined in terms of energetics, synoptic patterns and forecast statistics. The 'noise level' of the model predictions is depicted on global maps of standard deviations of sea level pressures, 500 mb heights and 850 mb temperatures for the set of four forecasts. Initial small-scale random errors do not appear to result in any major degradation of the large-scale monthly mean forecast beyond that generated by the model itself, nor do they appear to represent the major source of large-scale forecast error.

  16. Methodology to predict long-term cancer survival from short-term data using Tobacco Cancer Risk and Absolute Cancer Cure models

    NASA Astrophysics Data System (ADS)

    Mould, R. F.; Lederman, M.; Tai, P.; Wong, J. K. M.

    2002-11-01

    Three parametric statistical models have been fully validated for cancer of the larynx for the prediction of long-term 15, 20 and 25 year cancer-specific survival fractions when short-term follow-up data was available for just 1-2 years after the end of treatment of the last patient. In all groups of cases the treatment period was only 5 years. Three disease stage groups were studied, T1N0, T2N0 and T3N0. The models are the Standard Lognormal (SLN) first proposed by Boag (1949 J. R. Stat. Soc. Series B 11 15-53) but only ever fully validated for cancer of the cervix, Mould and Boag (1975 Br. J. Cancer 32 529-50), and two new models which have been termed Tobacco Cancer Risk (TCR) and Absolute Cancer Cure (ACC). In each, the frequency distribution of survival times of defined groups of cancer deaths is lognormally distributed: larynx only (SLN), larynx and lung (TCR) and all cancers (ACC). Each model has three unknown parameters, but it was possible to estimate a value of the lognormal parameter S a priori. By reducing to two unknown parameters, the model stability was improved. The material used to validate the methodology consisted of case histories of 965 patients, all treated during the period 1944-1968 by Dr Manuel Lederman of the Royal Marsden Hospital, London, with follow-up to 1988. This provided a follow-up range of 20-44 years and enabled predicted long-term survival fractions to be compared with the actual survival fractions, calculated by the Kaplan and Meier (1958 J. Am. Stat. Assoc. 53 457-82) method. The TCR and ACC models are better than the SLN model and, for a maximum short-term follow-up of 6 years, the 20 and 25 year survival fractions could be predicted, saving 14 and 19 years of follow-up, respectively. Clinical trial results using the TCR and ACC models can thus be analysed much earlier than currently possible. 
Absolute cure from cancer was also studied, using not only the prediction models which
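The lognormal cure models described above share a common form: long-term survival approaches the cured fraction C as the lognormal death-time distribution of the uncured group is exhausted. A minimal sketch of that form follows; the parameter values are hypothetical illustrations, not the paper's fitted values.

```python
import math

def boag_survival(t, C, M, S):
    """Boag-type lognormal cure model: fraction surviving to time t (years).

    C = cured fraction; M, S = mean and standard deviation of the log
    death time for the uncured group, whose death times are lognormal."""
    z = (math.log(t) - M) / S
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    return C + (1.0 - C) * (1.0 - phi)

# Hypothetical parameters for illustration (not fitted to the paper's data):
C, M, S = 0.55, math.log(2.0), 0.8   # 55% cured; median death time 2 years

s3 = boag_survival(3.0, C, M, S)     # survival observable after short follow-up
s25 = boag_survival(25.0, C, M, S)   # predicted 25-year survival fraction
```

Fixing S a priori, as the paper does, leaves only two free parameters to estimate from the short-term data, which is what stabilizes the long-term extrapolation.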

  17. Mitigation of Atmospheric Delay in SAR Absolute Ranging Using Global Numerical Weather Prediction Data: Corner Reflector Experiments at 3 Different Test Sites

    NASA Astrophysics Data System (ADS)

    Cong, Xiaoying; Balss, Ulrich; Eineder, Michael

    2015-04-01

    The atmospheric delay due to vertical stratification, the so-called stratified atmospheric delay, has a great impact on both interferometric and absolute range measurements. In our current research [1][2][3], centimeter-level range accuracy has been demonstrated with Corner Reflector (CR) measurements by applying atmospheric delay corrections using Zenith Path Delay (ZPD) estimates derived from nearby Global Positioning System (GPS) stations. For global usage, an effective method has been introduced to estimate the stratified delay from global 4-dimensional Numerical Weather Prediction (NWP) products: the direct integration method [4][5]. Two products provided by the European Centre for Medium-Range Weather Forecasts (ECMWF), ERA-Interim and operational data, are used to integrate the stratified delay. In order to assess the integration accuracy, a validation approach is investigated based on ZPD derived from six permanent GPS stations located in different meteorological conditions. Range accuracy at centimeter level is demonstrated using both ECMWF products. Further experiments have been carried out to determine the best interpolation method by analyzing the temporal and spatial correlation of the atmospheric delay using both ECMWF and GPS ZPD. Finally, the integrated atmospheric delays in the slant direction (Slant Path Delay, SPD) have been applied instead of the GPS ZPD for CR experiments at three different test sites with more than 200 TerraSAR-X High Resolution SpotLight (HRSL) images. The delay accuracy is around 1-3 cm, depending on the test site location, owing to local water vapor variation and the acquisition time/date. [1] Eineder M., Minet C., Steigenberger P., et al. Imaging geodesy - Toward centimeter-level ranging accuracy with TerraSAR-X. Geoscience and Remote Sensing, IEEE Transactions on, 2011, 49(2): 661-671. [2] Balss U., Gisinger C., Cong X. Y., et al. Precise Measurements on the Absolute Localization Accuracy of TerraSAR-X on the
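The direct integration of stratified delay from an NWP profile amounts to integrating refractivity vertically. A hedged sketch follows, using the standard two-term (Smith-Weintraub style) refractivity formula and an idealized analytic profile in place of real ECMWF fields:

```python
import numpy as np

def zenith_delay_m(p_hpa, t_k, e_hpa, z_m):
    """Zenith path delay (m) by trapezoidal integration of refractivity.

    p_hpa: total pressure [hPa]; t_k: temperature [K];
    e_hpa: water-vapour partial pressure [hPa]; z_m: heights [m],
    all ordered from the surface upward."""
    n_ppm = 77.6 * p_hpa / t_k + 3.73e5 * e_hpa / t_k ** 2   # refractivity N [ppm]
    mid = 0.5 * (n_ppm[1:] + n_ppm[:-1])
    return 1e-6 * np.sum(mid * np.diff(z_m))

# Idealized profile for illustration (not an ECMWF product):
z = np.linspace(0.0, 30000.0, 301)                  # 0-30 km, 100 m steps
p = 1013.25 * np.exp(-z / 8000.0)                   # exponential pressure decay
t = np.maximum(288.15 - 0.0065 * z, 216.65)         # linear lapse, capped at tropopause
e = 12.0 * np.exp(-z / 2000.0)                      # moisture confined to low levels

zpd = zenith_delay_m(p, t, e, z)                    # roughly 2-3 m total delay
```

The total zenith delay of a few metres is dominated by the hydrostatic term; the centimetre-level differences that matter for SAR ranging come from how well the NWP product captures the (mostly wet) variability on top of this.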

  18. A simple solution for model comparison in bold imaging: the special case of reward prediction error and reward outcomes

    PubMed Central

    Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D.

    2013-01-01

    Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called “model-based” functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes. PMID:23882174
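The collinearity problem described here is easy to reproduce: in a simple Rescorla-Wagner learner, the trialwise reward-outcome and prediction-error regressors are strongly correlated because the prediction error equals the outcome minus a slowly varying expected value. A toy simulation (the learning rate and reward probability are arbitrary choices, not values from any study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Rescorla-Wagner value learning over a block of rewarded trials
alpha, n_trials = 0.2, 200
rewards = rng.binomial(1, 0.7, n_trials).astype(float)   # reward outcomes
value = np.zeros(n_trials)                               # learned expected value
for t in range(n_trials - 1):
    value[t + 1] = value[t] + alpha * (rewards[t] - value[t])
pred_error = rewards - value                             # reward prediction error

# The two parametric regressors that would enter the fMRI GLM almost
# simultaneously at outcome time are far from orthogonal:
r = np.corrcoef(rewards, pred_error)[0, 1]
```

The high correlation between the two regressors is exactly the multicollinearity the review discusses, motivating its proposed approaches for separating prediction-error signals from reward outcomes.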

  19. Solutions with precise prediction for thermal aberration error in low-k1 immersion lithography

    NASA Astrophysics Data System (ADS)

    Fukuhara, Kazuya; Mimotogi, Akiko; Kono, Takuya; Aoyama, Hajime; Ogata, Taro; Kita, Naonori; Matsuyama, Tomoyuki

    2013-04-01

    Thermal aberration becomes a serious problem in the production of semiconductors when low-k1 immersion lithography with strong off-axis illumination, such as a dipole setting, is used. The illumination setting localizes the energy of the light in the projection lens, bringing about a localized temperature rise. The temperature change alters the lens refractive index and thus generates aberrations; this phenomenon is called thermal aberration. Controlling thermal aberration is therefore important for manufacturing fine patterns with high productivity. Since the heated areas in the projection lens are determined by the source shape and the distribution of light diffracted by the mask, the diffracted pupilgram, obtained by convolving the illumination source shape with the diffraction distribution, can be calculated from mask layout data for thermal aberration prediction. Thermal aberration is calculated as a function of accumulated irradiation power. We have evaluated the thermal aberration computational prediction and control technology "Thermal Aberration Optimizer" (ThAO) on a Nikon immersion system. The thermal aberration prediction consists of two steps: the first is prediction of the diffraction map on the projection pupil; the second is computing the thermal aberration from the diffraction map using a lens thermal model and an aberration correction function. We performed a verification test of ThAO using a mask of 1x-nm memory and strong off-axis illumination. We clarified the current performance of thermal aberration prediction, and also confirmed that the impacts of the thermal aberration of NSR-S621D on CD and overlay for our 1x-nm memory pattern are very small. Accurate thermal aberration prediction with ThAO will enable thermal-aberration-risk-free lithography for semiconductor chip production.

  20. Predicting wafer-level IP error due to particle-induced EUVL reticle distortion during exposure chucking

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Vasu; Mikkelson, Andrew; Engelstad, Roxann; Lovell, Edward

    2005-11-01

    The mechanical distortion of an EUVL mask from mounting in an exposure tool can be a significant source of wafer-level image placement error. In particular, the presence of debris lodged between the reticle and chuck can cause the mask to experience out-of-plane distortion and in-plane distortion. A thorough understanding of the response of the reticle/particle/chuck system during electrostatic chucking is necessary to predict the resulting effects of such particle contamination on image placement accuracy. In this research, finite element modeling is employed to simulate this response for typical clamping conditions.

  1. Revealing the most disturbing tendency error of Zebiak-Cane model associated with El Niño predictions by nonlinear forcing singular vector approach

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2015-05-01

    The nonlinear forcing singular vector (NFSV) approach is used to identify the most disturbing tendency error of the Zebiak-Cane model associated with El Niño predictions, i.e., the tendency error with the greatest potential for yielding large prediction errors in El Niño events. The results show that only one NFSV exists for each of the predictions for the predetermined model El Niño events. These NFSVs cause the largest prediction error for the corresponding El Niño event in a perfect initial condition scenario. It is found that the NFSVs often present large-scale zonal dipolar structures and are insensitive to the intensities of the El Niño events, but are dependent on the prediction periods. In particular, the NFSVs associated with the predictions crossing through the growth phase of El Niño tend to exhibit a zonal dipolar pattern with positive anomalies in the equatorial central-western Pacific and negative anomalies in the equatorial eastern Pacific (denoted as "NFSV1"). Meanwhile, those associated with the predictions through the decaying phase of El Niño are inclined to present another zonal dipolar pattern (denoted as "NFSV2"), which is almost opposite to the NFSV1. Similarly, the linear forcing singular vectors (FSVs), which are computed based on the tangent linear model, can also be classified into two types, "FSV1" and "FSV2". We find that both FSV1 and NFSV1 often cause negative prediction errors for Niño-3 SSTA of the El Niño events, while the FSV2 and NFSV2 usually yield positive prediction errors. However, due to the effect of nonlinearities, the NFSVs usually have the western pole of the zonal dipolar pattern located much farther west and covering a much broader region. The nonlinearities have a suppression effect on the growth of the prediction errors caused by the FSVs, and the particular structure of the NFSVs tends to reduce this suppression effect of nonlinearities, finally making the NFSV-type tendency error yield a much larger prediction error for Niño-3 SSTA of El Ni

  2. Revealing the most disturbing tendency error of Zebiak-Cane model associated with El Nino predictions by nonlinear forcing singular vector approach

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo

    2016-04-01

    The nonlinear forcing singular vector (NFSV) approach is used to identify the most disturbing tendency error of the Zebiak-Cane model associated with El Niño predictions, i.e., the tendency error with the greatest potential for yielding large prediction errors in El Niño events. The results show that only one NFSV exists for each of the predictions for the predetermined model El Niño events. These NFSVs cause the largest prediction error for the corresponding El Niño event in a perfect initial condition scenario. It is found that the NFSVs often present large-scale zonal dipolar structures and are insensitive to the intensities of the El Niño events, but are dependent on the prediction periods. In particular, the NFSVs associated with the predictions crossing through the growth phase of El Niño tend to exhibit a zonal dipolar pattern with positive anomalies in the equatorial central-western Pacific and negative anomalies in the equatorial eastern Pacific (denoted as "NFSV1"). Meanwhile, those associated with the predictions through the decaying phase of El Niño are inclined to present another zonal dipolar pattern (denoted as "NFSV2"), which is almost opposite to the NFSV1. Similarly, the linear forcing singular vectors (FSVs), which are computed based on the tangent linear model, can also be classified into two types, "FSV1" and "FSV2". We find that both FSV1 and NFSV1 often cause negative prediction errors for Niño-3 SSTA of the El Niño events, while the FSV2 and NFSV2 usually yield positive prediction errors. However, due to the effect of nonlinearities, the NFSVs usually have the western pole of the zonal dipolar pattern located much farther west and covering a much broader region. The nonlinearities have a suppression effect on the growth of the prediction errors caused by the FSVs, and the particular structure of the NFSVs tends to reduce this suppression effect of nonlinearities, finally making the NFSV-type tendency error yield a much larger prediction error for Niño-3 SSTA of El Ni

  3. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    EPA Science Inventory

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  4. Dynamics of combined initial-condition and model-related errors in a Quasi-Geostrophic prediction system

    NASA Astrophysics Data System (ADS)

    Perdigão, R. A. P.; Pires, C. A. L.; Vannitsem, S.

    2009-04-01

    Atmospheric prediction systems are known to suffer from fundamental uncertainties associated with their sensitivity to initial conditions and with inaccuracy in the model representation. A formulation for the error dynamics taking into account both these factors and intrinsic properties of the system was developed by Nicolis, Perdigao and Vannitsem (2008, in press). In the present study, that formulation is generalized to systems of higher complexity. The extended approach admits systems with non-Euclidean metrics, multivariate perturbations, and correlated and anisotropic initial errors, including error sources stemming from the data assimilation process. As in the low-order case, the formulation assumes small perturbations relative to the attractor of the underlying dynamics and the respective parameters, and addresses the short to intermediate time regime. The underlying system is assumed to be governed by nonlinear evolution laws with continuous derivatives, where the variables representing the unperturbed and perturbed models span the same manifold, defined by a phase space with the same topological dimension. As a core illustrative case, a three-level Quasi-Geostrophic system with triangular truncation T21 is considered. While some generic features are identified that agree with those seen in lower-order systems, further properties of physical relevance, stemming from the generalizations, are also unveiled.

  5. A model for predicting changes in the electrical conductivity, practical salinity, and absolute salinity of seawater due to variations in relative chemical composition

    NASA Astrophysics Data System (ADS)

    Pawlowicz, R.

    2010-03-01

    Salinity determination in seawater has been carried out for almost 30 years using the Practical Salinity Scale 1978. However, the numerical value of so-called practical salinity, computed from electrical conductivity, differs slightly from the true or absolute salinity, defined as the mass of dissolved solids per unit mass of seawater. The difference arises because more recent knowledge about the composition of seawater is not reflected in the definition of practical salinity, which was chosen to maintain historical continuity with previous measures, and because of spatial and temporal variations in the relative composition of seawater. Accounting for these spatial variations in density calculations requires the calculation of a correction factor δSA, which is known to range from 0 to 0.03 g kg-1 in the world oceans. Here a mathematical model relating compositional perturbations to δSA is developed, by combining a chemical model for the composition of seawater with a mathematical model for predicting the conductivity of multi-component aqueous solutions. Model calculations for this estimate of δSA, denoted δSRsoln, generally agree with estimates of δSA based on fits to direct density measurements, denoted δSRdens, and show that biogeochemical perturbations affect conductivity only weakly. However, small systematic differences between model and density-based estimates remain. These may arise for several reasons, including uncertainty about the biogeochemical processes involved in the increase in Total Alkalinity in the North Pacific, uncertainty in the carbon content of IAPSO standard seawater, and uncertainty about the haline contraction coefficient for the constituents involved in biogeochemical processes. This model may then be important in constraining these processes, as well as in future efforts to improve parameterizations for δSA.

  6. A model for predicting changes in the electrical conductivity, practical salinity, and absolute salinity of seawater due to variations in relative chemical composition

    NASA Astrophysics Data System (ADS)

    Pawlowicz, R.

    2009-11-01

    Salinity determination in seawater has been carried out for almost 30 years using the 1978 Practical Salinity Standard. However, the numerical value of so-called practical salinity, computed from electrical conductivity, differs slightly from the true or absolute salinity, defined as the mass of dissolved solids per unit mass of seawater. The difference arises because more recent knowledge about the composition of seawater is not reflected in the definition of practical salinity, which was chosen to maintain historical continuity with previous measures, and because of spatial and temporal variations in the relative composition of seawater. Accounting for these variations in density calculations requires the calculation of a correction factor δSA, which is known to range from 0 to 0.03 g kg-1 in the world oceans. Here a mathematical model relating compositional perturbations to δSA is developed, by combining a chemical model for the composition of seawater with a mathematical model for predicting the conductivity of multi-component aqueous solutions. Model calculations generally agree with estimates of δSA based on fits to direct density measurements, and show that biogeochemical perturbations affect conductivity only weakly. However, small systematic differences between model and density-based estimates remain. These may arise for several reasons, including uncertainty about the biogeochemical processes involved in the increase in Total Alkalinity in the North Pacific, uncertainty in the carbon content of IAPSO standard seawater, and uncertainty about the haline contraction coefficient for the constituents involved in biogeochemical processes. This model may then be important in constraining these processes, as well as in future efforts to improve parameterizations for δSA.

  7. Experimental validation of a thermal model used to predict the image placement error of a scanned EUVL reticle

    NASA Astrophysics Data System (ADS)

    Gianoulakis, Steven E.; Craig, Marcus J.; Ray-Chaudhuri, Avijit K.

    2000-07-01

    Lithographic masks must maintain dimensional stability during exposure in a lithographic tool to minimize subsequent overlay errors. In extreme ultraviolet lithography (EUVL), multilayer coatings are deposited on a mask substrate to make the mask surface reflective at EUV wavelengths. About 40% of the incident EUV light is absorbed by the multilayer coating, which leads to a temperature rise. The choice of mask substrate material and absorber affects the magnitude of thermal distortion. Finite element modeling has been used to investigate potential mask materials and to explore the efficiency of various thermal management strategies. An experimental program was conducted to validate the thermal models used to predict the performance of EUV reticles. The experiments closely resembled actual conditions expected within the EUV tool. A reticle instrumented with temperature sensors was mounted on a scanning stage with an electrostatic chuck. An actively cooled isolation plate was mounted in front of the reticle for thermal management. Experimental power levels at the reticle corresponding to production throughput levels were utilized in the experiments. Both silicon and low-expansion glass reticles were tested. Temperatures were measured at several locations on the reticle and tracked over time as the illuminated reticle was scanned. The experimental results, coupled with the predictive modeling capability, validate the assertion that the use of a low-expansion glass will satisfy image placement error requirements down to the 30 nm lithographic node.

  8. Backward deletion to minimize prediction errors in models from factorial experiments with zero to six center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1980-01-01

    Population model coefficients were chosen to simulate a saturated 2^4 fixed-effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F-distribution, on an order statistics distribution of Cochran's, and on a combination of the two. The strategies were compared under the criterion of minimizing the maximum prediction error, wherever it occurred, among the two-level factorial points. The strategies were evaluated for each of the conditions of 0, 1, 2, 3, 4, 5, or 6 center points. Three classes of strategies were identified as being appropriate, depending on the extent of the experimenter's prior knowledge. In almost every case the best strategy was found to be unique according to the number of center points. Among the three classes of strategies, a security regret class of strategy was demonstrated as being widely useful in that over a range of coefficients of variation from 4 to 65%, the maximum predictive error was never increased by more than 12% over what it would have been if the best strategy had been used for the particular coefficient of variation. The relative efficiency of the experiment, when using the security regret strategy, was examined as a function of the number of center points, and was found to be best when the design used one center point.
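The backward-deletion idea can be sketched as follows. This is a simplified numpy illustration, not the paper's strategies: it uses a fixed t-type cutoff with a pure-error estimate of sigma from center-point replicates, whereas the paper compares F-distribution and order-statistic criteria. All effect sizes, noise levels, and the cutoff value are assumptions for illustration.

```python
import numpy as np

# Illustrative backward deletion for a saturated 2^4 factorial (assumed
# setup, not the paper's exact strategies): center-point replicates give a
# pure-error estimate of sigma, and the smallest effect is deleted until
# the remaining minimum exceeds a t-type cutoff.

rng = np.random.default_rng(0)
levels = np.array([[1 if (i >> j) & 1 else -1 for j in range(4)]
                   for i in range(16)], dtype=float)
# Saturated model matrix: intercept, 4 main effects, 6 two-factor interactions
cols = [np.ones(16)] + [levels[:, j] for j in range(4)]
names = ["I", "A", "B", "C", "D"]
for a in range(4):
    for b in range(a + 1, 4):
        cols.append(levels[:, a] * levels[:, b])
        names.append(names[1 + a] + names[1 + b])
X = np.column_stack(cols)

true_beta = np.zeros(11)
true_beta[[0, 1, 2, 5]] = [10.0, 4.0, 2.5, 1.5]   # I, A, B, AB are real
y = X @ true_beta + rng.normal(0, 0.5, 16)
y_center = 10.0 + rng.normal(0, 0.5, 4)           # 4 center-point replicates
sigma = y_center.std(ddof=1)                      # pure-error estimate (3 df)

keep = list(range(11))
while len(keep) > 1:
    beta = np.linalg.lstsq(X[:, keep], y, rcond=None)[0]
    se = sigma / np.sqrt(16)                      # s.e. of an effect estimate
    j = int(np.argmin(np.abs(beta[1:]))) + 1      # never delete the intercept
    if abs(beta[j]) / se > 3.18:                  # ~t(3 df, 5%) cutoff
        break
    keep.pop(j)
print([names[k] for k in keep])
```

Because the ±1 design columns are orthogonal, the retained estimates do not change as terms are deleted, which keeps the deletion order well defined.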

  9. Using reads to annotate the genome: influence of length, background distribution, and sequence errors on prediction capacity.

    PubMed

    Philippe, Nicolas; Boureux, Anthony; Bréhélin, Laurent; Tarhio, Jorma; Commes, Thérèse; Rivals, Eric

    2009-08-01

    Ultra high-throughput sequencing is used to analyse the transcriptome or interactome at unprecedented depth on a genome-wide scale. These techniques yield short sequence reads that are then mapped on a genome sequence to predict putatively transcribed or protein-interacting regions. We argue that factors such as background distribution, sequence errors, and read length affect the prediction capacity of sequence census experiments. Here we suggest a computational approach to measure these factors and analyse their influence on both transcriptomic and epigenomic assays. This investigation provides new clues on both methodological and biological issues. For instance, by analysing chromatin immunoprecipitation read sets, we estimate that 4.6% of reads are affected by SNPs. We show that, although the nucleotide error probability is low, it significantly increases with the position in the sequence. Choosing a read length above 19 bp practically eliminates the risk of finding irrelevant positions, while above 20 bp the number of uniquely mapped reads decreases. With our procedure, we obtain 0.6% false positives among genomic locations. Hence, even rare signatures should identify biologically relevant regions, if they are mapped on the genome. This indicates that digital transcriptomics may help to characterize the wealth of yet undiscovered, low-abundance transcripts. PMID:19531739
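A back-of-envelope calculation shows why a read-length threshold near 19-20 bp is plausible. Under the assumption of a uniform random genome (not the paper's model), the expected number of chance matches of a random L-mer in a genome of size G, counting both strands, is roughly 2G/4^L; the genome size below is an assumed human-scale figure.

```python
# Back-of-envelope sketch (assumption, not the paper's method): expected
# number of chance occurrences of a random L-mer in a uniform random
# genome of size G, counting both strands. For a human-sized genome this
# drops below 1 near L = 17-18, consistent with the ~19 bp threshold above.

G = 3.2e9  # human-sized genome (assumed for illustration)

def expected_chance_hits(L, G=G):
    return 2 * G / 4 ** L

for L in (16, 18, 20, 22):
    print(L, f"{expected_chance_hits(L):.3g}")
```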

  10. Cognitive flexibility in adolescence: neural and behavioral mechanisms of reward prediction error processing in adaptive decision making during development.

    PubMed

    Hauser, Tobias U; Iannaccone, Reto; Walitza, Susanne; Brandeis, Daniel; Brem, Silvia

    2015-01-01

    Adolescence is associated with quickly changing environmental demands which require excellent adaptive skills and high cognitive flexibility. Feedback-guided adaptive learning and cognitive flexibility are driven by reward prediction error (RPE) signals, which indicate the accuracy of expectations and can be estimated using computational models. Despite the importance of cognitive flexibility during adolescence, only little is known about how RPE processing in cognitive flexibility deviates between adolescence and adulthood. In this study, we investigated the developmental aspects of cognitive flexibility by means of computational models and functional magnetic resonance imaging (fMRI). We compared the neural and behavioral correlates of cognitive flexibility in healthy adolescents (12-16years) to adults performing a probabilistic reversal learning task. Using a modified risk-sensitive reinforcement learning model, we found that adolescents learned faster from negative RPEs than adults. The fMRI analysis revealed that within the RPE network, the adolescents had a significantly altered RPE-response in the anterior insula. This effect seemed to be mainly driven by increased responses to negative prediction errors. In summary, our findings indicate that decision making in adolescence goes beyond merely increased reward-seeking behavior and provides a developmental perspective to the behavioral and neural mechanisms underlying cognitive flexibility in the context of reinforcement learning. PMID:25234119
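The asymmetry the study estimates can be sketched with a minimal value update that uses separate learning rates for positive and negative RPEs. This is a generic illustration, not the authors' risk-sensitive model; the learning-rate values are assumptions, with the larger negative rate standing in for the adolescents' faster learning from negative RPEs.

```python
# Minimal sketch (not the authors' exact model): a value update with
# separate learning rates for positive and negative reward prediction
# errors (RPEs). A larger alpha_neg mimics faster learning from negative
# RPEs, as reported for adolescents. Rates are illustrative assumptions.

def update_value(V, reward, alpha_pos=0.2, alpha_neg=0.5):
    rpe = reward - V                      # reward prediction error
    alpha = alpha_pos if rpe >= 0 else alpha_neg
    return V + alpha * rpe, rpe

V = 0.5
V, rpe = update_value(V, reward=0.0)      # unexpected omission: negative RPE
print(round(V, 3), round(rpe, 3))         # 0.25 -0.5
```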

  11. Using reads to annotate the genome: influence of length, background distribution, and sequence errors on prediction capacity

    PubMed Central

    Philippe, Nicolas; Boureux, Anthony; Bréhélin, Laurent; Tarhio, Jorma; Commes, Thérèse; Rivals, Éric

    2009-01-01

    Ultra high-throughput sequencing is used to analyse the transcriptome or interactome at unprecedented depth on a genome-wide scale. These techniques yield short sequence reads that are then mapped on a genome sequence to predict putatively transcribed or protein-interacting regions. We argue that factors such as background distribution, sequence errors, and read length impact on the prediction capacity of sequence census experiments. Here we suggest a computational approach to measure these factors and analyse their influence on both transcriptomic and epigenomic assays. This investigation provides new clues on both methodological and biological issues. For instance, by analysing chromatin immunoprecipitation read sets, we estimate that 4.6% of reads are affected by SNPs. We show that, although the nucleotide error probability is low, it significantly increases with the position in the sequence. Choosing a read length above 19 bp practically eliminates the risk of finding irrelevant positions, while above 20 bp the number of uniquely mapped reads decreases. With our procedure, we obtain 0.6% false positives among genomic locations. Hence, even rare signatures should identify biologically relevant regions, if they are mapped on the genome. This indicates that digital transcriptomics may help to characterize the wealth of yet undiscovered, low-abundance transcripts. PMID:19531739

  12. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  13. Using Multivariate Regression Model with Least Absolute Shrinkage and Selection Operator (LASSO) to Predict the Incidence of Xerostomia after Intensity-Modulated Radiotherapy for Head and Neck Cancer

    PubMed Central

    Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

    Purpose The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Results Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R2 was satisfactory and corresponded well with the expected values. Conclusions
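The LASSO selection step can be illustrated with a generic L1-penalized logistic regression fitted by proximal gradient descent. This is not the authors' bootstrapped procedure; the data, feature count, and penalty weight are synthetic assumptions standing in for dose metrics and clinical factors predicting the binary xerostomia endpoint.

```python
import numpy as np

# Illustrative LASSO-style variable selection for a binary endpoint:
# a generic L1-penalized logistic regression fitted by proximal gradient
# (ISTA). Data are synthetic; only 3 of 8 candidate predictors carry signal.

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_logistic(X, y, lam=0.05, step=0.1, iters=5000):
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for _ in range(iters):
        z = X @ w + b
        pgrad = 1 / (1 + np.exp(-z)) - y              # dLoss/dz for logistic
        w = soft_threshold(w - step * X.T @ pgrad / n, step * lam)
        b -= step * pgrad.mean()
    return w, b

rng = np.random.default_rng(1)
n, p = 200, 8                                          # 8 candidate predictors
X = rng.normal(size=(n, p))
true_w = np.array([1.5, -1.0, 0.8, 0, 0, 0, 0, 0])     # only 3 truly predictive
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)
w, b = l1_logistic(X, y)
print(np.flatnonzero(np.abs(w) > 0.05))                # predictors LASSO keeps
```

The L1 penalty drives coefficients of uninformative predictors to (near) zero, which is the property the study exploits to pick a small prognostic-factor subset.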

  14. A state-space approach to predict stream temperatures and quantify model error: Application on the Sacramento River, California

    NASA Astrophysics Data System (ADS)

    Pike, A.; Danner, E.; Lindley, S.; Melton, F. S.; Nemani, R. R.; Hashimoto, H.

    2010-12-01

    In the Central Valley of California, river water temperature is a critical indicator of habitat quality for endangered salmonid species and affects re-licensing of major water projects and dam operations worth billions of dollars. There is consequently strong interest in modeling water temperature dynamics in such regulated rivers. However, the accuracy of current stream temperature models is limited by the lack of spatially detailed meteorological forecasts, and few models quantify error due to uncertainty in model inputs. To address these issues, we developed a high-resolution deterministic 1-dimensional stream temperature model (sub-hourly time step, sub-kilometer spatial resolution) in a state-space framework, and applied this model to the Upper Sacramento River. The model uses physically based heat budgets to calculate the rate of heat transfer to/from the river. We consider heat transfer at the air-water interface using atmospheric variables provided by the TOPS-WRF (Terrestrial Observation and Prediction System - Weather Research and Forecasting) model—a high-resolution assimilation of satellite-derived meteorological observations and numerical weather simulations—as inputs. The TOPS-WRF framework allows us to improve the spatial and temporal resolution of stream temperature predictions. The hydrodynamics of the river (flow velocity and channel geometry) are characterized using densely-spaced channel cross-sections and flow data. Water temperatures are calculated by considering the hydrologic and thermal characteristics of the river and solving the advection-diffusion equation for heat transport in a mixed Eulerian-Lagrangian framework. We recast the advection-diffusion equation into a state-space formulation, which linearizes the highly non-linear numerical system for rapid calculation using finite-difference techniques. We then implement a Kalman filter to assimilate measurement data from a series of five temperature gages in our study region. This
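The assimilation step can be sketched with a toy scalar Kalman filter. This is an assumed persistence model, not the authors' advection-diffusion system; the noise variances and gage readings are illustrative. It shows the core idea: the filter blends the model prediction and each measurement in proportion to their uncertainties.

```python
# Toy scalar sketch of the state-space idea (assumed, not the authors' full
# advection-diffusion model): a model temperature prediction is corrected by
# assimilating gage measurements with a Kalman filter. Q and R (process and
# measurement noise variances) are illustrative values.

def kalman_step(x, P, z, Q=0.05, R=0.2):
    # Predict: persistence model with process-noise inflation Q
    x_pred, P_pred = x, P + Q
    # Update: blend prediction with measurement z (variance R)
    K = P_pred / (P_pred + R)              # Kalman gain
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

x, P = 15.0, 1.0                            # initial water temp (deg C), variance
for z in [15.8, 16.1, 15.9]:                # successive gage readings
    x, P = kalman_step(x, P, z)
print(round(x, 2), round(P, 3))             # state pulled toward data, variance shrinks
```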

  15. A New Height Error Revision Method of Predicting Long-Term Wind Speed with MCP Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Yujue; Hu, Fei

    2013-04-01

    Wind energy is one of the fastest-growing new and renewable energy technologies. It is very important to select strongly windy sites in a country for the purpose of producing more electricity. Measure-Correlate-Predict (MCP) algorithms are used to predict the wind resource at a target site for wind power development. An MCP model is based on a relationship between wind data (speed and direction) measured at the target site and concurrent wind data at a nearby reference site. The model is then used with long-term data from the reference site to predict the long-term wind speed and direction distributions at the target site, so that the annual energy capture of a wind farm located at the target site can be determined. Over the last 15 years, well over half a dozen MCP methods have appeared in the literature. The MCP algorithms differ in terms of overall approach, model definition, use of direction sectors, and length of the data. These include: 1) a linear regression model; 2) a model using distributions of ratios of wind speeds at the two sites; 3) a vector regression method; and 4) a method based on the ratio of standard deviations of the two data sets. Unfortunately, none of these MCP algorithms can predict wind speed between two sites at different altitudes: if the target site is much higher or lower than the reference site, the result accuracy will be much poorer. The Inner Mongolia grassland is known as one of the regions richest in wind resource in China. The data we use come from three wind measurements in XiLinGuoLe, Inner Mongolia, consisting of nearly one year of observations at six layers. Firstly, we use the maximum likelihood method to estimate the shape parameter k and scale parameter c of the Weibull function for different time periods. We then find that c follows a power-law function of height, and that k varies as a quadratic function of height, reaching its maximum at heights of 10 to 100 meters. Finally, we add the height distribution
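The Weibull maximum-likelihood step mentioned above can be sketched as follows. The shape parameter k solves the standard MLE condition sum(v^k ln v)/sum(v^k) − 1/k = mean(ln v), after which the scale is c = (mean(v^k))^(1/k); here it is solved by bisection, and the wind-speed sample is synthetic with assumed parameters.

```python
import numpy as np

# Sketch of the Weibull maximum-likelihood fit used in the abstract (generic
# MLE, synthetic data): shape k solves
#   sum(v^k ln v)/sum(v^k) - 1/k = mean(ln v),
# then scale c = (mean(v^k))**(1/k). g(k) is increasing, so bisection works.

def weibull_mle(v):
    v = np.asarray(v, float)
    lnv = np.log(v)

    def g(k):  # zero at the MLE shape parameter
        vk = v ** k
        return (vk * lnv).sum() / vk.sum() - 1.0 / k - lnv.mean()

    lo, hi = 0.1, 20.0
    for _ in range(80):               # bisection on the monotone function g
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    c = np.mean(v ** k) ** (1.0 / k)
    return k, c

rng = np.random.default_rng(2)
v = rng.weibull(2.2, 5000) * 7.0      # synthetic wind speeds: k = 2.2, c = 7 m/s
k, c = weibull_mle(v)
print(round(k, 2), round(c, 2))       # recovers roughly k = 2.2, c = 7
```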

  16. Formulation of a General Technique for Predicting Pneumatic Attenuation Errors in Airborne Pressure Sensing Devices

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.

    1988-01-01

    Presented is a mathematical model derived from the Navier-Stokes equations of momentum and continuity, which may be accurately used to predict the behavior of conventionally mounted pneumatic sensing systems subject to arbitrary pressure inputs. Numerical techniques for solving the general model are developed. Both step and frequency response lab tests were performed. These data are compared with solutions of the mathematical model and show excellent agreement. The procedures used to obtain the lab data are described. In-flight step and frequency response data were obtained. Comparisons with numerical solutions of the math model show good agreement. Procedures used to obtain the flight data are described. Difficulties encountered with obtaining the flight data are discussed.

  17. Measuring the Effect of Inter-Study Variability on Estimating Prediction Error

    PubMed Central

    Ma, Shuyi; Sung, Jaeyun; Magis, Andrew T.; Wang, Yuliang; Geman, Donald; Price, Nathan D.

    2014-01-01

    Background The biomarker discovery field is replete with molecular signatures that have not translated into the clinic despite ostensibly promising performance in predicting disease phenotypes. One widely cited reason is lack of classification consistency, largely due to failure to maintain performance from study to study. This failure is widely attributed to variability in data collected for the same phenotype among disparate studies, due to technical factors unrelated to phenotypes (e.g., laboratory settings resulting in “batch-effects”) and non-phenotype-associated biological variation in the underlying populations. These sources of variability persist in new data collection technologies. Methods Here we quantify the impact of these combined “study-effects” on a disease signature’s predictive performance by comparing two types of validation methods: ordinary randomized cross-validation (RCV), which extracts random subsets of samples for testing, and inter-study validation (ISV), which excludes an entire study for testing. Whereas RCV hardwires an assumption of training and testing on identically distributed data, this key property is lost in ISV, yielding systematic decreases in performance estimates relative to RCV. Measuring the RCV-ISV difference as a function of number of studies quantifies influence of study-effects on performance. Results As a case study, we gathered publicly available gene expression data from 1,470 microarray samples of 6 lung phenotypes from 26 independent experimental studies and 769 RNA-seq samples of 2 lung phenotypes from 4 independent studies. We find that the RCV-ISV performance discrepancy is greater in phenotypes with few studies, and that the ISV performance converges toward RCV performance as data from additional studies are incorporated into classification. Conclusions We show that by examining how fast ISV performance approaches RCV as the number of studies is increased, one can estimate when
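The RCV-versus-ISV comparison can be illustrated on synthetic data. Everything here is an assumption for illustration: a toy nearest-centroid classifier, Gaussian data with per-study batch shifts, and four simulated studies; the point is only the mechanics of holding out random samples versus an entire study.

```python
import numpy as np

# Toy sketch of the paper's comparison (synthetic data, nearest-centroid
# classifier): randomized cross-validation (RCV) mixes studies between train
# and test, while inter-study validation (ISV) holds out an entire study.
# Study-specific batch shifts tend to make ISV accuracy lower than RCV.

rng = np.random.default_rng(3)
n_studies, n_per, p = 4, 40, 10
X, y, study = [], [], []
for s in range(n_studies):
    shift = rng.normal(0, 1.5, p)             # "study-effect" batch shift
    for cls in (0, 1):
        mu = np.zeros(p)
        mu[0] = 1.0 * cls                     # weak phenotype signal
        X.append(rng.normal(mu + shift, 1.0, (n_per, p)))
        y += [cls] * n_per
        study += [s] * n_per
X, y, study = np.vstack(X), np.array(y), np.array(study)

def centroid_acc(train, test):
    c0 = X[train][y[train] == 0].mean(0)
    c1 = X[train][y[train] == 1].mean(0)
    pred = (np.linalg.norm(X[test] - c1, axis=1)
            < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
    return (pred == y[test]).mean()

idx = rng.permutation(len(y))                  # RCV: random 75/25 split
rcv = centroid_acc(idx[len(y) // 4:], idx[:len(y) // 4])
isv = np.mean([centroid_acc(np.flatnonzero(study != s),
                            np.flatnonzero(study == s))
               for s in range(n_studies)])
print(round(rcv, 2), round(isv, 2))            # RCV often exceeds ISV
```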

  18. A Physiologically Based Pharmacokinetic Model to Predict the Pharmacokinetics of Highly Protein-Bound Drugs and Impact of Errors in Plasma Protein Binding

    PubMed Central

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2015-01-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data was often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding, and blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0–t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%), and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. PMID:26531057
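The 2-fold accuracy criterion quoted above can be made concrete with a short sketch. The convention assumed here is the usual one (fold error = max(pred/obs, obs/pred), success if below 2); the prediction/observation values are hypothetical.

```python
# Sketch of the 2-fold accuracy criterion (assumed convention: a prediction
# is "within 2-fold" if max(pred/obs, obs/pred) < 2). Values are hypothetical.

def fold_error(pred, obs):
    return max(pred / obs, obs / pred)

def pct_within_2fold(preds, obss):
    ok = sum(fold_error(p, o) < 2.0 for p, o in zip(preds, obss))
    return 100.0 * ok / len(preds)

# Hypothetical AUC predictions vs observations for 5 drugs
preds = [12.0, 3.1, 45.0, 0.8, 7.5]
obss = [10.0, 2.9, 100.0, 0.9, 6.0]
print(pct_within_2fold(preds, obss))   # 45 vs 100 misses 2-fold: 80.0
```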

  19. Clock time is absolute and universal

    NASA Astrophysics Data System (ADS)

    Shen, Xinhang

    2015-09-01

    A critical error is found in the Special Theory of Relativity (STR): mixing up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation of physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time, but can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the multiplication of the abstract time and the frequency of oscillation. After a Lorentz transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, and the resulting multiplication, i.e. the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of the Lorentz transformation. The Lorentz invariance of the displayed times of clocks can further prove, within the framework of STR, that our earth-based standard physical time is absolute, universal, and independent of inertial reference frames, as confirmed both by the physical fact of the universal synchronization of clocks on the GPS satellites and clocks on the earth, and by the theoretical existence of the absolute and universal Galilean time in STR, which has proved that time dilation and space contraction are pure illusions of STR. The existence of the absolute and universal time in STR directly denies that the reference-frame-dependent abstract time of STR is the physical time; therefore, STR is wrong and all its predictions can never happen in the physical world.

  20. Cloud Condensation Nuclei Prediction Error from Application of Kohler Theory: Importance for the Aerosol Indirect Effect

    NASA Technical Reports Server (NTRS)

    Sotiropoulou, Rafaella-Eleni P.; Nenes, Athanasios; Adams, Peter J.; Seinfeld, John H.

    2007-01-01

    In situ observations of aerosol and cloud condensation nuclei (CCN) and the GISS GCM Model II' with an online aerosol simulation and explicit aerosol-cloud interactions are used to quantify the uncertainty in radiative forcing and autoconversion rate from application of Kohler theory. Simulations suggest that application of Kohler theory introduces a 10-20% uncertainty in global average indirect forcing and 2-11% uncertainty in autoconversion. Regionally, the uncertainty in indirect forcing ranges between 10-20%, and 5-50% for autoconversion. These results are insensitive to the range of updraft velocity and water vapor uptake coefficient considered. This study suggests that Kohler theory (as implemented in climate models) is not a significant source of uncertainty for aerosol indirect forcing but can be substantial for assessments of aerosol effects on the hydrological cycle in climatically sensitive regions of the globe. This implies that improvements in the representation of GCM subgrid processes and aerosol size distribution will mostly benefit indirect forcing assessments. Predictions of autoconversion, by nature, will be subject to considerable uncertainty; its reduction may require explicit representation of size-resolved aerosol composition and mixing state.
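For readers unfamiliar with the theory at issue, a critical-supersaturation calculation can be sketched as follows. This uses the kappa-Kohler simplification (an assumption; the paper applies full Kohler theory as implemented in GCMs): S_c = exp(sqrt(4A³/(27 κ D_d³))) − 1, with A the Kelvin curvature parameter and κ a hygroscopicity value assumed here for an ammonium-sulfate-like aerosol.

```python
import numpy as np

# Sketch of a critical-supersaturation calculation from Kohler theory, using
# the kappa-Kohler simplification (an assumption, not the paper's GCM
# implementation): S_c = exp(sqrt(4 A^3 / (27 kappa D_d^3))) - 1.

sigma = 0.072      # surface tension of water, J/m^2
Mw = 0.018         # molar mass of water, kg/mol
rho_w = 1000.0     # density of water, kg/m^3
R, T = 8.314, 298.15

A = 4 * sigma * Mw / (R * T * rho_w)              # Kelvin parameter, ~2.1e-9 m

def critical_supersaturation(D_dry, kappa=0.6):   # kappa ~0.6: ammonium sulfate
    return np.exp(np.sqrt(4 * A ** 3 / (27 * kappa * D_dry ** 3))) - 1

for D in (50e-9, 100e-9, 200e-9):                 # dry diameters
    print(int(D * 1e9), f"{100 * critical_supersaturation(D):.2f}%")
```

Smaller dry particles require higher supersaturation to activate, which is the sensitivity that propagates into the indirect-forcing and autoconversion uncertainties quantified above.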

  1. From prediction error to incentive salience: mesolimbic computation of reward motivation

    PubMed Central

    Berridge, Kent C.

    2011-01-01

    Reward contains separable psychological components of learning, incentive motivation and pleasure. Most computational models have focused only on the learning component of reward, but the motivational component is equally important in reward circuitry, and even more directly controls behavior. Modeling the motivational component requires recognition of additional control factors besides learning. Here I will discuss how mesocorticolimbic mechanisms generate the motivation component of incentive salience. Incentive salience takes Pavlovian learning and memory as one input and as an equally important input takes neurobiological state factors (e.g., drug states, appetite states, satiety states) that can vary independently of learning. Neurobiological state changes can produce unlearned fluctuations or even reversals in the ability of a previously-learned reward cue to trigger motivation. Such fluctuations in cue-triggered motivation can dramatically depart from all previously learned values about the associated reward outcome. Thus a consequence of the difference between incentive salience and learning can be to decouple cue-triggered motivation of the moment from previously learned values of how good the associated reward has been in the past. Another consequence can be to produce irrationally strong motivation urges that are not justified by any memories of previous reward values (and without distorting associative predictions of future reward value). Such irrationally strong motivation may be especially problematic in addiction. To comprehend these phenomena, future models of mesocorticolimbic reward function should address the neurobiological state factors that participate to control generation of incentive salience. PMID:22487042

  2. Encoding of both positive and negative reward prediction errors by neurons of the primate lateral prefrontal cortex and caudate nucleus.

    PubMed

    Asaad, Wael F; Eskandar, Emad N

    2011-12-01

    Learning can be motivated by unanticipated success or unexpected failure. The former encourages us to repeat an action or activity, whereas the latter leads us to find an alternative strategy. Understanding the neural representation of these unexpected events is therefore critical to elucidate learning-related circuits. We examined the activity of neurons in the lateral prefrontal cortex (PFC) and caudate nucleus of monkeys as they performed a trial-and-error learning task. Unexpected outcomes were widely represented in both structures, and neurons driven by unexpectedly negative outcomes were as frequent as those activated by unexpectedly positive outcomes. Moreover, both positive and negative reward prediction errors (RPEs) were represented primarily by increases in firing rate, unlike the manner in which dopamine neurons have been observed to reflect these values. Interestingly, positive RPEs tended to appear with shorter latency than negative RPEs, perhaps reflecting the mechanism of their generation. Last, in the PFC but not the caudate, trial-by-trial variations in outcome-related activity were linked to the animals' subsequent behavioral decisions. More broadly, the robustness of RPE signaling by these neurons suggests that actor-critic models of reinforcement learning in which the PFC and particularly the caudate are considered primarily to be "actors" rather than "critics," should be reconsidered to include a prominent evaluative role for these structures. PMID:22159094

  3. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors

    PubMed Central

    Bányai, László; Patthy, László

    2016-01-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation. PMID:27476717

  4. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  5. Absolute transition probabilities of phosphorus.

    NASA Technical Reports Server (NTRS)

    Miller, M. H.; Roig, R. A.; Bengtson, R. D.

    1971-01-01

    Use of a gas-driven shock tube to measure the absolute strengths of 21 P I lines and 126 P II lines (from 3300 to 6900 Å). Accuracy for prominent, isolated neutral and ionic lines is estimated to be 28 to 40% and 18 to 30%, respectively. The data and the corresponding theoretical predictions are examined for conformity with the sum rules.

  6. Distinguishing the effects of model structural error and parameter uncertainty on predictions of pesticide leaching under climate change

    NASA Astrophysics Data System (ADS)

    Steffens, K.; Larsbo, M.; Moeys, J.; Jarvis, N.; Lewan, E.

    2012-04-01

    Studying climate change impacts on pesticide leaching is laced with various sources of uncertainty, which must be assessed in as detailed a way as possible in order to understand the reliability of predictions of pesticide leaching under current and future climate conditions. One dilemma in this respect is the difficulty in separating the effects of model structural error from parameter uncertainty. An example of the former is that most of the commonly-used pesticide transport models only consider temperature-dependent degradation, whereas temperature also influences transport in soils through its effect on sorption and diffusion. Especially for climate impact assessments of pesticide leaching, the processes and parameters that depend on soil temperature and moisture should be carefully considered. Two functions, one describing temperature-dependent sorption and one for temperature-dependent diffusion, were therefore introduced as options into the process-oriented 1D pesticide fate and transport model MACRO5.2, which resulted in four structurally different versions of the MACRO model. The aims of the study were to assess (i) the uncertainty related to model structure in relation to parameter uncertainty and (ii) the importance of these sources of uncertainty in long-term predictions of leaching in the perspective of climate change. A case study for leaching of the mobile herbicide Bentazone was performed in a two-step procedure. First, acceptable parameter sets were identified by evaluating model performance using the Nash-Sutcliffe criterion against comprehensive data from a one-year field experiment on a clay soil in Lanna (Southern Sweden). Eight sensitive and uncertain parameters were sampled from uniform distributions in a Monte-Carlo approach, separately for each of the four model versions. 
In a second step, each model version with its particular ensemble of acceptable parameter combinations was used to predict leaching for a present (1970-1999) and a
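
The first-step screening described above can be sketched as a simple GLUE-style Monte-Carlo filter. This is an assumed simplification for illustration, not the authors' MACRO setup; `toy_model`, the parameter bounds, and the acceptance threshold are all hypothetical:

```python
import random

def nse(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

def screen_parameters(model, bounds, observed, n=1000, threshold=0.5, seed=1):
    """Keep uniformly sampled parameter sets whose NSE exceeds the threshold."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        if nse(model(params), observed) >= threshold:
            accepted.append(params)
    return accepted

# Hypothetical one-parameter linear "leaching model" and observations.
toy_model = lambda p: [p[0] * t for t in (1.0, 2.0, 3.0)]
accepted = screen_parameters(toy_model, [(0.5, 1.5)], observed=[1.0, 2.0, 3.0])
```

Each model version would get its own ensemble of accepted parameter sets, which is what carries the parameter uncertainty into the second (prediction) step.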

  7. A framework for testing the use of electric and electromagnetic data to reduce the prediction error of groundwater models

    NASA Astrophysics Data System (ADS)

    Christensen, N. K.; Christensen, S.; Ferre, T. P. A.

    2015-09-01

Although geophysics is being used increasingly, it is still unclear how and when the integration of geophysical data improves the construction and predictive capability of groundwater models. This paper therefore presents the newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software wrapped together as a platform for generating and considering multi-modal data for objective hydrologic analysis. It is intentionally flexible to allow for simple or sophisticated treatments of geophysical responses, hydrologic processes, parameterization, and inversion approaches. It can also be used to discover potential errors that can be introduced through petrophysical models and approaches to correlating geophysical and hydrologic parameters. With HYTEB we study alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits with typical hydraulic conductivities and electrical resistivities covering impermeable bedrock with low resistivity. We investigate to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by regularization. For purely hydrologic inversion (HI, using only hydrologic data) we used Tikhonov regularization combined with singular value decomposition. For joint hydrogeophysical inversion (JHI) and sequential hydrogeophysical inversion (SHI) the resistivity estimates from TEM are used together with a petrophysical relationship to formulate the regularization term. In all cases, the regularization stabilizes the inversion, but neither the HI nor the JHI objective function could be minimized uniquely. SHI or JHI with
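
The Tikhonov regularization combined with singular value decomposition mentioned for the purely hydrologic inversion can be sketched numerically. The matrix and data below are illustrative toys, not HYTEB's actual system:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + lam ** 2)   # damped inverse singular values
    return Vt.T @ (filt * (U.T @ b))

A = np.array([[1.0, 0.0], [0.0, 1e-3], [1.0, 1.0]])  # ill-conditioned toy system
b = np.array([1.0, 0.0, 1.0])
x_reg = tikhonov_svd(A, b, lam=0.1)   # damped, stable solution
```

With `lam=0` this reduces to the ordinary least-squares (pseudo-inverse) solution; increasing `lam` trades data fit for a smaller-norm, more stable estimate, which is the stabilizing role regularization plays in the inversions above.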

  8. Eosinophil count - absolute

    MedlinePlus

    Eosinophils; Absolute eosinophil count ... the white blood cell count to give the absolute eosinophil count. ... than 500 cells per microliter (cells/mcL). Normal value ranges may vary slightly among different laboratories. Talk ...

  9. No unified reward prediction error in local field potentials from the human nucleus accumbens: evidence from epilepsy patients.

    PubMed

    Stenner, Max-Philipp; Rutledge, Robb B; Zaehle, Tino; Schmitt, Friedhelm C; Kopitzki, Klaus; Kowski, Alexander B; Voges, Jürgen; Heinze, Hans-Jochen; Dolan, Raymond J

    2015-08-01

Functional magnetic resonance imaging (fMRI), cyclic voltammetry, and single-unit electrophysiology studies suggest that signals measured in the nucleus accumbens (Nacc) during value-based decision making represent reward prediction errors (RPEs), the difference between actual and predicted rewards. Here, we studied the precise temporal and spectral pattern of reward-related signals in the human Nacc. We recorded local field potentials (LFPs) from the Nacc of six epilepsy patients during an economic decision-making task. On each trial, patients decided whether to accept or reject a gamble with equal probabilities of a monetary gain or loss. The behavior of four patients was consistent with choices being guided by value expectations. Expected value signals before outcome onset were observed in three of those patients, at varying latencies and with nonoverlapping spectral patterns. Signals after outcome onset were correlated with RPE regressors in all subjects. However, further analysis revealed that these signals were better explained as outcome valence rather than RPE signals, with gamble gains and losses differing in the power of beta oscillations and in evoked response amplitudes. Taken together, our results do not support the idea that postsynaptic potentials in the Nacc represent an RPE that unifies outcome magnitude and prior value expectation. We discuss the generalizability of our findings to healthy individuals and the relation of our results to measurements of RPE signals obtained from the Nacc with other methods. PMID:26019312
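
The RPE construct at issue, the difference between actual and predicted reward, can be made concrete with a minimal Rescorla-Wagner sketch. This is a generic textbook model for illustration, not the regressors fitted in this study:

```python
def rescorla_wagner_rpes(rewards, alpha=0.5, v0=0.0):
    """Return the reward prediction error (actual - predicted) on each trial."""
    v = v0
    rpes = []
    for r in rewards:
        rpe = r - v          # prediction error
        v += alpha * rpe     # value update scaled by the learning rate
        rpes.append(rpe)
    return rpes

# Two rewarded trials followed by an omission: the omission yields a negative RPE.
print(rescorla_wagner_rpes([1.0, 1.0, 0.0]))  # [1.0, 0.5, -0.75]
```

An "outcome valence" signal, by contrast, would track only the sign of the outcome, not this expectation-scaled quantity, which is the distinction the study tests.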

  10. No unified reward prediction error in local field potentials from the human nucleus accumbens: evidence from epilepsy patients

    PubMed Central

    Rutledge, Robb B.; Zaehle, Tino; Schmitt, Friedhelm C.; Kopitzki, Klaus; Kowski, Alexander B.; Voges, Jürgen; Heinze, Hans-Jochen; Dolan, Raymond J.

    2015-01-01

Functional magnetic resonance imaging (fMRI), cyclic voltammetry, and single-unit electrophysiology studies suggest that signals measured in the nucleus accumbens (Nacc) during value-based decision making represent reward prediction errors (RPEs), the difference between actual and predicted rewards. Here, we studied the precise temporal and spectral pattern of reward-related signals in the human Nacc. We recorded local field potentials (LFPs) from the Nacc of six epilepsy patients during an economic decision-making task. On each trial, patients decided whether to accept or reject a gamble with equal probabilities of a monetary gain or loss. The behavior of four patients was consistent with choices being guided by value expectations. Expected value signals before outcome onset were observed in three of those patients, at varying latencies and with nonoverlapping spectral patterns. Signals after outcome onset were correlated with RPE regressors in all subjects. However, further analysis revealed that these signals were better explained as outcome valence rather than RPE signals, with gamble gains and losses differing in the power of beta oscillations and in evoked response amplitudes. Taken together, our results do not support the idea that postsynaptic potentials in the Nacc represent an RPE that unifies outcome magnitude and prior value expectation. We discuss the generalizability of our findings to healthy individuals and the relation of our results to measurements of RPE signals obtained from the Nacc with other methods. PMID:26019312

  11. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia.

    PubMed

    Doubková, Marcela; Van Dijk, Albert I J M; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-05-15

Sentinel-1 will carry a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days, at a spatial resolution as fine as 5 × 20 m. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; these estimates are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed very high agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted to within 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded to within 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and the derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have
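
Predicting the RMSE and R between two datasets from their error estimates can be sketched with standard error-propagation identities. These are assumed textbook formulas consistent with the approach described, not the paper's exact implementation, and the numbers are hypothetical:

```python
import math

def predicted_rmse(err_var_a, err_var_b):
    """Expected RMSE between two datasets with independent, unbiased errors."""
    return math.sqrt(err_var_a + err_var_b)

def predicted_r(signal_var, err_var_a, err_var_b):
    """Expected Pearson R when both datasets observe a common signal."""
    return signal_var / math.sqrt(
        (signal_var + err_var_a) * (signal_var + err_var_b))

print(predicted_rmse(0.03 ** 2, 0.04 ** 2))   # ≈ 0.05 (hypothetical units)
print(predicted_r(1.0, 0.25, 0.25))           # 0.8
```

Comparing such predictions against the RMSE and R computed directly from the two datasets is what validates the retrieval error product.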

  12. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia

    PubMed Central

    Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-01-01

Sentinel-1 will carry a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days, at a spatial resolution as fine as 5 × 20 m. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; these estimates are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed very high agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted to within 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded to within 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and the derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have

  13. Improving HST Pointing & Absolute Astrometry

    NASA Astrophysics Data System (ADS)

    Lallo, Matthew; Nelan, E.; Kimmer, E.; Cox, C.; Casertano, S.

    2007-05-01

Accurate absolute astrometry is becoming increasingly important in an era of multi-mission archives and virtual observatories. Hubble Space Telescope's (HST's) Guidestar Catalog II (GSC2) has reduced coordinate errors to around 0.25 arcsecond, an improvement of a factor of 2 or more over GSC1. With this reduced catalog error, special attention must be given to calibrating and maintaining the alignments of the Fine Guidance Sensors (FGSs) and Science Instruments (SIs) in HST to a level well below this, in order to ensure that the accuracy of science products' astrometry keywords and target positioning is limited only by the catalog errors. After HST Servicing Mission 4, the improvement in "blind" pointing accuracy from such calibrations will allow for more efficient COS acquisitions. Multiple SIs and FGSs each have their own footprints in the spatially shared HST focal plane. It is the small changes over time, primarily in the whole-body positions and orientations of these instruments and guiders relative to one another, that are addressed by this work. We describe the HST Cycle 15 program CAL/OTA 11021 which, along with future variants of it, determines and maintains the positions and orientations of the SIs and FGSs to better than 50 milliarcseconds and 0.04 to 0.004 degrees of roll, putting the errors associated with the alignment sufficiently below GSC2 errors. We present recent alignment results and assess their errors, illustrate trends, and describe where and how the observer benefits from these calibrations when using HST.

  14. Why Don't We Learn to Accurately Forecast Feelings? How Misremembering Our Predictions Blinds Us to Past Forecasting Errors

    ERIC Educational Resources Information Center

    Meyvis, Tom; Ratner, Rebecca K.; Levav, Jonathan

    2010-01-01

    Why do affective forecasting errors persist in the face of repeated disconfirming evidence? Five studies demonstrate that people misremember their forecasts as consistent with their experience and thus fail to perceive the extent of their forecasting error. As a result, people do not learn from past forecasting errors and fail to adjust subsequent…

  15. Comparison of the initial errors most likely to cause a spring predictability barrier for two types of El Niño events

    NASA Astrophysics Data System (ADS)

    Tian, Ben; Duan, Wansuo

    2016-08-01

In this paper, the spring predictability barrier (SPB) problem for two types of El Niño events is investigated by tracing the evolution of a conditional nonlinear optimal perturbation (CNOP), the initial error with the largest negative effect on El Niño predictions. We show that the CNOP-type errors for central Pacific-El Niño (CP-El Niño) events can be classified into two types. The first are CP-type-1 errors, which possess a sea surface temperature anomaly (SSTA) pattern with negative anomalies in the equatorial central western Pacific and positive anomalies in the equatorial eastern Pacific, accompanied by a thermocline depth anomaly pattern with positive anomalies along the equator. The second are CP-type-2 errors, which present an SSTA pattern in the central eastern equatorial Pacific with a dipole structure of negative anomalies in the east and positive anomalies in the west, and a thermocline depth anomaly pattern with a slight deepening along the equator. CP-type-1 errors grow in a manner similar to an eastern Pacific-El Niño (EP-El Niño) event and grow significantly during boreal spring, leading to a significant SPB for the CP-El Niño. CP-type-2 errors initially present as a process similar to a La Niña-like decay before transitioning into a growth phase of an EP-El Niño-like event, but they fail to cause an SPB. For the EP-El Niño events, the CNOP-type errors are also classified into two types: EP-type-1 and EP-type-2 errors. The former is similar to a CP-type-1 error, while the latter presents an almost opposite pattern. Both EP-type-1 and EP-type-2 errors yield a significant SPB for EP-El Niño events. For both CP- and EP-El Niño, the CNOP-type errors that cause a prominent SPB are concentrated in the central and eastern tropical Pacific. This may indicate that the prediction uncertainties of both types of El Niño events are sensitive to initial errors in this region. The region may represent a common
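
A CNOP is, informally, the bounded initial perturbation whose nonlinear evolution degrades the forecast the most. The toy sketch below approximates one by random search on a scalar logistic map; the real computation uses constrained optimization on a coupled ocean-atmosphere model, so everything here is illustrative:

```python
import random

def logistic_forecast(x, steps=10, r=3.7):
    """Toy nonlinear 'model': iterate the logistic map for a few steps."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

def cnop_random_search(x0, eps, trials=5000, seed=0):
    """Find the perturbation |d| <= eps that maximizes forecast departure."""
    rng = random.Random(seed)
    base = logistic_forecast(x0)
    best_d, best_growth = 0.0, -1.0
    for _ in range(trials):
        d = rng.uniform(-eps, eps)
        growth = abs(logistic_forecast(x0 + d) - base)
        if growth > best_growth:
            best_d, best_growth = d, growth
    return best_d, best_growth

best_d, best_growth = cnop_random_search(x0=0.3, eps=0.05)
```

The structure of the winning perturbation (here a scalar, in the paper an SSTA/thermocline pattern) is what gets classified into the CP- and EP-type errors above.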

  16. Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation

    PubMed Central

    Cacciapaglia, Fabio; Wightman, R. Mark; Carelli, Regina M.

    2015-01-01

Mesolimbic dopamine (DA) is phasically released during appetitive behaviors, though there is substantive disagreement about the specific purpose of these DA signals. For example, prediction error (PE) models suggest a role in learning, while incentive salience (IS) models argue that the DA signal imbues stimuli with value and thereby stimulates motivated behavior. However, within the nucleus accumbens (NAc) patterns of DA release can strikingly differ between subregions, and as such, it is possible that these patterns differentially contribute to aspects of PE and IS. To assess this, we measured DA release in subregions of the NAc during a behavioral task that spatiotemporally separated sequential goal-directed stimuli. Electrochemical methods were used to measure subsecond NAc dopamine release in the core and shell during a well learned instrumental chain schedule in which rats were trained to press one lever (seeking; SL) to gain access to a second lever (taking; TL) linked with food delivery, and again during extinction. In the core, phasic DA release was greatest following initial SL presentation, but minimal for the subsequent TL and reward events. In contrast, phasic shell DA showed robust release at all task events. Signaling decreased between the beginning and end of sessions in the shell, but not core. During extinction, peak DA release in the core showed a graded decrease for the SL and pauses in release during omitted expected rewards, whereas shell DA release decreased predominantly during the TL. These release dynamics suggest parallel DA signals capable of supporting distinct theories of appetitive behavior. SIGNIFICANCE STATEMENT Dopamine signaling in the brain is important for a variety of cognitive functions, such as learning and motivation. Typically, it is assumed that a single dopamine signal is sufficient to support these cognitive functions, though competing theories disagree on how dopamine contributes to reward-based behaviors. 
Here, we have

  17. Predictability of the Arctic sea ice edge

    NASA Astrophysics Data System (ADS)

    Goessling, H. F.; Tietsche, S.; Day, J. J.; Hawkins, E.; Jung, T.

    2016-02-01

    Skillful sea ice forecasts from days to years ahead are becoming increasingly important for the operation and planning of human activities in the Arctic. Here we analyze the potential predictability of the Arctic sea ice edge in six climate models. We introduce the integrated ice-edge error (IIEE), a user-relevant verification metric defined as the area where the forecast and the "truth" disagree on the ice concentration being above or below 15%. The IIEE lends itself to decomposition into an absolute extent error, corresponding to the common sea ice extent error, and a misplacement error. We find that the often-neglected misplacement error makes up more than half of the climatological IIEE. In idealized forecast ensembles initialized on 1 July, the IIEE grows faster than the absolute extent error. This means that the Arctic sea ice edge is less predictable than sea ice extent, particularly in September, with implications for the potential skill of end-user relevant forecasts.
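
The IIEE and its decomposition follow directly from the definition above. A minimal sketch on hypothetical concentration grids (the 15% threshold is from the paper; the grids and unit cell area are made up):

```python
import numpy as np

def iiee_decomposition(forecast, truth, cell_area=1.0, threshold=0.15):
    """Return (IIEE, absolute extent error, misplacement error)."""
    f = forecast > threshold
    t = truth > threshold
    iiee = float(np.count_nonzero(f != t)) * cell_area  # total disagreement area
    aee = abs(float(np.count_nonzero(f))
              - float(np.count_nonzero(t))) * cell_area  # |extent_f - extent_t|
    return iiee, aee, iiee - aee                         # misplacement = IIEE - AEE

# Same ice extent, but the edge is shifted: the whole error is misplacement.
forecast = np.array([[0.9, 0.5, 0.0], [0.0, 0.0, 0.0]])
truth    = np.array([[0.0, 0.5, 0.9], [0.0, 0.0, 0.0]])
print(iiee_decomposition(forecast, truth))  # (2.0, 0.0, 2.0)
```

The example makes the paper's point concrete: a forecast can have zero extent error yet a large IIEE, purely from misplacing the edge.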

  18. Modeling dopaminergic and other processes involved in learning from reward prediction error: contributions from an individual differences perspective.

    PubMed

    Pickering, Alan D; Pesola, Francesca

    2014-01-01

Phasic firing changes of midbrain dopamine neurons have been widely characterized as reflecting a reward prediction error (RPE). Major personality traits (e.g., extraversion) have been linked to inter-individual variations in dopaminergic neurotransmission. Consistent with these two claims, recent research (Smillie et al., 2011; Cooper et al., 2014) found that extraverts exhibited larger RPEs than introverts, as reflected in feedback-related negativity (FRN) effects in EEG recordings. Using an established, biologically localized RPE computational model, we successfully simulated dopaminergic cell firing changes which are thought to modulate the FRN. We introduced simulated individual differences into the model: parameters were systematically varied, with stable values for each simulated individual. We explored whether a model parameter might be responsible for the observed covariance between extraversion and the FRN changes in real data, and argued that a parameter is a plausible source of such covariance if parameter variance, across simulated individuals, correlated almost perfectly with the size of the simulated dopaminergic FRN modulation, and created as much variance as possible in this simulated output. Several model parameters met these criteria, while others did not. In particular, variations in the strength of connections carrying excitatory reward drive inputs to midbrain dopaminergic cells were considered plausible candidates, along with variations in a parameter which scales the effects of dopamine cell firing bursts on synaptic modification in ventral striatum. We suggest possible neurotransmitter mechanisms underpinning these model parameters. Finally, the limitations and possible extensions of our general approach are discussed. PMID:25324752

  19. [Predicting visual acuity in media opacities and uncorrectable refractive errors. Assessing so-called "retinal visual acuity"].

    PubMed

    Lachenmayr, B

    1990-01-01

Three different components contribute to the modulation transfer function of the visual system: (1) formation of the optical image (refractive media, pupil); (2) scattering of light in the prereceptoral layers of the retina; (3) neuronal processing in the retina and higher visual centers. In the presence of media opacities or non-correctable refractive errors, the clinical question often arises as to which macular function can be expected under the assumption of normal optical image formation (e.g. prior to cataract extraction, corneal transplantation, or vitrectomy). Simple tests such as light projection, color discrimination, and two-point discrimination cannot provide adequate information about macular function. The same holds true for the global luminance ERG. The X-ray phosphene is obsolete. The Maddox rod (with limitations), transilluminated Amsler grid, and various entoptic phenomena (Purkinje vascular phenomenon, foveal chagrin, Haidinger's brushes, blue field phenomenon) are available as qualitative subjective tests. Maxwellian view systems with pinhole aperture (potential acuity meter PAM) and the interferometers (retinometer, visometer, SITE-IRAS interferometer) provide quantitative subjective methods. The flash VECP is primarily a qualitative objective test that allows semiquantitative acuity prediction under special conditions (unilateral opacities). Psychophysical criteria that are less affected by the quality of the retinal image represent promising directions for future subjective tests, e.g. optotypes in positive contrast, optotypes or targets superimposed on a background of optical noise, or hyperacuity. Future objective test developments are the pattern VECP or even the pattern ERG elicited by interferometric stimulation, the speckle VECP, and the focal ERG. PMID:2083891

  20. Modeling dopaminergic and other processes involved in learning from reward prediction error: contributions from an individual differences perspective

    PubMed Central

    Pickering, Alan D.; Pesola, Francesca

    2014-01-01

Phasic firing changes of midbrain dopamine neurons have been widely characterized as reflecting a reward prediction error (RPE). Major personality traits (e.g., extraversion) have been linked to inter-individual variations in dopaminergic neurotransmission. Consistent with these two claims, recent research (Smillie et al., 2011; Cooper et al., 2014) found that extraverts exhibited larger RPEs than introverts, as reflected in feedback-related negativity (FRN) effects in EEG recordings. Using an established, biologically localized RPE computational model, we successfully simulated dopaminergic cell firing changes which are thought to modulate the FRN. We introduced simulated individual differences into the model: parameters were systematically varied, with stable values for each simulated individual. We explored whether a model parameter might be responsible for the observed covariance between extraversion and the FRN changes in real data, and argued that a parameter is a plausible source of such covariance if parameter variance, across simulated individuals, correlated almost perfectly with the size of the simulated dopaminergic FRN modulation, and created as much variance as possible in this simulated output. Several model parameters met these criteria, while others did not. In particular, variations in the strength of connections carrying excitatory reward drive inputs to midbrain dopaminergic cells were considered plausible candidates, along with variations in a parameter which scales the effects of dopamine cell firing bursts on synaptic modification in ventral striatum. We suggest possible neurotransmitter mechanisms underpinning these model parameters. Finally, the limitations and possible extensions of our general approach are discussed. PMID:25324752

  1. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  2. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  3. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
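
The two accuracy measures named above are standard contingency-table scores. A sketch with hypothetical counts (hits, false alarms, misses, correct negatives) after thresholding the probabilistic forecasts into yes/no:

```python
def pc_and_hkd(hits, false_alarms, misses, correct_negatives):
    """Percent correct and Hanssen-Kuipers discriminant from a 2x2 table."""
    n = hits + false_alarms + misses + correct_negatives
    pc = (hits + correct_negatives) / n
    # HKD = hit rate minus false-alarm rate; 1 is perfect, 0 is no skill.
    hkd = (hits / (hits + misses)
           - false_alarms / (false_alarms + correct_negatives))
    return pc, hkd

pc, hkd = pc_and_hkd(hits=40, false_alarms=10, misses=5, correct_negatives=45)
print(pc)  # 0.85
```

Because PC rewards correct negatives while HKD balances hit and false-alarm rates, the two scores can favor different probability thresholds, as the abstract reports.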

  4. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    PubMed

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26822948
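
For reference, the classical Bland-Altman agreement interval the paper critiques is just the mean difference ± 1.96 standard deviations of the paired differences (toy data below); the tolerance and predictive intervals the paper advocates widen such limits to account for estimation uncertainty:

```python
import statistics

def limits_of_agreement(x, y):
    """Classical 95% Bland-Altman limits: mean difference +/- 1.96 SD."""
    d = [xi - yi for xi, yi in zip(x, y)]
    bias = statistics.mean(d)
    sd = statistics.stdev(d)     # sample SD of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Toy paired measurements from two methods.
print(limits_of_agreement([1.0, 2.0, 3.0], [0.0, 2.0, 4.0]))  # (-1.96, 1.96)
```

Note that both `bias` and `sd` are themselves estimates, which is why plug-in limits like these under-cover for small samples.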

  5. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    NASA Astrophysics Data System (ADS)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.

  6. Prediction of stream volatilization coefficients

    USGS Publications Warehouse

    Rathbun, Ronald E.

    1990-01-01

Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation and 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.
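
The mean absolute percentage errors reported above follow the standard definition; a sketch with hypothetical predicted and measured coefficients:

```python
def mean_absolute_pct_error(predicted, measured):
    """Mean of |predicted - measured| / |measured|, expressed in percent."""
    errors = [abs(p - m) / abs(m) for p, m in zip(predicted, measured)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical coefficients: one 10% over-prediction, one 10% under-prediction.
mape = mean_absolute_pct_error([1.1, 0.9], [1.0, 1.0])  # ≈ 10.0
```

Expressing error relative to the measured value is what makes the reported figures comparable across diffusion coefficients that span orders of magnitude.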

  7. How the credit assignment problems in motor control could be solved after the cerebellum predicts increases in error

    PubMed Central

    Verduzco-Flores, Sergio O.; O'Reilly, Randall C.

    2015-01-01

    We present a cerebellar architecture with two main characteristics. The first one is that complex spikes respond to increases in sensory errors. The second one is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations. PMID:25852535

  8. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning

    PubMed Central

    Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated the EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicited a signal that triggered on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and, then, scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor

  9. Phasic dopamine as a prediction error of intrinsic and extrinsic reinforcements driving both action acquisition and reward maximization: a simulated robotic study.

    PubMed

    Mirolli, Marco; Santucci, Vieri G; Baldassarre, Gianluca

    2013-03-01

    An important issue of recent neuroscientific research is to understand the functional role of the phasic release of dopamine in the striatum, and in particular its relation to reinforcement learning. The literature is split between two alternative hypotheses: one considers phasic dopamine as a reward prediction error similar to the computational TD-error, whose function is to guide an animal to maximize future rewards; the other holds that phasic dopamine is a sensory prediction error signal that lets the animal discover and acquire novel actions. In this paper we propose an original hypothesis that integrates these two contrasting positions: according to our view phasic dopamine represents a TD-like reinforcement prediction error learning signal determined by both unexpected changes in the environment (temporary, intrinsic reinforcements) and biological rewards (permanent, extrinsic reinforcements). Accordingly, dopamine plays the functional role of driving both the discovery and acquisition of novel actions and the maximization of future rewards. To validate our hypothesis we perform a series of experiments with a simulated robotic system that has to learn different skills in order to get rewards. We compare different versions of the system in which we vary the composition of the learning signal. The results show that only the system reinforced by both extrinsic and intrinsic reinforcements is able to reach high performance in sufficiently complex conditions. PMID:23353115
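
The TD-like reinforcement prediction error described above, driven by the sum of extrinsic and intrinsic reinforcements, can be sketched in a few lines; the tiny state space, learning rate, and reward values below are illustrative, not the paper's robotic setup.

```python
# Sketch of a TD error computed from a combined reinforcement signal:
# extrinsic (permanent, biological reward) plus intrinsic (temporary,
# novelty-driven reward). All values are illustrative.
gamma, alpha = 0.9, 0.1                    # discount factor, learning rate
V = {"s0": 0.0, "s1": 0.0}                 # value estimates for two states

def td_update(s, s_next, r_extrinsic, r_intrinsic):
    r = r_extrinsic + r_intrinsic          # combined reinforcement
    delta = r + gamma * V[s_next] - V[s]   # TD error ("phasic dopamine" analogue)
    V[s] += alpha * delta                  # update the value estimate
    return delta

delta = td_update("s0", "s1", r_extrinsic=1.0, r_intrinsic=0.5)
```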

  10. Temporal Uncertainty and Temporal Estimation Errors Affect Insular Activity and the Frontostriatal Indirect Pathway during Action Update: A Predictive Coding Study

    PubMed Central

    Limongi, Roberto; Pérez, Francisco J.; Modroño, Cristián; González-Mora, José L.

    2016-01-01

    Action update, substituting a prepotent behavior with a new action, allows the organism to counteract surprising environmental demands. However, action update fails when the organism is uncertain about when to release the substituting behavior, when it faces temporal uncertainty. Predictive coding states that accurate perception demands minimization of precise prediction errors. Activity of the right anterior insula (rAI) is associated with temporal uncertainty. Therefore, we hypothesize that temporal uncertainty during action update would cause the AI to decrease the sensitivity to ascending prediction errors. Moreover, action update requires response inhibition which recruits the frontostriatal indirect pathway associated with motor control. Therefore, we also hypothesize that temporal estimation errors modulate frontostriatal connections. To test these hypotheses, we collected fMRI data when participants performed an action-update paradigm within the context of temporal estimation. We fit dynamic causal models to the imaging data. Competing models comprised the inferior occipital gyrus (IOG), right supramarginal gyrus (rSMG), rAI, right presupplementary motor area (rPreSMA), and the right striatum (rSTR). The winning model showed that temporal uncertainty drove activity into the rAI and decreased insular sensitivity to ascending prediction errors, as shown by weak connectivity strength of rSMG→rAI connections. Moreover, temporal estimation errors weakened rPreSMA→rSTR connections and also modulated rAI→rSTR connections, causing the disruption of action update. Results provide information about the neurophysiological implementation of the so-called horse-race model of action control. We suggest that, contrary to what might be believed, unsuccessful action update could be a homeostatic process that represents a Bayes optimal encoding of uncertainty. PMID:27445737

  12. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    PubMed

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this

  13. Prediction and error growth in the daily forecast of precipitation from the NCEP CFSv2 over the subdivisions of Indian subcontinent

    NASA Astrophysics Data System (ADS)

    Pandey, Dhruva Kumar; Rai, Shailendra; Sahai, A. K.; Abhilash, S.; Shahi, N. K.

    2016-02-01

    This study investigates the forecast skill and predictability of various indices of the south Asian monsoon, as well as of subdivisions of the Indian subcontinent, during the JJAS season for 2001-2013 using NCEP CFSv2 output. The daily mean climatology of precipitation over the land points of India is underestimated in the model forecast compared with observation. The monthly model bias of precipitation shows a dry bias over the land points of India and over the Bay of Bengal, whereas the Himalayan and Arabian Sea regions show a wet bias. We divided the Indian landmass into five subdivisions, namely central India, southern India, the Western Ghats, the northeast, and the southern Bay of Bengal, based on the spatial variation of observed mean precipitation in the JJAS season. The underestimation over the land points of India during the mature phase originated from the central India, southern Bay of Bengal, southern India and Western Ghats regions. Error growth in the June forecast is slower than in the July forecast in all regions, and the predictability error also grows more slowly in the June forecast in most regions. The doubling time of the predictability error is estimated to be in the range of 3-5 days for all regions. Southern India and the Western Ghats are more predictable in the July forecast than in the June forecast, whereas the IMR, northeast, central India and southern Bay of Bengal regions show the opposite behaviour.
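
A doubling time like the 3-5 days quoted above can be estimated by fitting an exponential growth rate to an error series; a sketch assuming E(t) = E0·exp(λt), with a synthetic series (not the study's data) constructed to double every 4 days:

```python
import math

# Fit the growth rate lambda of an exponentially growing error series by
# least squares on log(error), then convert to a doubling time ln(2)/lambda.
# The synthetic series below doubles every 4 days by construction.
days  = [0, 1, 2, 3, 4, 5, 6]
error = [0.5 * 2 ** (t / 4.0) for t in days]   # hypothetical forecast error

log_e = [math.log(e) for e in error]
n = len(days)
mean_t = sum(days) / n
mean_y = sum(log_e) / n
num = sum((t - mean_t) * (y - mean_y) for t, y in zip(days, log_e))
den = sum((t - mean_t) ** 2 for t in days)
lam = num / den                                # fitted growth rate (1/day)
doubling_time = math.log(2) / lam              # days for the error to double
```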

  14. Moderation of the Relationship Between Reward Expectancy and Prediction Error-Related Ventral Striatal Reactivity by Anhedonia in Unmedicated Major Depressive Disorder: Findings From the EMBARC Study

    PubMed Central

    Greenberg, Tsafrir; Chase, Henry W.; Almeida, Jorge R.; Stiffler, Richelle; Zevallos, Carlos R.; Aslam, Haris A.; Deckersbach, Thilo; Weyandt, Sarah; Cooper, Crystal; Toups, Marisa; Carmody, Thomas; Kurian, Benji; Peltier, Scott; Adams, Phillip; McInnis, Melvin G.; Oquendo, Maria A.; McGrath, Patrick J.; Fava, Maurizio; Weissman, Myrna; Parsey, Ramin; Trivedi, Madhukar H.; Phillips, Mary L.

    2016-01-01

    Objective Anhedonia, disrupted reward processing, is a core symptom of major depressive disorder. Recent findings demonstrate altered reward-related ventral striatal reactivity in depressed individuals, but the extent to which this is specific to anhedonia remains poorly understood. The authors examined the effect of anhedonia on reward expectancy (expected outcome value) and prediction error-(discrepancy between expected and actual outcome) related ventral striatal reactivity, as well as the relationship between these measures. Method A total of 148 unmedicated individuals with major depressive disorder and 31 healthy comparison individuals recruited for the multisite EMBARC (Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care) study underwent functional MRI during a well-validated reward task. Region of interest and whole-brain data were examined in the first- (N=78) and second- (N=70) recruited cohorts, as well as the total sample, of depressed individuals, and in healthy individuals. Results Healthy, but not depressed, individuals showed a significant inverse relationship between reward expectancy and prediction error-related right ventral striatal reactivity. Across all participants, and in depressed individuals only, greater anhedonia severity was associated with a reduced reward expectancy-prediction error inverse relationship, even after controlling for other symptoms. Conclusions The normal reward expectancy and prediction error-related ventral striatal reactivity inverse relationship concords with conditioning models, predicting a shift in ventral striatal responding from reward outcomes to reward cues. This study shows, for the first time, an absence of this relationship in two cohorts of unmedicated depressed individuals and a moderation of this relationship by anhedonia, suggesting reduced reward-contingency learning with greater anhedonia. 
These findings help elucidate neural mechanisms of anhedonia, as a step toward

  15. Evaluating the performance of the LPC (Linear Predictive Coding) 2.4 kbps (kilobits per second) processor with bit errors using a sentence verification task

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, Astrid; Kallman, Howard J.

    1987-11-01

    The comprehension of narrowband digital speech with bit errors was tested by using a sentence verification task. The use of predicates that were either strongly or weakly related to the subjects (e.g., A toad has warts./ A toad has eyes.) varied the difficulty of the verification task. The test conditions included unprocessed and processed speech using a 2.4 kb/s (kilobits per second) linear predictive coding (LPC) voice processing algorithm with random bit error rates of 0 percent, 2 percent, and 5 percent. In general, response accuracy decreased and reaction time increased with LPC processing and with increasing bit error rates. Weakly related true sentences and strongly related false sentences were more difficult than their counterparts. Interactions between sentence type and speech processing conditions are discussed.

  16. Absolute biological needs.

    PubMed

    McLeod, Stephen

    2014-07-01

    Absolute needs (as against instrumental needs) are independent of the ends, goals and purposes of personal agents. Against the view that the only needs are instrumental needs, David Wiggins and Garrett Thomson have defended absolute needs on the grounds that the verb 'need' has instrumental and absolute senses. While remaining neutral about it, this article does not adopt that approach. Instead, it suggests that there are absolute biological needs. The absolute nature of these needs is defended by appeal to: their objectivity (as against mind-dependence); the universality of the phenomenon of needing across the plant and animal kingdoms; the impossibility that biological needs depend wholly upon the exercise of the abilities characteristic of personal agency; the contention that the possession of biological needs is prior to the possession of the abilities characteristic of personal agency. Finally, three philosophical usages of 'normative' are distinguished. On two of these, to describe a phenomenon or claim as 'normative' is to describe it as value-dependent. A description of a phenomenon or claim as 'normative' in the third sense does not entail such value-dependency, though it leaves open the possibility that value depends upon the phenomenon or upon the truth of the claim. It is argued that while survival needs (or claims about them) may well be normative in this third sense, they are normative in neither of the first two. Thus, the idea of absolute need is not inherently normative in either of the first two senses. PMID:23586876

  17. [Error factors in spirometry].

    PubMed

    Quadrelli, S A; Montiel, G C; Roncoroni, A J

    1994-01-01

    Spirometry is the method most frequently used to estimate pulmonary function in the clinical laboratory. It is important to comply with technical requisites to approximate the real values sought, as well as to interpret the results adequately. Recommendations are made to: (1) establish quality control; (2) define abnormality; (3) classify the change from normal and its degree; (4) define reversibility. In relation to quality control, several criteria are pointed out, such as end of the test, back-extrapolation and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised, and inspection of graphical records of the test is mandatory. The limitations of the common use of 80% of predicted values to establish abnormality are stressed, and the reasons for employing 95% confidence limits are detailed. It is important to select the reference-values equation (in view of the differences in predicted values), and it is advisable to validate the selection against normal values from the local population. In relation to defining the defect as restrictive or obstructive, the limitations of vital capacity (VC) for establishing restriction when obstruction is also present are described, as are the limitations of maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of forced expiratory volume in 1 sec (VEF1), and the difficulties with other indicators (CVF, FMF 25-75, VEF1/CVF) in estimating reversibility after bronchodilators, are evaluated, and the value of the different methods used to define reversibility (% of change from the initial value, absolute change, or % of predicted) is discussed. Clinical spirometric studies, to be valuable, should be performed with the same technical rigour as other, more complex studies. PMID:7990690

  18. The absolute path command

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  19. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
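
The core behaviour, resolving every symlink and relative component to a final absolute path, can be sketched with the Python standard library; this reproduces only the final-path step, not ap's per-link and permission reporting.

```python
import os
import tempfile

# Resolve all symlinks and relative components to a final absolute path
# (os.path.realpath handles both steps).
def final_path(name):
    return os.path.realpath(os.path.abspath(name))

# Demo: a symlink "link" pointing at "target" inside a fresh temp directory.
d = tempfile.mkdtemp()
target = os.path.join(d, "target")
open(target, "w").close()
link = os.path.join(d, "link")
os.symlink(target, link)
resolved = final_path(link)   # the symlink is followed to the real file
```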

  20. Relationship between optimal precursory disturbances and optimally growing initial errors associated with ENSO events: Implications to target observations for ENSO prediction

    NASA Astrophysics Data System (ADS)

    Hu, Junya; Duan, Wansuo

    2016-05-01

    By superimposing initial sea temperature disturbances in neutral years, we determine the precursory disturbances that are most likely to evolve into El Niño and La Niña events using an Earth System Model. These precursory disturbances for El Niño and La Niña events are deemed optimal precursory disturbances because they are more likely to trigger strong ENSO events. Specifically, the optimal precursory disturbance for El Niño exhibits negative sea surface temperature anomalies (SSTAs) in the central-eastern equatorial Pacific. Additionally, the subsurface temperature component exhibits negative anomalies in the upper layers of the eastern equatorial Pacific and positive anomalies in the lower layers of the western equatorial Pacific. The optimal precursory disturbance for La Niña is almost opposite to that of El Niño. The optimal precursory disturbances show that both El Niño and La Niña originate from precursory signals in the subsurface layers of the western equatorial Pacific and in the surface layers of the eastern equatorial Pacific. We find that the optimal precursory disturbances for El Niño and La Niña are particularly similar to the optimally growing initial errors associated with El Niño prediction that have been presented in previous studies. The optimally growing initial errors show that the optimal precursor source areas represent the sensitive areas for target observations associated with ENSO prediction. Combining the optimal precursory disturbances and the optimally growing initial errors for ENSO, we infer that additional observations in these sensitive areas can reduce initial errors and be used to detect precursory signals, thereby improving ENSO predictions.

  1. ResQ: An Approach to Unified Estimation of B-Factor and Residue-Specific Error in Protein Structure Prediction.

    PubMed

    Yang, Jianyi; Wang, Yan; Zhang, Yang

    2016-02-22

    Computer-based structure prediction has become a major tool for providing large-scale structure models to annotate the biological function of proteins. Information on residue-level accuracy and thermal mobility (or B-factor), which is critical for deciding how biologists utilize the predicted models, is however missing from most structure prediction pipelines. We developed ResQ for unified residue-level model quality and B-factor estimation by combining local structure assembly variations with sequence-based and structure-based profiling. ResQ was tested on 635 non-redundant proteins with structure models generated by I-TASSER, where the average difference between estimated and observed distance errors is 1.4 Å for the confidently modeled proteins. ResQ was further tested on structure decoys from the CASP9-11 experiments, where the error of local structure quality prediction is consistently lower than or comparable to that of other state-of-the-art predictors. Finally, the ResQ B-factor profile was used to assist molecular replacement, which resulted in successful solutions for several proteins that could not be solved with constant B-factor settings. PMID:26437129

  2. Study of Uncertainties of Predicting Space Shuttle Thermal Environment. [impact of heating rate prediction errors on weight of thermal protection system

    NASA Technical Reports Server (NTRS)

    Fehrman, A. L.; Masek, R. V.

    1972-01-01

    Quantitative estimates of the uncertainty in predicting aerodynamic heating rates for a fully reusable space shuttle system are developed and the impact of these uncertainties on Thermal Protection System (TPS) weight are discussed. The study approach consisted of statistical evaluations of the scatter of heating data on shuttle configurations about state-of-the-art heating prediction methods to define the uncertainty in these heating predictions. The uncertainties were then applied as heating rate increments to the nominal predicted heating rate to define the uncertainty in TPS weight. Separate evaluations were made for the booster and orbiter, for trajectories which included boost through reentry and touchdown. For purposes of analysis, the vehicle configuration is divided into areas in which a given prediction method is expected to apply, and separate uncertainty factors and corresponding uncertainty in TPS weight derived for each area.

  3. Combined Use of Absolute and Differential Seismic Arrival Time Data to Improve Absolute Event Location

    NASA Astrophysics Data System (ADS)

    Myers, S.; Johannesson, G.

    2012-12-01

    Arrival time measurements based on waveform cross correlation are becoming more common as advanced signal processing methods are applied to seismic data archives and real-time data streams. Waveform correlation can precisely measure the time difference between the arrival of two phases, and differential time data can be used to constrain the relative location of events. Absolute locations are needed for many applications, which generally requires the use of absolute time data. Current methods for measuring absolute time data are approximately two orders of magnitude less precise than differential time measurements. To exploit the strengths of both absolute and differential time data, we extend our multiple-event location method Bayesloc, which previously used absolute time data only, to include the use of differential time measurements that are based on waveform cross correlation. Fundamentally, Bayesloc is a formulation of the joint probability over all parameters comprising the multiple event location system. The Markov-Chain Monte Carlo method is used to sample from the joint probability distribution given arrival data sets. The differential time component of Bayesloc includes scaling a stochastic estimate of differential time measurement precision based on the waveform correlation coefficient for each datum. For a regional-distance synthetic data set with absolute and differential time measurement errors of 0.25 seconds and 0.01 second, respectively, epicenter location accuracy improves from an average of 1.05 km when solely absolute time data are used to 0.28 km when absolute and differential time data are used jointly (73% improvement). The improvement in absolute location accuracy is the result of conditionally limiting absolute location probability regions based on the precise relative position with respect to neighboring events. 
Bayesloc estimates of data precision are found to be accurate for the synthetic test, with absolute and differential time measurement
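
The benefit of combining precise differential times with coarse absolute times can be illustrated with a toy 1-D weighted least-squares analogue (not Bayesloc's MCMC formulation); all numbers are illustrative, with measurement sigmas echoing the 0.25 s and 0.01 s figures above.

```python
# Two events with noisy absolute position measurements a1, a2 and one
# precise differential measurement d of their separation. Minimize
#   w_abs*((x1-a1)^2 + (x2-a2)^2) + w_diff*(x2 - x1 - d)^2
# via the normal equations; weights are inverse variances.
a1, a2, d = 0.3, 1.6, 1.0                     # illustrative data
w_abs, w_diff = 1 / 0.25 ** 2, 1 / 0.01 ** 2  # sigma = 0.25 vs 0.01

A = [[w_abs + w_diff, -w_diff],
     [-w_diff, w_abs + w_diff]]
b = [w_abs * a1 - w_diff * d,
     w_abs * a2 + w_diff * d]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
x2 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
# The precise differential datum pins the separation x2 - x1 near d, while
# the (coarser) absolute data anchor the absolute positions.
```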

  4. Enhanced error-related brain activity in children predicts the onset of anxiety disorders between the ages of 6 and 9

    PubMed Central

    Meyer, Alexandria; Proudfit, Greg Hajcak; Torpey-Newman, Dana C.; Kujawa, Autumn; Klein, Daniel N.

    2015-01-01

    Considering that anxiety disorders frequently begin before adulthood and often result in chronic impairment, it is important to characterize the developmental pathways leading to the onset of clinical anxiety. Identifying neural biomarkers that can predict the onset of anxiety in childhood may increase our understanding of the etiopathogenesis of anxiety, as well as inform intervention and prevention strategies. An event-related potential (ERP), the error-related negativity (ERN) has been proposed as a biomarker of risk for anxiety and has previously been associated with concurrent anxiety in both adults and children. However, no previous study has examined whether the ERN can predict the onset of anxiety disorders. In the current study, ERPs were recorded while 236 healthy children, approximately 6 years of age, performed a Go/No-Go task to measure the ERN. Three years later, children and parents came back to the lab and completed diagnostic interviews regarding anxiety disorder status. Results indicated that enhanced error-related brain activity at age 6 predicted the onset of new anxiety disorders by age 9, even when controlling for baseline anxiety symptoms and maternal history of anxiety. Considering the potential utility of identifying early biomarkers of risk, this is a novel and important extension of previous work. PMID:25643204

  5. Integrated Stable Isotope Labeling by Amino Acids in Cell Culture (SILAC) and Isobaric Tags for Relative and Absolute Quantitation (iTRAQ) Quantitative Proteomic Analysis Identifies Galectin-1 as a Potential Biomarker for Predicting Sorafenib Resistance in Liver Cancer*

    PubMed Central

    Yeh, Chao-Chi; Hsu, Chih-Hung; Shao, Yu-Yun; Ho, Wen-Ching; Tsai, Mong-Hsun; Feng, Wen-Chi; Chow, Lu-Ping

    2015-01-01

    Sorafenib has become the standard therapy for patients with advanced hepatocellular carcinoma (HCC). Unfortunately, most patients eventually develop acquired resistance. Therefore, it is important to identify potential biomarkers that could predict the efficacy of sorafenib. To identify target proteins associated with the development of sorafenib resistance, we applied stable isotope labelling with amino acids in cell culture (SILAC)-based quantitative proteomic approach to analyze differences in protein expression levels between parental HuH-7 and sorafenib-acquired resistance HuH-7 (HuH-7R) cells in vitro, combined with an isobaric tags for relative and absolute quantitation (iTRAQ) quantitative analysis of HuH-7 and HuH-7R tumors in vivo. In total, 2,450 quantified proteins were identified in common in SILAC and iTRAQ experiments, with 81 showing increased expression (>2.0-fold) with sorafenib resistance and 75 showing decreased expression (<0.5-fold). In silico analyses of these differentially expressed proteins predicted that 10 proteins were related to cancer with involvements in cell adhesion, migration, and invasion. Knockdown of one of these candidate proteins, galectin-1, decreased cell proliferation and metastasis in HuH-7R cells and restored sensitivity to sorafenib. We verified galectin-1 as a predictive marker of sorafenib resistance and a downstream target of the AKT/mTOR/HIF-1α signaling pathway. In addition, increased galectin-1 expression in HCC patients' serum was associated with poor tumor control and low response rate. We also found that a high serum galectin-1 level was an independent factor associated with poor progression-free survival and overall survival. In conclusion, these results suggest that galectin-1 is a possible biomarker for predicting the response of HCC patients to treatment with sorafenib. As such, it may assist in the stratification of HCC and help direct personalized therapy. PMID:25850433

  6. Development and application of an empirical probability distribution for the prediction error of re-entry body maximum dynamic pressure

    NASA Technical Reports Server (NTRS)

    Lanzi, R. James; Vincent, Brett T.

    1993-01-01

    The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of this data with observed sounding rocket re-entry body damage characteristics to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket reentry analysis between the known damage level/flight environment relationships and the predicted flight environment.
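
The damage-probability application described above amounts to evaluating an empirical cumulative distribution function built from flight data. A minimal sketch of the idea, using entirely hypothetical ratios of actual to predicted maximum dynamic pressure (not the paper's sounding rocket data):

```python
import numpy as np

def empirical_cdf(samples):
    """Build P(X <= x) from observed samples (empirical CDF)."""
    xs = np.sort(np.asarray(samples, dtype=float))
    def cdf(x):
        # fraction of samples at or below x
        return float(np.searchsorted(xs, x, side="right")) / xs.size
    return cdf

# Hypothetical ratios of actual to predicted maximum dynamic pressure
ratios = [0.91, 0.97, 1.02, 1.05, 1.10, 1.18, 0.88, 1.00]
cdf = empirical_cdf(ratios)

# Probability that the actual value exceeds the prediction by more than 10%
p_exceed = 1.0 - cdf(1.10)  # 1/8 of the hypothetical samples lie above 1.10
```

With the empirical CDF in hand, the probability of exceeding any damage-related threshold is one minus the CDF evaluated at that threshold.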

  7. Determining what caused the error in the prediction of the December 1st, 2013 snow storm using the Weather Research and Forecasting Model

    NASA Astrophysics Data System (ADS)

    Prajapati, Nikunjkumar; Trout, Joseph

    2014-03-01

    The severity of snow events in the northeast United States depends on the positions of the pressure systems and fronts. Although numerical models have improved greatly as computing power has increased, forecasts of the pressure systems and fronts can occasionally have large margins of error. For example, the snow storm that passed over the northeast coast during the week of December 1, 2013 proved to be much more severe than predicted. In this research, the Weather Research and Forecasting Model (WRF) is used to model the December 1, 2013 storm, and multiple simulations using nested, high-resolution grids are compared.

  8. Verbal Paradata and Survey Error: Respondent Speech, Voice, and Question-Answering Behavior Can Predict Income Item Nonresponse

    ERIC Educational Resources Information Center

    Jans, Matthew E.

    2010-01-01

    Income nonresponse is a significant problem in survey data, with rates as high as 50%, yet we know little about why it occurs. It is plausible that the way respondents answer survey questions (e.g., their voice and speech characteristics, and their question-answering behavior) can predict whether they will provide income data, and will reflect…

  9. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    PubMed

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed. PMID:26022836

  10. A comparison of sequential assimilation schemes for ocean prediction with the HYbrid Coordinate Ocean Model (HYCOM): Twin experiments with static forecast error covariances

    NASA Astrophysics Data System (ADS)

    Srinivasan, A.; Chassignet, E. P.; Bertino, L.; Brankart, J. M.; Brasseur, P.; Chin, T. M.; Counillon, F.; Cummings, J. A.; Mariano, A. J.; Smedstad, O. M.; Thacker, W. C.

    We assess and compare four sequential data assimilation methods developed for HYCOM in an identical twin experiment framework. The methods considered are Multi-variate Optimal Interpolation (MVOI), Ensemble Optimal Interpolation (EnOI), the fixed basis version of the Singular Evolutive Extended Kalman Filter (SEEK) and the Ensemble Reduced Order Information Filter (EnROIF). All methods can be classified as statistical interpolation but differ mainly in how the forecast error covariances are modeled. Surface elevation and temperature data sampled from a 1/12° Gulf of Mexico HYCOM simulation designated as the truth are assimilated into an identical model starting from an erroneous initial state, and convergence of assimilative runs towards the truth is tracked. Sensitivity experiments are first performed to evaluate the impact of practical implementation choices such as the state vector structure, initialization procedures, correlation scales, covariance rank and details of handling multivariate datasets, and to identify an effective configuration for each assimilation method. The performance of the methods is then compared by examining the relative convergence of the assimilative runs towards the truth. All four methods show good skill and are able to enhance consistency between the assimilative and truth runs in both observed and unobserved model variables. Prediction errors in observed variables are typically less than the errors specified for the observations, and the differences between the assimilated products are small compared to the observation errors. For unobserved variables, RMS errors are reduced by 50% relative to a non-assimilative run and differ between schemes on average by about 5%. Dynamical consistency between the updated state space variables in the data assimilation algorithm, and the data adequately sampling significant dynamical features are the two crucial components for reliable predictions. The experiments presented here suggest that
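
The convergence metric used above (RMS error of an assimilative run relative to the truth run) can be sketched in a few lines; the fields here are hypothetical toy vectors, not HYCOM output:

```python
import numpy as np

def rms_error(model, truth):
    """Root-mean-square difference between a model field and the truth run."""
    model, truth = np.asarray(model, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean((model - truth) ** 2)))

# Hypothetical toy fields: a truth run, a free (non-assimilative) run,
# and an assimilative run that has pulled halfway back toward the truth
truth = np.array([1.0, 2.0, 3.0, 4.0])
free_run = np.array([2.0, 3.0, 4.0, 5.0])
assimilated = np.array([1.5, 2.5, 3.5, 4.5])

err_free = rms_error(free_run, truth)      # 1.0
err_assim = rms_error(assimilated, truth)  # 0.5, i.e. a 50% reduction
```

A 50% reduction, as reported above for unobserved variables, corresponds to the assimilated RMSE being half that of the free-running model.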

  11. Optogenetic Stimulation in a Computational Model of the Basal Ganglia Biases Action Selection and Reward Prediction Error

    PubMed Central

    Berthet, Pierre; Lansner, Anders

    2014-01-01

    Optogenetic stimulation of specific types of medium spiny neurons (MSNs) in the striatum has been shown to bias action selection in mice in a two-choice task. This shift is dependent on the localisation and on the intensity of the stimulation but also on the recent reward history. We have implemented a way to simulate this increased activity produced by the optical flash in our computational model of the basal ganglia (BG). This abstract model features the direct and indirect pathways commonly described in biology, and a reward prediction pathway (RP). The framework is similar to Actor-Critic methods and to the ventral/dorsal distinction in the striatum. We thus investigated the impact on the selection caused by an added stimulation in each of the three pathways. We were able to reproduce in our model the bias in action selection observed in mice. Our results also showed that biasing the reward prediction is sufficient to create a modification in the action selection. However, we had to increase the percentage of trials with stimulation relative to that in experiments in order to impact the selection. We found that increasing only the reward prediction had a different effect if the stimulation in RP was action dependent (only for a specific action) or not. We further looked at the evolution of the change in the weights depending on the stage of learning within a block. A bias in RP impacts the plasticity differently depending on that stage but also on the outcome. It remains to experimentally test how the dopaminergic neurons are affected by specific stimulations of neurons in the striatum and to relate data to predictions of our model. PMID:24614169
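
The reward prediction pathway above plays the role of the critic in Actor-Critic methods, where learning is driven by a temporal-difference reward prediction error. A minimal sketch of that signal (generic TD learning, not the authors' model; all values hypothetical):

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step: the reward prediction error (delta)
    drives the value update, as in the critic of Actor-Critic methods."""
    delta = r + gamma * V[s_next] - V[s]   # reward prediction error
    V[s] += alpha * delta
    return delta

V = {"cue": 0.0, "outcome": 0.0}
# A rewarded trial first produces a large positive prediction error...
d1 = td_update(V, "cue", 1.0, "outcome")   # delta = 1.0
# ...which shrinks as the value estimate converges toward the reward
d2 = td_update(V, "cue", 1.0, "outcome")   # delta = 0.9
```

In this picture, extra stimulation of an RP-like pathway would perturb delta, consistent with the model result that biasing reward prediction alone can modify selection.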

  12. Optogenetic stimulation in a computational model of the basal ganglia biases action selection and reward prediction error.

    PubMed

    Berthet, Pierre; Lansner, Anders

    2014-01-01

    Optogenetic stimulation of specific types of medium spiny neurons (MSNs) in the striatum has been shown to bias action selection in mice in a two-choice task. This shift is dependent on the localisation and on the intensity of the stimulation but also on the recent reward history. We have implemented a way to simulate this increased activity produced by the optical flash in our computational model of the basal ganglia (BG). This abstract model features the direct and indirect pathways commonly described in biology, and a reward prediction pathway (RP). The framework is similar to Actor-Critic methods and to the ventral/dorsal distinction in the striatum. We thus investigated the impact on the selection caused by an added stimulation in each of the three pathways. We were able to reproduce in our model the bias in action selection observed in mice. Our results also showed that biasing the reward prediction is sufficient to create a modification in the action selection. However, we had to increase the percentage of trials with stimulation relative to that in experiments in order to impact the selection. We found that increasing only the reward prediction had a different effect if the stimulation in RP was action dependent (only for a specific action) or not. We further looked at the evolution of the change in the weights depending on the stage of learning within a block. A bias in RP impacts the plasticity differently depending on that stage but also on the outcome. It remains to experimentally test how the dopaminergic neurons are affected by specific stimulations of neurons in the striatum and to relate data to predictions of our model. PMID:24614169

  13. Prediction of error rates in dose-imprinted memories on board CRRES by two different methods. [Combined Release and Radiation Effects Satellite

    NASA Technical Reports Server (NTRS)

    Brucker, G. J.; Stassinopoulos, E. G.

    1991-01-01

    An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.
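
The two error-rate estimates compared above can be illustrated numerically: integrating the measured cross-section curve against the differential LET spectrum, versus multiplying a single asymptotic cross-section by the integral flux above a chosen threshold LET. Everything below is a toy sketch with hypothetical numbers, not CRRES data:

```python
import numpy as np

# Hypothetical measured SEU cross-section (cm^2) versus LET (MeV*cm^2/mg)
let_vals = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
sigma = np.array([0.0, 1e-8, 5e-8, 9e-8, 1e-7])

# Toy power-law LET spectrum: integral flux above LET, and the
# magnitude of its derivative |d(phi)/d(LET)| (consistent pair)
def flux_above(let):
    return 1e3 * let ** -2.0          # particles / (cm^2 * day) above LET

def dflux(let):
    return 2e3 * let ** -3.0          # |d(phi)/d(LET)|

# (a) Integrate sigma(LET) against the differential spectrum (trapezoid rule)
grid = np.linspace(2.0, 40.0, 2000)
integrand = np.interp(grid, let_vals, sigma) * dflux(grid)
rate_integrated = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid)))

# (b) Single threshold-LET / asymptotic cross-section pair (step model)
let_th, sigma_sat = 5.0, 1e-7
rate_step = sigma_sat * flux_above(let_th)
```

With the step approximation, the full saturated cross section is charged to every particle above threshold, which is one way to see why the integrated estimate typically comes out lower, as the abstract reports.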

  14. Real-time quality control of pipes using neural network prediction error signals for defect detection in time area

    NASA Astrophysics Data System (ADS)

    Akhmetshin, Alexander M.; Gvozdak, Andrey P.

    1999-08-01

    The magnetic-induction method for real-time quality control of seamless pipes is characterized by a high level of structural noise, whose probability distribution varies in form from batch to batch. Traditional defect detection relies on reference (etalon) defect signatures; however, the shapes of actual defects are random, which precludes the use of optimal filtering methods for their detection. Adaptive variants of the Kalman filter also fail to solve the detection problem, because of their slow adaptation and the small ratio of signal to correlated noise. To solve the problem, an Adaptive Neuro-Fuzzy Inference System (ANFIS) was trained on a wide range of defect-free signals from pipe sections recorded by the transducer system, and the ANFIS prediction error signal was taken as the signal to be analyzed. Experiments showed that the method can extract the signals of random extended defects even when the signal-to-noise ratio is less than unity and traditional amplitude-based methods of defect signal selection fail.

  15. Electronic Absolute Cartesian Autocollimator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2006-01-01

    An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. The direction of the displacement is perpendicular to the axis about which the

  16. ABSOLUTE POLARIMETRY AT RHIC.

    SciTech Connect

    OKADA; BRAVAR, A.; BUNCE, G.; GILL, R.; HUANG, H.; MAKDISI, Y.; NASS, A.; WOOD, J.; ZELENSKI, Z.; ET AL.

    2007-09-10

    Precise and absolute beam polarization measurements are critical for the RHIC spin physics program. Because all experimental spin-dependent results are normalized by beam polarization, the normalization uncertainty contributes directly to final physics uncertainties. We aimed to perform the beam polarization measurement to an accuracy of {Delta}P{sub beam}/P{sub beam} < 5%. The absolute polarimeter consists of a Polarized Atomic Hydrogen Gas Jet Target and left-right pairs of silicon strip detectors and was installed in the RHIC-ring in 2004. This system features proton-proton elastic scattering in the Coulomb nuclear interference (CNI) region. Precise measurements of the analyzing power A{sub N} of this process have allowed us to achieve {Delta}P{sub beam}/P{sub beam} = 4.2% in 2005 for the first long spin-physics run. In this report, we describe the entire set up and performance of the system. The procedure of beam polarization measurement and analysis results from 2004-2005 are described. Physics topics of A{sub N} in the CNI region (four-momentum transfer squared 0.001 < -t < 0.032 (GeV/c){sup 2}) are also discussed. We point out the current issues and expected optimum accuracy in 2006 and the future.

  17. Prospects for the Moon as an SI-Traceable Absolute Spectroradiometric Standard for Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Cramer, C. E.; Stone, T. C.; Lykke, K.; Woodward, J. T.

    2015-12-01

    The Earth's Moon has many physical properties that make it suitable for use as a reference light source for radiometric calibration of remote sensing satellite instruments. Lunar calibration has been successfully applied to many imagers in orbit, including both MODIS instruments and NPP-VIIRS, using the USGS ROLO model to predict the reference exoatmospheric lunar irradiance. Sensor response trending was developed for SeaWiFS with a relative accuracy better than 0.1 % per year with lunar calibration techniques. However, the Moon rarely is used as an absolute reference for on-orbit calibration, primarily due to uncertainties in the ROLO model absolute scale of 5%-10%. But this limitation lies only with the models - the Moon itself is radiometrically stable, and development of a high-accuracy absolute lunar reference is inherently feasible. A program has been undertaken by NIST to collect absolute measurements of the lunar spectral irradiance with absolute accuracy <1 % (k=2), traceable to SI radiometric units. Initial Moon observations were acquired from the Whipple Observatory on Mt. Hopkins, Arizona, elevation 2367 meters, with continuous spectral coverage from 380 nm to 1040 nm at ~3 nm resolution. The lunar spectrometer acquired calibration measurements several times each observing night by pointing to a calibrated integrating sphere source. The lunar spectral irradiance at the top of the atmosphere was derived from a time series of ground-based measurements by a Langley analysis that incorporated measured atmospheric conditions and ROLO model predictions for the change in irradiance resulting from the changing Sun-Moon-Observer geometry throughout each night. Two nights were selected for further study. An extensive error analysis, which includes instrument calibration and atmospheric correction terms, shows a combined standard uncertainty under 1 % over most of the spectral range. Comparison of these two nights' spectral irradiance measurements with predictions
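
The Langley analysis mentioned above extrapolates a series of ground measurements to zero airmass: under Beer-Lambert attenuation, ln(irradiance) is linear in airmass, so the intercept of a straight-line fit gives the top-of-atmosphere value. A sketch with synthetic, noise-free data (hypothetical numbers, not the Mt. Hopkins measurements):

```python
import numpy as np

# Synthetic ground measurements obeying Beer-Lambert: E = E0 * exp(-tau * m)
m = np.array([1.2, 1.5, 2.0, 2.5, 3.0])   # airmass of each observation
tau_true, E0_true = 0.15, 100.0           # hypothetical optical depth / TOA value
E = E0_true * np.exp(-tau_true * m)

# Langley fit: ln(E) = ln(E0) - tau * m, so extrapolate to zero airmass
slope, intercept = np.polyfit(m, np.log(E), 1)
tau_est = -slope                    # recovered optical depth
E0_est = float(np.exp(intercept))   # recovered top-of-atmosphere irradiance
```

In practice each wavelength is fitted separately, and the atmospheric correction terms feed into the error budget quoted in the abstract.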

  18. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  19. Using Air Temperature to Quantitatively Predict the MODIS Fractional Snow Cover Retrieval Errors over the Continental US (CONUS)

    NASA Technical Reports Server (NTRS)

    Dong, Jiarui; Ek, Mike; Hall, Dorothy K.; Peters-Lidard, Christa; Cosgrove, Brian; Miller, Jeff; Riggs, George A.; Xia, Youlong

    2013-01-01

    In the middle to high latitude and alpine regions, the seasonal snow pack can dominate the surface energy and water budgets due to its high albedo, low thermal conductivity, high emissivity, considerable spatial and temporal variability, and ability to store and then later release a winter's cumulative snowfall (Cohen, 1994; Hall, 1998). With this in mind, the snow drought across the U.S. has raised questions about impacts on water supply, ski resorts and agriculture. Knowledge of various snow pack properties is crucial for short-term weather forecasting, climate change prediction, and hydrologic forecasting on daily to seasonal scales. One potential source of this information is the multi-institution North American Land Data Assimilation System (NLDAS) project (Mitchell et al., 2004). Real-time NLDAS products are used for drought monitoring to support the National Integrated Drought Information System (NIDIS) and as initial conditions for a future NCEP drought forecast system. Additionally, efforts are currently underway to assimilate remotely-sensed estimates of land-surface states such as snowpack information into NLDAS. It is believed that this assimilation will not only produce improved snowpack states that better represent evolving snow conditions, but will directly improve the monitoring of drought.

  20. Stimulus probability effects in absolute identification.

    PubMed

    Kent, Christopher; Lamberts, Koen

    2016-05-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response times. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response time data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in absolute identification. Implications for other theories of absolute identification are discussed. (PsycINFO Database Record) PMID:26478959

  1. Absolute calibration of in vivo measurement systems

    SciTech Connect

    Kruchten, D.A.; Hickman, D.P.

    1991-02-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs.

  2. Implants as absolute anchorage.

    PubMed

    Rungcharassaeng, Kitichai; Kan, Joseph Y K; Caruso, Joseph M

    2005-11-01

    Anchorage control is essential for successful orthodontic treatment. Each tooth has its own anchorage potential as well as a propensity to move when force is applied. When teeth are used as anchorage, untoward movements of the anchoring units may result in prolonged treatment time and an unpredictable or less-than-ideal outcome. To maximize tooth-related anchorage, techniques such as differential torque, placing roots into the cortex of the bone, and the use of various intraoral devices and/or extraoral appliances have been implemented. Implants, as they are in direct contact with bone, do not possess a periodontal ligament. As a result, they do not move when orthodontic/orthopedic force is applied, and therefore can be used as "absolute anchorage." This article describes the different types of implants that have been used as orthodontic anchorage. Their clinical applications and limitations are also discussed. PMID:16463910

  3. Absolute Equilibrium Entropy

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1997-01-01

    The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.

  4. Possible sources of forecast errors generated by the global/regional assimilation and prediction system for landfalling tropical cyclones. Part I: Initial uncertainties

    NASA Astrophysics Data System (ADS)

    Zhou, Feifan; Yamaguchi, Munehiko; Qin, Xiaohao

    2016-07-01

    This paper investigates the possible sources of errors associated with tropical cyclone (TC) tracks forecasted using the Global/Regional Assimilation and Prediction System (GRAPES). The GRAPES forecasts were made for 16 landfalling TCs in the western North Pacific basin during the 2008 and 2009 seasons, with a forecast length of 72 hours, and using the default initial conditions ("initials", hereafter), which are from the NCEP-FNL dataset, as well as ECMWF initials. The forecasts are compared with ECMWF forecasts. The results show that in most TCs, the GRAPES forecasts are improved when using the ECMWF initials compared with the default initials. Compared with the ECMWF initials, the default initials produce lower intensity TCs and a lower intensity subtropical high, but a higher intensity South Asia high and monsoon trough, as well as a higher temperature but lower specific humidity at the TC center. Replacement of the geopotential height and wind fields with the ECMWF initials in and around the TC center at the initial time was found to be the most efficient way to improve the forecasts. In addition, TCs that showed the greatest improvement in forecast accuracy usually had the largest initial uncertainties in TC intensity and were usually in the intensifying phase. The results demonstrate the importance of the initial intensity for TC track forecasts made using GRAPES, and indicate the model is better in describing the intensifying phase than the decaying phase of TCs. Finally, the limit of the improvement indicates that the model error associated with GRAPES forecasts may be the main cause of poor forecasts of landfalling TCs. Thus, further examinations of the model errors are required.

  5. Analysis of nodalization effects on the prediction error of generalized finite element method used for dynamic modeling of hot water storage tank

    NASA Astrophysics Data System (ADS)

    Wołowicz, Marcin; Kupecki, Jakub; Wawryniuk, Katarzyna; Milewski, Jarosław; Motyliński, Konrad

    2015-09-01

    The paper presents a dynamic model of a hot water storage tank, together with a review of the relevant literature. An analysis of the effects of nodalization on the prediction error of the generalized finite element method (GFEM) is provided. The model takes into account eleven parameters, such as the volumetric flow rate of flue gases to the spiral, inlet water temperature, and outlet water flow rate. The boiler is also described by sizing parameters, nozzle parameters and heat loss, including ambient temperature. The model has been validated against existing data and dedicated laboratory experiments. A comparison between 1-, 5-, 10- and 50-zone boiler models is presented: experiment and simulation are compared on plots for the different zone numbers, and the reasons for the differences between them are explained.
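
The effect of nodalization can be illustrated with a much simpler surrogate than the paper's GFEM model: a tank discretized into N well-mixed zones in series. With one zone, an inlet step change reaches the outlet immediately (strong numerical mixing); with more zones, the response approaches a plug-flow delay, so the predicted outlet temperature depends on the zone count. All parameters below are hypothetical:

```python
import numpy as np

def outlet_temp(n_zones, t, tau_total=100.0, T_in=80.0, T0=20.0):
    """Outlet temperature of a tank discretized into n_zones well-mixed
    zones in series, after a step change in inlet temperature (toy model)."""
    tau = tau_total / n_zones              # residence time per zone
    T = np.full(n_zones, T0, dtype=float)
    dt = 0.01
    for _ in range(round(t / dt)):
        inflow = np.concatenate(([T_in], T[:-1]))  # each zone feeds the next
        T += dt / tau * (inflow - T)               # explicit Euler mixing step
    return float(T[-1])

# A single zone lets the inlet step bleed through to the outlet at once;
# ten zones approximate plug flow and delay the outlet response.
T1 = outlet_temp(1, 50.0)
T10 = outlet_temp(10, 50.0)
```

This is only a qualitative analogy for why 1-, 5-, 10- and 50-zone models can produce different prediction errors against the same experiment.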

  6. The initial errors that induce a significant "spring predictability barrier" for El Niño events and their implications for target observation: results from an earth system model

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Hu, Junya

    2015-08-01

    The National Center for Atmospheric Research Community Earth System Model is used to study the "spring predictability barrier" (SPB) problem for El Niño events from the perspective of initial error growth. By conducting perfect model predictability experiments, we obtain two types of initial sea temperature errors, which often exhibit obvious season-dependent evolution and cause a significant SPB when predicting the onset of El Niño events bestriding spring. One type of initial errors possesses a sea surface temperature anomaly (SSTA) pattern with negative anomalies in the central-eastern equatorial Pacific, plus a basin-wide dipolar subsurface temperature anomaly pattern with negative anomalies in the upper layers of the eastern equatorial Pacific and positive anomalies in the lower layers of the western equatorial Pacific. The other type consists of an SSTA component with positive anomalies over the southeastern equatorial Pacific, plus a large-scale zonal dipole pattern of the subsurface temperature anomaly with positive anomalies in the upper layers of the eastern equatorial Pacific and negative anomalies in the lower layers of the central-western equatorial Pacific. Both exhibit a La Niña-like evolving mode and cause an under-prediction for Niño-3 SSTA of El Niño events. For the former initial error type, the resultant prediction errors grow in a manner similar to the behavior of the growth phase of La Niña; while for the latter initial error type, they experience a process that is similar to El Niño decay and transition to a La Niña growth phase. Both types of initial errors cause negative prediction errors of Niño-3 SSTA for El Niño events. The prediction errors for Niño-3 SSTA are mainly due to the contribution of initial sea temperature errors in the large-error-related regions in the upper layers of the eastern tropical Pacific and/or in the lower layers of the western tropical Pacific. These regions may represent ``sensitive areas'' for El

  7. The initial errors that induce a significant "spring predictability barrier" for El Niño events and their implications for target observation: results from an earth system model

    NASA Astrophysics Data System (ADS)

    Hu, Junya; Duan, Wansuo

    2016-04-01

    The National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) is used to study the "spring predictability barrier" (SPB) problem for El Niño events from the perspective of initial error growth. By conducting perfect model predictability experiments, we obtain two types of initial sea temperature errors, which often exhibit obvious season-dependent evolution and cause a significant SPB when predicting the onset of El Niño events bestriding spring. One type of initial errors possesses a sea surface temperature anomaly (SSTA) pattern with negative anomalies in the central-eastern equatorial Pacific, plus a basin-wide dipolar subsurface temperature anomaly pattern with negative anomalies in the upper layers of the eastern equatorial Pacific and positive anomalies in the lower layers of the western equatorial Pacific. The other type consists of an SSTA component with positive anomalies over the southeastern equatorial Pacific, plus a large-scale zonal dipole pattern of the subsurface temperature anomaly with positive anomalies in the upper layers of the eastern equatorial Pacific and negative anomalies in the lower layers of the central-western equatorial Pacific. Both exhibit a La Niña-like evolving mode and cause an under-prediction for Niño-3 SSTA of El Niño events. For the former initial error type, the resultant prediction errors grow in a manner similar to the behavior of the growth phase of La Niña; while for the latter initial error type, they experience a process that is similar to El Niño decay and transition to a La Niña growth phase. Both types of initial errors cause negative prediction errors of Niño-3 SSTA for El Niño events. The prediction errors for Niño-3 SSTA are mainly due to the contribution of initial sea temperature errors in the large-error-related regions in the upper layers of the eastern tropical Pacific and/or in the lower layers of the western tropical Pacific. These regions may represent ''sensitive

  8. The initial errors that induce a significant "spring predictability barrier" for El Niño events and their implications for target observation: results from an earth system model

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Hu, Junya

    2016-06-01

    The National Center for Atmospheric Research Community Earth System Model is used to study the "spring predictability barrier" (SPB) problem for El Niño events from the perspective of initial error growth. By conducting perfect model predictability experiments, we obtain two types of initial sea temperature errors, which often exhibit obvious season-dependent evolution and cause a significant SPB when predicting the onset of El Niño events that straddle spring. One type of initial error possesses a sea surface temperature anomaly (SSTA) pattern with negative anomalies in the central-eastern equatorial Pacific, plus a basin-wide dipolar subsurface temperature anomaly pattern with negative anomalies in the upper layers of the eastern equatorial Pacific and positive anomalies in the lower layers of the western equatorial Pacific. The other type consists of an SSTA component with positive anomalies over the southeastern equatorial Pacific, plus a large-scale zonal dipole pattern of the subsurface temperature anomaly with positive anomalies in the upper layers of the eastern equatorial Pacific and negative anomalies in the lower layers of the central-western equatorial Pacific. Both exhibit a La Niña-like evolving mode and cause an under-prediction of the Niño-3 SSTA of El Niño events. For the former initial error type, the resultant prediction errors grow in a manner similar to the growth phase of La Niña; for the latter type, they follow a process similar to an El Niño decay and transition to a La Niña growth phase. Both types of initial errors cause negative prediction errors of the Niño-3 SSTA for El Niño events. The prediction errors for the Niño-3 SSTA are mainly due to the contribution of initial sea temperature errors in the large-error-related regions in the upper layers of the eastern tropical Pacific and/or in the lower layers of the western tropical Pacific. These regions may represent "sensitive areas" for El

  9. Constraint on Absolute Accuracy of Metacomprehension Assessments: The Anchoring and Adjustment Model vs. the Standards Model

    ERIC Educational Resources Information Center

    Kwon, Heekyung

    2011-01-01

    The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…

  10. Spatially resolved absolute spectrophotometry of Saturn - 3390 to 8080 A

    NASA Technical Reports Server (NTRS)

    Bergstralh, J. T.; Diner, D. J.; Baines, K. H.; Neff, J. S.; Allen, M. A.; Orton, G. S.

    1981-01-01

    A series of spatially resolved absolute spectrophotometric measurements of Saturn was conducted for the expressed purpose of calibrating the data obtained with the Imaging Photopolarimeter (IPP) on Pioneer 11 during its recent encounter with Saturn. All observations reported were made at the Mt. Wilson 1.5-m telescope, using a 1-m Ebert-Fastie scanning spectrometer. Spatial resolution was 1.92 arcsec. Photometric errors are considered, taking into account the fixed error, the variable error, and the composite error. The results are compared with earlier observations, as well as with synthetic spectra derived from preliminary physical models, giving attention to the equatorial region and the South Temperate Zone.

  11. Absolute neutrino mass measurements

    NASA Astrophysics Data System (ADS)

    Wolf, Joachim

    2011-10-01

    The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass, of 2.2 eV, have been set by two experiments in Mainz and Troitsk, using tritium as the beta emitter. The next-generation tritium β-decay experiment KATRIN is currently under construction in Karlsruhe, Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude, to 0.2 eV. The investigation of a second isotope (187Re) is being pursued by the international MARE collaboration, using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.

  12. Absolute neutrino mass measurements

    SciTech Connect

    Wolf, Joachim

    2011-10-06

    The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass, of 2.2 eV, have been set by two experiments in Mainz and Troitsk, using tritium as the beta emitter. The next-generation tritium β-decay experiment KATRIN is currently under construction in Karlsruhe, Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude, to 0.2 eV. The investigation of a second isotope (187Re) is being pursued by the international MARE collaboration, using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.

  13. An absolute radius scale for Saturn's rings

    NASA Technical Reports Server (NTRS)

    Nicholson, Philip D.; Cooke, Maren L.; Pelton, Emily

    1990-01-01

    Radio and stellar occultation observations of Saturn's rings made by the Voyager spacecraft are discussed. The data reveal systematic discrepancies of almost 10 km in some parts of the rings, limiting some of the investigations. A revised solution for Saturn's rotation pole has been proposed which removes the discrepancies between the stellar and radio occultation profiles. Corrections to previously published radii vary from -2 to -10 km for the radio occultation, and +5 to -6 km for the stellar occultation. An examination of spiral density waves in the outer A Ring supports the conclusion that the revised absolute radii are in error by no more than 2 km.

  14. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
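The area relation described in this abstract can be sketched numerically. Below is a minimal illustration, assuming a hypothetical apparatus constant and made-up displacement data; a real Faraday-method setup would determine the constant absolutely from its geometry and field gradient.

```python
import numpy as np

def susceptibility_from_sweep(distance_m, displacement_m, k_apparatus=1.0):
    """Estimate susceptibility as an apparatus constant times the area under
    the sample-displacement vs. magnet-distance curve (trapezoidal rule).
    k_apparatus is a hypothetical placeholder, not a value from the paper."""
    d = np.asarray(distance_m, dtype=float)
    x = np.asarray(displacement_m, dtype=float)
    area = 0.5 * np.sum((x[1:] + x[:-1]) * np.diff(d))  # trapezoidal area
    return k_apparatus * area

# Hypothetical sweep: displacement decays as the magnet is withdrawn.
d = np.linspace(0.01, 0.10, 50)            # magnet-sample distance (m)
x = 1e-6 * np.exp(-50.0 * (d - 0.01))      # sample displacement (m)
chi = susceptibility_from_sweep(d, x)
```

Because the integrand here is a known exponential, the trapezoidal estimate can be checked against the analytic area, which is how one would verify the numerical step in practice.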

  15. Learning in the temporal bisection task: Relative or absolute?

    PubMed

    de Carvalho, Marilia Pinheiro; Machado, Armando; Tonneau, François

    2016-01-01

    We examined whether temporal learning in a bisection task is absolute or relational. Eight pigeons learned to choose a red key after a t-seconds sample and a green key after a 3t-seconds sample. To determine whether they had learned a relative mapping (short→Red, long→Green) or an absolute mapping (t-seconds→Red, 3t-seconds→Green), the pigeons then learned a series of new discriminations in which either the relative or the absolute mapping was maintained. Results showed that the generalization gradient obtained at the end of a discrimination predicted the pattern of choices made during the first session of a new discrimination. Moreover, most acquisition curves and generalization gradients were consistent with the predictions of the learning-to-time model, a Spencean model that instantiates absolute learning with temporal generalization. In the bisection task, the basis of temporal discrimination seems to be absolute, not relational. PMID:26752233

  16. Absolute Identification by Relative Judgment

    ERIC Educational Resources Information Center

    Stewart, Neil; Brown, Gordon D. A.; Chater, Nick

    2005-01-01

    In unidimensional absolute identification tasks, participants identify stimuli that vary along a single dimension. Performance is surprisingly poor compared with discrimination of the same stimuli. Existing models assume that identification is achieved using long-term representations of absolute magnitudes. The authors propose an alternative…

  17. Be Resolute about Absolute Value

    ERIC Educational Resources Information Center

    Kidd, Margaret L.

    2007-01-01

    This article explores how conceptualization of absolute value can start long before it is introduced. The manner in which absolute value is introduced to students in middle school has far-reaching consequences for their future mathematical understanding. It begins to lay the foundation for students' understanding of algebra, which can change…

  18. Individual differences in reward prediction error: contrasting relations between feedback-related negativity and trait measures of reward sensitivity, impulsivity and extraversion

    PubMed Central

    Cooper, Andrew J.; Duke, Éilish; Pickering, Alan D.; Smillie, Luke D.

    2014-01-01

    Medial-frontal negativity occurring ∼200–300 ms post-stimulus in response to motivationally salient stimuli, usually referred to as feedback-related negativity (FRN), appears to be at least partly modulated by dopaminergic-based reward prediction error (RPE) signaling. Previous research (e.g., Smillie et al., 2011) has shown that higher scores on a putatively dopaminergic-based personality trait, extraversion, were associated with a more pronounced difference wave contrasting unpredicted non-reward and unpredicted reward trials on an associative learning task. In the current study, we sought to extend this research by comparing how trait measures of reward sensitivity, impulsivity and extraversion related to the FRN using the same associative learning task. A sample of healthy adults (N = 38) completed a battery of personality questionnaires, before completing the associative learning task while EEG was recorded. As expected, FRN was most negative following unpredicted non-reward. A difference wave contrasting unpredicted non-reward and unpredicted reward trials was calculated. Extraversion, but not measures of impulsivity, had a significant association with this difference wave. Further, the difference wave was significantly related to a measure of anticipatory pleasure, but not consummatory pleasure. These findings provide support for the existing evidence suggesting that variation in dopaminergic functioning in brain “reward” pathways may partially underpin associations between the FRN and trait measures of extraversion and anticipatory pleasure. PMID:24808845
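The difference-wave computation described in this abstract can be sketched as follows; the epoch parameters, amplitudes and noise level are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                   # sampling rate (Hz), hypothetical
t = np.arange(-0.2, 0.6, 1 / fs)           # epoch time axis (s)

def simulate_trials(n_trials, frn_amp):
    """Toy ERP: a deflection ~250 ms post-feedback plus Gaussian noise.
    Amplitude and latency are illustrative, not empirical values."""
    frn = frn_amp * np.exp(-((t - 0.25) ** 2) / (2 * 0.03 ** 2))
    return frn + rng.normal(0.0, 2.0, size=(n_trials, t.size))

erp_nonreward = simulate_trials(40, -6.0).mean(axis=0)  # unpredicted non-reward
erp_reward = simulate_trials(40, -1.0).mean(axis=0)     # unpredicted reward

# Difference wave used to index RPE-related FRN modulation.
diff_wave = erp_nonreward - erp_reward
frn_window = (t >= 0.2) & (t <= 0.3)
frn_score = diff_wave[frn_window].mean()   # more negative => larger FRN
```

In a study like the one above, a per-participant `frn_score` of this kind is what would be correlated with trait questionnaire measures.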

  19. Absolute surface metrology by rotational averaging in oblique incidence interferometry.

    PubMed

    Lin, Weihao; He, Yumei; Song, Li; Luo, Hongxin; Wang, Jie

    2014-06-01

    A modified method for measuring the absolute figure of a large optical flat for synchrotron radiation with a small-aperture interferometer is presented. The method consists of two procedures: the first step is an oblique incidence measurement; the second is multiple rotating measurements. This simple method is described in terms of functions that are symmetric or antisymmetric with respect to reflections at the vertical axis. Absolute deviations of a large flat surface can be obtained when mirror antisymmetric errors are removed by N-position rotational averaging. Formulas are derived for measuring the absolute surface errors of a rectangular flat, and experiments on high-accuracy rectangular flats are performed to verify the method. Finally, uncertainty analysis is carried out in detail. PMID:24922410

  20. The Implications for Higher-Accuracy Absolute Measurements for NGS and its GRAV-D Project

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Winester, D.; Roman, D. R.; Eckl, M. C.; Smith, D. A.

    2013-12-01

    Absolute and relative gravity measurements play an important role in the work of NOAA's National Geodetic Survey (NGS). When NGS decided to replace the US national vertical datum, the Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project added a new dimension to the NGS gravity program. Airborne gravity collection would complement existing satellite and surface gravity data to allow the creation of a gravimetric geoid sufficiently accurate to form the basis of the new reference surface. To provide absolute gravity ties for the airborne surveys, initially new FG5 absolute measurements were made at existing absolute stations and relative measurements were used to transfer those measurements to excenters near the absolute mark and to the aircraft sensor height at the parking space. In 2011, NGS obtained a field-capable A10 absolute gravimeter from Micro-g LaCoste which became the basis of the support of the airborne surveys. Now A10 measurements are made at the aircraft location and transferred to sensor height. Absolute and relative gravity play other roles in GRAV-D. Comparison of surface data with new airborne collection will highlight surface surveys with bias or tilt errors and can provide enough information to repair or discard the data. We expect that areas of problem surface data may be re-measured. The GRAV-D project also plans to monitor the geoid in regions of rapid change and update the vertical datum when appropriate. Geoid change can result from glacial isostatic adjustment (GIA), tectonic change, and the massive drawdown of large scale aquifers. The NGS plan for monitoring these changes over time is still in its preliminary stages and is expected to rely primarily on the GRACE and GRACE Follow On satellite data in conjunction with models of GIA and tectonic change. We expect to make absolute measurements in areas of rapid change in order to verify model predictions. With the opportunities presented by rapid, highly accurate

  1. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  2. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  3. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits and the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. The predicted series is close to the original series, which indicates a good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
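The point-forecast validation metrics named in this abstract can be computed directly; the observations and predictions below are hypothetical stand-ins, not the Yamuna River series.

```python
import numpy as np

def validation_metrics(observed, predicted):
    """Standard error metrics for validating a fitted time-series model
    against held-out observations."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = observed - predicted
    pct = 100.0 * np.abs(err) / np.abs(observed)  # assumes no zero observations
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),   # root mean square error
        "MAE": float(np.mean(np.abs(err))),          # mean absolute error
        "MaxAE": float(np.max(np.abs(err))),         # maximum absolute error
        "MAPE": float(np.mean(pct)),                 # mean absolute % error
        "MaxAPE": float(np.max(pct)),                # maximum absolute % error
    }

# Hypothetical monthly pH observations vs. ARIMA predictions.
obs = [7.8, 7.6, 7.9, 8.1, 7.7, 7.5]
pred = [7.7, 7.7, 8.0, 8.3, 7.8, 7.5]
m = validation_metrics(obs, pred)
```

Note that MAPE and maximum APE are scale-free, which is why abstracts like this one report them alongside the absolute-unit metrics.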

  4. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
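The contrast between total-error and local-error reduction can be sketched with simple delta-rule updates; the overshadowing design, learning rate and trial count below are illustrative, not the exact models compared in the paper.

```python
import numpy as np

def train(cues, outcomes, lr=0.1, local=False):
    """Delta-rule associative learning over a sequence of trials.
    local=False: total-error reduction (Rescorla-Wagner style), where each
    present cue is updated with the error of the summed prediction.
    local=True: local-error reduction, where each cue is updated with the
    error of its own prediction alone."""
    w = np.zeros(cues.shape[1])
    for x, outcome in zip(cues, outcomes):
        if local:
            err = (outcome - w) * x        # one error per cue (LER)
        else:
            err = (outcome - w @ x) * x    # one shared error (TER)
        w = w + lr * err
    return w

# Overshadowing design: cues A and B always appear together, outcome = 1.
cues = np.tile([1.0, 1.0], (200, 1))
outcomes = np.ones(200)
w_ter = train(cues, outcomes)              # TER: cues divide the outcome
w_ler = train(cues, outcomes, local=True)  # LER: each cue learns it fully
```

The divergent asymptotes (shared vs. full associative strength per cue) are the kind of qualitative prediction that lets compound-conditioning data discriminate the two assumptions.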

  5. Absolute radiometric calibration of advanced remote sensing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1982-01-01

    The distinction between the uses of relative and absolute spectroradiometric calibration of remote sensing systems is discussed. The advantages of detector-based absolute calibration are described, and the categories of relative and absolute system calibrations are listed. The limitations and problems associated with three common methods used for the absolute calibration of remote sensing systems are addressed. Two methods are proposed for the in-flight absolute calibration of advanced multispectral linear array systems. One makes use of a sun-illuminated panel in front of the sensor, the radiance of which is monitored by a spectrally flat pyroelectric radiometer. The other uses a large, uniform, high-radiance reference ground surface. The ground and atmospheric measurements required as input to a radiative transfer program to predict the radiance level at the entrance pupil of the orbital sensor are discussed, and the ground instrumentation is described.

  6. Absolute Radiometer for Reproducing the Solar Irradiance Unit

    NASA Astrophysics Data System (ADS)

    Sapritskii, V. I.; Pavlovich, M. N.

    1989-01-01

    A high-precision absolute radiometer with a thermally stabilized cavity as the receiving element has been designed for use in solar irradiance measurements. The State Special Standard of the Solar Irradiance Unit has been built on the basis of the developed absolute radiometer. The Standard also includes a sun-tracking system and a system for automatic thermal stabilization and information processing, comprising a built-in microcalculator which calculates the irradiance according to the input program. During metrological certification of the Standard, the main error sources were analysed and the non-excluded systematic and random errors of the irradiance-unit realization were determined. The total error of the Standard does not exceed 0.3%. Beginning in 1984, the Standard has taken part in comparisons with the Å 212 pyrheliometer and other Soviet and foreign standards. In 1986 it took part in the international comparison of absolute radiometers and standard pyrheliometers of socialist countries. The results of the comparisons proved the high metrological quality of this Standard based on an absolute radiometer.

  7. Phase errors in high line density CGH used for aspheric testing: beyond scalar approximation.

    PubMed

    Peterhänsel, S; Pruss, C; Osten, W

    2013-05-20

    One common way to measure asphere and freeform surfaces is the interferometric Null test, where a computer generated hologram (CGH) is placed in the object path of the interferometer. If undetected phase errors are present in the CGH, the measurement will show systematic errors. Therefore the absolute phase of this element has to be known. This phase is often calculated using scalar diffraction theory. In this paper we discuss the limitations of this theory for the prediction of the absolute phase generated by different implementations of CGH. Furthermore, for regions where scalar approximation is no longer valid, rigorous simulations are performed to identify phase sensitive structure parameters and evaluate fabrication tolerances for typical gratings. PMID:23736387

  8. Absolute absorption on the potassium D lines: theory and experiment

    NASA Astrophysics Data System (ADS)

    Hanley, Ryan K.; Gregory, Philip D.; Hughes, Ifan G.; Cornish, Simon L.

    2015-10-01

    We present a detailed study of the absolute Doppler-broadened absorption of a probe beam scanned across the potassium D lines in a thermal vapour. Spectra using a weak probe were measured on the 4S → 4P transition and compared to the theoretical model of the electric susceptibility detailed by Zentile et al (2015 Comput. Phys. Commun. 189 162-74) in the code named ElecSus. Comparisons were also made on the 4S → 5P transition with an adapted version of ElecSus. This is the first experimental test of ElecSus on an atom with a ground-state hyperfine splitting smaller than the Doppler width. An excellent agreement was found between ElecSus and experimental measurements at a variety of temperatures, with rms errors of ∼10⁻³. We have also demonstrated the use of ElecSus as an atomic vapour thermometry tool, and present a possible new measurement technique for transition decay rates which we predict to have a precision of ∼3 kHz.

  9. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

  10. Absolute geostrophic currents in global tropical oceans

    NASA Astrophysics Data System (ADS)

    Yang, Lina; Yuan, Dongliang

    2016-03-01

    A set of absolute geostrophic current (AGC) data for the period January 2004 to December 2012 are calculated using the P-vector method based on monthly gridded Argo profiles in the world tropical oceans. The AGCs agree well with altimeter geostrophic currents, Ocean Surface Current Analysis-Real time currents, and moored current-meter measurements at 10-m depth, based on which the classical Sverdrup circulation theory is evaluated. Calculations have shown that errors of wind stress calculation, AGC transport, and depth ranges of vertical integration cannot explain non-Sverdrup transport, which is mainly in the subtropical western ocean basins and equatorial currents near the Equator in each ocean basin (except the North Indian Ocean, where the circulation is dominated by monsoons). The identified non-Sverdrup transport is thereby robust and attributed to the joint effect of baroclinicity and relief of the bottom (JEBAR) and mesoscale eddy nonlinearity.

  11. Stitching interferometry: recent results and absolute calibration

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2004-02-01

    Stitching Interferometry is a method of analysing large optical components using a standard "small" interferometer. This result is obtained by taking multiple overlapping images of the large component and numerically "stitching" these sub-apertures together. We have already reported on the industrial use of our Stitching Interferometry systems at previous SPIE symposia, but experimental results have been lacking because this technique is still new, and users needed to become accustomed to it before producing reliable measurements. We now have more results. We will report user comments and show new, unpublished results. We will discuss sources of error, and show how some of these can be reduced to arbitrarily small values. These will be discussed in some detail. We conclude with a few graphical examples of absolute measurements performed by us.

  12. Blood pressure targets and absolute cardiovascular risk.

    PubMed

    Odutayo, Ayodele; Rahimi, Kazem; Hsiao, Allan J; Emdin, Connor A

    2015-08-01

    In the Eighth Joint National Committee guideline on hypertension, the threshold for the initiation of blood pressure-lowering treatment for elderly adults (≥60 years) without chronic kidney disease or diabetes mellitus was raised from 140/90 mm Hg to 150/90 mm Hg. However, the committee was not unanimous in this decision, particularly because a large proportion of adults ≥60 years may be at high cardiovascular risk. On the basis of the Eighth Joint National Committee guideline, we sought to determine the absolute 10-year risk of cardiovascular disease among these adults through analyzing the National Health and Nutrition Examination Survey (2005-2012). The primary outcome measure was the proportion of adults who were at ≥20% predicted absolute cardiovascular risk and above goals for the Seventh Joint National Committee guideline but reclassified as at target under the Eighth Joint National Committee guideline (reclassified). The Framingham General Cardiovascular Disease Risk Score was used. From 2005 to 2012, the surveys included 12 963 adults aged 30 to 74 years with blood pressure measurements, of which 914 were reclassified based on the guideline. Among individuals reclassified as not in need of additional treatment, the proportion of adults 60 to 74 years without chronic kidney disease or diabetes mellitus at ≥20% absolute risk was 44.8%. This corresponds to 0.8 million adults. The proportion at high cardiovascular risk remained sizable among adults who were not receiving blood pressure-lowering treatment. Taken together, a sizable proportion of reclassified adults 60 to 74 years without chronic kidney disease or diabetes mellitus was at ≥20% absolute cardiovascular risk. PMID:26056340

  13. A Simple Model Predicting Individual Weight Change in Humans.

    PubMed

    Thomas, Diana M; Martin, Corby K; Heymsfield, Steven; Redman, Leanne M; Schoeller, Dale A; Levine, James A

    2011-11-01

    Excessive weight in adults is a national concern with over 2/3 of the US population deemed overweight. Because being overweight has been correlated to numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates to final weight data from two recent underfeeding studies and one overfeeding study. The mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating the reliability of individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319

  14. A Simple Model Predicting Individual Weight Change in Humans

    PubMed Central

    Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.

    2010-01-01

    Excessive weight in adults is a national concern with over 2/3 of the US population deemed overweight. Because being overweight has been correlated to numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates to final weight data from two recent underfeeding studies and one overfeeding study. The mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating the reliability of individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319
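A generic one-compartment energy-balance sketch illustrates the kind of differential equation model described above; the tissue energy density, the expenditure law and all parameter values are assumed round numbers, not the authors' fitted model.

```python
def simulate_weight(w0_kg, intake_kcal_day, days,
                    rmr_kcal_per_kg=22.0, activity=1.5,
                    rho_kcal_per_kg=7700.0, dt=1.0):
    """Euler integration of dW/dt = (intake - expenditure) / rho, with
    expenditure modeled as an activity multiplier times a weight-proportional
    resting metabolic rate. All parameter values are illustrative
    assumptions, not the published model's constants."""
    w = w0_kg
    for _ in range(int(days / dt)):
        expenditure = activity * rmr_kcal_per_kg * w   # kcal/day
        w += dt * (intake_kcal_day - expenditure) / rho_kcal_per_kg
    return w

# Hypothetical underfeeding scenario: 90 kg adult, 2000 kcal/day, 180 days.
final = simulate_weight(90.0, 2000.0, 180)
```

Because expenditure falls as weight falls, the trajectory decays exponentially toward an equilibrium weight (here intake divided by the per-kilogram expenditure rate) rather than losing weight linearly, which is the qualitative behaviour such models are built to capture.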

  15. Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques

    NASA Astrophysics Data System (ADS)

    Amoush, Ahmad

    The American Association of Physicists in Medicine Task Group Report 43 (AAPM TG-43) and its updated version, TG-43U1, rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze the uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber* and one-dose MOSFET detectors† were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainties associated with each experimental technique were analyzed quantitatively using MCNPX 2.6‡ to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and the Tandem effect. Absolute dose calculations for clinical use are based on the Treatment Planning System (TPS), with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 MicroChamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1 cm, 2 cm, 3 cm, 4 cm, and 5 cm, respectively. Predicting the dose beyond 5 cm is complicated by the low signal-to-noise ratio, cable effect, and stem effect for the A16 MicroChamber. Since dose beyond 5 cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1 cm to 7 cm along the transverse plane. The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1 cm, 2 cm, 3 cm

  16. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  17. Prospective errors determine motor learning.

    PubMed

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model's novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  18. Absolute Stability Analysis of a Phase Plane Controlled Spacecraft

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Plummer, Michael; Bedrossian, Nazareth; Hall, Charles; Jackson, Mark; Spanos, Pol

    2010-01-01

    Many aerospace attitude control systems utilize phase plane control schemes that include nonlinear elements such as dead zone and ideal relay. To evaluate phase plane control robustness, stability margin prediction methods must be developed. Absolute stability is extended to predict stability margins and to define an abort condition. A constrained optimization approach is also used to design flex filters for roll control. The design goal is to optimize vehicle tracking performance while maintaining adequate stability margins. Absolute stability is shown to provide satisfactory stability constraints for the optimization.

  19. Absolute measurement of the extreme UV solar flux

    NASA Technical Reports Server (NTRS)

    Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.

    1984-01-01

    A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575-A region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 A), normalized to unit solar distance, was found to be (5.71 ± 0.42) × 10^10 photons per sq cm per sec.

  20. Mathematical Model for Absolute Magnetic Measuring Systems in Industrial Applications

    NASA Astrophysics Data System (ADS)

    Fügenschuh, Armin; Fügenschuh, Marzena; Ludszuweit, Marina; Mojsic, Aleksandar; Sokół, Joanna

    2015-09-01

    Scales for measuring systems are based on either incremental or absolute measuring methods. Incremental scales need to initialize a measurement cycle at a reference point; from there, the position is computed by counting increments of a periodic graduation. Absolute methods do not need reference points, since the position can be read directly from the scale. Conventionally, positions on the complete scale are encoded using two incremental tracks with different graduations. We present a new method for absolute measuring that uses only one track for position encoding, with accuracy down to the micrometre range. Instead of the common perpendicular magnetic areas, we use a pattern of trapezoidal magnetic areas to store more complex information. For positioning, we use the magnetic field, in which every position is characterized by a set of values measured by a Hall sensor array. We implement a method for reconstructing absolute positions from the set of unique measured values. We compare two patterns with respect to uniqueness, accuracy, stability, and robustness of positioning. We discuss how stability and robustness are influenced by different errors during measurement in real applications and how those errors can be compensated.
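
    A minimal sketch of the single-track absolute-encoding idea described above, under the simplifying assumption of a binary code track (an assumption of this example; the authors' actual scheme uses trapezoidal magnetic areas read by a Hall sensor array): a scale is absolute precisely when every sensor-array window of readings occurs only once, so a single snapshot determines position.

    ```python
    # Hypothetical binary code track; the windows() length n plays the
    # role of the number of sensors in the array.

    def windows(track, n):
        """All length-n windows of a non-cyclic code track."""
        return [tuple(track[i:i + n]) for i in range(len(track) - n + 1)]

    def is_absolute(track, n):
        """True if each window (sensor-array snapshot) occurs exactly once,
        so a single reading determines position unambiguously."""
        ws = windows(track, n)
        return len(ws) == len(set(ws))

    # A short track in which every 3-bit window is distinct ...
    good = [0, 0, 0, 1, 0, 1, 1, 1]
    # ... and one in which windows repeat, creating positional ambiguity.
    bad = [0, 1, 0, 1, 0, 1]
    ```

    Checking this uniqueness property over all positions corresponds to the comparison of candidate patterns "with respect to uniqueness" mentioned in the abstract.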

  1. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    SciTech Connect

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular those that use adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.

  2. Absolute optical surface measurement with deflectometry

    NASA Astrophysics Data System (ADS)

    Li, Wansong; Sandner, Marc; Gesierich, Achim; Burke, Jan

    Deflectometry utilises the deformation and displacement of a sample pattern after reflection from a test surface to infer the surface slopes. Differentiation of the measurement data leads to a curvature map, which is very useful for surface quality checks with sensitivity down to the nanometre range. Integration of the data allows reconstruction of the absolute surface shape, but the procedure is very error-prone because systematic errors may add up to large shape deviations. In addition, there are infinitely many combinations of slope and object distance that satisfy a given observation. One solution to this ambiguity is to include information on the object's distance, which must be known very accurately. Two laser pointers can be used for positioning the object, and we also show how a confocal chromatic distance sensor can be used to define a reference point on a smooth surface from which the integration can be started. The integration algorithm used works without symmetry constraints and is therefore suitable for free-form surfaces as well. Unlike null testing, deflectometry also determines radius of curvature (ROC) or focal lengths as a direct result of the 3D surface reconstruction. This is shown by the example of a 200 mm diameter telescope mirror, whose ROC measurements by coordinate measurement machine and deflectometry coincide to within 0.27 mm (or a sag error of 1.3 μm). By the example of a diamond-turned off-axis parabolic mirror, we demonstrate that the figure measurement uncertainty comes close to that of a well-calibrated Fizeau interferometer.

  3. Optomechanics for absolute rotation detection

    NASA Astrophysics Data System (ADS)

    Davuluri, Sankar

    2016-07-01

    In this article, we present an application of an optomechanical cavity to absolute rotation detection. The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the cavity. The change in cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to this frequency shift is measured at the interferometer output to estimate the angular velocity of the absolute rotation. We derive an analytic expression for the minimum detectable rotation rate in our scheme for a given optomechanical cavity. The temperature dependence of the rotation detection sensitivity is also studied.
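
    As a hedged order-of-magnitude sketch of the mechanism described above (all parameter values below are hypothetical, not taken from the paper): rotation at angular velocity omega produces a centrifugal force F = m·omega²·r on the movable mirror, a static displacement ΔL = F/k against the spring constant, and a cavity-mode frequency shift of magnitude ν·ΔL/L.

    ```python
    # Illustrative numbers only: mirror mass m (kg), mounting radius r (m),
    # effective spring constant k (N/m), cavity length L (m), and optical
    # mode frequency nu (Hz) are assumptions for this sketch.

    def cavity_frequency_shift(omega, m=1e-9, r=0.1, k=1.0, L=0.01, nu=2.8e14):
        """Magnitude (Hz) of the cavity-mode frequency shift for a
        rotation rate omega (rad/s)."""
        force = m * omega ** 2 * r    # centrifugal force on the mirror (N)
        dL = force / k                # static mirror displacement (m)
        return nu * dL / L            # fractional length change times mode frequency

    shift_1 = cavity_frequency_shift(1.0)
    shift_2 = cavity_frequency_shift(2.0)
    # The shift scales quadratically with the rotation rate.
    ```

    The quadratic scaling in omega is why sensitivity analysis for a given cavity reduces to finding the smallest resolvable frequency (or phase) shift.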

  4. The Absolute Spectrum Polarimeter (ASP)

    NASA Technical Reports Server (NTRS)

    Kogut, A. J.

    2010-01-01

    The Absolute Spectrum Polarimeter (ASP) is an Explorer-class mission to map the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10^(-3) and Compton distortion y < 10^(-6). We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  5. Clinical review: Medication errors in critical care

    PubMed Central

    Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas

    2008-01-01

    Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883

  6. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  7. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  8. Evaluation of errors and limits of the 63-μm house-dust-fraction method, a surrogate to predict hidden moisture damage

    PubMed Central

    Baudisch, Christoph; Assadian, Ojan; Kramer, Axel

    2009-01-01

    Background The aim of this study is to analyze possible random and systematic measurement errors and to identify methodological limits of the previously established method. Findings To examine the distribution of random errors (repeatability standard deviation) of the detection procedure, collective samples were taken from two uncontaminated rooms using a sampling vacuum cleaner, and 10 sub-samples each were examined with 3 parallel cultivation plates (DG18). In these two collective samples of new dust, the total counts of Aspergillus spp. varied moderately, by 25 and 29% (both 9 cfu per plate). At an average of 28 cfu/plate, the total count varied by only 13%. To evaluate the influence of old dust, old and fresh dust samples were examined. In both cases, the old dust influenced the results, producing false positives in which hidden moisture was indicated but not present. To quantify the influence of sand and sieving, 13 sites were sampled in parallel using the 63-μm and total dust collection approaches. Sieving to 63 μm resulted in a more than 10-fold enrichment, owing to the different quantity of inert sand in each total dust sample. Conclusion The major errors in the quantitative evaluation of house dust samples for mould fungi as reference values for assessment resulted from missing filtration, contamination with old dust, and the massive influence of soil. If the assessment is guided by indicator genera, the percentage standard deviation lies in a moderate range. PMID:19852825

  9. Toward a cognitive taxonomy of medical errors.

    PubMed Central

    Zhang, Jiajie; Patel, Vimla L.; Johnson, Todd R.; Shortliffe, Edward H.

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions. PMID:12463962

  10. Individual Differences in Absolute and Relative Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Maki, Ruth H.; Shields, Micheal; Wheeler, Amanda Easton; Zacchilli, Tammy Lowery

    2005-01-01

    The authors investigated absolute and relative metacomprehension accuracy as a function of verbal ability in college students. Students read hard texts, revised texts, or a mixed set of texts. They then predicted their performance, took a multiple-choice test on the texts, and made posttest judgments about their performance. With hard texts,…

  11. Atmospheric Predictability: Why Butterflies Are Not Important

    NASA Astrophysics Data System (ADS)

    Durran, D. R.; Gingrich, M.

    2014-12-01

    The spectral turbulence model of Lorenz, as modified for surface quasi-geostrophic dynamics by Rotunno and Snyder, is further modified to more smoothly approach nonlinear saturation. This model is used to investigate error growth starting from different distributions of the initial error. Consistent with an often overlooked finding by Lorenz, the loss of predictability generated by initial errors of small but fixed absolute magnitude is essentially independent of their spatial scale when the background saturation kinetic energy spectrum is proportional to the -5/3 power of the wavenumber. Thus, because the background kinetic energy increases with scale, very small relative errors at long wavelengths have similar impacts on perturbation error growth as large relative errors at short wavelengths. To the extent that this model applies to practical meteorological forecasts, the influence of initial perturbations generated by butterflies would be swamped by unavoidable tiny relative errors in the large scales. The rough applicability of our modified spectral turbulence model to the atmosphere over scales ranging between 10 km and 1000 km is supported by the good estimate it provides for the ensemble error growth in state-of-the-art ensemble mesoscale-model simulations of two winter storms. The initial error spectrum for the ensemble perturbations in these cases has maximum power at the longest wavelengths. The dominance of large-scale errors in the ensemble suggests that mesoscale weather forecasts may often be limited by errors arising from the large scales instead of being produced solely through an upscale cascade from the smallest scales. These results imply that the predictability of small-scale features in the vicinity of topography may be shorter than currently supposed.

  12. The AFGL absolute gravity program

    NASA Technical Reports Server (NTRS)

    Hammond, J. A.; Iliff, R. L.

    1978-01-01

    A brief discussion of the Air Force Geophysics Laboratory (AFGL) program in absolute gravity is presented. Support of outside work and in-house studies relating to gravity instrumentation are discussed. A description of the current transportable system is included and the latest results are presented. These results show good agreement with measurements made at the AFGL site by an Italian system. The accuracy obtained by the transportable apparatus is better than 0.1 μm/s² (10 μGal), and agreement with previous measurements is within the combined uncertainties of the measurements.

  13. Absolute intensity and polarization of rotational Raman scattering from N2, O2, and CO2

    NASA Technical Reports Server (NTRS)

    Penney, C. M.; St.peters, R. L.; Lapp, M.

    1973-01-01

    An experimental examination of the absolute intensity, polarization, and relative line intensities of rotational Raman scattering (RRS) from N2, O2, and CO2 is reported. The absolute scattering intensity for N2 is characterized by its differential cross section for backscattering of incident light at 647.1 nm, which is calculated from basic measured values. The ratio of the corresponding cross section for O2 to that for N2 is 2.50 ± 5 percent. The intensity results for N2, O2, and CO2 are shown to compare favorably with values calculated from recent measurements of the depolarization of Rayleigh scattering plus RRS. Measured depolarizations of various RRS lines agree to within a few percent with the theoretical value of 3/4. Detailed error analyses are presented for the intensity and depolarization measurements. Finally, extensive RRS spectra at nominal gas temperatures of 23 C, 75 C, and 125 C are presented and shown to compare favorably to theoretical predictions.

  14. Absolute calibration of forces in optical tweezers

    NASA Astrophysics Data System (ADS)

    Dutra, R. S.; Viana, N. B.; Maia Neto, P. A.; Nussenzveig, H. M.

    2014-07-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past 15 years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spot, adapting frequently employed video microscopy techniques. Combined with interface spherical aberration, it reveals a previously unknown window of instability for trapping. Comparison with experimental data leads to an overall agreement within error bars, with no fitting, for a broad range of microsphere radii, from the Rayleigh regime to the ray optics one, for different polarizations and trapping heights, including all commonly employed parameter domains. Besides signaling full first-principles theoretical understanding of optical tweezers operation, the results may lead to improved instrument design and control over experiments, as well as to an extended domain of applicability, allowing reliable force measurements, in principle, from femtonewtons to nanonewtons.

  15. Absolute Timing Calibration of the USA Experiment Using Pulsar Observations

    NASA Astrophysics Data System (ADS)

    Ray, P. S.; Wood, K. S.; Wolff, M. T.; Lovellette, M. N.; Sheikh, S.; Moon, D.-S.; Eikenberry, S. S.; Roberts, M.; Lyne, A.; Jordon, C.; Bloom, E. D.; Tournear, D.; Saz Parkinson, P.; Reilly, K.

    2003-03-01

    We update the status of the absolute time calibration of the USA Experiment as determined by observations of X-ray emitting rotation-powered pulsars. The brightest such source is the Crab Pulsar and we have obtained observations of the Crab at radio, IR, optical, and X-ray wavelengths. We directly compare arrival time determinations for 2--10 keV X-ray observations made contemporaneously with the PCA on the Rossi X-ray Timing Explorer and the USA Experiment on ARGOS. These two X-ray measurements employ very different means of measuring time and satellite position and thus have different systematic error budgets. The comparison with other wavelengths requires additional steps such as dispersion measure corrections and a precise definition of the ``peak'' of the light curve since the light curve shape varies with observing wavelength. We will describe each of these effects and quantify the magnitude of the systematic error that each may contribute. We will also include time comparison results for other pulsars, such as PSR B1509-58 and PSR B1821-24. Once the absolute time calibrations are well understood, comparing absolute arrival times at multiple energies can provide clues to the magnetospheric structure and emission region geometry. Basic research on X-ray Astronomy at NRL is funded by NRL/ONR.

  16. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  17. Drifting from Slow to "D'oh!": Working Memory Capacity and Mind Wandering Predict Extreme Reaction Times and Executive Control Errors

    ERIC Educational Resources Information Center

    McVay, Jennifer C.; Kane, Michael J.

    2012-01-01

    A combined experimental, individual-differences, and thought-sampling study tested the predictions of executive attention (e.g., Engle & Kane, 2004) and coordinative binding (e.g., Oberauer, Suss, Wilhelm, & Sander, 2007) theories of working memory capacity (WMC). We assessed 288 subjects' WMC and their performance and mind-wandering rates during…

  18. Cosmology with negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2016-08-01

    Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950s (and has been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] rekindled interest in negative temperatures and hinted at the possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < -1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve the horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.

  19. Digital spatial data for observed, predicted, and misclassification errors for observations in the training dataset for nitrate and arsenic concentrations in basin-fill aquifers in the Southwest Principal Aquifers study area

    USGS Publications Warehouse

    McKinney, Tim S.; Anning, David W.

    2012-01-01

    This product "Digital spatial data for observed, predicted, and misclassification errors for observations in the training dataset for nitrate and arsenic concentrations in basin-fill aquifers in the Southwest Principal Aquifers study area" is a 1:250,000-scale point spatial dataset developed as part of a regional Southwest Principal Aquifers (SWPA) study (Anning and others, 2012). The study examined the vulnerability of basin-fill aquifers in the southwestern United States to nitrate contamination and arsenic enrichment. Statistical models were developed by using the random forest classifier algorithm to predict concentrations of nitrate and arsenic across a model grid that represents local- and basin-scale measures of source, aquifer susceptibility, and geochemical conditions.
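
    A hedged sketch of the modelling approach the dataset description mentions, using scikit-learn's random forest classifier on synthetic data; the predictor names, data, and class definition below are invented for illustration and do not reflect the SWPA study's actual grid attributes or thresholds.

    ```python
    # Illustrative only: synthetic stand-in for the study's training data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 300
    # Hypothetical basin-scale predictors (e.g. recharge, well depth, pH).
    X = rng.normal(size=(n, 3))
    # Synthetic "elevated concentration" class driven mainly by the first
    # predictor, with a little noise.
    y = (X[:, 0] + 0.2 * rng.normal(size=n) > 0).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    train_acc = clf.score(X, y)
    # Misclassified training observations, analogous to the dataset's
    # misclassification-error attribute for the training points.
    misclassified = np.flatnonzero(clf.predict(X) != y)
    ```

    Comparing observed classes, predicted classes, and the misclassified indices for the training observations mirrors the observed/predicted/misclassification-error structure of the published point dataset.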

  20. Statistical errors in Monte Carlo estimates of systematic errors

    NASA Astrophysics Data System (ADS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with the systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
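    The unisim/multisim comparison above can be sketched in a toy numerical model. The code below assumes (purely for illustration) that each data bin shifts linearly with each systematic parameter, with made-up sensitivities; in that linear regime both methods estimate the same systematic covariance matrix S Sᵀ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: the observable in each data bin shifts linearly with each
# systematic parameter (hypothetical sensitivities, for illustration only).
sensitivity = np.array([[0.5, -0.2, 0.1],
                        [0.3,  0.4, -0.3]])   # bins x parameters

def observable(params):
    """Predicted shift of each bin for given systematic parameters."""
    return sensitivity @ params

n_params = sensitivity.shape[1]

# Unisim: one MC run per parameter, varied by +1 standard deviation.
unisim_shifts = np.array([observable(np.eye(n_params)[i])
                          for i in range(n_params)])       # runs x bins
cov_unisim = unisim_shifts.T @ unisim_shifts               # sum of outer products

# Multisim: every run varies all parameters, each drawn from N(0, 1).
n_runs = 10000
draws = rng.standard_normal((n_runs, n_params))
multisim_shifts = draws @ sensitivity.T                    # runs x bins
cov_multisim = multisim_shifts.T @ multisim_shifts / n_runs

print(np.round(cov_unisim, 3))
print(np.round(cov_multisim, 3))
```

In the linear region the unisim sum of outer products is exactly S Sᵀ, while the multisim average converges to it with MC noise of order 1/√(number of runs).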

  1. The absolute radiometric calibration of the advanced very high resolution radiometer

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Teillet, P. M.; Ding, Y.

    1988-01-01

    The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.

  2. Sounding rocket measurement of the absolute solar EUV flux utilizing a silicon photodiode

    NASA Technical Reports Server (NTRS)

    Ogawa, H. S.; Mcmullin, D.; Judge, D. L.; Canfield, L. R.

    1990-01-01

    A newly developed stable and high quantum efficiency silicon photodiode was used to obtain an accurate measurement of the integrated absolute magnitude of the solar extreme UV photon flux in the spectral region between 50 and 800 A. The adjusted daily 10.7-cm solar radio flux and sunspot number were 168.4 and 121, respectively. The unattenuated absolute value of the solar EUV flux at 1 AU in the specified wavelength region was 6.81 x 10^10 photons/sq cm per s. Based on a nominal probable error of 7 percent for National Institute of Standards and Technology detector efficiency measurements in the 50- to 500-A region (5 percent on longer wavelength measurements between 500 and 1216 A), and based on experimental errors associated with the present rocket instrumentation and analysis, a conservative total error estimate of about 14 percent is assigned to the absolute integral solar flux obtained.

  3. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
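    A minimal numerical illustration of why a likelihood treatment can use low-accuracy and negative parallaxes: model the observed parallax (not the distance) as Gaussian around the value implied by the absolute magnitude, and maximize the likelihood over the full sample. The setup below is synthetic and greatly simplified (one shared absolute magnitude, exact apparent magnitudes, a numpy-only grid search), not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sample: one shared true absolute magnitude, exact apparent
# magnitudes, noisy parallaxes (some observed values come out negative).
M_TRUE = 5.0                        # mag
d = rng.uniform(50.0, 200.0, 500)   # true distances, pc
m = M_TRUE + 5 * np.log10(d) - 5    # apparent magnitudes
sigma = 0.005                       # parallax error, arcsec
plx_obs = 1.0 / d + sigma * rng.standard_normal(d.size)

def neg_log_like(M):
    # Model the *observed parallax* as Gaussian around the parallax implied
    # by M (pi = 10**((M - m - 5)/5)), so negative measurements contribute
    # to the fit without any rejection.
    plx_model = 10.0 ** ((M - m - 5.0) / 5.0)
    return np.sum((plx_obs - plx_model) ** 2) / (2.0 * sigma ** 2)

grid = np.linspace(0.0, 10.0, 2001)
M_ml = grid[int(np.argmin([neg_log_like(M) for M in grid]))]

# Naive alternative: discard negative parallaxes and average star-by-star
# magnitudes -- the biased procedure a likelihood treatment avoids.
ok = plx_obs > 0
M_naive = np.mean(m[ok] + 5 * np.log10(plx_obs[ok]) + 5)
print(M_ml, M_naive)
```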

  4. Flow rate calibration for absolute cell counting rationale and design.

    PubMed

    Walker, Clare; Barnett, David

    2006-05-01

    There is a need for absolute leukocyte enumeration in the clinical setting, and accurate, reliable (and affordable) technology to determine absolute leukocyte counts has been developed. Such technology includes single platform and dual platform approaches. Derivations of these counts commonly incorporate the addition of a known number of latex microsphere beads to a blood sample, although it has been suggested that the addition of beads to a sample may only be required to act as an internal quality control procedure for assessing the pipetting error. This unit provides the technical details for undertaking flow rate calibration that obviates the need to add reference beads to each sample. It is envisaged that this report will provide the basis for subsequent clinical evaluations of this novel approach. PMID:18770842
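    The arithmetic behind flow-rate calibration is simple: with a calibrated sample flow rate and a measured acquisition time, the analysed volume is known, and the absolute count follows without reference beads. A sketch with made-up numbers:

```python
# Flow-rate calibration sketch (hypothetical values, not from the unit):
# absolute count from events acquired over a known time at a calibrated
# sample flow rate, instead of ratioing against reference beads.
events = 12500              # gated leukocyte events
flow_rate_ul_min = 60.0     # calibrated sample flow rate, uL/min
time_s = 30.0               # acquisition time, s

volume_ul = flow_rate_ul_min * time_s / 60.0
cells_per_ul = events / volume_ul
print(f"{cells_per_ul:.0f} cells/uL")
```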

  5. Pole coordinates data prediction by combination of least squares extrapolation and double autoregressive prediction

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw

    2016-04-01

    Future Earth Orientation Parameters data are needed to compute the real-time transformation between the celestial and terrestrial reference frames. This transformation is realized by predictions of x, y pole coordinates data, UT1-UTC data and a precession-nutation extrapolation model. This paper is focused on pole coordinates data prediction by a combination of the least-squares (LS) extrapolation and autoregressive (AR) prediction models (LS+AR). The AR prediction, which is applied to the LS extrapolation residuals of pole coordinates data, is not able to predict all of their frequency bands and is mostly tuned to predict subseasonal oscillations. The absolute values of differences between pole coordinates data and their LS+AR predictions increase with prediction length and depend mostly on the starting prediction epochs, thus time series of these differences for 2, 4 and 8 weeks in the future were analyzed. Time-frequency spectra of these differences for different prediction lengths are very similar, showing some power in the frequency band corresponding to the prograde Chandler and annual oscillations, which means that the increase of prediction errors is caused by mismodelling of these oscillations by the LS extrapolation model. Thus, the LS+AR prediction method can be modified by taking into account an additional AR prediction correction computed from time series of these prediction differences for different prediction lengths. This additional AR prediction is mostly tuned to the seasonal frequency band of pole coordinates data.
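    The LS+AR scheme can be sketched on synthetic data: fit a least-squares model (trend plus annual and Chandler harmonics), fit an autoregressive model to the LS residuals, and add the AR forecast to the LS extrapolation. Everything below (amplitudes, AR order, horizon) is illustrative, not the operational setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pole-coordinate-like series: trend + annual and Chandler
# (~433 d) oscillations + correlated AR(1) noise (made-up amplitudes).
t_all = np.arange(3060.0)                       # days
w_a, w_c = 2 * np.pi / 365.25, 2 * np.pi / 433.0
noise = np.zeros(t_all.size)
for i in range(1, t_all.size):
    noise[i] = 0.9 * noise[i - 1] + 0.005 * rng.standard_normal()
x_all = (0.05 + 1e-5 * t_all + 0.10 * np.sin(w_a * t_all)
         + 0.15 * np.cos(w_c * t_all) + noise)
t, x = t_all[:3000], x_all[:3000]               # training part

def design(tt):
    return np.column_stack([np.ones_like(tt), tt,
                            np.sin(w_a * tt), np.cos(w_a * tt),
                            np.sin(w_c * tt), np.cos(w_c * tt)])

# 1) Least-squares extrapolation model.
coef, *_ = np.linalg.lstsq(design(t), x, rcond=None)
resid = x - design(t) @ coef

# 2) AR model fitted to the LS residuals (order 3, chosen arbitrarily).
p = 3
lagged = np.column_stack([resid[p - 1 - k:resid.size - 1 - k]
                          for k in range(p)])
ar, *_ = np.linalg.lstsq(lagged, resid[p:], rcond=None)

# 3) LS extrapolation + AR forecast of the residuals, 60 days ahead.
hist = list(resid[-p:])
for _ in range(60):
    hist.append(float(np.dot(ar, hist[-1:-p - 1:-1])))
pred = design(t_all[3000:]) @ coef + np.array(hist[p:])

rmse = np.sqrt(np.mean((x_all[3000:] - pred) ** 2))
print(f"60-day prediction RMSE: {rmse:.4f}")
```

The AR correction matters mostly at short horizons: its forecast decays toward zero, after which the error is dominated by the stationary noise level and any mismodelled harmonics, mirroring the behaviour described in the abstract.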

  6. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
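    The ensemble idea can be shown in miniature: fit many Tikhonov-regularized models to bootstrap resamples of the data and use the spread of their predictions as an error bar. This is a toy regression stand-in for the functional-ensemble construction, not the actual BEEF fit; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data from a known quadratic plus noise (illustrative only).
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x - 1.5 * x ** 2 + 0.1 * rng.standard_normal(x.size)
X = np.vander(x, 3)                          # quadratic features

def fit_tikhonov(X, y, lam=1e-6):
    """Tikhonov-regularized least squares (ridge regression)."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Ensemble of models, one per bootstrap resample of the training data.
ensemble = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)    # bootstrap resample
    ensemble.append(fit_tikhonov(X[idx], y[idx]))

# Spread of the ensemble predictions serves as the error estimate.
preds = [np.polyval(c, 0.3) for c in ensemble]
print(f"prediction at x=0.3: {np.mean(preds):.3f} +/- {np.std(preds):.3f}")
```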

  7. Photometer calibration error using extended standard sources

    NASA Technical Reports Server (NTRS)

    Torr, M. R.; Hays, P. B.; Kennedy, B. C.; Torr, D. G.

    1976-01-01

    As part of a project to compare measurements of the night airglow made by the visible airglow experiment on the Atmospheric Explorer-C satellite, the standard light sources of several airglow observatories were compared with the standard source used in the absolute calibration of the satellite photometer. In the course of the comparison, it has been found that serious calibration errors (up to a factor of two) can arise when a calibration source with a reflecting surface is placed close to an interference filter. For reliable absolute calibration, the source should be located at a distance of at least five filter radii from the interference filter.

  8. Population-based absolute risk estimation with survey data.

    PubMed

    Kovalchik, Stephanie A; Pfeiffer, Ruth M

    2014-04-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
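    With a piecewise exponential model, the absolute risk has a closed form on each interval: the cause-specific hazard times the probability of surviving all causes up to that point, integrated over the interval. A sketch with made-up yearly hazards for one event of interest and one competing event:

```python
import numpy as np

# Piecewise-exponential sketch: two competing causes with hazards that are
# constant on yearly intervals (hypothetical rates per person-year).
h_cause = np.array([0.010, 0.012, 0.015, 0.020, 0.026])   # event of interest
h_comp  = np.array([0.005, 0.007, 0.010, 0.014, 0.019])   # competing events

def absolute_risk(h1, h2):
    """P(cause-1 event by end of follow-up) in the presence of competing risk."""
    total = h1 + h2
    surv = 1.0                # overall survival entering each interval
    risk = 0.0
    for a, b in zip(h1, total):
        # within one 1-year interval: integral of a * exp(-b t) dt from 0 to 1
        risk += surv * (a / b) * (1.0 - np.exp(-b))
        surv *= np.exp(-b)
    return risk

risk5 = absolute_risk(h_cause, h_comp)
print(f"5-year absolute risk: {risk5:.4f}")
```

Note how the competing hazard enters only through the overall survival term: a higher competing hazard lowers the absolute risk of the event of interest even when its own hazard is unchanged.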

  9. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  10. Absolute oral bioavailability of ciprofloxacin.

    PubMed

    Drusano, G L; Standiford, H C; Plaisance, K; Forrest, A; Leslie, J; Caldwell, J

    1986-09-01

    We evaluated the absolute bioavailability of ciprofloxacin, a new quinoline carboxylic acid, in 12 healthy male volunteers. Doses of 200 mg were given to each of the volunteers in a randomized, crossover manner 1 week apart orally and as a 10-min intravenous infusion. Half-lives (mean +/- standard deviation) for the intravenous and oral administration arms were 4.2 +/- 0.77 and 4.11 +/- 0.74 h, respectively. The serum clearance rate averaged 28.5 +/- 4.7 liters/h per 1.73 m2 for the intravenous administration arm. The renal clearance rate accounted for approximately 60% of the corresponding serum clearance rate and was 16.9 +/- 3.0 liters/h per 1.73 m2 for the intravenous arm and 17.0 +/- 2.86 liters/h per 1.73 m2 for the oral administration arm. Absorption was rapid, with peak concentrations in serum occurring at 0.71 +/- 0.15 h. Bioavailability, defined as the ratio of the area under the curve from 0 h to infinity for the oral to the intravenous dose, was 69 +/- 7%. We conclude that ciprofloxacin is rapidly absorbed and reliably bioavailable in these healthy volunteers. Further studies with ciprofloxacin should be undertaken in target patient populations under actual clinical circumstances. PMID:3777908
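    The bioavailability figure itself is a ratio of areas under the concentration-time curves, conventionally computed with the trapezoidal rule. A sketch with hypothetical concentrations (not the study's data):

```python
import numpy as np

# Hypothetical concentration-time data (mg/L), for illustration only.
t = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # hours
c_iv   = np.array([0.0, 2.10, 1.85, 1.55, 1.10, 0.58, 0.16, 0.05])
c_oral = np.array([0.0, 0.90, 1.30, 1.25, 0.95, 0.52, 0.15, 0.05])

def auc_trapezoid(t, c):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return float(np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2.0))

# Absolute bioavailability: AUC after the oral dose over AUC after the
# intravenous dose (same dose by both routes).
F = auc_trapezoid(t, c_oral) / auc_trapezoid(t, c_iv)
print(f"Absolute bioavailability F = {F:.2f}")
```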

  11. Absolute Instability in Coupled-Cavity TWTs

    NASA Astrophysics Data System (ADS)

    Hung, D. M. H.; Rittersdorf, I. M.; Zhang, Peng; Lau, Y. Y.; Simon, D. H.; Gilgenbach, R. M.; Chernin, D.; Antonsen, T. M., Jr.

    2014-10-01

    This paper will present results of our analysis of absolute instability in a coupled-cavity traveling wave tube (TWT). The structure modes at the lower and upper band edges are each approximated by a hyperbola in the (omega, k) plane. When the Briggs-Bers criterion is applied, a threshold current for the onset of absolute instability is observed at the upper band edge, but not at the lower band edge. The nonexistence of absolute instability at the lower band edge is mathematically similar to the nonexistence of absolute instability that we recently demonstrated for a dielectric TWT. The existence of absolute instability at the upper band edge is mathematically similar to the existence of absolute instability in a gyrotron traveling wave amplifier. These interesting observations will be discussed, and the practical implications will be explored. This work was supported by AFOSR, ONR, and L-3 Communications Electron Devices.

  12. Spatial frequency domain error budget

    SciTech Connect

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. 
If the machine
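    The idea of a spatial-frequency-resolved budget can be sketched with a simulated surface profile: instead of one overall RMS number, report RMS per frequency band from the one-sided power spectral density. All numbers below (spacing, amplitudes, the 50 m⁻¹ band split) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated 1-D surface error profile: a long-wavelength form error plus
# short-wavelength finish roughness (hypothetical machine).
n, dx = 4096, 0.1e-3                      # samples, sample spacing in metres
x = np.arange(n) * dx
form = 50e-9 * np.sin(2 * np.pi * x / 0.2)     # 0.2 m period form error
finish = 5e-9 * rng.standard_normal(n)         # white finish roughness
z = form + finish

# One-sided power spectral density of the residual errors.
Z = np.fft.rfft(z)
freq = np.fft.rfftfreq(n, dx)                  # cycles per metre
psd = (np.abs(Z) ** 2) * dx / n
psd[1:-1] *= 2                                 # fold in negative frequencies

# RMS in separate spatial-frequency bands instead of one overall number.
df = freq[1] - freq[0]
lo = freq < 50.0                               # "form" band, below 50 m^-1
rms_form = np.sqrt(np.sum(psd[lo]) * df)
rms_finish = np.sqrt(np.sum(psd[~lo]) * df)
print(f"form RMS   {rms_form*1e9:6.1f} nm")
print(f"finish RMS {rms_finish*1e9:6.1f} nm")
```

By Parseval's theorem the band RMS values combine in quadrature to the overall RMS, so this decomposition refines, rather than contradicts, a single-number budget.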

  13. Testing and evaluation of thermal cameras for absolute temperature measurement

    NASA Astrophysics Data System (ADS)

    Chrzanowski, Krzysztof; Fischer, Joachim; Matyszkiel, Robert

    2000-09-01

    The accuracy of temperature measurement is the most important criterion for the evaluation of thermal cameras used in applications requiring absolute temperature measurement. All the main international metrological organizations currently propose a parameter called uncertainty as a measure of measurement accuracy. We propose a set of parameters for the characterization of thermal measurement cameras. It is shown that if these parameters are known, then it is possible to determine the uncertainty of temperature measurement due only to the internal errors of these cameras. Values of this uncertainty can be used as an objective criterion for comparisons of different thermal measurement cameras.
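    An uncertainty figure of this kind is typically built by combining independent error components in quadrature, GUM-style. A sketch with hypothetical component values (not the paper's parameter set):

```python
import math

# Hypothetical internal-error components of a measurement camera (kelvin),
# combined in quadrature as independent standard uncertainties.
components = {
    "calibration of the reference source": 0.30,
    "noise-equivalent temperature difference": 0.08,
    "digitisation": 0.06,
    "measurement drift / stability": 0.15,
}

u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
U_expanded = 2.0 * u_combined        # coverage factor k = 2 (~95 %)
print(f"combined standard uncertainty: {u_combined:.3f} K")
print(f"expanded uncertainty (k=2):    {U_expanded:.3f} K")
```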

  14. Deficits in Attention and Visual Processing but not Global Cognition Predict Simulated Driving Errors in Drivers Diagnosed With Mild Alzheimer's Disease.

    PubMed

    Yamin, Stephanie; Stinchcombe, Arne; Gagnon, Sylvain

    2016-06-01

    This study sought to predict driving performance of drivers with Alzheimer's disease (AD) using measures of attention, visual processing, and global cognition. Simulated driving performance of individuals with mild AD (n = 20) was contrasted with performance of a group of healthy controls (n = 21). Performance on measures of global cognitive function and specific tests of attention and visual processing were examined in relation to simulated driving performance. Strong associations were observed between measures of attention, notably the Test of Everyday Attention (sustained attention; r = -.651, P = .002) and the Useful Field of View (r = .563, P = .010), and driving performance among drivers with mild AD. The Visual Object and Space Perception Test-object was significantly correlated with the occurrence of crashes (r = .652, P = .002). Tests of global cognition did not correlate with simulated driving outcomes. The results suggest that professionals exercise caution when extrapolating driving performance based on global cognitive indicators. PMID:26655744

  15. Correlations predict gas-condensate flow through chokes

    SciTech Connect

    Osman, M.E.; Dokla, M.E. )

    1992-03-16

    Empirical correlations have been developed to describe the behavior of gas-condensate flow through surface chokes. The field data were obtained from a Middle East gas-condensate reservoir and cover a wide range of flow rates and choke sizes. Correlations for gas-condensate systems have not been previously available. These new correlations will help the production engineer size chokes for controlling production of gas-condensate wells and predict the performance of flowing wells under various conditions. Four forms of the correlation were developed and checked against data. One form correlates choke upstream pressure with liquid production rate, gas/liquid ratio, and choke size. The second form uses gas production rate instead of the liquid rate. The other two forms use the pressure drop across the choke instead of upstream pressure. All four of the correlations are presented in this paper as nomograms. Accuracy of the different forms was checked with five error parameters: root-mean-square error, mean-absolute error, simple-mean error, mean-percentage-absolute error, and mean-percentage error. The correlation was found to be most accurate when pressure-drop data are used instead of choke upstream pressure.
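    The five error parameters used to rank the correlation forms are straightforward to compute; a sketch with invented observed/predicted rates:

```python
import numpy as np

# Hypothetical observed vs. correlation-predicted rates (illustrative only).
observed  = np.array([120.0, 95.0, 150.0, 80.0, 110.0])
predicted = np.array([112.0, 99.0, 161.0, 76.0, 108.0])

err = predicted - observed
pct = 100.0 * err / observed

metrics = {
    "root-mean-square error":         float(np.sqrt(np.mean(err ** 2))),
    "mean-absolute error":            float(np.mean(np.abs(err))),
    "simple-mean error":              float(np.mean(err)),
    "mean-percentage-absolute error": float(np.mean(np.abs(pct))),
    "mean-percentage error":          float(np.mean(pct)),
}
for name, value in metrics.items():
    print(f"{name:32s} {value:8.3f}")
```

The signed metrics (simple-mean, mean-percentage) reveal systematic bias, while the absolute and squared metrics measure overall scatter; a good correlation should score well on both kinds.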

  16. Absolute negative mobility of interacting Brownian particles

    NASA Astrophysics Data System (ADS)

    Ou, Ya-li; Hu, Cai-tian; Wu, Jian-chun; Ai, Bao-quan

    2015-12-01

    Transport of interacting Brownian particles in a periodic potential is investigated in the presence of an ac force and a dc force. From Brownian dynamics simulations, we find that both the interaction between particles and the thermal fluctuations play key roles in the absolute negative mobility (the particle noisily moves backwards against a small constant bias). In the absence of the interaction, there is only one region where the absolute negative mobility occurs. In the presence of the interaction, the absolute negative mobility may appear in multiple regions. A weak interaction can be helpful for the absolute negative mobility, while a strong interaction has a destructive impact on it.

  17. Direct comparisons between absolute and relative geomagnetic paleointensities: Absolute calibration of a relative paleointensity stack

    NASA Astrophysics Data System (ADS)

    Mochizuki, N.; Yamamoto, Y.; Hatakeyama, T.; Shibuya, H.

    2013-12-01

    Absolute geomagnetic paleointensities (APIs) have been estimated from igneous rocks, while relative paleomagnetic intensities (RPIs) have been reported from sediment cores. These two datasets have been treated separately, as correlations between APIs and RPIs are difficult on account of age uncertainties. High-resolution RPI stacks have been constructed from globally distributed sediment cores with high sedimentation rates. Previous studies often assumed that the RPI stacks have a linear relationship with geomagnetic axial dipole moments, and calibrated the RPI values to API values. However, the assumption of a linear relationship between APIs and RPIs has not been evaluated. Also, a quantitative calibration method for the RPI is lacking. We present a procedure for directly comparing API and RPI stacks, thus allowing reliable calibrations of RPIs. Direct comparisons between APIs and RPIs were conducted with virtually no associated age errors using both tephrochronologic correlations and RPI minima. Using the stratigraphic positions of tephra layers in oxygen isotope stratigraphic records, we directly compared the RPIs and APIs reported from welded tuffs contemporaneously extruded with the tephra layers. In addition, RPI minima during geomagnetic reversals and excursions were compared with APIs corresponding to the reversals and excursions. The comparison of APIs and RPIs at these exact points allowed a reliable calibration of the RPI values. We applied this direct comparison procedure to the global RPI stack PISO-1500. For six independent calibration points, virtual axial dipole moments (VADMs) from the corresponding APIs and RPIs of the PISO-1500 stack showed a near-linear relationship. On the basis of the linear relationship, RPIs of the stack were successfully calibrated to the VADMs. The direct comparison procedure provides an absolute calibration method that will contribute to the recovery of temporal variations and distributions of geomagnetic axial dipole moments.

  18. Accurate calculation of the absolute free energy of binding for drug molecules

    PubMed Central

    Aldeghi, Matteo; Heifetz, Alexander; Bodkin, Michael J.; Knapp, Stefan

    2016-01-01

    Accurate prediction of binding affinities has been a central goal of computational chemistry for decades, yet remains elusive. Despite good progress, the required accuracy for use in a drug-discovery context has not been consistently achieved for drug-like molecules. Here, we perform absolute free energy calculations based on a thermodynamic cycle for a set of diverse inhibitors binding to bromodomain-containing protein 4 (BRD4) and demonstrate that a mean absolute error of 0.6 kcal mol^-1 can be achieved. We also show that a similar level of accuracy (1.0 kcal mol^-1) can be achieved in a pseudo-prospective approach. Bromodomains are epigenetic mark readers that recognize acetylation motifs and regulate gene transcription, and are currently being investigated as therapeutic targets for cancer and inflammation. The unprecedented accuracy offers the exciting prospect that the binding free energy of drug-like compounds can be predicted for pharmacologically relevant targets. PMID:26798447

  19. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  20. Updated Absolute Flux Calibration of the COS FUV Modes

    NASA Astrophysics Data System (ADS)

    Massa, D.; Ely, J.; Osten, R.; Penton, S.; Aloisi, A.; Bostroem, A.; Roman-Duval, J.; Proffitt, C.

    2014-03-01

    We present newly derived point source absolute flux calibrations for the COS FUV modes at both the original and second lifetime positions. The analysis includes observations through the Primary Science Aperture (PSA) of the standard stars WD0308-565, GD71, WD1057+729 and WD0947+857 obtained as part of two calibration programs. Data were obtained for all of the gratings at all of the original CENWAVE settings at both the original and second lifetime positions, and for the G130M CENWAVE = 1222 at the second lifetime position. Data were also obtained with the FUVB segment for the G130M CENWAVE = 1055 and 1096 settings at the second lifetime position. We also present the derivation of L-flats that were used in processing the data and show that the internal consistency of the primary standards is 1%. The accuracy of the absolute flux calibrations over the UV is estimated to be 1-2% for the medium resolution gratings, and 2-3% over most of the wavelength range of the G140L grating, although the uncertainty can be as large as 5% or more at some G140L wavelengths. We note that these errors are all relative to the optical flux near the V band and small additional errors may be present due to inaccuracies in the V band calibration. In addition, these error estimates are for the time at which the flux calibration data were obtained; the accuracy of the flux calibration at other times can be affected by errors in the time-dependent sensitivity (TDS) correction.

  1. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. PMID:23999403

  2. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning

    PubMed Central

    Popa, Laurentiu S.; Streng, Martha L.; Hewitt, Angela L.; Ebner, Timothy J.

    2015-01-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model. PMID:26112422

  3. Inequalities, Absolute Value, and Logical Connectives.

    ERIC Educational Resources Information Center

    Parish, Charles R.

    1992-01-01

    Presents an approach to the concept of absolute value that alleviates students' problems with the traditional definition and the use of logical connectives in solving related problems. Uses a model that maps numbers from a horizontal number line to a vertical ray originating from the origin. Provides examples solving absolute value equations and…

  4. Absolute optical metrology : nanometers to kilometers

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.

    2005-01-01

    We provide an overview of developments in the field of high-accuracy absolute optical metrology with emphasis on space-based applications. Specific work on the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor is described along with novel applications of the sensor.

  5. Monolithically integrated absolute frequency comb laser system

    DOEpatents

    Wanke, Michael C.

    2016-07-12

    Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.

  6. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
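The comparison the paper draws can be shown in a few lines of standard-library code; the data values below are made up for illustration. An extreme value inflates the standard deviation proportionally more than the mean absolute deviation.

```python
# Mean absolute deviation (MAD) vs. standard deviation (SD) as measures
# of spread. Data values are invented purely for illustration.
from math import sqrt

def mean_abs_dev(xs):
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def std_dev(xs):
    m = sum(xs) / len(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

data = [10, 11, 9, 10, 12, 10, 9, 11]
with_outlier = data + [30]

# Squaring the deviations makes the SD grow faster than the MAD when a
# single extreme value is added, which is the paper's tolerance argument.
```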

  7. Investigating Absolute Value: A Real World Application

    ERIC Educational Resources Information Center

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…

  8. Absolute Income, Relative Income, and Happiness

    ERIC Educational Resources Information Center

    Ball, Richard; Chernova, Kateryna

    2008-01-01

    This paper uses data from the World Values Survey to investigate how an individual's self-reported happiness is related to (i) the level of her income in absolute terms, and (ii) the level of her income relative to other people in her country. The main findings are that (i) both absolute and relative income are positively and significantly…

  9. Absolute length measurement using manually decided stereo correspondence for endoscopy

    NASA Astrophysics Data System (ADS)

    Sasaki, M.; Koishi, T.; Nakaguchi, T.; Tsumura, N.; Miyake, Y.

    2009-02-01

In recent years, various kinds of endoscopes have been developed and are widely used for endoscopic biopsy, endoscopic surgery, and general endoscopy. The size of an inflammatory lesion is important in choosing a method of medical treatment. However, it is not easy to measure the absolute size of lesions such as ulcers, cancers, and polyps from an endoscopic image, so a way to measure their size during endoscopy is required. In this paper, we propose a new method to measure the absolute straight-line length between two arbitrary points, based on photogrammetry, using an endoscope fitted with a magnetic tracking sensor that provides the camera position and orientation. In this method, the stereo-corresponding points between two endoscopic images are selected by the endoscopist, without any apparatus for projecting or computing stereo correspondences; the absolute length is then calculated photogrammetrically. An evaluation experiment using a checkerboard showed measurement errors of less than 2% of the target length when the baseline is sufficiently long.
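A minimal sketch of the underlying photogrammetric computation, assuming the tracking sensor supplies each camera center and the manually chosen correspondences supply ray directions (function names and the midpoint-triangulation choice are illustrative, not the paper's exact algorithm):

```python
# Sketch: triangulate each of two 3D points from a pair of rays (camera
# center + viewing direction), then take the Euclidean distance between
# them as the absolute length. Midpoint triangulation is one common choice.
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + t*d1 and c2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = c1 - c2
    denom = a * c - b * b            # zero only for parallel rays
    t = (b * (d2 @ w) - c * (d1 @ w)) / denom
    s = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

def absolute_length(rays_p1, rays_p2):
    """Distance between two triangulated points; each argument is (c1, d1, c2, d2)."""
    p1 = triangulate_midpoint(*rays_p1)
    p2 = triangulate_midpoint(*rays_p2)
    return float(np.linalg.norm(p1 - p2))
```

With a known baseline between the two tracked camera positions, the recovered length is in absolute (metric) units, which is the key benefit of the tracked-endoscope setup.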

  10. Absolute instability of the Gaussian wake profile

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Aggarwal, Arun K.

    1987-01-01

    Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.

  11. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) defines a region of interest by setting a requirement on the response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  12. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
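For orientation, sample moments of an error distribution can be computed directly from a database of flight errors; this generic sketch is not Myler's closed-form solution, and the values are invented.

```python
# Generic sketch: first four moments (mean, variance, skewness, excess
# kurtosis) of an inclination-error sample. Values are made up.
from math import sqrt

def moments(errors):
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    sd = sqrt(var)
    skew = sum(((e - mean) / sd) ** 3 for e in errors) / n
    kurt = sum(((e - mean) / sd) ** 4 for e in errors) / n - 3.0
    return mean, var, skew, kurt

# e.g. orbit-insertion inclination errors in degrees (hypothetical)
m, v, s, k = moments([0.01, -0.02, 0.03, 0.00, -0.01, 0.02, -0.03, 0.01])
```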

  13. Assignment of absolute stereochemistry by computation of optical rotation angles

    NASA Astrophysics Data System (ADS)

    Kondru, Rama Krishna

We have developed simple wire and molecular orbital models to qualitatively and quantitatively understand the optical rotation angles of molecules. We reported the first ab initio theoretical approach to determine the absolute stereochemistry of a complex natural product by calculating molar rotation angles, [M]D. We applied this method for an unambiguous assignment of the absolute stereochemistry of hennoxazole A. A protocol analogous to population analysis was devised to analyze atomic contributions to the rotation angles for oxiranes, orthoesters, and other organic compounds. The molar rotations for an indoline, an indanone, menthol, and menthone were calculated using ab initio methods and compared with experimental values. We reported the first prediction of the absolute configuration of a natural product, i.e., an a priori assignment of the relative and absolute stereochemistry of pitiamide A. Furthermore, we described a strategy that may help to establish structure-function relations for rotation angles by visualizing the electric- and magnetic-field perturbations to a molecule's molecular orbitals.

  14. Hitting the target: relatively easy, yet absolutely difficult.

    PubMed

    Mapp, Alistair P; Ono, Hiroshi; Khokhotva, Mykola

    2007-01-01

    It is generally agreed that absolute-direction judgments require information about eye position, whereas relative-direction judgments do not. The source of this eye-position information, particularly during monocular viewing, is a matter of debate. It may be either binocular eye position, or the position of the viewing-eye only, that is crucial. Using more ecologically valid stimulus situations than the traditional LED in the dark, we performed two experiments. In experiment 1, observers threw darts at targets that were fixated either monocularly or binocularly. In experiment 2, observers aimed a laser gun at targets while fixating either the rear or the front gunsight monocularly, or the target either monocularly or binocularly. We measured the accuracy and precision of the observers' absolute- and relative-direction judgments. We found that (a) relative-direction judgments were precise and independent of phoria, and (b) monocular absolute-direction judgments were inaccurate, and the magnitude of the inaccuracy was predictable from the magnitude of phoria. These results confirm that relative-direction judgments do not require information about eye position. Moreover, they show that binocular eye-position information is crucial when judging the absolute direction of both monocular and binocular targets. PMID:17972479

  15. Measurement error analysis of taxi meter

    NASA Astrophysics Data System (ADS)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

Error testing of a taximeter covers two aspects: (1) a test of the taximeter's timing error, and (2) a test of the distance (usage) error of the machine. The paper first describes the working principle of the meter and the principle of the error-verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the instrument error and test error of the taximeter, and discusses the detection methods for both time error and distance error. From repeated measurements under identical conditions, Class A (Type A) standard uncertainty components are evaluated, while under differing conditions Class B (Type B) standard uncertainty components are evaluated. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, improving the accuracy and efficiency of verification. In practice, such testing not only compensates for limited meter accuracy but also helps ensure fair transactions between drivers and passengers, reinforcing the value of the taxi as a mode of transportation.
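The Type A evaluation mentioned above has a standard form: repeated readings under the same conditions give an experimental standard deviation, and the standard uncertainty of the mean is s/sqrt(n). A short sketch, with hypothetical readings:

```python
# GUM-style Type A standard uncertainty from repeated measurements.
# The readings (taximeter distance errors, in percent) are invented.
from math import sqrt

def type_a_uncertainty(readings):
    """Return (mean, standard uncertainty of the mean) for a sample."""
    n = len(readings)
    mean = sum(readings) / n
    s = sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    return mean, s / sqrt(n)

mean_err, u_a = type_a_uncertainty([0.42, 0.45, 0.40, 0.44, 0.43, 0.41])
```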

  16. Measurement of absolute optical thickness of mask glass by wavelength-tuning Fourier analysis.

    PubMed

Kim, Yangjin; Hibino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru

    2015-07-01

    Optical thickness is a fundamental characteristic of an optical component. A measurement method combining discrete Fourier-transform (DFT) analysis and a phase-shifting technique gives an appropriate value for the absolute optical thickness of a transparent plate. However, there is a systematic error caused by the nonlinearity of the phase-shifting technique. In this research the absolute optical-thickness distribution of mask blank glass was measured using DFT and wavelength-tuning Fizeau interferometry without using sensitive phase-shifting techniques. The error occurring during the DFT analysis was compensated for by using the unwrapping correlation. The experimental results indicated that the absolute optical thickness of mask glass was measured with an accuracy of 5 nm. PMID:26125394

  17. Absolute flatness testing of skip-flat interferometry by matrix analysis in polar coordinates.

    PubMed

    Han, Zhi-Gang; Yin, Lu; Chen, Lei; Zhu, Ri-Hong

    2016-03-20

    A new method utilizing matrix analysis in polar coordinates has been presented for absolute testing of skip-flat interferometry. The retrieval of the absolute profile mainly includes three steps: (1) transform the wavefront maps of the two cavity measurements into data in polar coordinates; (2) retrieve the profile of the reflective flat in polar coordinates by matrix analysis; and (3) transform the profile of the reflective flat back into data in Cartesian coordinates and retrieve the profile of the sample. Simulation of synthetic surface data has been provided, showing the capability of the approach to achieve an accuracy of the order of 0.01 nm RMS. The absolute profile can be retrieved by a set of closed mathematical formulas without polynomial fitting of wavefront maps or the iterative evaluation of an error function, making the new method more efficient for absolute testing. PMID:27140578

  18. Absolute optical instruments without spherical symmetry

    NASA Astrophysics Data System (ADS)

    Tyc, Tomáš; Dao, H. L.; Danner, Aaron J.

    2015-11-01

    Until now, the known set of absolute optical instruments has been limited to those containing high levels of symmetry. Here, we demonstrate a method of mathematically constructing refractive index profiles that result in asymmetric absolute optical instruments. The method is based on the analogy between geometrical optics and classical mechanics and employs Lagrangians that separate in Cartesian coordinates. In addition, our method can be used to construct the index profiles of most previously known absolute optical instruments, as well as infinitely many different ones.

  19. "Error Analysis." A Hard Look at Method in Madness.

    ERIC Educational Resources Information Center

    Brown, Cheryl

    1976-01-01

    The origins of error analysis as a pedagogical tool can be traced to the beginnings of the notion of interference and the use of contrastive analysis (CA) to predict learners' errors. With the focus narrowing to actual errors committed by students, it was found that all learners of English as a second language seemed to make errors in the same…

  20. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation for the upper bound of the absolute error of emittance was derived. It showed that the absolute error decreased with an increase in contact temperature, whereas it increased with an increase in environmental integrated radiant flux density. A change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  1. On-orbit absolute radiance standard for the next generation of IR remote sensing instruments

    NASA Astrophysics Data System (ADS)

    Best, Fred A.; Adler, Douglas P.; Pettersen, Claire; Revercomb, Henry E.; Gero, P. Jonathan; Taylor, Joseph K.; Knuteson, Robert O.; Perepezko, John H.

    2012-11-01

The next generation of infrared remote sensing satellite instrumentation, including climate benchmark missions, will require better absolute measurement accuracy than is now available, and will most certainly rely on the emerging capability to fly SI-traceable standards that provide irrefutable absolute measurement accuracy. As an example, instrumentation designed to measure spectrally resolved infrared radiances with an absolute brightness temperature error of better than 0.1 K will require high-emissivity (>0.999) calibration blackbodies with an emissivity uncertainty of better than 0.06%, and absolute temperature uncertainties of better than 0.045 K (k=3). Key elements of an On-Orbit Absolute Radiance Standard (OARS) meeting these stringent requirements have been demonstrated in the laboratory at the University of Wisconsin (UW) and refined under the NASA Instrument Incubator Program (IIP). This work recently culminated with an integrated subsystem that was used in the laboratory to demonstrate end-to-end radiometric accuracy verification for the UW Absolute Radiance Interferometer. Along with an overview of the design, we present details of a key underlying technology of the OARS that provides on-orbit absolute temperature calibration using the transient melt signatures of small quantities (<1 g) of reference materials (gallium, water, and mercury) embedded in the blackbody cavity. In addition, we present performance data from laboratory testing of the OARS.

  2. Pantomime-Grasping: Advance Knowledge of Haptic Feedback Availability Supports an Absolute Visuo-Haptic Calibration

    PubMed Central

    Davarpanah Jazi, Shirin; Heath, Matthew

    2016-01-01

    An emerging issue in movement neurosciences is whether haptic feedback influences the nature of the information supporting a simulated grasping response (i.e., pantomime-grasping). In particular, recent work by our group contrasted pantomime-grasping responses performed with (i.e., PH+ trials) and without (i.e., PH− trials) terminal haptic feedback in separate blocks of trials. Results showed that PH− trials were mediated via relative visual information. In contrast, PH+ trials showed evidence of an absolute visuo-haptic calibration—a finding attributed to an error signal derived from a comparison between expected and actual haptic feedback (i.e., an internal forward model). The present study examined whether advanced knowledge of haptic feedback availability influences the aforementioned calibration process. To that end, PH− and PH+ trials were completed in separate blocks (i.e., the feedback schedule used in our group’s previous study) and a block wherein PH− and PH+ trials were randomly interleaved on a trial-by-trial basis (i.e., random feedback schedule). In other words, the random feedback schedule precluded participants from predicting whether haptic feedback would be available at the movement goal location. We computed just-noticeable-difference (JND) values to determine whether responses adhered to, or violated, the relative psychophysical principles of Weber’s law. Results for the blocked feedback schedule replicated our group’s previous work, whereas in the random feedback schedule PH− and PH+ trials were supported via relative visual information. Accordingly, we propose that a priori knowledge of haptic feedback is necessary to support an absolute visuo-haptic calibration. Moreover, our results demonstrate that the presence and expectancy of haptic feedback is an important consideration in contrasting the behavioral and neural properties of natural and simulated grasping. PMID:27199718

  3. Absolute magnitudes of trans-neptunian objects

    NASA Astrophysics Data System (ADS)

    Duffard, R.; Alvarez-candal, A.; Pinilla-Alonso, N.; Ortiz, J. L.; Morales, N.; Santos-Sanz, P.; Thirouin, A.

    2015-10-01

Accurate measurements of the diameters of trans-Neptunian objects are extremely complicated to obtain. Radiometric techniques applied to thermal measurements can provide good results, but precise absolute magnitudes are needed to constrain diameters and albedos. Our objective is to measure accurate absolute magnitudes for a sample of trans-Neptunian objects, many of which have been observed, and modelled, by the "TNOs are cool" team, one of the Herschel Space Observatory key projects, granted ~400 hours of observing time. We observed 56 objects in the V and R filters where possible. These data, along with data available in the literature, were used to obtain phase curves and to measure absolute magnitudes by assuming a linear trend of the phase curves and accounting for magnitude variability due to the rotational light-curve. In total we obtained 234 new magnitudes for the 56 objects, 6 of them with no previously reported measurements. Including the data from the literature, we report a total of 109 absolute magnitudes.

  4. A New Gimmick for Assigning Absolute Configuration.

    ERIC Educational Resources Information Center

    Ayorinde, F. O.

    1983-01-01

A five-step procedure is provided to make the assignment of absolute configuration less bothersome for students. Examples for both single-chiral-carbon (2-butanol) and multi-chiral-carbon (3-chloro-2-butanol) molecules are included. (JN)

  5. Locomotor Expertise Predicts Infants' Perseverative Errors

    ERIC Educational Resources Information Center

    Berger, Sarah E.

    2010-01-01

    This research examined the development of inhibition in a locomotor context. In a within-subjects design, infants received high- and low-demand locomotor A-not-B tasks. In Experiment 1, walking 13-month-old infants followed an indirect path to a goal. In a control condition, infants took a direct route. In Experiment 2, crawling and walking…

  6. Predicted Errors in Children's Early Sentence Comprehension

    ERIC Educational Resources Information Center

    Gertner, Yael; Fisher, Cynthia

    2012-01-01

    Children use syntax to interpret sentences and learn verbs; this is syntactic bootstrapping. The structure-mapping account of early syntactic bootstrapping proposes that a partial representation of sentence structure, the "set of nouns" occurring with the verb, guides initial interpretation and provides an abstract format for new learning. This…

  7. Absolute Radiometric Calibration of KOMPSAT-3A

    NASA Astrophysics Data System (ADS)

    Ahn, H. Y.; Shin, D. Y.; Kim, J. S.; Seo, D. C.; Choi, C. U.

    2016-06-01

This paper presents a vicarious radiometric calibration of the Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) performed by the Korea Aerospace Research Institute (KARI) and the Pukyong National University Remote Sensing Group (PKNU RSG) in 2015. The primary stages of this study are summarized as follows: (1) A field campaign to characterize radiometric calibration target fields was undertaken in Mongolia and South Korea. Surface reflectance data obtained in the campaign were input to a radiative transfer code that predicted at-sensor radiance. Through this process, equations and parameters were derived for the KOMPSAT-3A sensor to enable the conversion of calibrated DN to physical units, such as at-sensor radiance or TOA reflectance. (2) To validate the absolute calibration coefficients for the KOMPSAT-3A sensor, we performed a radiometric validation by comparing KOMPSAT-3A and Landsat-8 TOA reflectance over one of the six PICS (Libya 4). Correlations between top-of-atmosphere (TOA) radiances and the spectral band responses of the KOMPSAT-3A sensors at the Zuunmod, Mongolia and Goheung, South Korea sites were significant for multispectral bands. The average difference in TOA reflectance between KOMPSAT-3A and Landsat-8 imagery over the Libya 4 site in the red-green-blue (RGB) region was under 3%, whereas in the NIR band the TOA reflectance of KOMPSAT-3A was lower than that of Landsat-8 due to the difference in the band passes of the two sensors. The KOMPSAT-3A sensor includes a band pass near 940 nm that is strongly absorbed by water vapor and therefore displayed low reflectance. To overcome this, we need to undertake a detailed analysis using rescale methods, such as the spectral bandwidth adjustment factor.
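The conversions such calibration coefficients enable have a standard generic form; the sketch below uses hypothetical gain/offset and solar-geometry values, not KOMPSAT-3A's actual coefficients.

```python
# Generic DN-to-physical-units conversion (coefficients are invented):
#   L   = gain * DN + offset                       at-sensor radiance
#   rho = pi * L * d^2 / (ESUN * cos(theta_s))     TOA reflectance
from math import pi, cos, radians

def dn_to_radiance(dn, gain, offset):
    return gain * dn + offset

def toa_reflectance(radiance, esun, sun_elev_deg, d_au=1.0):
    theta_s = radians(90.0 - sun_elev_deg)   # solar zenith angle
    return pi * radiance * d_au ** 2 / (esun * cos(theta_s))

L = dn_to_radiance(850, gain=0.02, offset=1.5)           # W/(m^2 sr um)
rho = toa_reflectance(L, esun=1850.0, sun_elev_deg=60.0)
```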

  8. A novel method to predict visual field progression more accurately, using intraocular pressure measurements in glaucoma patients.

    PubMed

    2016-01-01

    Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to using all nine preceding VFs (VF1-9). Then an 'intraocular pressure (IOP)-integrated VF trend analysis' was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting mTD and also point wise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The point wise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553

  9. A novel method to predict visual field progression more accurately, using intraocular pressure measurements in glaucoma patients

    PubMed Central

    Asaoka, Ryo; Fujino, Yuri; Murata, Hiroshi; Miki, Atsuya; Tanito, Masaki; Mizoue, Shiro; Mori, Kazuhiko; Suzuki, Katsuyoshi; Yamashita, Takehiro; Kashiwagi, Kenji; Shoji, Nobuyuki

    2016-01-01

    Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to using all nine preceding VFs (VF1-9). Then an ‘intraocular pressure (IOP)-integrated VF trend analysis’ was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting mTD and also point wise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The point wise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553
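The two trend analyses compared in the study can be sketched with synthetic data: ordinary least-squares regression of mTD on time, versus the IOP-integrated variant that regresses mTD on time multiplied by IOP. Only the fitting logic is illustrated; the numbers below are invented.

```python
# Standard vs. IOP-integrated VF trend analysis (synthetic data).
import numpy as np

def linear_trend_predict(x_train, y_train, x_future):
    """Fit a least-squares line to (x, y) and predict at x_future."""
    slope, intercept = np.polyfit(x_train, y_train, 1)
    return slope * x_future + intercept

times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # years, VF1-5
iops  = np.array([18.0, 17.5, 19.0, 18.5, 18.0])  # mmHg at each visit
mtd   = np.array([-2.0, -2.4, -2.9, -3.3, -3.8])  # dB

# standard trend analysis: mTD regressed on time
pred_std = linear_trend_predict(times, mtd, x_future=4.5)

# IOP-integrated trend analysis: mTD regressed on time * IOP
pred_iop = linear_trend_predict(times * iops, mtd, x_future=4.5 * 18.0)
```

The absolute prediction error of each approach is then just the absolute difference between the predicted and the measured mTD of the held-out tenth visual field.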

  10. SEU induced errors observed in microprocessor systems

    SciTech Connect

    Asenek, V.; Underwood, C.; Oldfield, M.; Velazco, R.; Rezgui, S.; Cheynet, P.; Ecoffet, R.

    1998-12-01

    In this paper, the authors present software tools for predicting the rate and nature of observable SEU induced errors in microprocessor systems. These tools are built around a commercial microprocessor simulator and are used to analyze real satellite application systems. Results obtained from simulating the nature of SEU induced errors are shown to correlate with ground-based radiation test data.

  11. The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Walker, Eric L.

    2011-01-01

The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing, which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
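The statistical point underlying the abstract is the standard variance formula for a difference of correlated quantities: sigma_delta = sqrt(sigma1^2 + sigma2^2 - 2*rho*sigma1*sigma2), which goes to zero as rho approaches unity with equal error levels. A short sketch with illustrative numbers:

```python
# Uncertainty of an increment (difference of two results) whose
# systematic errors are correlated with coefficient rho.
from math import sqrt

def increment_sigma(sigma1, sigma2, rho):
    """Standard deviation of (x1 - x2) for correlated errors."""
    return sqrt(sigma1 ** 2 + sigma2 ** 2 - 2.0 * rho * sigma1 * sigma2)

sig = 0.05  # equal absolute-error levels (illustrative)
uncorr = increment_sigma(sig, sig, rho=0.0)   # sqrt(2) * sig
strong = increment_sigma(sig, sig, rho=0.98)  # much smaller
```

This is why demonstrating correlation coefficients close to unity keeps the increments from inflating the overall database uncertainty.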

  12. Field error lottery

    SciTech Connect

Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  13. ON A NEW NEAR-INFRARED METHOD TO ESTIMATE THE ABSOLUTE AGES OF STAR CLUSTERS: NGC 3201 AS A FIRST TEST CASE

    SciTech Connect

    Bono, G.; Di Cecco, A.; Sanna, N.; Buonanno, R.; Stetson, P. B.; VandenBerg, D. A.; Calamida, A.; Amico, P.; Marchetti, E.; D'Odorico, S.; Gilmozzi, R.; Dall'Ora, M.; Iannicola, G.; Caputo, F.; Corsi, C. E.; Ferraro, I.; Monelli, M.; Walker, A. R.; Zoccali, M.; Degl'Innocenti, S.

    2010-01-10

We present a new method to estimate the absolute ages of stellar systems. This method is based on the difference in magnitude between the main-sequence turnoff (MSTO) and a well-defined knee located along the lower main sequence (MSK). This feature is caused by the collisionally induced absorption of molecular hydrogen, and it can easily be identified in near-infrared (NIR) and in optical-NIR color-magnitude diagrams of stellar systems. We took advantage of deep and accurate NIR images collected with the Multi-Conjugate Adaptive Optics Demonstrator temporarily available on the Very Large Telescope and of optical images collected with the Advanced Camera for Surveys Wide Field Camera on the Hubble Space Telescope and with ground-based telescopes to estimate the absolute age of the globular NGC 3201 using both the MSTO and the Δ(MSTO-MSK). We have adopted a new set of cluster isochrones, and we found that the absolute ages based on the two methods agree to within 1σ. However, the errors of the ages based on the Δ(MSTO-MSK) method are potentially more than a factor of 2 smaller, since they are not affected by uncertainties in cluster distance or reddening. Current isochrones appear to predict slightly bluer (≈0.05 mag) NIR and optical-NIR colors than observed for magnitudes fainter than the MSK.

  14. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  15. Landsat-7 ETM+ radiometric stability and absolute calibration

    USGS Publications Warehouse

    Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, P.

    2002-01-01

Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% (95% C.I.) per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. Also, ETM+ stability is indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/(m² sr µm) soon after launch. This offset was corrected within the U.S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error, with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibration of other land remote sensing satellite systems.

  16. Correction due to the finite speed of light in absolute gravimeters Correction due to the finite speed of light in absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Nagornyi, V. D.; Zanimonskiy, Y. M.; Zanimonskiy, Y. Y.

    2011-06-01

Equations (45) and (47) in our paper [1] in this issue have an incorrect sign and should read T̃_i = T_i + (b ∓ S_i)/c and T̃_i = T_i ∓ S_i/c. The error traces back to our formula (3), inherited from the paper [2]. According to the technical documentation [3, 4], the formula (3) is implemented by several commercially available instruments. An incorrect sign would cause a bias of about 20 µGal not known for these instruments, which probably indicates that the documentation incorrectly reflects the implemented measurement equation. Our attention to the error was drawn by the paper [5], also in this issue, where the sign is mentioned correctly. References [1] Nagornyi V D, Zanimonskiy Y M and Zanimonskiy Y Y 2011 Correction due to the finite speed of light in absolute gravimeters Metrologia 48 101-13 [2] Niebauer T M, Sasagawa G S, Faller J E, Hilt R and Klopping F 1995 A new generation of absolute gravimeters Metrologia 32 159-80 [3] Micro-g LaCoste, Inc. 2006 FG5 Absolute Gravimeter Users Manual [4] Micro-g LaCoste, Inc. 2007 g7 Users Manual [5] Niebauer T M, Billson R, Ellis B, Mason B, van Westrum D and Klopping F 2011 Simultaneous gravity and gradient measurements from a recoil-compensated absolute gravimeter Metrologia 48 154-63

  17. [Paradigm errors in the old biomedical science].

    PubMed

    Skurvydas, Albertas

    2008-01-01

The aim of this article was to review the basic drawbacks of deterministic and reductionistic thinking in biomedical science and to suggest ways of dealing with them. The present paradigm of research in biomedical science has not yet rid itself of the errors of the old science, i.e. the errors of absolute determinism and reductionism. These errors restrict the view and thinking of scholars engaged in the study of complex and dynamic phenomena and mechanisms. Recently, discussions aimed at spreading the new science paradigm, that of complex dynamic systems and chaos theory, have been in progress all over the world. It is for the near future to show which of the two, the old or the new science, will be the winner. Our main conclusion is that deterministic and reductionistic thinking, applied in an improper way, can cause substantial damage rather than provide benefits for biomedical science. PMID:18541951

  18. Robot learning and error correction

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  19. Application of wavelet neural network model based on genetic algorithm in the prediction of high-speed railway settlement

    NASA Astrophysics Data System (ADS)

    Tang, Shihua; Li, Feida; Liu, Yintao; Lan, Lan; Zhou, Conglin; Huang, Qing

    2015-12-01

With the advantages of high speed, large transport capacity, low energy consumption, good economic benefits, and so on, high-speed railway is becoming more and more popular all over the world. It can reach 350 kilometers per hour, which requires high security performance. Research on the prediction of high-speed railway settlement, one of the important factors affecting the safety of high-speed railway, therefore becomes particularly important. This paper takes advantage of genetic algorithms to search the data for the best result and combines this with the strong learning ability and high accuracy of wavelet neural networks to build a genetic wavelet neural network model for the prediction of high-speed railway settlement. Experiments with a back-propagation neural network, a wavelet neural network, and the genetic wavelet neural network show that the absolute values of the residual errors in the prediction of high-speed railway settlement based on the genetic algorithm are the smallest, which demonstrates that the genetic wavelet neural network is better than the other two methods. The correlation coefficient of predicted and observed values is 99.9%. Furthermore, the maximum absolute value of the residual error, the minimum absolute value of the residual error, the mean value of the relative error, and the root mean squared error (RMSE) predicted by the genetic wavelet neural network are all smaller than those of the other two methods. The genetic wavelet neural network is thus both more stable and more accurate for the prediction of high-speed railway settlement.
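The residual-error measures the abstract compares (maximum absolute residual, mean relative error, RMSE) can be computed with a few lines of stdlib Python. This is a minimal sketch with hypothetical settlement values, not the paper's data or code:

```python
import math

def residual_stats(observed, predicted):
    """Error measures used to compare settlement predictions:
    maximum absolute residual, mean relative error, and RMSE."""
    residuals = [o - p for o, p in zip(observed, predicted)]
    max_abs = max(abs(r) for r in residuals)
    mean_rel = sum(abs(r) / abs(o) for r, o in zip(residuals, observed)) / len(observed)
    rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    return max_abs, mean_rel, rmse

# Hypothetical settlement values in mm -- illustrative only.
obs = [2.0, 2.5, 3.1, 3.8]
pred = [2.1, 2.4, 3.0, 3.9]
max_abs, mean_rel, rmse = residual_stats(obs, pred)
```

A model whose max_abs, mean_rel, and rmse are all smallest, as reported for the genetic wavelet network, dominates the comparison on every one of these measures.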

  20. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  1. Jasminum flexile flower absolute from India--a detailed comparison with three other jasmine absolutes.

    PubMed

    Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef

    2009-09-01

    Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed. PMID:19831037

  2. Son preference in Indian families: absolute versus relative wealth effects.

    PubMed

    Gaudin, Sylvestre

    2011-02-01

    The desire for male children is prevalent in India, where son preference has been shown to affect fertility behavior and intrahousehold allocation of resources. Economic theory predicts less gender discrimination in wealthier households, but demographers and sociologists have argued that wealth can exacerbate bias in the Indian context. I argue that these apparently conflicting theories can be reconciled and simultaneously tested if one considers that they are based on two different notions of wealth: one related to resource constraints (absolute wealth), and the other to notions of local status (relative wealth). Using cross-sectional data from the 1998-1999 and 2005-2006 National Family and Health Surveys, I construct measures of absolute and relative wealth by using principal components analysis. A series of statistical models of son preference is estimated by using multilevel methods. Results consistently show that higher absolute wealth is strongly associated with lower son preference, and the effect is 20%-40% stronger when the household's community-specific wealth score is included in the regression. Coefficients on relative wealth are positive and significant although lower in magnitude. Results are robust to using different samples, alternative groupings of households in local areas, different estimation methods, and alternative dependent variables. PMID:21302027

  3. Universal Cosmic Absolute and Modern Science

    NASA Astrophysics Data System (ADS)

    Kostro, Ludwik

The official sciences, especially all natural sciences, respect in their research the principle of methodic naturalism, i.e. they consider all phenomena as entirely natural and therefore in their scientific explanations never adduce or cite supernatural entities and forces. The purpose of this paper is to show that Modern Science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e. the entire Nature as a Whole, that justifies the scientific methodic naturalism. Since this Natural All-in Being is one and only, It should be considered as the own scientifically justified Natural Absolute of Science and should be called, in my opinion, the Universal Cosmic Absolute of Modern Science. It will also be shown that the Universal Cosmic Absolute is ontologically enormously stratified and is in its ultimate, i.e. most fundamental, stratum trans-reistic and trans-personal. This means that in its basic stratum It is neither a Thing nor a Person, although It contains in Itself all things and persons, with all other sentient and conscious individuals as well. At the turn of the 20th century, Science began to look for a theory of everything, for a final theory, for a master theory. In my opinion, the natural Universal Cosmic Absolute will constitute in such a theory the radical, all-penetrating Ultimate Basic Reality and will substitute, step by step, for the traditional supernatural personal Absolute.

  4. On the convective-absolute nature of river bedform instabilities

    NASA Astrophysics Data System (ADS)

    Vesipa, Riccardo; Camporeale, Carlo; Ridolfi, Luca; Chomaz, Jean Marc

    2014-12-01

    River dunes and antidunes are induced by the morphological instability of stream-sediment boundary. Such bedforms raise a number of subtle theoretical questions and are crucial for many engineering and environmental problems. Despite their importance, the absolute/convective nature of the instability has never been addressed. The present work fills this gap as we demonstrate, by the cusp map method, that dune instability is convective for all values of the physical control parameters, while the antidune instability exhibits both behaviors. These theoretical predictions explain some previous experimental and numerical observations and are important to correctly plan flume experiments, numerical simulations, paleo-hydraulic reconstructions, and river works.

  5. Refractive errors in children.

    PubMed

    Tongue, A C

    1987-12-01

    Optical correction of refractive errors in infants and young children is indicated when the refractive errors are sufficiently large to cause unilateral or bilateral amblyopia, if they are impairing the child's ability to function normally, or if the child has accommodative strabismus. Screening for refractive errors is important and should be performed as part of the annual physical examination in all verbal children. Screening for significant refractive errors in preverbal children is more difficult; however, the red reflex test of Bruckner is useful for the detection of anisometropic refractive errors. The photorefraction test, which is an adaptation of Bruckner's red reflex test, may prove to be a useful screening device for detecting bilateral as well as unilateral refractive errors. Objective testing as well as subjective testing enables ophthalmologists to prescribe proper optical correction for refractive errors for infants and children of any age. PMID:3317238

  6. Error-prone signalling.

    PubMed

    Johnstone, R A; Grafen, A

    1992-06-22

    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  7. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %). PMID:27186503
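The accuracy and Type I/II error rates quoted above come straight from a confusion matrix. A stdlib-only sketch follows; the labeling convention (1 = GCD firm, Type I = a GCD firm missed, Type II = a non-GCD firm flagged) is an assumption for illustration, and the labels are toy data, not the study's sample:

```python
def classification_rates(y_true, y_pred):
    """Accuracy plus Type I / Type II error rates for a binary
    going-concern classifier. Convention assumed here: label 1 marks
    a GCD firm; Type I = GCD firm classified as non-GCD (miss),
    Type II = non-GCD firm classified as GCD (false alarm)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    type1 = fn / (tp + fn)  # fraction of GCD firms missed
    type2 = fp / (tn + fp)  # fraction of non-GCD firms flagged
    return accuracy, type1, type2

# Toy labels, not the TEJ sample.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
acc, t1, t2 = classification_rates(y_true, y_pred)
```

Note that with an imbalanced sample (48 GCD vs 124 non-GCD firms), overall accuracy alone can mask a high Type I rate, which is why the study reports all three figures.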

  8. Morphology and Absolute Magnitudes of the SDSS DR7 QSOs

    NASA Astrophysics Data System (ADS)

    Coelho, B.; Andrei, A. H.; Antón, S.

    2014-10-01

The ESA mission Gaia will furnish a complete census of the Milky Way, delivering astrometric, dynamic, and astrophysical information for 1 billion stars. Operating in all-sky repeated survey mode, Gaia will also provide measurements of extra-galactic objects. Among the latter there will be at least 500,000 QSOs that will be used to build the reference frame upon which the several independent observations will be combined and interpreted. Not all the QSOs are equally suited to fulfill this role of fundamental, fiducial grid-points. Brightness, morphology, and variability define the astrometric error budget for each object. We made use of 3 morphological parameters based on the PSF sharpness, circularity, and gaussianity, which enable us to distinguish the "real point-like" QSOs. These parameters are being explored on the spectroscopically certified QSOs of the SDSS DR7, to compare the performance against other morphology classification schemes, as well as to derive properties of the host galaxy. We present a new method, based on the Gaia quasar database, to derive absolute magnitudes in the SDSS filters domain. The method can be extrapolated all over the optical window, including the Gaia filters. We discuss colors derived from SDSS apparent magnitudes and colors based on absolute magnitudes that we obtained taking into account corrections for dust extinction, either intergalactic or from the QSO host, and for the Lyman α forest. In the future we want to further discuss properties of the host galaxies, comparing, e.g., the obtained morphological classification with the color, the apparent and absolute magnitudes, and the redshift distributions.

  9. Absolute blood velocity measured with a modified fundus camera

    NASA Astrophysics Data System (ADS)

    Duncan, Donald D.; Lemaillet, Paul; Ibrahim, Mohamed; Nguyen, Quan Dong; Hiller, Matthias; Ramella-Roman, Jessica

    2010-09-01

We present a new method for the quantitative estimation of blood flow velocity, based on the use of the Radon transform. The specific application is measurement of blood flow velocity in the retina. Our modified fundus camera uses illumination from a green LED and captures imagery with a high-speed CCD camera. The basic theory is presented, and typical results are shown for an in vitro flow model using blood in a capillary tube. Subsequently, representative results are shown for fundus imagery. This approach provides absolute velocity and flow direction along the vessel centerline or any lateral displacement therefrom. We also provide an error analysis allowing estimation of confidence intervals for the estimated velocity.
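The underlying idea is that a moving scatterer traces a tilted streak in a space-time (x-t) image, and its velocity is the streak's slope; the paper extracts that slope with a Radon transform. The toy sketch below stands in for that with a brute-force search over candidate slopes (shift-and-sum), the same idea in discrete form; it is not the authors' implementation, and the data are synthetic:

```python
def estimate_velocity(frames, candidates):
    """Estimate a feature's velocity (pixels/frame) from a space-time
    image by summing intensity along candidate trajectories x = x0 + v*t.
    A brute-force stand-in for a Radon-transform slope search; candidate
    velocities are assumed non-negative in this sketch."""
    n_t, n_x = len(frames), len(frames[0])
    best_v, best_score = None, float("-inf")
    for v in candidates:
        width = n_x - v * (n_t - 1)  # starting columns that stay in bounds
        if width <= 0:
            continue
        # The true velocity aligns the streak and maximizes the peak sum.
        score = max(sum(frames[t][x0 + v * t] for t in range(n_t))
                    for x0 in range(width))
        if score > best_score:
            best_v, best_score = v, score
    return best_v

# Synthetic streak: a bright cell moving 2 pixels per frame.
n_t, n_x, true_v = 6, 40, 2
frames = [[0.0] * n_x for _ in range(n_t)]
for t in range(n_t):
    frames[t][5 + true_v * t] = 1.0
```

Converting the recovered slope (pixels/frame) to an absolute velocity then only requires the camera's frame rate and the pixel-to-micron scale of the fundus image.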

  10. Measured and modelled absolute gravity changes in Greenland

    NASA Astrophysics Data System (ADS)

    Nielsen, J. Emil; Forsberg, Rene; Strykowski, Gabriel

    2014-01-01

    In glaciated areas, the Earth is responding to the ongoing changes of the ice sheets, a response known as glacial isostatic adjustment (GIA). GIA can be investigated through observations of gravity change. For the ongoing assessment of the ice sheets mass balance, where satellite data are used, the study of GIA is important since it acts as an error source. GIA consists of three signals as seen by a gravimeter on the surface of the Earth. These signals are investigated in this study. The ICE-5G ice history and recently developed ice models of present day changes are used to model the gravity change in Greenland. The result is compared with the initial measurements of absolute gravity (AG) change at selected Greenland Network (GNET) sites.

  11. Full field imaging based instantaneous hyperspectral absolute refractive index measurement

    SciTech Connect

    Baba, Justin S; Boudreaux, Philip R

    2012-01-01

Multispectral refractometers typically measure refractive index (RI) at discrete monochromatic wavelengths via a serial process. We report on the demonstration of a white-light, full-field-imaging-based refractometer capable of instantaneous multispectral measurement of the absolute RI of clear liquid/gel samples across the entire visible light spectrum. The broad optical bandwidth refractometer is capable of hyperspectral measurement of RI in the range 1.30-1.70 between 400 nm and 700 nm, with a maximum error of 0.0036 units (0.24% of actual) at 414 nm for an RI = 1.50 sample. We present system design and calibration method details as well as results from a system validation sample.

  12. Error handling strategies in multiphase inverse modeling

    SciTech Connect

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.

  13. Absolute isotopic abundances of Ti in meteorites

    NASA Astrophysics Data System (ADS)

    Niederer, F. R.; Papanastassiou, D. A.; Wasserburg, G. J.

    1985-03-01

    The absolute isotope abundance of Ti has been determined in Ca-Al-rich inclusions from the Allende and Leoville meteorites and in samples of whole meteorites. The absolute Ti isotope abundances differ by a significant mass dependent isotope fractionation transformation from the previously reported abundances, which were normalized for fractionation using 46Ti/48Ti. Therefore, the absolute compositions define distinct nucleosynthetic components from those previously identified or reflect the existence of significant mass dependent isotope fractionation in nature. The authors provide a general formalism for determining the possible isotope compositions of the exotic Ti from the measured composition, for different values of isotope fractionation in nature and for different mixing ratios of the exotic and normal components.

  14. Molecular iodine absolute frequencies. Final report

    SciTech Connect

    Sansonetti, C.J.

    1990-06-25

    Fifty specified lines of {sup 127}I{sub 2} were studied by Doppler-free frequency modulation spectroscopy. For each line the classification of the molecular transition was determined, hyperfine components were identified, and one well-resolved component was selected for precise determination of its absolute frequency. In 3 cases, a nearby alternate line was selected for measurement because no well-resolved component was found for the specified line. Absolute frequency determinations were made with an estimated uncertainty of 1.1 MHz by locking a dye laser to the selected hyperfine component and measuring its wave number with a high-precision Fabry-Perot wavemeter. For each line results of the absolute measurement, the line classification, and a Doppler-free spectrum are given.

  15. Closed-loop step motor control using absolute encoders

    SciTech Connect

    Hicks, J.S.; Wright, M.C.

    1997-08-01

    A multi-axis, step motor control system was developed to accurately position and control the operation of a triple axis spectrometer at the High Flux Isotope Reactor (HFIR) located at Oak Ridge National Laboratory. Triple axis spectrometers are used in neutron scattering and diffraction experiments and require highly accurate positioning. This motion control system can handle up to 16 axes of motion. Four of these axes are outfitted with 17-bit absolute encoders. These four axes are controlled with a software feedback loop that terminates the move based on real-time position information from the absolute encoders. Because the final position of the actuator is used to stop the motion of the step motors, the moves can be made accurately in spite of the large amount of mechanical backlash from a chain drive between the motors and the spectrometer arms. A modified trapezoidal profile, custom C software, and an industrial PC, were used to achieve a positioning accuracy of 0.00275 degrees of rotation. A form of active position maintenance ensures that the angles are maintained with zero error or drift.

  16. Stitching interferometry and absolute surface shape metrology: similarities

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-12-01

Stitching interferometry is a method of analysing large optical components using a standard small interferometer. This result is obtained by taking multiple overlapping images of the large component and numerically stitching these sub-apertures together by computing a Tip-Tilt-Piston correction for each sub-aperture. All real-life measurement techniques require a calibration phase. By definition, a perfect surface does not exist. Methods abound for the accurate measurement of diameters (viz., the Three Flat Test). However, we need total surface knowledge of the reference surface, because the stitched overlap areas will suffer from the slightest deformation. One must not be misled into thinking that Stitching is the cause of this error: it simply highlights the lack of absolute knowledge of the reference surface, or the lack of adequate thermal control, issues which are often sidetracked... The goal of this paper is to highlight the above-mentioned calibration problems in interferometry in general, and in stitching interferometry in particular, and to show how stitching hardware and software can be conveniently used to provide the required absolute surface shape metrology. Some measurement figures will illustrate this article.

  17. Predictors of medication errors among elderly hospitalized patients.

    PubMed

    Picone, Debra Matsen; Titler, Marita G; Dochterman, Joanne; Shever, Leah; Kim, Taikyoung; Abramowitz, Paul; Kanak, Mary; Qin, Rui

    2008-01-01

Medication errors are a serious safety concern, and most errors are preventable. A retrospective study design was employed to describe medication errors experienced during 10,187 hospitalizations of elderly patients admitted to a Midwest teaching hospital between July 1, 1998 and December 31, 2001 and to determine the factors predictive of medication errors. The model considered patient characteristics, clinical conditions, interventions, and nursing unit characteristics. The dependent variable, medication error, was measured using a voluntary incident reporting system. There were 861 medication errors; 96% may have been preventable. Most were omission errors (48.8%), and the source was administration (54%) or transcription (38%) errors. Variables associated with a medication error included unique number of medications (polypharmacy), patient gender and race, RN staffing changes, medical and nursing interventions, and specific pharmacological agents. Further validation of this explanatory model and focused interventions may help decrease the incidence of medication errors. PMID:18305099

  18. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  19. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.
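The lag above is quoted both as a phase (0.01025 period) and as a time (344 microseconds), so dividing one by the other recovers the implied rotation period. This is just arithmetic on the quoted values, shown as a quick consistency check:

```python
# Consistency check: time lag / phase lag = rotation period.
phase_lag = 0.01025   # X-ray lead, as a fraction of a period
time_lag = 344e-6     # the same lag in seconds
period_ms = time_lag / phase_lag * 1e3  # rotation period in milliseconds
# period_ms comes out near 33.6, consistent with the Crab pulsar's
# well-known ~33.6 ms rotation period during the RXTE era.
```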

  20. Precise Measurement of the Absolute Fluorescence Yield

    NASA Astrophysics Data System (ADS)

    Ave, M.; Bohacova, M.; Daumiller, K.; Di Carlo, P.; di Giulio, C.; San Luis, P. Facal; Gonzales, D.; Hojvat, C.; Hörandel, J. R.; Hrabovsky, M.; Iarlori, M.; Keilhauer, B.; Klages, H.; Kleifges, M.; Kuehn, F.; Monasor, M.; Nozka, L.; Palatka, M.; Petrera, S.; Privitera, P.; Ridky, J.; Rizi, V.; D'Orfeuil, B. Rouille; Salamida, F.; Schovanek, P.; Smida, R.; Spinka, H.; Ulrich, A.; Verzi, V.; Williams, C.

    2011-09-01

    We present preliminary results of the absolute yield of fluorescence emission in atmospheric gases. Measurements were performed at the Fermilab Test Beam Facility with a variety of beam particles and gases. Absolute calibration of the fluorescence yield to 5% level was achieved by comparison with two known light sources--the Cherenkov light emitted by the beam particles, and a calibrated nitrogen laser. The uncertainty of the energy scale of current Ultra-High Energy Cosmic Rays experiments will be significantly improved by the AIRFLY measurement.

  1. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    PubMed Central

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng

    2015-01-01

    This paper puts forward a prediction model for chaos time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of the phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show its applicability and superiority, the proposed model is compared with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
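The phase space reconstruction mentioned in the abstract is a delay embedding controlled by the lag τ and dimension m, the pair the membrane algorithm tunes. A minimal sketch of that step; the values of τ and m below are illustrative, not the paper's optimized ones:

```python
import numpy as np

# Delay embedding: map a scalar series x into m-dimensional state
# vectors built from samples spaced tau steps apart.
def delay_embed(x, tau, m):
    """Return the m-dimensional delay vectors of series x with lag tau."""
    n = len(x) - (m - 1) * tau
    return np.array([x[i : i + (m - 1) * tau + 1 : tau] for i in range(n)])

x = np.sin(0.1 * np.arange(100))       # toy chaotic stand-in series
X = delay_embed(x, tau=3, m=4)
print(X.shape)                         # (91, 4): one row per state vector
```

Each row of `X` would then serve as an input vector to the LS-SVM regressor, with the next sample of the series as its target.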

  2. Chaos time series prediction based on membrane optimization algorithms.

    PubMed

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng; Peng, Hong

    2015-01-01

    This paper puts forward a prediction model for chaos time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of the phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show its applicability and superiority, the proposed model is compared with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
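The three error measures named in the abstract have standard definitions. A minimal sketch; note that NMSE conventions vary, and normalizing by the variance of the observations is one common choice assumed here, not taken from the paper:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error."""
    return np.sqrt(np.mean((y - yhat) ** 2))

def nmse(y, yhat):
    """Mean square error normalized by the variance of y (one convention)."""
    return np.mean((y - yhat) ** 2) / np.var(y)

def mape(y, yhat):
    """Mean absolute percentage error; requires nonzero y."""
    return np.mean(np.abs((y - yhat) / y)) * 100.0

y    = np.array([100.0, 200.0, 300.0])   # toy observations
yhat = np.array([110.0, 190.0, 330.0])   # toy predictions
print(rmse(y, yhat), nmse(y, yhat), mape(y, yhat))
```

Lower values indicate better predictions on all three measures, which is how the comparison across models in the abstract is scored.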

  3. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
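The per-mode modeling error the abstract describes, the difference between predicted and measured eigenvalues, can be sketched as a fractional comparison. The frequencies below are made-up illustrative numbers, not data from the paper:

```python
import numpy as np

# Fractional eigenvalue error per mode: eigenvalues of a structural
# dynamic model are the squared circular natural frequencies.
f_pred = np.array([12.1, 18.4, 25.0, 31.2])   # predicted modal frequencies, Hz
f_meas = np.array([11.8, 18.9, 24.2, 32.0])   # measured modal frequencies, Hz

lam_pred = (2 * np.pi * f_pred) ** 2          # predicted eigenvalues
lam_meas = (2 * np.pi * f_meas) ** 2          # measured eigenvalues

frac_err = (lam_pred - lam_meas) / lam_meas   # modeling error per mode
print(np.round(100 * frac_err, 1))            # percent error in each eigenvalue
```

Collecting such fractional errors over many modes and structures is what allows the statistics of the generic error model to be estimated.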

  4. Prediction of BP reactivity to talking using hybrid soft computing approaches.

    PubMed

    Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar

    2014-01-01

    High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in the measurement of BP is important in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with artificial neural network (ANN), adaptive neurofuzzy inference system (ANFIS), and least square-support vector machine (LS-SVM) models to remove the multicollinearity effect among the anthropometric predictor variables. Statistical tests in terms of the coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity than the other models. This assessment demonstrates the importance and advantages of PCA-fused models for the prediction of biological variables. PMID:25328536

  5. Prediction of BP Reactivity to Talking Using Hybrid Soft Computing Approaches

    PubMed Central

    Arora, Ajat Shatru; Jain, Vijender Kumar

    2014-01-01

    High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in the measurement of BP is important in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with artificial neural network (ANN), adaptive neurofuzzy inference system (ANFIS), and least square-support vector machine (LS-SVM) models to remove the multicollinearity effect among the anthropometric predictor variables. Statistical tests in terms of the coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity than the other models. This assessment demonstrates the importance and advantages of PCA-fused models for the prediction of biological variables. PMID:25328536
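The "PCA fused" idea in the abstracts above can be sketched as projecting correlated predictors onto principal components before fitting a regressor, so the downstream model sees decorrelated inputs. In this sketch, plain least squares stands in for the paper's LS-SVM, and the synthetic data is invented for illustration:

```python
import numpy as np

# Synthetic, deliberately collinear anthropometric predictors.
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 70, n)
bmi = 0.3 * age + rng.normal(0, 2, n)            # collinear with age
X = np.column_stack([age, bmi])
y = 0.5 * age + 0.8 * bmi + rng.normal(0, 1, n)  # stand-in "BP reactivity"

# PCA via SVD on the centered predictors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                               # decorrelated components

# Ordinary least squares fit on the PCA scores (LS-SVM stand-in).
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
yhat = A @ coef
fit_rmse = np.sqrt(np.mean((y - yhat) ** 2))
print(round(fit_rmse, 2))                        # close to the noise level (~1)
```

Because the PCA scores are uncorrelated by construction, the regression coefficients are estimated without the instability that multicollinear raw predictors would cause.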

  6. Two-stage model of African absolute motion during the last 30 million years

    NASA Astrophysics Data System (ADS)

    Pollitz, Fred F.

    1991-07-01

    The absolute motion of Africa (relative to the hotspots) for the past 30 My is modeled with two Euler vectors, with a change occurring at 6 Ma. Because of the high sensitivity of African absolute motions to errors in the absolute motions of the North America and Pacific plates, both the pre-6 Ma and post-6 Ma African absolute motions are determined simultaneously with North America and Pacific absolute motions for various epochs. Geologic data from the northern Atlantic and hotspot tracks from the African plate are used to augment previous data sets for the North America and Pacific plates. The difference between the pre-6 Ma and post-6 Ma absolute plate motions may be represented as a counterclockwise rotation about a pole at 48 °S, 84 °E, with angular velocity 0.085 °/My. This change is supported by geologic evidence along a large portion of the African plate boundary, including the Red Sea and Gulf of Aden spreading systems, the Alpine deformation zone, and the central and southern mid-Atlantic Ridge. Although the change is modeled as one abrupt transition at 6 Ma, it wa