Science.gov

Sample records for absolute prediction error

  1. Accurate absolute GPS positioning through satellite clock error estimation

    NASA Astrophysics Data System (ADS)

    Han, S.-C.; Kwon, J. H.; Jekeli, C.

    2001-05-01

An algorithm for very accurate absolute positioning through Global Positioning System (GPS) satellite clock estimation has been developed. Using International GPS Service (IGS) precise orbits and measurements, GPS clock errors were estimated at 30-s intervals. Compared to values determined by the Jet Propulsion Laboratory, the agreement was at the level of about 0.1 ns (3 cm). The clock error estimates were then applied to an absolute positioning algorithm in both static and kinematic modes. For the static case, an IGS station was selected and its coordinates were estimated every 30 s. The estimated absolute position coordinates and the known values had a mean difference of up to 18 cm with a standard deviation of less than 2 cm. For the kinematic case, data obtained every second from a GPS buoy were tested and the result from absolute positioning was compared to a differential GPS (DGPS) solution. The mean differences between the coordinates estimated by the two methods are less than 40 cm and the standard deviations are less than 25 cm. The poorer standard deviation of the 1-s position results was verified to be due to interpolating clock errors from 30-s estimates while Selective Availability (SA) was active. After SA was turned off, higher-rate clock error estimates (such as 1 s) could be obtained by simple interpolation with negligible corruption. Therefore, the proposed absolute positioning technique can achieve a precision of a few centimeters at any data rate by estimating 30-s satellite clock errors and interpolating them.
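
The interpolation step this abstract relies on can be sketched in a few lines. This is an illustration with made-up clock error values, not the authors' code; with SA off, linear interpolation of 30-s clock error estimates to 1-s epochs adds negligible error:

```python
import numpy as np

# Hypothetical satellite clock error estimates at 30-s epochs (values in ns).
epochs_30s = np.arange(0, 121, 30)                   # 0, 30, ..., 120 s
clock_err_ns = np.array([1.2, 1.5, 1.1, 0.9, 1.3])   # illustrative only

# With SA off the clock errors vary smoothly, so simple linear interpolation
# to 1-s epochs is essentially lossless.
epochs_1s = np.arange(0, 121, 1)
clock_err_1s = np.interp(epochs_1s, epochs_30s, clock_err_ns)

print(clock_err_1s[30])  # 1.5, coinciding with the 30-s estimate at that node
```

The interpolated series can then be fed to the absolute positioning algorithm at the full 1-s data rate.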

  2. Sub-nanometer periodic nonlinearity error in absolute distance interferometers.

    PubMed

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

Periodic nonlinearity, which can introduce errors at the nanometer scale, has become a major problem limiting absolute distance measurement accuracy. To eliminate this error, a new integrated interferometer with a non-polarizing beam splitter has been developed, which removes frequency and/or polarization mixing and greatly relaxes the requirement on the polarization of the laser source. By combining a retro-reflector and an angle prism, the reference and measuring beams are spatially separated so that their optical paths do not overlap. The main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are thereby eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  3. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
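
The positive/zero/negative prediction-error scheme described here maps directly onto the textbook Rescorla-Wagner update. The sketch below is a generic illustration of that standard formulation, not code from the paper:

```python
def rw_update(value, reward, alpha=0.1):
    """One Rescorla-Wagner step: move the predicted value toward the
    received reward in proportion to the reward prediction error (RPE)."""
    rpe = reward - value          # positive if reward exceeds prediction
    return value + alpha * rpe, rpe

v, rpe = rw_update(0.0, reward=1.0)   # unexpected reward -> positive RPE
print(rpe)   # 1.0
v, rpe = rw_update(1.0, reward=1.0)   # fully predicted reward -> zero RPE
print(rpe)   # 0.0
```

A dopamine neuron signaling such an RPE would be activated in the first case, stay at baseline in the second, and be depressed if the reward were omitted (negative RPE).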

  4. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  5. Predictions of Ligand Selectivity from Absolute Binding Free Energy Calculations

    PubMed Central

    2016-01-01

    Binding selectivity is a requirement for the development of a safe drug, and it is a critical property for chemical probes used in preclinical target validation. Engineering selectivity adds considerable complexity to the rational design of new drugs, as it involves the optimization of multiple binding affinities. Computationally, the prediction of binding selectivity is a challenge, and generally applicable methodologies are still not available to the computational and medicinal chemistry communities. Absolute binding free energy calculations based on alchemical pathways provide a rigorous framework for affinity predictions and could thus offer a general approach to the problem. We evaluated the performance of free energy calculations based on molecular dynamics for the prediction of selectivity by estimating the affinity profile of three bromodomain inhibitors across multiple bromodomain families, and by comparing the results to isothermal titration calorimetry data. Two case studies were considered. In the first one, the affinities of two similar ligands for seven bromodomains were calculated and returned excellent agreement with experiment (mean unsigned error of 0.81 kcal/mol and Pearson correlation of 0.75). In this test case, we also show how the preferred binding orientation of a ligand for different proteins can be estimated via free energy calculations. In the second case, the affinities of a broad-spectrum inhibitor for 22 bromodomains were calculated and returned a more modest accuracy (mean unsigned error of 1.76 kcal/mol and Pearson correlation of 0.48); however, the reparametrization of a sulfonamide moiety improved the agreement with experiment. PMID:28009512

  6. Absolute Plate Velocities from Seismic Anisotropy: Importance of Correlated Errors

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Zheng, L.; Kreemer, C.

    2014-12-01

The orientation of seismic anisotropy inferred beneath the interiors of plates may provide a means to estimate the motions of the plate relative to the deeper mantle. Here we analyze a global set of shear-wave splitting data to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. The errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2°) differs insignificantly from that for continental lithosphere (σ=21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4°) than for continental lithosphere (σ=14.7°). Two of the slowest-moving plates, Antarctica (vRMS=4 mm a-1, σ=29°) and Eurasia (vRMS=3 mm a-1, σ=33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈5 mm a-1 to result in seismic anisotropy useful for estimating plate motion.

  7. Absolute plate velocities from seismic anisotropy: Importance of correlated errors

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Gordon, Richard G.; Kreemer, Corné

    2014-09-01

    The errors in plate motion azimuths inferred from shear wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma-1 (95% confidence limits) right handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ = 19.2°) differs insignificantly from that for continental lithosphere (σ = 21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ = 7.4°) than for continental lithosphere (σ = 14.7°). Two of the slowest-moving plates, Antarctica (vRMS = 4 mm a-1, σ = 29°) and Eurasia (vRMS = 3 mm a-1, σ = 33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈ 5 mm a-1 to result in seismic anisotropy useful for estimating plate motion. The tendency of observed azimuths on the Arabia plate to be counterclockwise of plate motion may provide information about the direction and amplitude of superposed asthenospheric flow or about anisotropy in the lithospheric mantle.

  8. Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles

    ERIC Educational Resources Information Center

    Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner

    2016-01-01

    This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…

  9. Auditory working memory predicts individual differences in absolute pitch learning.

    PubMed

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the

  10. Error analysis in newborn screening: can quotients support the absolute values?

    PubMed

    Arneth, Borros; Hintz, Martin

    2017-03-01

Newborn screening is performed using modern tandem mass spectrometry, which can simultaneously detect a variety of analytes, including several amino acids and fatty acids. Tandem mass spectrometry measures the diagnostic parameters as absolute concentrations and produces fragments that are used as markers of specific substances. Several prominent quotients, each the ratio of two absolutely measured concentrations, can also be derived. In this study, we determined the precision of both the absolute concentrations and the derived quotients. First, the measurement errors of the absolute concentrations and of the ratios were determined empirically. Then, Gaussian error propagation was applied. Finally, these errors were compared with one another. The practical analytical accuracies of the quotients were significantly better (e.g., coefficient of variation (CV) = 5.1% for the phenylalanine-to-tyrosine (Phe/Tyr) quotient and CV = 5.6% for the Fisher quotient) than those of the absolute measured concentrations (mean CV = 12%). According to our results, the ratios are analytically correct and, from an analytical point of view, can support the absolute values in finding the correct diagnosis.
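
The Gaussian propagation step for a quotient can be sketched as below. The correlation term is an assumption added here for illustration; it shows how a quotient's CV can fall below the CVs of the underlying absolute concentrations when their errors are correlated, as the study observed:

```python
from math import sqrt

def quotient_cv(cv_a, cv_b, corr=0.0):
    """First-order Gaussian propagation of the relative error (CV) of a
    quotient Q = A/B, with correlation `corr` between the errors of A and B."""
    return sqrt(cv_a**2 + cv_b**2 - 2 * corr * cv_a * cv_b)

# Independent errors (corr = 0): the quotient is *noisier* than either input.
print(quotient_cv(0.12, 0.12))             # ~0.17

# Strongly correlated errors largely cancel in the ratio, which is one way a
# quotient such as Phe/Tyr could reach CV ~ 5% from inputs with CV ~ 12%.
print(quotient_cv(0.12, 0.12, corr=0.91))  # ~0.051
```

The correlation value 0.91 is hypothetical, chosen only to reproduce the order of magnitude reported in the abstract.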

  11. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

Timing and prediction error learning have historically been treated as independent processes, but growing evidence indicates that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields.

  12. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  13. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…

  14. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  15. Position error correction in absolute surface measurement based on a multi-angle averaging method

    NASA Astrophysics Data System (ADS)

    Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin

    2017-04-01

We present a method for position error correction in absolute surface measurement based on multi-angle averaging. Differences in shear rotation measurements at overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solution of the estimation algorithm are discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions can be minimized to determine the true values of the unknown Zernike polynomial coefficients and rotation angle. Experimental results demonstrate the validity of the proposed method.

  16. Absolute Time Error Calibration of GPS Receivers Using Advanced GPS Simulators

    DTIC Science & Technology

    1997-12-01

29th Annual Precise Time and Time Interval (PTTI) Meeting. ABSOLUTE TIME ERROR CALIBRATION OF GPS RECEIVERS USING ADVANCED GPS SIMULATORS. E.D... DC 20375 USA. Abstract: Precise time transfer experiments using GPS with time stabilities under ten nanoseconds are commonly being reported within the time transfer community. Relative calibrations are done by measuring the time error of one GPS receiver versus a "known master reference receiver."

  17. Striatal prediction error modulates cortical coupling.

    PubMed

    den Ouden, Hanneke E M; Daunizeau, Jean; Roiser, Jonathan; Friston, Karl J; Stephan, Klaas E

    2010-03-03

    Both perceptual inference and motor responses are shaped by learned probabilities. For example, stimulus-induced responses in sensory cortices and preparatory activity in premotor cortex reflect how (un)expected a stimulus is. This is in accordance with predictive coding accounts of brain function, which posit a fundamental role of prediction errors for learning and adaptive behavior. We used functional magnetic resonance imaging and recent advances in computational modeling to investigate how (failures of) learned predictions about visual stimuli influence subsequent motor responses. Healthy volunteers discriminated visual stimuli that were differentially predicted by auditory cues. Critically, the predictive strengths of cues varied over time, requiring subjects to continuously update estimates of stimulus probabilities. This online inference, modeled using a hierarchical Bayesian learner, was reflected behaviorally: speed and accuracy of motor responses increased significantly with predictability of the stimuli. We used nonlinear dynamic causal modeling to demonstrate that striatal prediction errors are used to tune functional coupling in cortical networks during learning. Specifically, the degree of striatal trial-by-trial prediction error activity controls the efficacy of visuomotor connections and thus the influence of surprising stimuli on premotor activity. This finding substantially advances our understanding of striatal function and provides direct empirical evidence for formal learning theories that posit a central role for prediction error-dependent plasticity.

  18. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  19. Preliminary error budget for the reflected solar instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Astrophysics Data System (ADS)

    Thome, K.; Gubbels, T.; Barnes, R.

    2011-10-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI-traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables. The instrument suite includes emitted infrared spectrometers, global navigation receivers for radio occultation, and reflected solar (RS) spectrometers. The measurements will be acquired for a period of five years and will enable follow-on missions to extend the climate record over the decades needed to understand climate change. This work describes a preliminary error budget for the RS sensor. The RS sensor will retrieve at-sensor reflectance over the spectral range from 320 to 2300 nm with 500-m GIFOV and a 100-km swath width. The current design is based on an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 and 600-2300 nm. Reflectance is obtained from the ratio of measurements of radiance while viewing the earth's surface to measurements of irradiance while viewing the sun. The requirement for the RS instrument is that the reflectance must be traceable to SI standards at an absolute uncertainty <0.3%. The calibration approach to achieve the ambitious 0.3% absolute calibration uncertainty is predicated on a reliance on heritage hardware, reduction of sensor complexity, and adherence to detector-based calibration standards. The design above has been used to develop a preliminary error budget that meets the 0.3% absolute requirement. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and

  20. Improving the Prediction of Absolute Solvation Free Energies Using the Next Generation OPLS Force Field.

    PubMed

    Shivakumar, Devleena; Harder, Edward; Damm, Wolfgang; Friesner, Richard A; Sherman, Woody

    2012-08-14

Explicit solvent molecular dynamics free energy perturbation simulations were performed to predict absolute solvation free energies of 239 diverse small molecules. We use OPLS2.0, the next generation OPLS force field, and compare the results with popular small-molecule force fields: OPLS_2005, GAFF, and CHARMm-MSI. OPLS2.0 produces the best correlation with experimental data (R(2) = 0.95, slope = 0.96) and the lowest average unsigned errors (0.7 kcal/mol). Important classes of compounds that performed suboptimally with OPLS_2005 show significant improvements.
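
The statistics reported here (mean unsigned error and squared Pearson correlation) are simple to compute. The sketch below uses made-up predicted/experimental values, not the paper's data:

```python
import numpy as np

def mue(pred, expt):
    """Mean unsigned error, e.g. in kcal/mol for solvation free energies."""
    return np.mean(np.abs(np.asarray(pred) - np.asarray(expt)))

def pearson_r2(pred, expt):
    """Squared Pearson correlation between predicted and experimental values."""
    return np.corrcoef(pred, expt)[0, 1] ** 2

# Illustrative (fabricated) predicted vs. experimental free energies.
pred = [-3.1, -5.0, -1.2, -7.8]
expt = [-2.5, -5.6, -0.8, -8.4]
print(round(mue(pred, expt), 2))  # 0.55
```

An MUE of 0.7 kcal/mol with R² = 0.95 over 239 molecules, as reported for OPLS2.0, corresponds to these quantities computed over the full data set.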

  1. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
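
For reference, a minimal sketch of the commonly used positive-value formulations of these two metrics follows. These are the forms the abstract says break down for negative values; the generalized forms it proposes are not reproduced here:

```python
def nmbf(model, obs):
    """Normalized mean bias factor (original positive-value formulation).
    Symmetric: a factor-of-x overestimate and underestimate give +/- the
    same magnitude."""
    sm, so = sum(model), sum(obs)
    return sm / so - 1.0 if sm >= so else 1.0 - so / sm

def nmaef(model, obs):
    """Normalized mean absolute error factor (original formulation)."""
    sm, so = sum(model), sum(obs)
    sae = sum(abs(m - o) for m, o in zip(model, obs))
    return sae / so if sm >= so else sae / sm

obs   = [2.0, 4.0, 6.0]
model = [3.0, 5.0, 7.0]    # uniform overestimate
print(nmbf(model, obs))    # 0.25 -> model biased high
print(nmbf(obs, model))    # -0.25 -> symmetric interpretation when swapped
print(nmaef(model, obs))   # 0.25
```

Note how swapping model and observations flips only the sign, which is the symmetry property these metrics are designed to provide.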

  2. Formal Estimation of Errors in Computed Absolute Interaction Energies of Protein-ligand Complexes

    PubMed Central

    Faver, John C.; Benson, Mark L.; He, Xiao; Roberts, Benjamin P.; Wang, Bing; Marshall, Michael S.; Kennedy, Matthew R.; Sherrill, C. David; Merz, Kenneth M.

    2011-01-01

    A largely unsolved problem in computational biochemistry is the accurate prediction of binding affinities of small ligands to protein receptors. We present a detailed analysis of the systematic and random errors present in computational methods through the use of error probability density functions, specifically for computed interaction energies between chemical fragments comprising a protein-ligand complex. An HIV-II protease crystal structure with a bound ligand (indinavir) was chosen as a model protein-ligand complex. The complex was decomposed into twenty-one (21) interacting fragment pairs, which were studied using a number of computational methods. The chemically accurate complete basis set coupled cluster theory (CCSD(T)/CBS) interaction energies were used as reference values to generate our error estimates. In our analysis we observed significant systematic and random errors in most methods, which was surprising especially for parameterized classical and semiempirical quantum mechanical calculations. After propagating these fragment-based error estimates over the entire protein-ligand complex, our total error estimates for many methods are large compared to the experimentally determined free energy of binding. Thus, we conclude that statistical error analysis is a necessary addition to any scoring function attempting to produce reliable binding affinity predictions. PMID:21666841

  3. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently
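
The first proposed ensemble technique, inverse weighting by predicted absolute error, can be sketched generically. The forecast and error values below are illustrative, not PRIME output:

```python
def inverse_error_weights(pred_abs_errors):
    """Weights inversely proportional to each model's predicted absolute
    error, normalized to sum to 1 (higher predicted error -> lower weight)."""
    inv = [1.0 / e for e in pred_abs_errors]
    total = sum(inv)
    return [w / total for w in inv]

def weighted_ensemble(forecasts, pred_abs_errors):
    """Intensity forecast as an inverse-error-weighted mean of member models."""
    weights = inverse_error_weights(pred_abs_errors)
    return sum(w * f for w, f in zip(weights, forecasts))

# Hypothetical 48-h intensity forecasts (kt) with predicted absolute errors.
forecasts = [95.0, 105.0, 100.0]
errors    = [10.0, 20.0, 10.0]
print(weighted_ensemble(forecasts, errors))  # 99.0
```

The equal-weight ICON baseline mentioned in the abstract corresponds to replacing the weights with 1/N for N member models.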

  4. Working memory load strengthens reward prediction errors.

    PubMed

    Collins, Anne G E; Ciullo, Brittany; Frank, Michael J; Badre, David

    2017-03-20

Reinforcement learning in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors are used to update expected values of choice options. This modeling ignores the different contributions of different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we asked how working memory and incremental reinforcement learning processes interact to guide human learning. Working memory load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive working memory process together with slower reinforcement learning. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to reward prediction error, as shown previously, but critically, these signals were reduced when the learning problem was within capacity of working memory. The degree of this neural interaction related to individual differences in the use of working memory to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning. SIGNIFICANCE STATEMENT: Reinforcement learning theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning, and other mechanisms, such as prefrontal cortex working memory, also play a key role. Our results show in addition that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors.

  5. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  6. Preliminary Error Budget for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; Gubbels, Timothy; Barnes, Robert

    2011-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to those taken while viewing the sun, relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration, including a link to absolute solar irradiance measurements. The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those in the IPCC Report. A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables, including: 1) Surface temperature and atmospheric temperature profile 2) Atmospheric water vapor profile 3) Far infrared water vapor greenhouse 4) Aerosol properties and anthropogenic aerosol direct radiative forcing 5) Total and spectral solar

  7. Locomotor expertise predicts infants' perseverative errors.

    PubMed

    Berger, Sarah E

    2010-03-01

    This research examined the development of inhibition in a locomotor context. In a within-subjects design, infants received high- and low-demand locomotor A-not-B tasks. In Experiment 1, walking 13-month-old infants followed an indirect path to a goal. In a control condition, infants took a direct route. In Experiment 2, crawling and walking 13-month-old infants crawled through a tunnel to reach a goal at the other end and received the same control condition as in Experiment 1. In both experiments, perseverative errors occurred more often in the high-demand condition than in the low-demand condition. Moreover, in Experiment 2, walkers perseverated more than crawlers, and extent of perseveration was related to infants' locomotor experience. In Experiment 3, the authors addressed a possible confound in Experiment 2 between locomotor expertise and locomotor posture. Novice crawlers perseverated in the difficult tunnels condition, behaving more like novice walkers than expert crawlers. As predicted by a cognitive capacity account of infant perseveration, overtaxed attentional resources resulted in a cognition-action trade-off. Experts who found the task less motorically effortful than novices had more cognitive resources available for problem solving.

  8. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it remains to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  9. Prediction and simulation errors in parameter estimation for nonlinear systems

    NASA Astrophysics Data System (ADS)

    Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.

    2010-11-01

    This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results, obtained using different model classes, show that, in general, the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of error-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
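
    The two cost functions being compared can be stated compactly. In this sketch (the logistic-map model and all values are my own illustration, not the article's case studies), the one-step prediction error restarts each prediction from measured data, while the free-run simulation error iterates the model on its own output:

```python
import numpy as np

def f(y, a):
    return a * y * (1.0 - y)   # assumed model structure: the logistic map

def prediction_error_cost(a, y):
    """One-step-ahead prediction error: every prediction restarts from data."""
    e = y[1:] - f(y[:-1], a)
    return float(np.mean(e ** 2))

def simulation_error_cost(a, y):
    """Free-run simulation error: the model is iterated on its own output."""
    yhat = np.empty_like(y)
    yhat[0] = y[0]
    for k in range(1, len(y)):
        yhat[k] = f(yhat[k - 1], a)
    return float(np.mean((y - yhat) ** 2))

# Noise-free data generated by the true parameter: both costs vanish there,
# but they penalize a wrong parameter very differently.
a_true = 3.1                      # period-2 regime keeps the free run bounded
y = np.empty(60)
y[0] = 0.3
for k in range(1, 60):
    y[k] = f(y[k - 1], a_true)
```

    The simulation-error cost is generally non-convex in the parameters, which is why the article resorts to evolutionary algorithms rather than least-squares machinery.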

  10. Demonstrating the error budget for the climate absolute radiance and refractivity observatory through solar irradiance measurements (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Thome, Kurtis J.; McCorkel, Joel; Angal, Amit

    2016-09-01

    The goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to provide high-accuracy data for evaluation of long-term climate change trends. Essential to the CLARREO project is demonstration of SI-traceable, reflected measurements that are a factor of 10 more accurate than current state-of-the-art sensors. The CLARREO approach relies on accurate, monochromatic absolute radiance calibration in the laboratory transferred to orbit via solar irradiance knowledge. The current work describes the results of field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) that is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. Recent measurements of absolute spectral solar irradiance using SOLARIS are presented. The ground-based SOLARIS data are corrected to top-of-atmosphere values using AERONET data collected within 5 km of the SOLARIS operation. The SOLARIS data are converted to absolute irradiance using laboratory calibrations based on the Goddard Laser for Absolute Measurement of Radiance (GLAMR). Results are compared to accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  11. Curiosity and reward: Valence predicts choice and information prediction errors enhance learning.

    PubMed

    Marvin, Caroline B; Shohamy, Daphna

    2016-03-01

    Curiosity drives many of our daily pursuits and interactions; yet, we know surprisingly little about how it works. Here, we harness an idea implied in many conceptualizations of curiosity: that information has value in and of itself. Reframing curiosity as the motivation to obtain reward-where the reward is information-allows one to leverage major advances in theoretical and computational mechanisms of reward-motivated learning. We provide new evidence supporting 2 predictions that emerge from this framework. First, we find an asymmetric effect of positive versus negative information, with positive information enhancing both curiosity and long-term memory for information. Second, we find that it is not the absolute value of information that drives learning but, rather, the gap between the reward expected and reward received, an "information prediction error." These results support the idea that information functions as a reward, much like money or food, guiding choices and driving learning in systematic ways.

  12. Conditional Standard Error of Measurement in Prediction.

    ERIC Educational Resources Information Center

    Woodruff, David

    1990-01-01

    A method of estimating conditional standard error of measurement at specific score/ability levels is described that avoids theoretical problems identified for previous methods. The method focuses on variance of observed scores conditional on a fixed value of an observed parallel measurement, decomposing these variances into true and error parts.…

  13. Prediction with measurement errors in finite populations

    PubMed Central

    Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San

    2011-01-01

    We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors. PMID:22162621
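
    The shrinkage distinction the authors draw can be illustrated with the textbook BLUP of a subject's latent value, which shrinks the observed subject mean toward the population mean. Using subject-specific error variances (as in the usual mixed model BLUP) shrinks noisier subjects harder, whereas a pooled error variance (as in the finite-population predictor for subject-specific errors) gives every subject the same shrinkage constant. All numbers below are hypothetical:

```python
import numpy as np

def blup(y_bar, sigma_b2, sigma_e2, m, mu):
    """BLUP of each subject's latent value: shrink the subject mean toward mu.
    sigma_e2 may be a scalar (pooled measurement-error variance) or a
    per-subject array of individual error variances."""
    k = sigma_b2 / (sigma_b2 + np.asarray(sigma_e2, dtype=float) / m)
    return mu + k * (np.asarray(y_bar, dtype=float) - mu)

# Two subjects, m = 4 replicate measurements each; all numbers hypothetical.
mu = 100.0                            # population mean (e.g. fasting glucose)
y_bar = np.array([95.0, 110.0])       # observed subject means
sigma_b2 = 25.0                       # between-subject variance
indiv = np.array([4.0, 36.0])         # subject-specific error variances
pooled = indiv.mean()                 # single pooled error variance (20.0)

b_indiv = blup(y_bar, sigma_b2, indiv, 4, mu)   # usual mixed-model shrinkage
b_pool = blup(y_bar, sigma_b2, pooled, 4, mu)   # pooled-variance shrinkage
```

    The two predictors genuinely differ for every subject whose error variance departs from the pool average, which is the source of the interpretation difficulties the abstract flags.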

  14. Comprehensive fluence model for absolute portal dose image prediction.

    PubMed

    Chytyk, K; McCurdy, B M C

    2009-04-01

    Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based, parameterized fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue and groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator, including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectra softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1 x 1 to 20 x 20 cm2) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all

  15. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning system: prediction error theory stems from the finding of the blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competing theories of blocking. This study unambiguously demonstrates the validity of the prediction error theory in associative learning.
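
    Blocking, the phenomenon the prediction error theory grew out of, falls directly out of the classic Rescorla-Wagner delta rule: once cue A fully predicts reward, the prediction error on compound AB trials is near zero, so B acquires almost no strength. A toy simulation (learning rate and trial counts are illustrative, not from the cricket experiments):

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Delta-rule learning: every cue present on a trial is updated by
    alpha * (outcome - summed prediction). Parameters are illustrative."""
    v = {}
    for cues, rewarded in trials:
        v_total = sum(v.get(c, 0.0) for c in cues)
        delta = (lam if rewarded else 0.0) - v_total   # prediction error
        for c in cues:
            v[c] = v.get(c, 0.0) + alpha * delta
    return v

# Blocking group: A alone is trained first, then the AB compound.
v_block = rescorla_wagner([({'A'}, True)] * 30 + [({'A', 'B'}, True)] * 30)
# Control group: AB is trained from the start.
v_ctrl = rescorla_wagner([({'A', 'B'}, True)] * 30)
```

    In the blocking group B stays near zero because A already absorbs the prediction; in the control group A and B share the associative strength. Since attentional accounts can reproduce this same pattern with extra machinery, the "auto-blocking" manipulation above is the sharper discriminating test.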

  16. Dopamine neurons share common response function for reward prediction error

    PubMed Central

    Eshel, Neir; Tian, Ju; Bukwich, Michael; Uchida, Naoshige

    2016-01-01

    Dopamine neurons are thought to signal reward prediction error, or the difference between actual and predicted reward. How dopamine neurons jointly encode this information, however, remains unclear. One possibility is that different neurons specialize in different aspects of prediction error; another is that each neuron calculates prediction error in the same way. We recorded from optogenetically-identified dopamine neurons in the lateral ventral tegmental area (VTA) while mice performed classical conditioning tasks. Our tasks allowed us to determine the full prediction error functions of dopamine neurons and compare them to each other. We found striking homogeneity among individual dopamine neurons: their responses to both unexpected and expected rewards followed the same function, just scaled up or down. As a result, we could describe both individual and population responses using just two parameters. Such uniformity ensures robust information coding, allowing each dopamine neuron to contribute fully to the prediction error signal. PMID:26854803

  17. The Impact of Covariate Measurement Error on Risk Prediction

    PubMed Central

    Khudyakov, Polyna; Gorfine, Malka; Zucker, David; Spiegelman, Donna

    2015-01-01

    In the development of risk prediction models, predictors are often measured with error. In this paper, we investigate the impact of covariate measurement error on risk prediction. We compare the prediction performance using a costly variable measured without error, along with error-free covariates, to that of a model based on an inexpensive surrogate along with the error-free covariates. We consider continuous error-prone covariates with homoscedastic and heteroscedastic errors, and also a discrete misclassified covariate. Prediction performance is evaluated by the area under the receiver operating characteristic curve (AUC), the Brier score (BS), and the ratio of the observed to the expected number of events (calibration). In an extensive numerical study, we show that (i) the prediction model with the error-prone covariate is very well calibrated, even when it is mis-specified; (ii) using the error-prone covariate instead of the true covariate can reduce the AUC and increase the BS dramatically; (iii) adding an auxiliary variable, which is correlated with the error-prone covariate but conditionally independent of the outcome given all covariates in the true model, can improve the AUC and BS substantially. We conclude that reducing measurement error in covariates will improve the ensuing risk prediction, unless the association between the error-free and error-prone covariates is very high. Finally, we demonstrate how a validation study can be used to assess the effect of mismeasured covariates on risk prediction. These concepts are illustrated in a breast cancer risk prediction model developed in the Nurses’ Health Study. PMID:25865315
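
    Point (ii) above, that substituting an error-prone covariate can reduce the AUC, is easy to reproduce in a toy simulation with a single covariate; the slope, error variance, and sample size below are arbitrary choices, not the paper's settings:

```python
import numpy as np

def auc(score, label):
    """Rank-based AUC: probability that a random case outranks a random
    control (Mann-Whitney form; assumes continuous scores, so no ties)."""
    order = np.argsort(score)
    rank = np.empty(len(score))
    rank[order] = np.arange(1, len(score) + 1)
    pos = label == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (rank[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)                     # error-free covariate
w = x + rng.normal(scale=1.0, size=n)      # inexpensive error-prone surrogate
risk = 1.0 / (1.0 + np.exp(-2.0 * x))      # true risk model (slope of 2 assumed)
label = rng.binomial(1, risk)

auc_true = auc(x, label)   # discrimination with the costly covariate
auc_err = auc(w, label)    # discrimination with the surrogate
```

    The surrogate's AUC is attenuated because the measurement error dilutes the covariate-outcome association, exactly the mechanism the paper quantifies alongside calibration and the Brier score.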

  18. On the construction of the prediction error covariance matrix

    SciTech Connect

    Waseda, T; Jameson, L; Yaremchuk, M; Mitsudera, H

    2001-02-02

    Implementation of a full Kalman filtering scheme in a large OGCM is unrealistic without simplification, and one generally reduces the degrees of freedom of the system by prescribing the structure of the prediction error. However, reductions are often made without any objective measure of their appropriateness. In this report, we present results from an ongoing effort to best construct the prediction error, capturing the essential ingredients of the system error, which includes both a correlated (global) error and a relatively uncorrelated (local) error. The former will be captured by EOF modes of the model variance, whereas the latter can be detected by wavelet analysis.

  19. Prediction of Absolute Solvation Free Energies using Molecular Dynamics Free Energy Perturbation and the OPLS Force Field.

    PubMed

    Shivakumar, Devleena; Williams, Joshua; Wu, Yujie; Damm, Wolfgang; Shelley, John; Sherman, Woody

    2010-05-11

    The accurate prediction of protein-ligand binding free energies is a primary objective in computer-aided drug design. The solvation free energy of a small molecule provides a surrogate to the desolvation of the ligand in the thermodynamic process of protein-ligand binding. Here, we use explicit solvent molecular dynamics free energy perturbation to predict the absolute solvation free energies of a set of 239 small molecules, spanning diverse chemical functional groups commonly found in drugs and drug-like molecules. We also compare the performance of absolute solvation free energies obtained using the OPLS_2005 force field with two other commonly used small molecule force fields-general AMBER force field (GAFF) with AM1-BCC charges and CHARMm-MSI with CHelpG charges. Using the OPLS_2005 force field, we obtain high correlation with experimental solvation free energies (R(2) = 0.94) and low average unsigned errors for a majority of the functional groups compared to AM1-BCC/GAFF or CHelpG/CHARMm-MSI. However, OPLS_2005 has errors of over 1.3 kcal/mol for certain classes of polar compounds. We show that predictions on these compound classes can be improved by using a semiempirical charge assignment method with an implicit bond charge correction.

  20. Predicting Error Bars for QSAR Models

    SciTech Connect

    Schroeter, Timon; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Mueller, Klaus-Robert

    2007-09-18

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
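
    The error bars discussed here come directly from the Gaussian Process posterior: the predictive variance grows as a query point moves away from the training data. A self-contained numpy sketch (RBF kernel with fixed hyperparameters, unlike a production log D model, which would learn them from the data):

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=1.0, sigma_f=1.0, noise=0.1):
    """GP regression with an RBF kernel: returns the predictive mean and the
    predictive standard deviation (the 'error bar') at each test point."""
    def kern(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f**2 * np.exp(-0.5 * (d / length) ** 2)
    K = kern(x_train, x_train) + noise**2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = kern(x_train, x_test)
    mean = Ks.T @ alpha                      # posterior predictive mean
    v = np.linalg.solve(L, Ks)
    var = sigma_f**2 + noise**2 - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

x_train = np.linspace(-3.0, 3.0, 25)
y_train = np.sin(x_train)
mean, std = gp_predict(x_train, y_train, np.array([0.0, 10.0]))
# std[0] (inside the data) is small; std[1] (far outside) reverts to the prior.
```

    This distance-to-training-data behavior is also what the ensemble and distance-based error bars for the non-GP models approximate.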

  2. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire R.; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
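
    The weighting issue can be made concrete with a one-parameter linear model: the estimator that uses the full error covariance in its weight matrix is compared against the common shortcut of weighting by the diagonal only, with the true variance of each estimator computed analytically. The design and correlation value below are hypothetical, not from the denitrification model:

```python
import numpy as np

def param_variance(X, Sigma, use_correlations):
    """True variance of the weighted least-squares estimate of theta in
    y = X @ theta + e with Cov(e) = Sigma. When use_correlations is False,
    the weight matrix keeps only the diagonal of Sigma (the common shortcut
    of omitting observation error correlations)."""
    W = np.linalg.inv(Sigma if use_correlations else np.diag(np.diag(Sigma)))
    A = np.linalg.inv(X.T @ W @ X) @ X.T @ W   # estimator: theta_hat = A @ y
    return A @ Sigma @ A.T                     # exact covariance of theta_hat

# One parameter, two observations, correlated errors (rho = 0.8, hypothetical).
X = np.array([[1.0], [0.5]])
rho = 0.8
Sigma = np.array([[1.0, rho], [rho, 1.0]])

v_full = param_variance(X, Sigma, True)[0, 0]    # full covariance weighting
v_diag = param_variance(X, Sigma, False)[0, 0]   # correlations omitted
```

    Here the fully weighted estimator has variance 0.8 versus 1.312 for the diagonal shortcut. As the paper's analytical expression shows, the size and even the direction of the effect on the estimated variance depend on the error correlation and the ratio of scaled sensitivities.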

  3. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    SciTech Connect

    Gustafson, William I.; Yu, Shaocai

    2012-10-23

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations of these metrics are only valid for datasets with positive means. This paper presents a methodology to use and interpret the metrics with datasets that have negative means. The updated formulations give identical results compared to the original formulations for the case of positive means, so researchers are encouraged to use the updated formulations going forward without introducing ambiguity.
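
    For positive means, the piecewise definitions of these metrics collapse into a single algebraic form, NMBF = (M̄ − Ō)/min(|M̄|, |Ō|), which also suggests how a generalization to negative means can proceed; treat the negative-mean handling below as a sketch and consult the paper for its exact published formulation:

```python
import numpy as np

def nmbf(model, obs):
    """Normalized mean bias factor, written in a single algebraic form that
    reproduces the piecewise positive-mean definition. The negative-mean
    behavior here is a sketch of the paper's generalization."""
    m, o = float(np.mean(model)), float(np.mean(obs))
    return (m - o) / min(abs(m), abs(o))

def nmaef(model, obs):
    """Normalized mean absolute error factor with the same normalization."""
    m, o = float(np.mean(model)), float(np.mean(obs))
    err = np.mean(np.abs(np.asarray(model, float) - np.asarray(obs, float)))
    return float(err) / min(abs(m), abs(o))

# Symmetry: overestimating by a factor of 2 and underestimating by a factor
# of 2 give NMBF = +1 and -1 respectively, unlike an ordinary bias ratio.
```

    That plus/minus symmetry for equal multiplicative errors is what makes these metrics "unbiased symmetric" and convenient for quick model-observation comparisons.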

  4. Determination and Modeling of Error Densities in Ephemeris Prediction

    SciTech Connect

    Jones, J.P.; Beckerman, M.

    1999-02-07

    The authors determined error densities of ephemeris predictions for 14 LEO satellites. The empirical distributions are not inconsistent with the hypothesis of a Gaussian distribution. The growth rate of radial errors is most highly correlated with eccentricity (|r| = 0.63, α < 0.05). The growth rate of along-track errors is most highly correlated with the decay rate of the semimajor axis (|r| = 0.97, α < 0.01).

  5. Temporal prediction errors modulate cingulate-insular coupling.

    PubMed

    Limongi, Roberto; Sutherland, Steven C; Zhu, Jian; Young, Michael E; Habib, Reza

    2013-05-01

    Prediction error (i.e., the difference between the expected and the actual event's outcome) mediates adaptive behavior. Activity in the anterior mid-cingulate cortex (aMCC) and in the anterior insula (aINS) is associated with the commission of prediction errors under uncertainty. We propose a dynamic causal model of effective connectivity (i.e., neuronal coupling) between the aMCC, the aINS, and the striatum in which the task context drives activity in the aINS and temporal prediction errors modulate extrinsic cingulate-insular connections. With functional magnetic resonance imaging, we scanned 15 participants while they performed a temporal prediction task. They observed visual animations and predicted when a stationary ball began moving after being contacted by another moving ball. To induce uncertainty-driven prediction errors, we introduced spatial gaps and temporal delays between the balls. Classical and Bayesian fMRI analyses provided evidence that the aMCC-aINS system, along with the striatum, responds not only when humans predict whether a dynamic event occurs but also when it occurs. Our results reveal that the insula is the entry port of a three-region pathway involved in the processing of temporal predictions. Moreover, prediction errors, rather than attentional demands, task difficulty, or task duration, exert an influence on the aMCC-aINS system. Prediction errors debilitate the effect of the aMCC on the aINS. Finally, our computational model provides a way forward to characterize the physiological parallel of temporal prediction errors elicited in dynamic tasks.

  6. Error-related negativity reflects detection of negative reward prediction error.

    PubMed

    Yasuda, Asako; Sato, Atsushi; Miyawaki, Kaori; Kumano, Hiroaki; Kuboki, Tomifusa

    2004-11-15

    Error-related negativity (ERN) is a negative deflection in the event-related potential elicited in error trials. To examine the function of ERN, we performed an experiment in which two within-participants factors were manipulated: outcome uncertainty and content of feedback. The ERN was largest when participants expected correct feedback but received error feedback. There were significant positive correlations between the ERN amplitude and the rate of response switching in the subsequent trial, and between the ERN amplitude and the trait version score on negative affect scale. These results suggest that ERN reflects detection of a negative reward prediction error and promotes subsequent response switching, and that individuals with high negative affect are hypersensitive to a negative reward prediction error.

  7. Characterizing Complex Time Series from the Scaling of Prediction Error.

    NASA Astrophysics Data System (ADS)

    Hinrichs, Brant Eric

    This thesis concerns characterizing complex time series from the scaling of prediction error. We use the global modeling technique of radial basis function approximation to build models from a state-space reconstruction of a time series that otherwise appears complicated or random (i.e. aperiodic, irregular). Prediction error as a function of prediction horizon is obtained from the model using the direct method. The relationship between the underlying dynamics of the time series and the logarithmic scaling of prediction error as a function of prediction horizon is investigated. We use this relationship to characterize the dynamics of both a model chaotic system and physical data from the optic tectum of an attentive pigeon exhibiting the important phenomenon of nonstationary neuronal oscillations in response to visual stimuli.

  8. Prediction of absolute infrared intensities for the fundamental vibrations of H2O2

    NASA Technical Reports Server (NTRS)

    Rogers, J. D.; Hillman, J. J.

    1981-01-01

    Absolute infrared intensities are predicted for the vibrational bands of gas-phase H2O2 by the use of a hydrogen atomic polar tensor transferred from the hydroxyl hydrogen atom of CH3OH. These predicted intensities are compared with intensities predicted by the use of a hydrogen atomic polar tensor transferred from H2O. The predicted relative intensities agree well with published spectra of gas-phase H2O2, and the predicted absolute intensities are expected to be accurate to within at least a factor of two. Among the vibrational degrees of freedom, the antisymmetric O-H bending mode nu(6) is found to be the strongest with a calculated intensity of 60.5 km/mole. The torsional band, a consequence of hindered rotation, is found to be the most intense fundamental with a predicted intensity of 120 km/mole. These results are compared with the recent absolute intensity determinations for the nu(6) band.

  9. Prediction error, ketamine and psychosis: An updated model

    PubMed Central

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-01-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms – which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. PMID:27226342

  10. Efficient Reduction and Analysis of Model Predictive Error

    NASA Astrophysics Data System (ADS)

    Doherty, J.

    2006-12-01

    Most groundwater models are calibrated against historical measurements of head and other system states before being used to make predictions in a real-world context. Through the calibration process, parameter values are estimated or refined such that the model is able to reproduce historical behaviour of the system at pertinent observation points reasonably well. Predictions made by the model are deemed to have greater integrity because of this. Unfortunately, predictive integrity is not as easy to achieve as many groundwater practitioners would like to think. The level of parameterisation detail estimable through the calibration process (especially where estimation takes place on the basis of heads alone) is strictly limited, even where full use is made of modern mathematical regularisation techniques such as those encapsulated in the PEST calibration package. (Use of these mechanisms allows more information to be extracted from a calibration dataset than is possible using simpler regularisation devices such as zones of piecewise constancy.) Where a prediction depends on aspects of parameterisation detail that are simply not inferable through the calibration process (which is often the case for predictions related to contaminant movement, and/or many aspects of groundwater/surface water interaction), then that prediction may be just as much in error as it would have been if the model had not been calibrated at all. Model predictive error arises from two sources. These are (a) the presence of measurement noise within the calibration dataset through which linear combinations of parameters spanning the "calibration solution space" are inferred, and (b) the sensitivity of the prediction to members of the "calibration null space" spanned by linear combinations of parameters which are not inferable through the calibration process. The magnitude of the former contribution depends on the level of measurement noise. The magnitude of the latter contribution (which often

  11. Differential neural mechanisms for early and late prediction error detection

    PubMed Central

    Malekshahi, Rahim; Seth, Anil; Papanikolaou, Amalia; Mathews, Zenon; Birbaumer, Niels; Verschure, Paul F. M. J.; Caria, Andrea

    2016-01-01

Emerging evidence indicates that prediction, instantiated at different perceptual levels, facilitates visual processing and enables prompt and appropriate reactions. However, the mechanisms underlying the effect of predictive coding at different stages of visual processing remain unclear. Here, we aimed to investigate early and late processing of spatial prediction violation by performing combined recordings of saccadic eye movements and fast event-related fMRI during a continuous visual detection task. Psychophysical reverse correlation analysis revealed that the degree of mismatch between current perceptual input and prior expectations is mainly processed at a late rather than an early stage, the latter being responsible instead for fast but general prediction error detection. Furthermore, our results suggest that conscious late detection of deviant stimuli is elicited by assessment of the prediction error’s extent more than by the prediction error per se. Functional MRI and functional connectivity analyses indicated that interactions between higher-level brain systems modulate conscious detection of prediction error through top-down processes for the analysis of its representational content, and possibly regulate subsequent adaptation of predictive models. Overall, our experimental paradigm allowed us to dissect explicit from implicit behavioral and neural responses to deviant stimuli in terms of their reliance on predictive models. PMID:27079423

  12. Principal components analysis of reward prediction errors in a reinforcement learning task.

    PubMed

    Sambrook, Thomas D; Goslin, Jeremy

    2016-01-01

Models of reinforcement learning represent reward and punishment in terms of reward prediction errors (RPEs), quantitative signed terms describing the degree to which outcomes are better than expected (positive RPEs) or worse (negative RPEs). An electrophysiological component known as feedback-related negativity (FRN) occurs at frontocentral sites 240-340 ms after feedback on whether a reward or punishment is obtained, and has been claimed to neurally encode an RPE. An outstanding question, however, is whether the FRN is sensitive to the size of both positive RPEs and negative RPEs. Previous attempts to answer this question have examined the simple effects of RPE size for positive RPEs and negative RPEs separately. However, this methodology can be compromised by overlap from components coding for unsigned prediction error size, or "salience", which are sensitive to the absolute size of a prediction error but not its valence. In our study, positive and negative RPEs were parametrically modulated using both reward likelihood and magnitude, with principal components analysis used to separate out overlying components. This revealed a single RPE-encoding component responsive to the size of positive RPEs, peaking at ~330 ms and occupying the delta frequency band. Other components responsive to unsigned prediction error size were shown, but no component sensitive to negative RPE size was found.
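
The distinction between signed RPEs and unsigned "salience" signals that this abstract turns on can be made concrete with a toy calculation (purely illustrative; the function names are our own):

```python
# Signed vs unsigned reward prediction errors (RPEs).
# expected: value predicted before feedback; outcome: reward received.

def signed_rpe(expected, outcome):
    return outcome - expected          # positive = better than expected

def unsigned_rpe(expected, outcome):
    return abs(outcome - expected)     # "salience": size without valence

# A large positive and a large negative RPE carry the same salience,
# which is why overlapping components must be separated (e.g. by PCA).
print(signed_rpe(0.5, 1.0), unsigned_rpe(0.5, 1.0))   # 0.5 0.5
print(signed_rpe(0.5, 0.0), unsigned_rpe(0.5, 0.0))   # -0.5 0.5
```

Because the unsigned signal is identical for the two outcomes while the signed signal flips, any component analysis that ignores this overlap can mistake salience coding for RPE coding.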

  13. Arithmetic and local circuitry underlying dopamine prediction errors

    PubMed Central

    Eshel, Neir; Bukwich, Michael; Rao, Vinod; Hemmelder, Vivian; Tian, Ju; Uchida, Naoshige

    2015-01-01

Dopamine neurons are thought to facilitate learning by comparing actual and expected reward [1,2]. Despite two decades of investigation, little is known about how this comparison is made. To determine how dopamine neurons calculate prediction error, we combined optogenetic manipulations with extracellular recordings in the ventral tegmental area (VTA) while mice engaged in classical conditioning. By manipulating the temporal expectation of reward, we demonstrate that dopamine neurons perform subtraction, a computation that is ideal for reinforcement learning but rarely observed in the brain. Furthermore, selectively exciting and inhibiting neighbouring GABA neurons in the VTA reveals that these neurons are a source of subtraction: they inhibit dopamine neurons when reward is expected, causally contributing to prediction error calculations. Finally, bilaterally stimulating VTA GABA neurons dramatically reduces anticipatory licking to conditioned odours, consistent with an important role for these neurons in reinforcement learning. Together, our results uncover the arithmetic and local circuitry underlying dopamine prediction errors. PMID:26322583

  14. Error prediction for probes guided by means of fixtures

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, J. Michael

    2012-02-01

    Probe guides are surgical fixtures that are rigidly attached to bone anchors in order to place a probe at a target with high accuracy (RMS error < 1 mm). Applications include needle biopsy, the placement of electrodes for deep-brain stimulation (DBS), spine surgery, and cochlear implant surgery. Targeting is based on pre-operative images, but targeting errors can arise from three sources: (1) anchor localization error, (2) guide fabrication error, and (3) external forces and torques. A well-established theory exists for the statistical prediction of target registration error (TRE) when targeting is accomplished by means of tracked probes, but no such TRE theory is available for fixtured probe guides. This paper provides that theory and shows that all three error sources can be accommodated in a remarkably simple extension of existing theory. Both the guide and the bone with attached anchors are modeled as objects with rigid sections and elastic sections, the latter of which are described by stiffness matrices. By relating minimization of elastic energy for guide attachment to minimization of fiducial registration error for point registration, it is shown that the expression for targeting error for the guide is identical to that for weighted rigid point registration if the weighting matrices are properly derived from stiffness matrices and the covariance matrices for fiducial localization are augmented with offsets in the anchor positions. An example of the application of the theory is provided for ear surgery.

  15. Recovery of absolute phases for the fringe patterns of three selected wavelengths with improved anti-error capability

    NASA Astrophysics Data System (ADS)

    Long, Jiale; Xi, Jiangtao; Zhang, Jianmin; Zhu, Ming; Cheng, Wenqing; Li, Zhongwei; Shi, Yusheng

    2016-09-01

In a recently published work, we proposed a technique to recover the absolute phase maps of fringe patterns using two selected fringe wavelengths. To achieve higher anti-error capability, that method requires fringe patterns with longer wavelengths; however, longer wavelengths may degrade the signal-to-noise ratio (SNR) of the surface measurement. In this paper, we propose a new approach to unwrap the phase maps from their wrapped versions based on fringes with three different wavelengths, which is characterized by improved anti-error capability and SNR. While the previous method works on two phase maps obtained from six-step phase-shifting profilometry (PSP) (thus requiring 12 fringe patterns), the proposed technique performs very well on three phase maps from three-step PSP, requiring only nine fringe patterns and hence being more efficient. Moreover, the advantages of the two-wavelength method, namely simple implementation and flexibility in the use of fringe patterns, are also preserved. Theoretical analysis and experimental results confirm the effectiveness of the proposed method.
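
The trade-off between anti-error capability and SNR hinges on the effective (beat) wavelength formed by pairs of fringe wavelengths. The relation below is the standard textbook one, not necessarily the exact formulation of this paper, and the wavelength values are hypothetical:

```python
# Equivalent ("beat") wavelength formed by a pair of fringe wavelengths:
# the closer the two wavelengths, the longer the beat wavelength, and
# hence the larger the unambiguous range for phase unwrapping.

def beat_wavelength(w1, w2):
    return w1 * w2 / abs(w1 - w2)

# Hypothetical fringe wavelengths, in pixels.
w1, w2, w3 = 28.0, 30.0, 32.0
print(beat_wavelength(w1, w2))  # 420.0
print(beat_wavelength(w1, w3))  # 224.0
```

Three wavelengths give several such pairs, so long unambiguous ranges can be obtained without using very long (low-SNR) physical fringe wavelengths directly.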

  16. Disrupted prediction errors index social deficits in autism spectrum disorder.

    PubMed

    Balsters, Joshua H; Apps, Matthew A J; Bolis, Dimitris; Lehner, Rea; Gallagher, Louise; Wenderoth, Nicole

    2017-01-01

Social deficits are a core symptom of autism spectrum disorder; however, the perturbed neural mechanisms underpinning these deficits remain unclear. It has been suggested that social prediction errors (coding discrepancies between the predicted and actual outcome of another's decisions) might play a crucial role in processing social information. While the gyral surface of the anterior cingulate cortex signalled social prediction errors in typically developing individuals, this crucial social signal was altered in individuals with autism spectrum disorder. Importantly, the degree to which social prediction error signalling was aberrant correlated with diagnostic measures of social deficits. Effective connectivity analyses further revealed that, in typically developing individuals but not in autism spectrum disorder, the magnitude of social prediction errors was driven by input from the ventromedial prefrontal cortex. These data provide a novel insight into the neural substrates underlying autism spectrum disorder social symptom severity, and further research into the gyral surface of the anterior cingulate cortex and ventromedial prefrontal cortex could provide more targeted therapies to help ameliorate social deficits in autism spectrum disorder.

  17. How Prediction Errors Shape Perception, Attention, and Motivation

    PubMed Central

    den Ouden, Hanneke E. M.; Kok, Peter; de Lange, Floris P.

    2012-01-01

    Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while encoding of PE is a common computation across brain regions, the content and function of these error signals can be very different and are determined by the afferent and efferent connections within the neural circuitry in which they arise. PMID:23248610

  18. Disrupted prediction errors index social deficits in autism spectrum disorder

    PubMed Central

    Apps, Matthew A. J.; Bolis, Dimitris; Lehner, Rea; Gallagher, Louise; Wenderoth, Nicole

    2017-01-01

Social deficits are a core symptom of autism spectrum disorder; however, the perturbed neural mechanisms underpinning these deficits remain unclear. It has been suggested that social prediction errors—coding discrepancies between the predicted and actual outcome of another’s decisions—might play a crucial role in processing social information. While the gyral surface of the anterior cingulate cortex signalled social prediction errors in typically developing individuals, this crucial social signal was altered in individuals with autism spectrum disorder. Importantly, the degree to which social prediction error signalling was aberrant correlated with diagnostic measures of social deficits. Effective connectivity analyses further revealed that, in typically developing individuals but not in autism spectrum disorder, the magnitude of social prediction errors was driven by input from the ventromedial prefrontal cortex. These data provide a novel insight into the neural substrates underlying autism spectrum disorder social symptom severity, and further research into the gyral surface of the anterior cingulate cortex and ventromedial prefrontal cortex could provide more targeted therapies to help ameliorate social deficits in autism spectrum disorder. PMID:28031223

  19. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast depend crucially on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto the EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area-factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Centers for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme renders reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlations between the observed and predicted fields are positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.

  20. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance.

    PubMed

    Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin

    2009-02-09

Calculation of the exact prediction error variance-covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values, and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples that is computationally feasible is limited. The objective of this study was to compare the convergence rates of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The formulations differed in convergence rate, which was shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive; these made use of information on the variance of the estimated breeding value, on the variance of the difference between the true and estimated breeding values, or on the covariance between the true and estimated breeding values.
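
The idea that several algebraically equivalent formulations of the prediction error variance (PEV) can be estimated by sampling is easy to demonstrate in a one-variable shrinkage model. This is a deliberately simplified stand-in for the mixed-model setting of the paper, with invented parameter values:

```python
import numpy as np

# Monte Carlo estimation of prediction error variance (PEV) in a toy
# shrinkage model: the true value is u ~ N(0, vu), the record is
# y = u + e with e ~ N(0, ve), and the BLUP-like estimate is
# u_hat = b * y with b = vu / (vu + ve). Three formulations of
# PEV = Var(u - u_hat) are estimated from the same samples; all
# converge to the analytical value vu * (1 - b).
rng = np.random.default_rng(0)
vu, ve, n = 1.0, 1.0, 200_000
b = vu / (vu + ve)

u = rng.normal(0.0, np.sqrt(vu), n)
y = u + rng.normal(0.0, np.sqrt(ve), n)
u_hat = b * y

pev_direct = np.var(u - u_hat)            # Var(u - u_hat)
pev_decomp = vu - np.var(u_hat)           # Var(u) - Var(u_hat)
pev_cov    = vu - np.cov(u, u_hat)[0, 1]  # Var(u) - Cov(u, u_hat)
exact      = vu * (1.0 - b)               # analytical PEV = 0.5

print(pev_direct, pev_decomp, pev_cov, exact)
```

With a finite number of samples the three estimators differ slightly, which is precisely the convergence-rate question the study addresses.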

  1. Dopamine neurons encode errors in predicting movement trigger occurrence

    PubMed Central

    Pasquereau, Benjamin

    2014-01-01

    The capacity to anticipate the timing of events in a dynamic environment allows us to optimize the processes necessary for perceiving, attending to, and responding to them. Such anticipation requires neuronal mechanisms that track the passage of time and use this representation, combined with prior experience, to estimate the likelihood that an event will occur (i.e., the event's “hazard rate”). Although hazard-like ramps in activity have been observed in several cortical areas in preparation for movement, it remains unclear how such time-dependent probabilities are estimated to optimize response performance. We studied the spiking activity of dopamine neurons in the substantia nigra pars compacta of monkeys during an arm-reaching task for which the foreperiod preceding the “go” signal varied randomly along a uniform distribution. After extended training, the monkeys' reaction times correlated inversely with foreperiod duration, reflecting a progressive anticipation of the go signal according to its hazard rate. Many dopamine neurons modulated their firing rates as predicted by a succession of hazard-related prediction errors. First, as time passed during the foreperiod, slowly decreasing anticipatory activity tracked the elapsed time as if encoding negative prediction errors. Then, when the go signal appeared, a phasic response encoded the temporal unpredictability of the event, consistent with a positive prediction error. Neither the anticipatory nor the phasic signals were affected by the anticipated magnitudes of future reward or effort, or by parameters of the subsequent movement. These results are consistent with the notion that dopamine neurons encode hazard-related prediction errors independently of other information. PMID:25411459
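
The hazard rate the monkeys appear to track can be computed directly from its standard definition h(t) = f(t) / (1 - F(t)); for a foreperiod drawn uniformly from [a, b] it rises as the deadline approaches. This is a generic illustration of the definition, not code from the study:

```python
# Hazard rate h(t) = f(t) / (1 - F(t)) for a uniform foreperiod on [a, b]:
# the conditional probability density that the "go" signal occurs now,
# given that it has not yet occurred. It rises as t approaches b, which
# matches the monkeys' progressively shorter reaction times.

def uniform_hazard(t, a, b):
    if not a <= t < b:
        raise ValueError("t must lie in [a, b)")
    pdf = 1.0 / (b - a)
    survival = (b - t) / (b - a)   # 1 - F(t)
    return pdf / survival          # simplifies to 1 / (b - t)

# Hypothetical foreperiod window of 1-4 s:
for t in (1.0, 2.0, 3.0):
    print(round(uniform_hazard(t, 1.0, 4.0), 3))   # 0.333, 0.5, 1.0
```

The slowly decaying anticipatory activity and the phasic response at the go signal described in the abstract both track functions of this quantity.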

  2. Damage Assessment Using Hyperchaotic Excitation and Nonlinear Prediction Error

    DTIC Science & Technology

    2011-09-01

Torkamani, Shahab; Butcher, Eric A.; Park, Gyuhae

  3. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    PubMed Central

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs, and greater in AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and appears to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  4. The Representation of Prediction Error in Auditory Cortex

    PubMed Central

    Rubin, Jonathan; Ulanovsky, Nachum; Tishby, Naftali

    2016-01-01

    To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only few details of that sequence. PMID:27490251

  5. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing to be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  6. Heavy-tailed prediction error: a difficulty in predicting biomedical signals of 1/f noise type.

    PubMed

    Li, Ming; Zhao, Wei; Chen, Biao

    2012-01-01

A fractal signal x(t) in biomedical engineering may be characterized by 1/f noise, that is, the power spectral density (PSD) diverges at f = 0. According to Taqqu's law, 1/f noise has the properties of long-range dependence and a heavy-tailed probability density function (PDF). The contribution of this paper is to show that the prediction error of a biomedical signal of 1/f noise type is itself long-range dependent (LRD), and thus heavy-tailed and of 1/f noise type. Consequently, the variance of the prediction error is usually large, or may not exist, making prediction of biomedical signals of 1/f noise type difficult.
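
The defining 1/f property is straightforward to reproduce numerically: shape a random-phase spectrum so that PSD(f) ~ 1/f and check the slope of the periodogram on log-log axes. This is an illustrative construction, not the analysis of the paper:

```python
import numpy as np

# Synthesize 1/f noise by giving a random-phase half-spectrum an
# amplitude ~ f^(-1/2), so the power spectral density goes as 1/f,
# then verify the spectral slope by log-log regression.
rng = np.random.default_rng(1)
n = 4096
k = np.arange(1, n // 2 + 1)              # positive-frequency bins
phase = rng.uniform(0, 2 * np.pi, k.size)
phase[-1] = 0.0                           # Nyquist bin must be real

spec = np.zeros(n // 2 + 1, dtype=complex)
spec[1:] = k ** -0.5 * np.exp(1j * phase)  # amplitude ~ f^(-1/2)
x = np.fft.irfft(spec, n)                  # real-valued 1/f-noise series

# Periodogram of the synthesized series and its log-log slope.
psd = np.abs(np.fft.rfft(x)[1:]) ** 2
slope = np.polyfit(np.log(k), np.log(psd), 1)[0]
print(round(slope, 3))    # close to -1: the 1/f signature
```

Because the power is concentrated at low frequencies, the residual of any short-memory one-step predictor applied to such a series retains substantial low-frequency power, which is the difficulty the abstract formalizes.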

  7. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error; MSEP fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP uncertain( X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP uncertain (X) can be estimated using a random effects ANOVA. It is argued that MSEP uncertain (X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
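
The contrast between the two criteria can be sketched with a toy simulation: a fixed point estimate of the parameters gives MSEP fixed, while averaging squared error over parameter draws gives MSEP uncertain(X), which separates into a squared-bias term and a model-variance term. The linear "crop model", parameter values, and variable names below are invented for illustration:

```python
import numpy as np

# MSEP_fixed: mean squared error of prediction for one fixed model
# configuration. MSEP_uncertain(X): the same error averaged over the
# distribution of uncertain parameters; it decomposes exactly into a
# squared-bias term plus a model-variance term.
rng = np.random.default_rng(2)

def crop_model(x, theta):
    return theta * x                 # toy "crop model": yield = theta * input

x = np.linspace(1.0, 5.0, 50)
truth = 1.2 * x                      # unknown true response
obs = truth + rng.normal(0, 0.1, x.size)

# Fixed structure and parameters: a single point estimate of theta.
theta_hat = 1.1
msep_fixed = np.mean((obs - crop_model(x, theta_hat)) ** 2)

# Parameter uncertainty: average squared error over draws of theta.
thetas = rng.normal(theta_hat, 0.05, 1000)
preds = np.array([crop_model(x, t) for t in thetas])
msep_uncertain = np.mean((obs[None, :] - preds) ** 2)

# Decomposition: squared bias of the mean prediction + model variance.
bias_sq = np.mean((obs - preds.mean(axis=0)) ** 2)
model_var = np.mean(preds.var(axis=0))
print(msep_fixed, msep_uncertain, bias_sq + model_var)
```

The decomposition identity holds exactly in-sample, mirroring the paper's point that the bias term can be estimated from hindcasts and the variance term from a simulation experiment.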

  8. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects.
The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  9. QUANTIFIERS UNDONE: REVERSING PREDICTABLE SPEECH ERRORS IN COMPREHENSION

    PubMed Central

    Frazier, Lyn; Clifton, Charles

    2015-01-01

    Speakers predictably make errors during spontaneous speech. Listeners may identify such errors and repair the input, or their analysis of the input, accordingly. Two written questionnaire studies investigated error compensation mechanisms in sentences with doubled quantifiers such as Many students often turn in their assignments late. Results show a considerable number of undoubled interpretations for all items tested (though fewer for sentences containing doubled negation than for sentences containing many-often, every-always or few-seldom.) This evidence shows that the compositional form-meaning pairing supplied by the grammar is not the only systematic mapping between form and meaning. Implicit knowledge of the workings of the performance systems provides an additional mechanism for pairing sentence form and meaning. Alternate accounts of the data based on either a concord interpretation or an emphatic interpretation of the doubled quantifier don’t explain why listeners fail to apprehend the ‘extra meaning’ added by the potentially redundant material only in limited circumstances. PMID:26478637

  10. Prediction of Absolute Hydroxyl pKa Values for 3-Hydroxypyridin-4-ones.

    PubMed

    Chen, Yu-Lin; Doltsinis, Nikos L; Hider, Robert C; Barlow, Dave J

    2012-10-18

    pKa values have been calculated for a series of 3-hydroxypyridin-4-one (HPO) chelators in aqueous solution using coordination constrained ab initio molecular dynamics (AIMD) in combination with thermodynamic integration. This dynamics-based methodology in which the solvent is treated explicitly at the ab initio level has been compared with more commonly used simple, static, approaches. Comparison with experimental numbers has confirmed that the AIMD-based approach predicts the correct trend in the pKa values and produces the lowest average error (∼0.3 pKa units). The corresponding pKa predictions made via static quantum mechanical calculations overestimate the pKa values by 0.3-7 pKa units, with the extent of error dependent on the choice of thermodynamic cycle employed. The use of simple quantitative structure property relationship methods gives prediction errors of 0.3-1 pKa units, with some values overestimated and some underestimated. Beyond merely calculating pKa values, the AIMD simulations provide valuable additional insight into the atomistic details of the proton transfer mechanism and the solvation structure and dynamics at all stages of the reaction. For all HPOs studied, it is seen that proton transfer takes place along a chain of three H2O molecules, although direct hydrogen bonds are seen to form transiently. Analysis of the solvation structure before and after the proton transfer event using radial pair distribution functions and integrated number densities suggests that the trends in the pKa values correlate with the strength of the hydrogen bond and the average number of solvent molecules in the vicinity of the donor oxygen.
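
The link between a computed deprotonation free energy (for example, from thermodynamic integration) and a pKa is the standard relation pKa = ΔG / (RT ln 10). The conversion below is generic, not the paper's AIMD workflow, and the input free energy is hypothetical:

```python
import math

# Convert a deprotonation free energy (kcal/mol) to a pKa value via
# pKa = dG / (R * T * ln(10)), at aqueous standard conditions.
R = 1.987204e-3        # gas constant, kcal/(mol*K)
T = 298.15             # temperature, K

def pka_from_dg(dg_kcal_per_mol):
    return dg_kcal_per_mol / (R * T * math.log(10))

# Hypothetical free energy for a 3-hydroxypyridin-4-one hydroxyl group:
print(round(pka_from_dg(13.0), 2))
```

Since RT ln 10 is about 1.36 kcal/mol at 298 K, an error of that size in ΔG shifts the predicted pKa by a full unit, which is why the ~0.3 pKa-unit accuracy reported for the AIMD approach is demanding.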

  11. Stress increases aversive prediction error signal in the ventral striatum.

    PubMed

    Robinson, Oliver J; Overstreet, Cassie; Charney, Danielle R; Vytal, Katherine; Grillon, Christian

    2013-03-05

    From job interviews to the heat of battle, it is evident that people think and learn differently when stressed. In fact, learning under stress may have long-term consequences; stress facilitates aversive conditioning and associations learned during extreme stress may result in debilitating emotional responses in posttraumatic stress disorder. The mechanisms underpinning such stress-related associations, however, are unknown. Computational neuroscience has successfully characterized several mechanisms critical for associative learning under normative conditions. One such mechanism, the detection of a mismatch between expected and observed outcomes within the ventral striatum (i.e., "prediction errors"), is thought to be a critical precursor to the formation of new stimulus-outcome associations. An untested possibility, therefore, is that stress may affect learning via modulation of this mechanism. Here we combine a translational model of stress with a cognitive neuroimaging paradigm to demonstrate that stress significantly increases ventral striatum aversive (but not appetitive) prediction error signal. This provides a unique account of the propensity to form threat-related associations under stress with direct implications for our understanding of both normal stress and stress-related disorders.
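
    The mismatch between expected and observed outcomes described above is the classic delta-rule prediction error from computational accounts of associative learning. A minimal sketch (the learning rate and reward sequence are hypothetical, illustrative values, not the authors' imaging model):

```python
# Delta-rule prediction error: delta = observed reward - expected value.
# alpha (learning rate) and the reward sequence are illustrative only.
def delta_rule(rewards, alpha=0.5, v0=0.0):
    """Return per-trial prediction errors and the final expected value."""
    v = v0
    pes = []
    for r in rewards:
        pe = r - v          # prediction error: outcome minus expectation
        v += alpha * pe     # expectation moves toward the outcome
        pes.append(pe)
    return pes, v

pes, v = delta_rule([1.0, 1.0, 0.0])
# PEs shrink as the expectation catches up, then flip sign on omission:
# 1.0, 0.5, -0.75
```

    An aversive version simply applies the same update to punishment outcomes; the abstract's claim is that stress amplifies the striatal correlate of the aversive delta, not that the update rule itself changes.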

  12. Chasing probabilities - Signaling negative and positive prediction errors across domains.

    PubMed

    Meder, David; Madsen, Kristoffer H; Hulme, Oliver; Siebner, Hartwig R

    2016-07-01

    Adaptive actions build on internal probabilistic models of possible outcomes that are tuned according to the errors of their predictions when experiencing an actual outcome. Prediction errors (PEs) inform choice behavior across a diversity of outcome domains and dimensions, yet neuroimaging studies have so far only investigated such signals in singular experimental contexts. It is thus unclear whether the neuroanatomical distribution of PE encoding reported previously pertains to computational features that are invariant with respect to outcome valence, sensory domain, or some combination of the two. We acquired functional MRI data while volunteers performed four probabilistic reversal learning tasks which differed in terms of outcome valence (reward-seeking versus punishment-avoidance) and domain (abstract symbols versus facial expressions) of outcomes. We found that ventral striatum and frontopolar cortex coded increasingly positive PEs, whereas dorsal anterior cingulate cortex (dACC) traced increasingly negative PEs, irrespective of the outcome dimension. Individual reversal behavior was unaffected by context manipulations and was predicted by activity in dACC and right inferior frontal gyrus (IFG). The stronger the response to negative PEs in these areas, the lower was the tendency to reverse choice behavior in response to negative events, suggesting that these regions enforce a rule-based strategy across outcome dimensions. Outcome valence influenced PE-related activity in left amygdala, IFG, and dorsomedial prefrontal cortex, where activity selectively scaled with increasingly positive PEs in the reward-seeking but not punishment-avoidance context, irrespective of sensory domain. Left amygdala displayed an additional influence of sensory domain. In the context of avoiding punishment, amygdala activity increased with increasingly negative PEs, but only for facial stimuli, indicating an integration of outcome valence and sensory domain during probabilistic

  13. Error correction, sensory prediction, and adaptation in motor control.

    PubMed

    Shadmehr, Reza; Smith, Maurice A; Krakauer, John W

    2010-01-01

    Motor control is the study of how organisms make accurate goal-directed movements. Here we consider two problems that the motor system must solve in order to achieve such control. The first problem is that sensory feedback is noisy and delayed, which can make movements inaccurate and unstable. The second problem is that the relationship between a motor command and the movement it produces is variable, as the body and the environment can both change. A solution is to build adaptive internal models of the body and the world. The predictions of these internal models, called forward models because they transform motor commands into sensory consequences, can be used to both produce a lifetime of calibrated movements, and to improve the ability of the sensory system to estimate the state of the body and the world around it. Forward models are only useful if they produce unbiased predictions. Evidence shows that forward models remain calibrated through motor adaptation: learning driven by sensory prediction errors.

  14. Temporal prediction errors modulate task-switching performance

    PubMed Central

    Limongi, Roberto; Silva, Angélica M.; Góngora-Costa, Begoña

    2015-01-01

    We have previously shown that temporal prediction errors (PEs, the differences between the expected and the actual stimulus’ onset times) modulate the effective connectivity between the anterior cingulate cortex and the right anterior insular cortex (rAI), causing the activity of the rAI to decrease. The activity of the rAI is associated with efficient performance under uncertainty (e.g., changing a prepared behavior when a change demand is not expected), which leads to the hypothesis that temporal PEs might disrupt behavior-change performance under uncertainty. This hypothesis has not been tested at a behavioral level. In this work, we evaluated this hypothesis within the context of task switching and concurrent temporal predictions. Our participants performed temporal predictions while observing one moving ball striking a stationary ball which bounced off with a variable temporal gap. Simultaneously, they performed a simple color comparison task. In some trials, a change signal made the participants change their behaviors. Performance accuracy decreased as a function of both the temporal PE and the delay. Explaining these results without appealing to ad hoc concepts such as “executive control” is a challenge for cognitive neuroscience. We provide a predictive coding explanation. We hypothesize that exteroceptive and proprioceptive minimization of PEs would converge in a fronto-basal ganglia network which would include the rAI. Both temporal gaps (or uncertainty) and temporal PEs would drive and modulate this network respectively. Whereas the temporal gaps would drive the activity of the rAI, the temporal PEs would modulate the endogenous excitatory connections of the fronto-striatal network. We conclude that in the context of perceptual uncertainty, the system is not able to minimize perceptual PE, causing the ongoing behavior to finalize and, in consequence, disrupting task switching. PMID:26379568

  16. Striatal prediction errors support dynamic control of declarative memory decisions

    PubMed Central

    Scimeca, Jason M.; Katzman, Perri L.; Badre, David

    2016-01-01

    Adaptive memory requires context-dependent control over how information is retrieved, evaluated and used to guide action, yet the signals that drive adjustments to memory decisions remain unknown. Here we show that prediction errors (PEs) coded by the striatum support control over memory decisions. Human participants completed a recognition memory test that incorporated biased feedback to influence participants' recognition criterion. Using model-based fMRI, we find that PEs—the deviation between the outcome and expected value of a memory decision—correlate with striatal activity and predict individuals' final criterion. Importantly, the striatal PEs are scaled relative to memory strength rather than the expected trial outcome. Follow-up experiments show that the learned recognition criterion transfers to free recall, and targeting biased feedback to experimentally manipulate the magnitude of PEs influences criterion consistent with PEs scaled relative to memory strength. This provides convergent evidence that declarative memory decisions can be regulated via striatally mediated reinforcement learning signals. PMID:27713407

  17. Predicted Residual Error Sum of Squares of Mixed Models: An Application for Genomic Prediction

    PubMed Central

    Xu, Shizhong

    2017-01-01

    Genomic prediction is a statistical method to predict phenotypes of polygenic traits using high-throughput genomic data. Most diseases and behaviors in humans and animals are polygenic traits. The majority of agronomic traits in crops are also polygenic. Accurate prediction of these traits can help medical professionals diagnose acute diseases and help breeders increase food production, and can therefore contribute significantly to human health and global food security. The best linear unbiased prediction (BLUP) is an important tool to analyze high-throughput genomic data for prediction. However, to judge the efficacy of the BLUP model with a particular set of predictors for a given trait, one has to provide an unbiased mechanism to evaluate the predictability. Cross-validation (CV) is an essential tool to achieve this goal, where a sample is partitioned into K parts of roughly equal size, one part is predicted using parameters estimated from the remaining K – 1 parts, and eventually every part is predicted using a sample excluding that part. Such a CV is called the K-fold CV. Unfortunately, CV presents a substantial increase in computational burden. We developed an alternative method, the HAT method, to replace CV. The new method corrects the estimated residual errors from the whole sample analysis using the leverage values of a hat matrix of the random effects to achieve the predicted residual errors. Properties of the HAT method were investigated using seven agronomic and 1000 metabolomic traits of an inbred rice population. Results showed that the HAT method is a very good approximation of the CV method. The method was also applied to 10 traits in 1495 hybrid rice with 1.6 million SNPs, and to human height of 6161 subjects with roughly 0.5 million SNPs of the Framingham heart study data. Predictabilities of the HAT and CV methods were all similar. The HAT method allows us to easily evaluate the predictabilities of genomic prediction for large numbers of traits in
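
    The leverage correction underlying the HAT method has a classical closed form for ordinary least squares: the PRESS residual e_i / (1 − h_ii) equals the leave-one-out CV residual exactly, without refitting. A sketch of that identity on synthetic data (plain OLS only, not the mixed-model BLUP setting treated in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=n)

# Whole-sample fit: hat matrix H = X (X'X)^{-1} X'
H = X @ np.linalg.solve(X.T @ X, X.T)
resid = y - H @ y                      # ordinary residuals
loo_hat = resid / (1.0 - np.diag(H))   # leverage-corrected (PRESS) residuals

# Brute-force leave-one-out CV: refit n times, predict the held-out point
loo_cv = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    beta = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo_cv[i] = y[i] - X[i] @ beta

assert np.allclose(loo_hat, loo_cv)    # identical, at a fraction of the cost
```

    The HAT method extends this idea to the hat matrix of the random effects in a mixed model, which is what makes K-fold CV avoidable for BLUP.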

  18. Perceptual learning of degraded speech by minimizing prediction error

    PubMed Central

    Sohoglu, Ediz

    2016-01-01

    Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech. PMID:26957596

  19. The Attraction Effect Modulates Reward Prediction Errors and Intertemporal Choices.

    PubMed

    Gluth, Sebastian; Hotaling, Jared M; Rieskamp, Jörg

    2017-01-11

    Classical economic theory contends that the utility of a choice option should be independent of other options. This view is challenged by the attraction effect, in which the relative preference between two options is altered by the addition of a third, asymmetrically dominated option. Here, we leveraged the attraction effect in the context of intertemporal choices to test whether both decisions and reward prediction errors (RPE) in the absence of choice violate the independence of irrelevant alternatives principle. We first demonstrate that intertemporal decision making is prone to the attraction effect in humans. In an independent group of participants, we then investigated how this affects the neural and behavioral valuation of outcomes using a novel intertemporal lottery task and fMRI. Participants' behavioral responses (i.e., satisfaction ratings) were modulated systematically by the attraction effect and this modulation was correlated across participants with the respective change of the RPE signal in the nucleus accumbens. Furthermore, we show that, because exponential and hyperbolic discounting models are unable to account for the attraction effect, recently proposed sequential sampling models might be more appropriate to describe intertemporal choices. Our findings demonstrate for the first time that the attraction effect modulates subjective valuation even in the absence of choice. The findings also challenge the prospect of using neuroscientific methods to measure utility in a context-free manner and have important implications for theories of reinforcement learning and delay discounting.

  20. Seasonal prediction of Indian summer monsoon rainfall in NCEP CFSv2: forecast and predictability error

    NASA Astrophysics Data System (ADS)

    Pokhrel, Samir; Saha, Subodh Kumar; Dhakate, Ashish; Rahman, Hasibur; Chaudhari, Hemantkumar S.; Salunke, Kiran; Hazra, Anupam; Sujith, K.; Sikka, D. R.

    2016-04-01

    A detailed analysis of sensitivity to the initial condition for the simulation of the Indian summer monsoon using retrospective forecasts by the latest version of the Climate Forecast System version-2 (CFSv2) is carried out. This study primarily focuses on the tropical region of the Indian and Pacific Ocean basins, with special emphasis on the Indian land region. The simulated seasonal mean and the inter-annual standard deviations of rainfall, upper and lower level atmospheric circulations and Sea Surface Temperature (SST) tend to be more skillful as the lead forecast time decreases (5 month lead to 0 month lead time, i.e. L5-L0). In general, spatial correlation increases (and bias decreases) as the forecast lead time decreases. This is further substantiated by their averaged values over the selected study regions over the Indian and Pacific Ocean basins. The tendency of model bias to increase (decrease) with increasing (decreasing) forecast lead time also indicates the dynamical drift of the model. Large scale lower level circulation (850 hPa) shows enhancement of anomalous westerlies (easterlies) over the tropical region of the Indian Ocean (Western Pacific Ocean), which indicates the enhancement of model error with the decrease in lead time. At the upper level circulation (200 hPa), biases in both the tropical easterly jet and the subtropical westerly jet tend to decrease as the lead time decreases. Despite enhancement of the prediction skill, mean SST bias seems to be insensitive to the initialization. All these biases are significant and together they make CFSv2 vulnerable to seasonal uncertainties at all lead times. Overall the zeroth lead (L0) seems to have the best skill; however, in the case of Indian summer monsoon rainfall (ISMR), the 3 month lead forecast time (L3) has the maximum ISMR prediction skill. This is valid using different independent datasets, wherein these maximum skill scores are 0.64, 0.42 and 0.57 with respect to the Global Precipitation Climatology Project

  1. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.

  2. Predicting pilot error: testing a new methodology and a multi-methods and analysts approach.

    PubMed

    Stanton, Neville A; Salmon, Paul; Harris, Don; Marshall, Andrew; Demagalski, Jason; Young, Mark S; Waldmann, Thomas; Dekker, Sidney

    2009-05-01

    The Human Error Template (HET) is a recently developed methodology for predicting design-induced pilot error. This article describes a validation study undertaken to compare the performance of HET against three contemporary Human Error Identification (HEI) approaches when used to predict pilot errors for an approach and landing task, and also to compare analyst error predictions against an approach designed to enhance error prediction sensitivity: the multiple analysts and methods approach, whereby predictions from multiple analysts using a range of HEI techniques are pooled. The findings indicate that, of the four methodologies used in isolation, analysts using the HET methodology offered the most accurate error predictions, and also that the multiple analysts and methods approach was more successful overall in terms of error prediction sensitivity than the three other methods but not the HET approach. The results suggest that when predicting design-induced error, it is appropriate to use a toolkit of different HEI approaches and multiple analysts in order to heighten error prediction sensitivity.

  3. Method of excess fractions with application to absolute distance metrology: wavelength selection and the effects of common error sources.

    PubMed

    Falaggis, Konstantinos; Towers, David P; Towers, Catherine E

    2012-09-20

    Multiwavelength interferometry (MWI) is a well-established technique in the field of optical metrology. Previously, we have reported a theoretical analysis of the method of excess fractions that describes the mutual dependence of unambiguous measurement range, reliability, and the measurement wavelengths. In this paper, wavelength selection strategies are introduced that build on the theoretical description and maximize the reliability of the calculated fringe order for a given measurement range, number of wavelengths, and level of phase noise. Practical implementation issues for an MWI interferometer are analyzed theoretically. It is shown that dispersion compensation is best implemented by use of reference measurements around absolute zero in the interferometer. Furthermore, the effects of wavelength uncertainty allow the ultimate performance of an MWI interferometer to be estimated.

  4. A Predictive Approach to Eliminating Errors in Software Code

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA's Metrics Data Program Data Repository is a database that stores problem, product, and metrics data. The primary goal of this data repository is to provide project data to the software community. In doing so, the Metrics Data Program collects artifacts from a large NASA dataset, generates metrics on the artifacts, and then generates reports that are made available to the public at no cost. The data that are made available to general users have been sanitized and authorized for publication through the Metrics Data Program Web site by officials representing the projects from which the data originated. The data repository is operated by NASA's Independent Verification and Validation (IV&V) Facility, which is located in Fairmont, West Virginia, a high-tech hub for emerging innovation in the Mountain State. The IV&V Facility was founded in 1993, under the NASA Office of Safety and Mission Assurance, as a direct result of recommendations made by the National Research Council and the Report of the Presidential Commission on the Space Shuttle Challenger Accident. Today, under the direction of Goddard Space Flight Center, the IV&V Facility continues its mission to provide the highest achievable levels of safety and cost-effectiveness for mission-critical software. By extending its data to public users, the facility has helped improve the safety, reliability, and quality of complex software systems throughout private industry and other government agencies. Integrated Software Metrics, Inc., is one of the organizations that has benefited from studying the metrics data. As a result, the company has evolved into a leading developer of innovative software-error prediction tools that help organizations deliver better software, on time and on budget.

  5. The ratio of absolute lymphocyte count at interim of therapy to absolute lymphocyte count at diagnosis predicts survival in childhood B-lineage acute lymphoblastic leukemia.

    PubMed

    Cheng, Yuping; Luo, Zebin; Yang, Shilong; Jia, Ming; Zhao, Haizhao; Xu, Weiqun; Tang, Yongmin

    2015-02-01

    Absolute lymphocyte count (ALC) after therapy has been reported to be an independent prognostic factor for clinical outcome in leukemia. This study mainly analyzed ALC at interim of therapy on day 22 (ALC-22) and the ratio of ALC-22 to ALC at diagnosis (ALC-0) on the impact of survival and the relation of ALC to lymphocyte subsets in 119 pediatric B-lineage acute lymphoblastic leukemia (B-ALL) patients. Univariate analysis revealed that ALC-22/ALC-0 ratio <10% was significantly associated with inferior overall survival (OS) (hazard ratio (HR)=12.24, P=0.0014) and event-free survival (EFS) (HR=3.3, P=0.0046). In multivariate analysis, ALC-22/ALC-0 ratio remained an independent prognostic factor for OS (HR=6.92, P=0.0181) and EFS (HR=2.78, P=0.0329) after adjusting for age, white blood cell (WBC) count and minimal residual disease (MRD) status. A Spearman correlation test showed that CD3+ T cells had a negative correlation with ALC-0 (r=-0.7204, P<0.0001) and a positive correlation with ALC-22 (r=0.5061, P=0.0071). These data suggest that ALC-22/ALC-0 ratio may serve as a more effective biomarker to predict survival in pediatric B-ALL and ALC is mainly associated with CD3+ T cells.

  6. Brief optogenetic inhibition of dopamine neurons mimics endogenous negative reward prediction errors

    PubMed Central

    Chang, Chun Yun; Esber, Guillem R; Marrero-Garcia, Yasmin; Yau, Hau-Jie; Bonci, Antonello; Schoenbaum, Geoffrey

    2015-01-01

    Correlative studies have strongly linked phasic changes in dopamine activity with reward prediction error signaling. But causal evidence that these brief changes in firing actually serve as error signals to drive associative learning is more tenuous. While there is direct evidence that brief increases can substitute for positive prediction errors, there is no comparable evidence that similarly brief pauses can substitute for negative prediction errors. Lacking such evidence, the effect of increases in firing could reflect novelty or salience, variables also correlated with dopamine activity. Here we provide such evidence, showing in a modified Pavlovian over-expectation task that brief pauses in the firing of dopamine neurons in rat ventral tegmental area at the time of reward are sufficient to mimic the effects of endogenous negative prediction errors. These results support the proposal that brief changes in the firing of dopamine neurons serve as full-fledged bidirectional prediction error signals. PMID:26642092

  7. Comparison of various error functions in predicting the optimum isotherm by linear and non-linear regression analysis for the sorption of basic red 9 by activated carbon.

    PubMed

    Kumar, K Vasanth; Porkodi, K; Rocha, F

    2008-01-15

    Linear and non-linear regression methods for selecting the optimum isotherm were compared using the experimental equilibrium data for basic red 9 sorption by activated carbon. The r(2) was used to select the best fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r(2)), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), the average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two and three parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three parameter isotherms, r(2) was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data when selecting the optimum isotherm. A coefficient of non-determination, K(2), was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
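
    Common formulations of these error functions are sketched below (one conventional definition each; exact forms vary somewhat across the isotherm-fitting literature, so treat these as illustrative rather than as the paper's exact expressions):

```python
import numpy as np

# q_exp: experimental uptake values; q_cal: model-predicted values;
# p: number of isotherm parameters. Definitions follow one common
# convention from the sorption-isotherm literature.
def errsq(q_exp, q_cal):                    # sum of squared errors
    return np.sum((q_exp - q_cal) ** 2)

def eabs(q_exp, q_cal):                     # sum of absolute errors
    return np.sum(np.abs(q_exp - q_cal))

def are(q_exp, q_cal):                      # average relative error (%)
    return 100.0 / len(q_exp) * np.sum(np.abs((q_exp - q_cal) / q_exp))

def hybrid(q_exp, q_cal, p):                # hybrid fractional error (%)
    return 100.0 / (len(q_exp) - p) * np.sum((q_exp - q_cal) ** 2 / q_exp)

def mpsd(q_exp, q_cal, p):                  # Marquardt's percent std. dev.
    return 100.0 * np.sqrt(
        np.sum(((q_exp - q_cal) / q_exp) ** 2) / (len(q_exp) - p))

# Illustrative data only
q_exp = np.array([10.0, 20.0, 40.0])
q_cal = np.array([11.0, 19.0, 42.0])
```

    Because ERRSQ weights large uptakes most heavily while ARE and MPSD weight relative deviations, the same fit can rank differently under different functions, which is the crux of the comparison in the abstract.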

  8. Hierarchical Learning Induces Two Simultaneous, But Separable, Prediction Errors in Human Basal Ganglia

    PubMed Central

    Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew

    2013-01-01

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously. PMID:23536092

  9. Synergies in Astrometry: Predicting Navigational Error of Visual Binary Stars

    NASA Astrophysics Data System (ADS)

    Gessner Stewart, Susan

    2015-08-01

    Celestial navigation can employ a number of bright stars which are in binary systems. Often these are unresolved, appearing as a single, center-of-light object. A number of these are, however, wide systems, which could introduce a margin of error into the navigation solution if not handled properly. To illustrate the importance of good orbital solutions for binary systems - as well as good astrometry in general - the relationship between the center-of-light versus individual catalog position of celestial bodies and the error in terrestrial position derived via celestial navigation is demonstrated. From the list of navigational binary stars, fourteen binary systems with at least 3.0 arcseconds apparent separation are explored. Maximum navigational error is estimated under the assumption that the bright star in the pair is observed at maximum separation, but the center-of-light is employed in the navigational solution. The relationships between navigational error and separation, orbital periods, and observers' latitude are discussed.
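
    The scale of the effect follows from the definition of the nautical mile: one arcminute of altitude error corresponds to roughly one nautical mile of terrestrial position error. A back-of-the-envelope sketch (worst-case assumption: the offset equals the full apparent separation; the actual offset depends on the magnitude difference and orbital phase, which the paper treats properly):

```python
# Rule of thumb: 1 arcminute of celestial altitude error ~ 1 nautical mile
# of position error (by the definition of the nautical mile). The
# full-separation offset used here is a deliberate worst case.
NM_PER_ARCMIN = 1.0
METERS_PER_NM = 1852.0

def max_position_error_m(separation_arcsec):
    """Worst-case terrestrial position error for a given apparent separation."""
    error_nm = (separation_arcsec / 60.0) * NM_PER_ARCMIN
    return error_nm * METERS_PER_NM

# A 3.0-arcsecond pair can shift the fix by roughly 93 meters.
err = max_position_error_m(3.0)
```

    Small compared with typical sextant precision, but systematic, which is why the abstract frames it as a correctable astrometric effect rather than random noise.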

  10. Frontal theta links prediction errors to behavioral adaptation in reinforcement learning.

    PubMed

    Cavanagh, James F; Frank, Michael J; Klein, Theresa J; Allen, John J B

    2010-02-15

    Investigations into action monitoring have consistently detailed a frontocentral voltage deflection in the event-related potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the feedback-related negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single-trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single-trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Mediofrontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single-trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations, with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice.
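
    The model-fitting step described here uses standard Q-learning, in which the single-trial reward prediction error is the quantity related to theta power. A minimal sketch (hypothetical two-choice trial sequence and learning rate; the softmax choice rule used for fitting behavior is omitted for brevity):

```python
# Q-learning reward prediction errors for a two-choice task:
#   delta_t = r_t - Q(a_t);  Q(a_t) <- Q(a_t) + alpha * delta_t
# alpha and the trial sequence below are illustrative only.
def q_learning_pes(trials, alpha=0.3, n_actions=2):
    """trials: list of (action, reward) pairs. Returns per-trial PEs and Q."""
    q = [0.0] * n_actions
    pes = []
    for a, r in trials:
        delta = r - q[a]        # single-trial reward prediction error
        q[a] += alpha * delta   # update only the chosen action's value
        pes.append(delta)
    return pes, q

pes, q = q_learning_pes([(0, 1.0), (0, 1.0), (1, 0.0), (0, 0.0)])
```

    In the study's analysis it is this per-trial delta, recovered from the fitted model, that is regressed against medial and lateral frontal theta power.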

  11. Predictive vegetation modeling for conservation: impact of error propagation from digital elevation data.

    PubMed

    Van Niel, Kimberly P; Austin, Mike P

    2007-01-01

    The effect of digital elevation model (DEM) error on environmental variables, and subsequently on predictive habitat models, has not been explored. Based on an error analysis of a DEM, multiple error realizations of the DEM were created and used to develop both direct and indirect environmental variables for input to predictive habitat models. The study explores the effects of DEM error and the resultant uncertainty of results on typical steps in the modeling procedure for prediction of vegetation species presence/absence. Results indicate that all of these steps and results, including the statistical significance of environmental variables, shapes of species response curves in generalized additive models (GAMs), stepwise model selection, coefficients and standard errors for generalized linear models (GLMs), prediction accuracy (Cohen's kappa and AUC), and spatial extent of predictions, were greatly affected by this type of error. Error in the DEM can affect the reliability of interpretations of model results and level of accuracy in predictions, as well as the spatial extent of the predictions. We suggest that the sensitivity of DEM-derived environmental variables to error in the DEM should be considered before including them in the modeling processes.
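The error-realization approach can be sketched as follows; the vertical RMSE, the toy terrain, and the use of spatially uncorrelated noise are simplifying assumptions (real DEM error is typically spatially autocorrelated):

```python
import numpy as np

# Sketch of the Monte Carlo idea in the abstract: add error realizations to
# a DEM, recompute a derived environmental variable (here, slope), and
# summarize the resulting per-cell uncertainty.

rng = np.random.default_rng(0)
dem = np.fromfunction(lambda i, j: 0.5 * i + 0.2 * j, (50, 50))  # toy terrain, metres
rmse = 2.0           # assumed vertical RMSE of the DEM, metres
cell = 30.0          # grid spacing, metres

def slope_deg(z, cell):
    gy, gx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

realizations = [slope_deg(dem + rng.normal(0.0, rmse, dem.shape), cell)
                for _ in range(100)]
slope_sd = np.std(realizations, axis=0)   # per-cell slope uncertainty
print(f"mean slope SD: {slope_sd.mean():.2f} degrees")
```

The same resampling loop can be wrapped around any downstream habitat model to propagate the DEM uncertainty into prediction maps.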

  12. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  13. Strength of forelimb lateralization predicts motor errors in an insect

    PubMed Central

    Bell, Adrian T. A.

    2016-01-01

    Lateralized behaviours are widespread in both vertebrates and invertebrates, suggesting that lateralization is advantageous. Yet evidence demonstrating proximate or ultimate advantages remains scarce, particularly in invertebrates or in species with individual-level lateralization. Desert locusts (Schistocerca gregaria) are biased in the forelimb they use to perform targeted reaching across a gap. The forelimb and strength of this bias differed among individuals, indicative of individual-level lateralization. Here we show that strongly biased locusts perform better during gap-crossing, making fewer errors with their preferred forelimb. The number of targeting errors locusts make negatively correlates with the strength of forelimb lateralization. This provides evidence that stronger lateralization confers an advantage in terms of improved motor control in an invertebrate with individual-level lateralization. PMID:27651534

  14. Prediction Error Associated with the Perceptual Segmentation of Naturalistic Events

    ERIC Educational Resources Information Center

    Zacks, Jeffrey M.; Kurby, Christopher A.; Eisenberg, Michelle L.; Haroutunian, Nayiri

    2011-01-01

    Predicting the near future is important for survival and plays a central role in theories of perception, language processing, and learning. Prediction failures may be particularly important for initiating the updating of perceptual and memory systems and, thus, for the subjective experience of events. Here, we asked observers to make predictions…

  15. Aircraft noise-induced awakenings are more reasonably predicted from relative than from absolute sound exposure levels.

    PubMed

    Fidell, Sanford; Tabachnick, Barbara; Mestre, Vincent; Fidell, Linda

    2013-11-01

    Assessment of aircraft noise-induced sleep disturbance is problematic for several reasons. Current assessment methods are based on sparse evidence and limited understandings; predictions of awakening prevalence rates based on indoor absolute sound exposure levels (SELs) fail to account for appreciable amounts of variance in dosage-response relationships and are not freely generalizable from airport to airport; and predicted awakening rates do not differ significantly from zero over a wide range of SELs. Even in conjunction with additional predictors, such as time of night and assumed individual differences in "sensitivity to awakening," nominally SEL-based predictions of awakening rates remain of limited utility and are easily misapplied and misinterpreted. Probabilities of awakening are more closely related to SELs scaled in units of standard deviates of local distributions of aircraft SELs, than to absolute sound levels. Self-selection of residential populations for tolerance of nighttime noise and habituation to airport noise environments offer more parsimonious and useful explanations for differences in awakening rates at disparate airports than assumed individual differences in sensitivity to awakening.

  16. Prediction error associated with the perceptual segmentation of naturalistic events.

    PubMed

    Zacks, Jeffrey M; Kurby, Christopher A; Eisenberg, Michelle L; Haroutunian, Nayiri

    2011-12-01

    Predicting the near future is important for survival and plays a central role in theories of perception, language processing, and learning. Prediction failures may be particularly important for initiating the updating of perceptual and memory systems and, thus, for the subjective experience of events. Here, we asked observers to make predictions about what would happen 5 sec later in a movie of an everyday activity. Those points where prediction was more difficult corresponded with subjective boundaries in the stream of experience. At points of unpredictability, midbrain and striatal regions associated with the phasic release of the neurotransmitter dopamine transiently increased in activity. This activity could provide a global updating signal, cuing other brain systems that a significant new event has begun.

  17. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal prediction system that reduces prediction error by using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely used in industry for time series prediction; however, a residual error remains between the real value and the predicted result. We therefore designed a two-state neural network model to compensate for this residual error, which could be applied to the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model; in particular, it was more accurate than the standard MLP model for small-sample time series.
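The two-stage compensation idea can be sketched with simple linear autoregressive models standing in for the paper's back-propagation networks; the synthetic signal and the AR order are illustrative:

```python
import numpy as np

# Sketch of the two-stage compensation scheme: a first model predicts the
# series, a second model is trained on the first model's residuals, and the
# final prediction adds the two.

rng = np.random.default_rng(1)
t = np.arange(300)
y = np.sin(0.1 * t) + 0.1 * rng.standard_normal(300)   # toy "bio signal"

def fit_ar(series, order=5):
    """Least-squares AR(order) coefficients."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    coef, *_ = np.linalg.lstsq(X, series[order:], rcond=None)
    return coef

def predict_ar(series, coef):
    order = len(coef)
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    return X @ coef

coef1 = fit_ar(y)
pred1 = predict_ar(y, coef1)
resid = y[5:] - pred1                    # stage-one residual error
coef2 = fit_ar(resid)
pred2 = predict_ar(resid, coef2)
final = pred1[5:] + pred2                # compensated prediction
rmse1 = np.sqrt(np.mean(resid[5:] ** 2))
rmse2 = np.sqrt(np.mean((y[10:] - final) ** 2))
print(f"stage 1 RMSE {rmse1:.4f}, compensated RMSE {rmse2:.4f}")
```

The second stage only helps to the extent that the stage-one residuals retain structure; on pure white noise it leaves the error essentially unchanged.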

  18. A Generalized Process Model of Human Action Selection and Error and its Application to Error Prediction

    DTIC Science & Technology

    2014-07-01

    Macmillan & Creelman, 2005). This is a quite high degree of discriminability and it means that when the decision model predicts a probability of...ROC analysis. Pattern Recognition Letters, 27(8), 861-874. Retrieved from Google Scholar. Macmillan, N. A., & Creelman, C. D. (2005). Detection

  19. Putamen Activation Represents an Intrinsic Positive Prediction Error Signal for Visual Search in Repeated Configurations.

    PubMed

    Sommer, Susanne; Pollmann, Stefan

    2016-01-01

    We investigated fMRI responses to visual search targets appearing at locations that were predicted by the search context. Based on previous work in visual category learning we expected an intrinsic reward prediction error signal in the putamen whenever the target appeared at a location that was predicted with some degree of uncertainty. Comparing target appearance at locations predicted with 50% probability to either locations predicted with 100% probability or unpredicted locations, increased activation was observed in left posterior putamen and adjacent left posterior insula. Thus, our hypothesis of an intrinsic prediction error-like signal was confirmed. This extends the observation of intrinsic prediction error-like signals, driven by intrinsic rather than extrinsic reward, to memory-driven visual search.

  20. Putamen Activation Represents an Intrinsic Positive Prediction Error Signal for Visual Search in Repeated Configurations

    PubMed Central

    Sommer, Susanne; Pollmann, Stefan

    2016-01-01

    We investigated fMRI responses to visual search targets appearing at locations that were predicted by the search context. Based on previous work in visual category learning we expected an intrinsic reward prediction error signal in the putamen whenever the target appeared at a location that was predicted with some degree of uncertainty. Comparing target appearance at locations predicted with 50% probability to either locations predicted with 100% probability or unpredicted locations, increased activation was observed in left posterior putamen and adjacent left posterior insula. Thus, our hypothesis of an intrinsic prediction error-like signal was confirmed. This extends the observation of intrinsic prediction error-like signals, driven by intrinsic rather than extrinsic reward, to memory-driven visual search. PMID:27867436

  1. Comparison of transmission error predictions with noise measurements for several spur and helical gears

    NASA Astrophysics Data System (ADS)

    Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.

    1994-06-01

    Measured sound power data from eight different spur, single and double helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data was taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Results of both test data and transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.

  2. Comparison of Transmission Error Predictions with Noise Measurements for Several Spur and Helical Gears

    NASA Technical Reports Server (NTRS)

    Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.

    1994-01-01

    Measured sound power data from eight different spur, single and double helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data was taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Results of both test data and transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.

  3. Competition between learned reward and error outcome predictions in anterior cingulate cortex

    PubMed Central

    Alexander, William H.; Brown, Joshua W.

    2009-01-01

    The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an incentive change signal task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. PMID:19961940

  4. Predicting Absolute Risk of Type 2 Diabetes Using Age and Waist Circumference Values in an Aboriginal Australian Community

    PubMed Central

    2015-01-01

    Objectives To predict, in an Australian Aboriginal community, the 10-year absolute risk of type 2 diabetes associated with waist circumference and age at baseline examination. Method A sample of 803 diabetes-free adults (82.3% of the age-eligible population) from baseline data of participants collected from 1992 to 1998 was followed up for up to 20 years, until 2012. The Cox proportional hazards model was used to estimate the effects of waist circumference and other risk factors, including age, smoking and alcohol consumption status, for males and females on the prediction of type 2 diabetes, identified through subsequent hospitalisation data during the follow-up period. The Weibull regression model was used to calculate the absolute risk estimates of type 2 diabetes with waist circumference and age as predictors. Results Of 803 participants, 110 were recorded as having developed type 2 diabetes in subsequent hospitalisations over a follow-up of 12633.4 person-years. Waist circumference was strongly associated with subsequent diagnosis of type 2 diabetes with P<0.0001 for both genders and remained statistically significant after adjusting for confounding factors. Hazard ratios of type 2 diabetes associated with 1 standard deviation increase in waist circumference were 1.7 (95%CI 1.3 to 2.2) for males and 2.1 (95%CI 1.7 to 2.6) for females. At 45 years of age with baseline waist circumference of 100 cm, a male had an absolute diabetic risk of 10.9%, while a female had a 14.3% risk of the disease. Conclusions The constructed model predicts the 10-year absolute diabetes risk in an Aboriginal Australian community. It is simple and easily understood and will help identify individuals at risk of diabetes in relation to waist circumference values. Our gender-specific findings on the relationship between waist circumference and diabetes will be useful for clinical consultation, public health education and establishing WC cut-off points for Aboriginal Australians. PMID:25876058

  5. The effect of retrospective sampling on estimates of prediction error for multifactor dimensionality reduction.

    PubMed

    Winham, Stacey J; Motsinger-Reif, Alison A

    2011-01-01

    The standard in genetic association studies of complex diseases is replication and validation of positive results, with an emphasis on assessing the predictive value of associations. In response to this need, a number of analytical approaches have been developed to identify predictive models that account for complex genetic etiologies. Multifactor Dimensionality Reduction (MDR) is a commonly used, highly successful method designed to evaluate potential gene-gene interactions. MDR relies on classification error in a cross-validation framework to rank and evaluate potentially predictive models. Previous work has demonstrated the high power of MDR, but has not considered the accuracy and variance of the MDR prediction error estimate. Currently, we evaluate the bias and variance of the MDR error estimate as both a retrospective and prospective estimator and show that MDR can both underestimate and overestimate error. We argue that a prospective error estimate is necessary if MDR models are used for prediction, and propose a bootstrap resampling estimate, integrating population prevalence, to accurately estimate prospective error. We demonstrate that this bootstrap estimate is preferable for prediction to the error estimate currently produced by MDR. While demonstrated with MDR, the proposed estimation is applicable to all data-mining methods that use similar estimates.
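The proposed correction can be sketched as follows, with a generic classifier outcome standing in for MDR and an assumed population prevalence; the sensitivity and specificity values are illustrative:

```python
import numpy as np

# Sketch of the abstract's proposal: a naive (retrospective) error rate from
# a balanced case-control sample is re-weighted toward the population by
# bootstrap resampling at an assumed prevalence.

rng = np.random.default_rng(2)
prevalence = 0.01                       # assumed population disease prevalence

# Balanced retrospective sample: labels and whether some fitted model
# classified each subject correctly (80% sensitivity, 60% specificity).
y = np.repeat([1, 0], 500)
correct = np.where(y == 1,
                   rng.random(1000) < 0.8,
                   rng.random(1000) < 0.6)

retrospective_error = 1.0 - correct.mean()

def bootstrap_prospective_error(y, correct, prevalence, n_boot=2000, rng=rng):
    cases, controls = np.where(y == 1)[0], np.where(y == 0)[0]
    errs = []
    for _ in range(n_boot):
        n = len(y)
        n_cases = rng.binomial(n, prevalence)     # cases drawn at prevalence
        idx = np.concatenate([rng.choice(cases, n_cases, replace=True),
                              rng.choice(controls, n - n_cases, replace=True)])
        errs.append(1.0 - correct[idx].mean())
    return float(np.mean(errs))

prospective_error = bootstrap_prospective_error(y, correct, prevalence)
print(f"retrospective {retrospective_error:.3f}, prospective {prospective_error:.3f}")
```

With a rare disease the prospective error is dominated by the specificity, so the balanced-sample estimate can be badly biased, which is the effect the paper documents.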

  6. Dynamical predictability in a simple general circulation model - Average error growth

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried D.; Suarez, Max

    1989-01-01

    Average predictability and error growth in a simple realistic two-level general circulation model (GCM) were investigated using a series of Monte Carlo experiments for fixed external forcing (perpetual winter in the Northern Hemisphere). It was found that, for realistic initial errors, the dependence of the limit of dynamic predictability on total wavenumber was similar to that found for the ECMWF model for the 1980/1981 winter conditions, with the lowest wavenumbers showing significant skill for forecast ranges of more than 1 month. On the other hand, for very small amplitude errors distributed according to the climate spectrum, the total error growth was superexponential, reaching a maximum growth rate (2-day doubling time) in about 1 week. A simple empirical model of error variance, which involved two broad wavenumber bands and incorporating a 3/2 power saturation term, was found to provide an excellent fit to the GCM error growth behavior.
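A one-band version of the empirical error-variance model with a 3/2-power saturation term can be written down directly; the coefficients below are illustrative choices that give roughly the 2-day initial doubling time mentioned in the abstract (the paper's model couples two wavenumber bands):

```python
import numpy as np

# One-band sketch of the empirical error-variance model:
#   dE/dt = a*E - b*E**1.5,
# linear growth with a 3/2-power saturation term, saturating at
# E_inf = (a/b)**2 (normalized to 1 here).

a, b = 0.35, 0.35          # per day; initial doubling time ~ ln(2)/a ~ 2 days
dt, days = 0.01, 60.0
steps = int(days / dt)

E = np.empty(steps + 1)
E[0] = 1e-4                # small initial error variance
for k in range(steps):
    E[k + 1] = E[k] + dt * (a * E[k] - b * E[k] ** 1.5)

print(f"E(60 d) = {E[-1]:.3f}  (saturation level {(a / b) ** 2:.1f})")
```

The trajectory grows nearly exponentially while E is small and then flattens as the 3/2-power term takes over, mirroring the fitted GCM error-growth behavior.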

  7. The recent absolute total np and pp cross section determinations: quality of data description and prediction of experimental observables

    SciTech Connect

    Laptev, Alexander B; Haight, Robert C; Arndt, Richard A; Briscoe, William J; Paris, Mark W; Strakovsky, Igor I; Workman, Ron L

    2010-01-01

    The absolute total cross sections for np and pp scattering below 1000 MeV are determined based on partial-wave analyses (PWAs) of nucleon-nucleon scattering data. These cross sections are compared with the most recent ENDF/B-VII.0 and JENDL-3.3 data files, and the Nijmegen PWA. Systematic deviations from the ENDF/B-VII.0 and JENDL-3.3 evaluations are found to exist in the low-energy region. Comparison of the np evaluation with the results of the most recent np total and differential cross section measurements is discussed. Results of those measurements were not used in the evaluation database. This comparison checks the quality of the evaluation and its capability to predict experimental observables. Excellent agreement was found between the new experimental data and our PWA predictions.

  8. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors on an airborne separation assistance system at increasing traffic densities. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind-prediction errors of up to 40 kt at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.

  9. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    SciTech Connect

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

    Understanding the soft error vulnerability of supercomputer applications is critical as these systems are using ever larger numbers of devices that have decreasing feature sizes and, thus, an increasing frequency of soft errors. As many large scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of these applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.
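The injection-and-propagation idea can be sketched on a single kernel; the matrix-vector product, the chosen element, and the bit position below are illustrative stand-ins for the paper's BLAS/LAPACK routines:

```python
import numpy as np

# Sketch of soft-error injection: flip one bit in one input element of a
# linear-algebra kernel (here a matrix-vector product) and measure how the
# error propagates to the output.

rng = np.random.default_rng(4)

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float64's IEEE-754 representation."""
    as_int = np.float64(value).view(np.uint64)
    return float((as_int ^ np.uint64(1 << bit)).view(np.float64))

A = rng.standard_normal((64, 64))
x = rng.standard_normal(64)
clean = A @ x

A_faulty = A.copy()
A_faulty[10, 20] = flip_bit(A[10, 20], bit=40)   # flip a mantissa bit
faulty = A_faulty @ x

rel_err = np.linalg.norm(faulty - clean) / np.linalg.norm(clean)
print(f"relative output error from one injected bit flip: {rel_err:.2e}")
```

Sweeping the bit position from mantissa through exponent bits and repeating over many elements yields the kind of error profile the paper builds for each routine.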

  10. Artificial neural network implementation of a near-ideal error prediction controller

    NASA Technical Reports Server (NTRS)

    Mcvey, Eugene S.; Taylor, Lynore Denise

    1992-01-01

    A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited, (i.e., not random) near ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controllers include pattern-recognition developments and fast-time simulation, which are applicable to the analysis of linear sampled-data systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using Neural Networks. Neural Networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. In using this type of error prediction, it is assumed that the system error

  11. Absolute Lymphocyte Count (ALC) after Induction Treatment Predicts Survival of Pediatric Patients with Acute Lymphoblastic Leukemia.

    PubMed

    Farkas, Tamas; Müller, Judit; Erdelyi, Daniel J; Csoka, Monika; Kovacs, Gabor T

    2017-01-30

    Absolute Lymphocyte Count (ALC) has been recently established as a prognostic factor of survival in pediatric Acute Lymphoblastic Leukemia (ALL). A retrospective analysis of 132 patients treated according to the BFM - ALLIC 2002 protocol was performed in a single institution. A possible association between ALC values and Overall Survival (OS) or Event-Free Survival (EFS) was evaluated at multiple time points during induction chemotherapy. ALC higher than 350 cells/μL measured on the 33rd day of induction was associated with better Overall- and Event-Free Survival in both Kaplan-Meier (OS 88.6% vs. 40%; p < 0.001 / EFS 81.6% vs. 30%; p < 0.001) and Cox regression (OS HR 8.77 (3.31-23.28); p < 0.001) and EFS HR 6.61 (2.79-15.63); p < 0.001) analyses. There was no association between survival and measured ALC values from earlier time points (day of diagnosis, days 8 and 15) of induction therapy. Patients with low ALC values tend to have higher risk (MR or HR groups) and a higher age at diagnosis (>10 years). With the day-33 ALC cutoff of 350 cells/μL it was possible to refine day 33 flow cytometry (FC) Minimal Residual Disease (MRD) results within the negative cohort: higher ALC values were significantly associated with better survival. ALC on day 33 (350 cells/μL) remained prognostic for OS and EFS in multivariate analysis after adjusting for age, cytogenetics, immunophenotype and FC MRD of induction day 33. According to these findings, ALC on day 33 of induction is a strong predictor of survival in pediatric ALL.

  12. Chain pooling to minimize prediction error in subset regression. [Monte Carlo studies using population models

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1974-01-01

    Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
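The simulation scheme described above can be sketched as follows; the population model, noise level, and |t|-based deletion rule are illustrative stand-ins for the paper's chain-pooling procedure:

```python
import numpy as np

# Sketch of the Monte Carlo scheme: observations are true response-surface
# values plus pseudo-random normal error, a full model is fitted, weak terms
# are deleted, and the reduced model's predictions are compared against the
# noise-free population values.

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 25)
true = 2.0 + 1.5 * x + 0.8 * x ** 2          # population model (no x**3 term)
X = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

def prediction_rmse(n_sims=500, sigma=0.5, t_cut=2.0):
    rmse_full, rmse_reduced = [], []
    for _ in range(n_sims):
        y = true + rng.normal(0.0, sigma, x.size)        # simulated experiment
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        # crude term deletion: drop coefficients with |t| below t_cut
        s2 = res[0] / (x.size - X.shape[1])
        se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
        keep = np.abs(beta / se) >= t_cut
        keep[0] = True                                    # always keep intercept
        beta_r, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        rmse_full.append(np.sqrt(np.mean((X @ beta - true) ** 2)))
        rmse_reduced.append(np.sqrt(np.mean((X[:, keep] @ beta_r - true) ** 2)))
    return np.mean(rmse_full), np.mean(rmse_reduced)

full, reduced = prediction_rmse()
print(f"mean prediction RMSE: full {full:.3f}, reduced {reduced:.3f}")
```

Comparing the two averages over many deletion thresholds is how one identifies the deletion strategy that approximately minimizes prediction error.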

  13. Why the brain talks to itself: sources of error in emotional prediction.

    PubMed

    Gilbert, Daniel T; Wilson, Timothy D

    2009-05-12

    People typically choose pleasure over pain. But how do they know which of these their choices will entail? The brain generates mental simulations (previews) of future events, which produce affective reactions (premotions), which are then used as a basis for forecasts (predictions) about the future event's emotional consequences. Research shows that this process leads to systematic errors of prediction. We review evidence indicating that these errors can be traced to five sources.

  14. Why the brain talks to itself: sources of error in emotional prediction

    PubMed Central

    Gilbert, Daniel T.; Wilson, Timothy D.

    2009-01-01

    People typically choose pleasure over pain. But how do they know which of these their choices will entail? The brain generates mental simulations (previews) of future events, which produce affective reactions (premotions), which are then used as a basis for forecasts (predictions) about the future event's emotional consequences. Research shows that this process leads to systematic errors of prediction. We review evidence indicating that these errors can be traced to five sources. PMID:19528015

  15. The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning

    PubMed Central

    Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.

    2017-01-01

    Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359

  16. The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning.

    PubMed

    Nasser, Helen M; Calu, Donna J; Schoenbaum, Geoffrey; Sharpe, Melissa J

    2017-01-01

    Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto's (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value.

  17. Prediction error in reinforcement learning: a meta-analysis of neuroimaging studies.

    PubMed

    Garrison, Jane; Erdeniz, Burak; Done, John

    2013-08-01

    Activation likelihood estimation (ALE) meta-analyses were used to examine the neural correlates of prediction error in reinforcement learning. The findings are interpreted in the light of current computational models of learning and action selection. In this context, particular consideration is given to the comparison of activation patterns from studies using instrumental and Pavlovian conditioning, and where reinforcement involved rewarding or punishing feedback. The striatum was the key brain area encoding for prediction error, with activity encompassing dorsal and ventral regions for instrumental and Pavlovian reinforcement alike, a finding which challenges the functional separation of the striatum into a dorsal 'actor' and a ventral 'critic'. Prediction error activity was further observed in diverse areas of predominantly anterior cerebral cortex including medial prefrontal cortex and anterior cingulate cortex. Distinct patterns of prediction error activity were found for studies using rewarding and aversive reinforcers; reward prediction errors were observed primarily in the striatum while aversive prediction errors were found more widely including insula and habenula.

  18. Dopamine prediction error responses integrate subjective value from different reward dimensions.

    PubMed

    Lak, Armin; Stauffer, William R; Schultz, Wolfram

    2014-02-11

    Prediction error signals enable us to learn through experience. These experiences include economic choices between different rewards that vary along multiple dimensions. Therefore, an ideal way to reinforce economic choice is to encode a prediction error that reflects the subjective value integrated across these reward dimensions. Previous studies demonstrated that dopamine prediction error responses reflect the value of singular reward attributes that include magnitude, probability, and delay. Obviously, preferences between rewards that vary along one dimension are completely determined by the manipulated variable. However, it is unknown whether dopamine prediction error responses reflect the subjective value integrated from different reward dimensions. Here, we measured the preferences between rewards that varied along multiple dimensions, and as such could not be ranked according to objective metrics. Monkeys chose between rewards that differed in amount, risk, and type. Because their choices were complete and transitive, the monkeys chose "as if" they integrated different rewards and attributes into a common scale of value. The prediction error responses of single dopamine neurons reflected the integrated subjective value inferred from the choices, rather than the singular reward attributes. Specifically, amount, risk, and reward type modulated dopamine responses exactly to the extent that they influenced economic choices, even when rewards were vastly different, such as liquid and food. This prediction error response could provide a direct updating signal for economic values.
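    The abstract's central claim, a prediction error computed on a common subjective-value scale rather than on any single reward attribute, can be illustrated with a toy calculation. The value function and weights below are invented for illustration and are not the paper's fitted quantities.

```python
# Toy sketch: prediction error over subjective value integrated across
# reward dimensions (amount, risk, type). All weights are hypothetical.

def subjective_value(amount, risk, type_bonus):
    """Common-scale value integrating several reward dimensions."""
    return 1.0 * amount - 0.5 * risk + type_bonus

def prediction_error(received, expected):
    """Dopamine-like error: value obtained minus value predicted."""
    return subjective_value(*received) - subjective_value(*expected)

expected = (0.2, 0.0, 0.0)   # small, safe, neutral reward predicted
received = (0.6, 0.4, 0.1)   # larger but riskier reward of a preferred type
pe = prediction_error(received, expected)
print(pe > 0)  # positive error -> reward better than predicted
```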

  19. Absolute lymphocyte count recovery after allogeneic hematopoietic stem cell transplantation predicts clinical outcome.

    PubMed

    Kim, Haesook T; Armand, Philippe; Frederick, David; Andler, Emily; Cutler, Corey; Koreth, John; Alyea, Edwin P; Antin, Joseph H; Soiffer, Robert J; Ritz, Jerome; Ho, Vincent T

    2015-05-01

    Immune reconstitution is critical for clinical outcome after allogeneic hematopoietic stem cell transplantation (HSCT). To determine the impact of absolute lymphocyte count (ALC) recovery on clinical outcomes, we conducted a retrospective study of 1109 adult patients who underwent a first allogeneic HSCT from 2003 through 2009, excluding patients who died or relapsed before day 30. The median age was 51 years (range, 18 to 74) with 52% undergoing reduced-intensity conditioning and 48% undergoing myeloablative conditioning HSCT with T cell-replete peripheral blood stem cells (93.7%) or marrow (6.4%) grafts. The median follow-up time was 6 years. To determine the threshold value of ALC for survival, the entire cohort was randomly split into a training set and a validation set in a 1:1 ratio, and then a restricted cubic spline smoothing method was applied to obtain relative hazard estimates of the relationship between ALC at 1 month and log hazard of progression-free survival (PFS). Based on this approach, ALC was categorized as ≤0.2 × 10^9 cells/L (low) or >0.2 × 10^9 cells/L. For patients with low ALC at 1, 2, or 3 months after HSCT, the overall survival (OS) (P ≤ .0001) and PFS (P ≤ .0002) were significantly lower and nonrelapse mortality (NRM) (P ≤ .002) was significantly higher compared with patients with ALC >0.2 × 10^9 cells/L at each time point. When patients who had low ALC at 1, 2, or 3 months after HSCT were grouped together and compared, their outcomes were inferior to those of patients who had ALC >0.2 × 10^9 cells/L at 1, 2, and 3 months after HSCT: the 5-year OS for patients with low ALC was 28% versus 46% for patients with ALC >0.2 × 10^9 cells/L, P < .0001; the 5-year PFS was 21% versus 39%, P < .0001, respectively; and the 5-year NRM was 40% versus 18%, P < .0001, respectively. This result remained consistent when other prognostic factors, including occurrence of grade II to IV acute graft-versus-host disease (GVHD), were adjusted for in

  20. A Bayesian approach to improved calibration and prediction of groundwater models with structural error

    NASA Astrophysics Data System (ADS)

    Xu, Tianfang; Valocchi, Albert J.

    2015-11-01

    Numerical groundwater flow and solute transport models are usually subject to model structural error due to simplification and/or misrepresentation of the real system, which raises questions regarding the suitability of conventional least squares regression-based (LSR) calibration. We present a new framework that explicitly describes the model structural error statistically in an inductive, data-driven way. We adopt a fully Bayesian approach that integrates Gaussian process error models into the calibration, prediction, and uncertainty analysis of groundwater flow models. We test the usefulness of the fully Bayesian approach with a synthetic case study of the impact of pumping on surface-ground water interaction. We illustrate through this example that the Bayesian parameter posterior distributions differ significantly from parameters estimated by conventional LSR, which does not account for model structural error. For the latter method, parameter compensation for model structural error leads to biased, overconfident prediction under changing pumping condition. In contrast, integrating Gaussian process error models significantly reduces predictive bias and leads to prediction intervals that are more consistent with validation data. Finally, we carry out a generalized LSR recalibration step to assimilate the Bayesian prediction while preserving mass conservation and other physical constraints, using a full error covariance matrix obtained from Bayesian results. It is found that the recalibrated model achieved lower predictive bias compared to the model calibrated using conventional LSR. The results highlight the importance of explicit treatment of model structural error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification.

  1. The Human Bathtub: Safety and Risk Predictions Including the Dynamic Probability of Operator Errors

    SciTech Connect

    Duffey, Romney B.; Saull, John W.

    2006-07-01

    Reactor safety and risk are dominated by the potential for and major contribution of human error in the design, operation, control, management, regulation and maintenance of the plant, and hence to all accidents. Given the possibility of accidents and errors, we now need to determine the outcome (error) probability, or the chance of failure. Conventionally, reliability engineering is associated with the failure rate of components, systems, or mechanisms, not of human beings in and interacting with a technological system. The probability of failure requires prior knowledge of the total number of outcomes, which for any predictive purposes we do not know or have. Analysis of failure rates due to human error and the rate of learning allows a new determination of the dynamic human error rate in technological systems, consistent with and derived from the available world data. The basis for the analysis is the 'learning hypothesis': that humans learn from experience, and consequently the accumulated experience defines the failure rate. A new 'best' equation has been derived for the human error, outcome or failure rate, which allows for calculation and prediction of the probability of human error. We also provide comparisons to the empirical Weibull parameter fitting used in and by conventional reliability engineering and probabilistic safety analysis methods. These new analyses show that arbitrary Weibull fitting parameters and typical empirical hazard function techniques cannot be used to predict the dynamics of human errors and outcomes in the presence of learning. Comparisons of these new insights show agreement with human error data from the world's commercial airlines, the two shuttle failures, and from nuclear plant operator actions and transient control behavior observed in transients in both plants and simulators.
The results demonstrate that the human error probability (HEP) is dynamic, and that it may be predicted using the learning hypothesis and the minimum
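    The 'learning hypothesis' invoked above is commonly expressed as an error rate that decays exponentially from an initial value toward an asymptotic minimum as experience accumulates. A hedged sketch of that functional form follows; the constants are purely hypothetical, not the paper's fitted values.

```python
import math

# Illustrative learning-curve form: the error rate decays with accumulated
# experience eps toward a minimum rate. All constants are hypothetical.

def error_rate(eps, lam0=1e-3, lam_min=1e-5, k=2.0e4):
    """Error rate after accumulated experience eps (arbitrary units)."""
    return lam_min + (lam0 - lam_min) * math.exp(-eps / k)

rates = [error_rate(e) for e in (0.0, 2.0e4, 1.0e5)]
print(rates[0] > rates[1] > rates[2])  # prints: True (rate falls with experience)
```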

  2. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model to account for model structure error to the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach towards a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
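    The core mechanism, a Gaussian-process term that learns the structure error from model-to-measurement residuals and corrects the prediction, can be sketched in a few lines. Everything below (the "physical model", kernel, hyperparameters, data) is a synthetic illustration standing in for the paper's groundwater model and DREAM-based joint inference.

```python
import numpy as np

# Synthetic sketch of a GP structural-error correction; not the paper's
# groundwater model. The GP posterior mean of the residuals serves as a
# data-driven estimate of the model structure error.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)                # e.g. a forcing variable
truth = 2.0 * x + np.sin(x)                   # "real system"
model = 2.0 * x                               # simplified physical model
obs = truth + rng.normal(0.0, 0.05, x.size)   # noisy observations

def rbf(a, b, ell=1.0, var=1.0):
    """Squared-exponential kernel."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

resid = obs - model                              # model-to-measurement misfit
K = rbf(x, x) + 0.05**2 * np.eye(x.size)         # kernel + observation noise
gp_mean = rbf(x, x) @ np.linalg.solve(K, resid)  # GP posterior mean

corrected = model + gp_mean
print(np.abs(model - truth).mean() > np.abs(corrected - truth).mean())
```

    The corrected prediction carries far less bias than the simplified physical model alone, mirroring the reduction in predictive bias reported above.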

  3. Fronto-temporal white matter connectivity predicts reversal learning errors.

    PubMed

    Alm, Kylie H; Rolheiser, Tyler; Mohamed, Feroze B; Olson, Ingrid R

    2015-01-01

    Each day, we make hundreds of decisions. In some instances, these decisions are guided by our innate needs; in other instances they are guided by memory. Probabilistic reversal learning tasks exemplify the close relationship between decision making and memory, as subjects are exposed to repeated pairings of a stimulus choice with a reward or punishment outcome. After stimulus-outcome associations have been learned, the associated reward contingencies are reversed, and participants are not immediately aware of this reversal. Individual differences in the tendency to choose the previously rewarded stimulus reveal differences in the tendency to make poorly considered, inflexible choices. Lesion studies have strongly linked reversal learning performance to the functioning of the orbitofrontal cortex, the hippocampus, and in some instances, the amygdala. Here, we asked whether individual differences in the microstructure of the uncinate fasciculus, a white matter tract that connects anterior and medial temporal lobe regions to the orbitofrontal cortex, predict reversal learning performance. Diffusion tensor imaging and behavioral paradigms were used to examine this relationship in 33 healthy young adults. The results of tractography revealed a significant negative relationship between reversal learning performance and uncinate axial diffusivity, but no such relationship was demonstrated in a control tract, the inferior longitudinal fasciculus. Our findings suggest that the uncinate might serve to integrate associations stored in the anterior and medial temporal lobes with expectations about expected value based on feedback history, computed in the orbitofrontal cortex.

  4. Fronto-temporal white matter connectivity predicts reversal learning errors

    PubMed Central

    Alm, Kylie H.; Rolheiser, Tyler; Mohamed, Feroze B.; Olson, Ingrid R.

    2015-01-01

    Each day, we make hundreds of decisions. In some instances, these decisions are guided by our innate needs; in other instances they are guided by memory. Probabilistic reversal learning tasks exemplify the close relationship between decision making and memory, as subjects are exposed to repeated pairings of a stimulus choice with a reward or punishment outcome. After stimulus–outcome associations have been learned, the associated reward contingencies are reversed, and participants are not immediately aware of this reversal. Individual differences in the tendency to choose the previously rewarded stimulus reveal differences in the tendency to make poorly considered, inflexible choices. Lesion studies have strongly linked reversal learning performance to the functioning of the orbitofrontal cortex, the hippocampus, and in some instances, the amygdala. Here, we asked whether individual differences in the microstructure of the uncinate fasciculus, a white matter tract that connects anterior and medial temporal lobe regions to the orbitofrontal cortex, predict reversal learning performance. Diffusion tensor imaging and behavioral paradigms were used to examine this relationship in 33 healthy young adults. The results of tractography revealed a significant negative relationship between reversal learning performance and uncinate axial diffusivity, but no such relationship was demonstrated in a control tract, the inferior longitudinal fasciculus. Our findings suggest that the uncinate might serve to integrate associations stored in the anterior and medial temporal lobes with expectations about expected value based on feedback history, computed in the orbitofrontal cortex. PMID:26150776

  5. Glutamatergic model psychoses: prediction error, learning, and inference.

    PubMed

    Corlett, Philip R; Honey, Garry D; Krystal, John H; Fletcher, Paul C

    2011-01-01

    Modulating glutamatergic neurotransmission induces alterations in conscious experience that mimic the symptoms of early psychotic illness. We review studies that use intravenous administration of ketamine, focusing on interindividual variability in the profundity of the ketamine experience. We will consider this individual variability within a hypothetical model of brain and cognitive function centered upon learning and inference. Within this model, the brains, neural systems, and even single neurons specify expectations about their inputs and responding to violations of those expectations with new learning that renders future inputs more predictable. We argue that ketamine temporarily deranges this ability by perturbing both the ways in which prior expectations are specified and the ways in which expectancy violations are signaled. We suggest that the former effect is predominantly mediated by NMDA blockade and the latter by augmented and inappropriate feedforward glutamatergic signaling. We suggest that the observed interindividual variability emerges from individual differences in neural circuits that normally underpin the learning and inference processes described. The exact source for that variability is uncertain, although it is likely to arise not only from genetic variation but also from subjects' previous experiences and prior learning. Furthermore, we argue that chronic, unlike acute, NMDA blockade alters the specification of expectancies more profoundly and permanently. Scrutinizing individual differences in the effects of acute and chronic ketamine administration in the context of the Bayesian brain model may generate new insights about the symptoms of psychosis; their underlying cognitive processes and neurocircuitry.

  6. Glutamatergic Model Psychoses: Prediction Error, Learning, and Inference

    PubMed Central

    Corlett, Philip R; Honey, Garry D; Krystal, John H; Fletcher, Paul C

    2011-01-01

    Modulating glutamatergic neurotransmission induces alterations in conscious experience that mimic the symptoms of early psychotic illness. We review studies that use intravenous administration of ketamine, focusing on interindividual variability in the profundity of the ketamine experience. We will consider this individual variability within a hypothetical model of brain and cognitive function centered upon learning and inference. Within this model, the brains, neural systems, and even single neurons specify expectations about their inputs and responding to violations of those expectations with new learning that renders future inputs more predictable. We argue that ketamine temporarily deranges this ability by perturbing both the ways in which prior expectations are specified and the ways in which expectancy violations are signaled. We suggest that the former effect is predominantly mediated by NMDA blockade and the latter by augmented and inappropriate feedforward glutamatergic signaling. We suggest that the observed interindividual variability emerges from individual differences in neural circuits that normally underpin the learning and inference processes described. The exact source for that variability is uncertain, although it is likely to arise not only from genetic variation but also from subjects' previous experiences and prior learning. Furthermore, we argue that chronic, unlike acute, NMDA blockade alters the specification of expectancies more profoundly and permanently. Scrutinizing individual differences in the effects of acute and chronic ketamine administration in the context of the Bayesian brain model may generate new insights about the symptoms of psychosis; their underlying cognitive processes and neurocircuitry. PMID:20861831

  7. Diagnostic value of IL-6, CRP, WBC, and absolute neutrophil count to predict serious bacterial infection in febrile infants.

    PubMed

    Zarkesh, Marjaneh; Sedaghat, Fatemeh; Heidarzadeh, Abtin; Tabrizi, Manizheh; Bolooki-Moghadam, Kobra; Ghesmati, Soheil

    2015-07-01

    Since clinical manifestations of most febrile infants younger than three months old are nonspecific, differentiation of Serious Bacterial Infection (SBI) from self-limiting viral illness is a significant challenge for pediatricians. This study was performed to assess the diagnostic value of white blood cell count (WBC), Absolute Neutrophil Count (ANC), Interleukin-6 (IL-6) and C-reactive protein (CRP) level to predict SBI in febrile infants younger than three months old who were hospitalized. This was a diagnostic test validation study. In this prospective study, 195 febrile infants admitted to 17 Shahrivar Hospital underwent a full sepsis workup including blood, urine, and cerebrospinal fluid cultures and chest radiography. WBC count, ANC, CRP, and IL-6 levels were measured in all patients. Serum IL-6 concentration was measured by an Enzyme-linked Immunosorbent Assay test. The diagnostic values of these tests for predicting SBI were then compared with each other. Of the total cases, 112 (57.4%) infants were male. SBI was diagnosed in 29 (14.9%) patients. The most common type of SBI was Urinary Tract Infection (UTI). Serum IL-6 (≥20 pg/dl) had sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of 79.1%, 91.6%, 75.4%, and 60.3%, respectively, and for CRP (≥10 mg/l) the values were 81.6%, 89.8%, 78.2%, and 52%, respectively. The predictive values of CRP and IL-6 were higher than those of WBC and ANC. IL-6 and CRP are more valid and better diagnostic markers for predicting SBI than WBC count and ANC. CRP level seems to be an accessible and cost-effective marker for early diagnosis of SBI. Since no marker can totally rule out SBI in febrile infants younger than three months of age, it is recommended to administer systemic antibiotics until culture results become available.

  8. Evaluation of parametric models by the prediction error in colorectal cancer survival analysis

    PubMed Central

    Baghestani, Ahmad Reza; Gohari, Mahmood Reza; Orooji, Arezoo; Pourhoseingholi, Mohamad Amin; Zali, Mohammad Reza

    2015-01-01

    Aim: The aim of this study is to determine the factors influencing predicted survival time for patients with colorectal cancer (CRC) using parametric models, and to select the best model by the prediction error technique. Background: Survival models are statistical techniques to estimate or predict the overall time up to specific events. Prediction is important in medical science, and the accuracy of prediction is determined by a measurement, generally based on loss functions, called prediction error. Patients and methods: A total of 600 colorectal cancer patients admitted to the Cancer Registry Center of Gastroenterology and Liver Disease Research Center, Taleghani Hospital, Tehran, were followed for at least 5 years and had complete information on the selected variables for this study. Body Mass Index (BMI), sex, family history of CRC, tumor site, stage of disease, and histology of tumor were included in the analysis. Survival times were compared by the Log-rank test, and multivariate analysis was carried out using parametric models including Log-normal, Weibull, and Log-logistic regression. For selecting the best model, the prediction error by apparent loss was used. Results: The Log-rank test showed better survival for females, BMI more than 25, patients with early stage at diagnosis, and patients with colon tumor site. Prediction error by apparent loss was estimated and indicated that the Weibull model was the best one for multivariate analysis. BMI and stage were independent prognostic factors, according to the Weibull model. Conclusion: In this study, according to prediction error, Weibull regression showed a better fit. Prediction error can serve as a criterion to select the best model with the ability to make predictions of prognostic factors in survival analysis. PMID:26328040
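    The "prediction error by apparent loss" model-selection step can be sketched as follows: fit each candidate parametric survival model to the same data and score it by a loss between the fitted and empirical survival curves. The data below are synthetic and uncensored, a deliberate simplification of the censored CRC cohort described above.

```python
import numpy as np
from scipy import stats

# Synthetic sketch of parametric-model selection by apparent loss; the
# data, loss, and candidate models are illustrative simplifications.

rng = np.random.default_rng(1)
t = stats.weibull_min.rvs(1.5, scale=40.0, size=300, random_state=rng)

t_sorted = np.sort(t)
emp_surv = 1.0 - np.arange(1, t.size + 1) / t.size   # empirical S(t)

def apparent_loss(dist):
    """Mean squared gap between fitted and empirical survival curves."""
    params = dist.fit(t, floc=0)       # fit with location fixed at zero
    return np.mean((dist.sf(t_sorted, *params) - emp_surv) ** 2)

loss_weibull = apparent_loss(stats.weibull_min)
loss_lognorm = apparent_loss(stats.lognorm)
print(loss_weibull, loss_lognorm)      # lower loss -> preferred model
```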

  9. Cortical delta activity reflects reward prediction error and related behavioral adjustments, but at different times.

    PubMed

    Cavanagh, James F

    2015-04-15

    Recent work has suggested that reward prediction errors elicit a positive voltage deflection in the scalp-recorded electroencephalogram (EEG); an event sometimes termed a reward positivity. However, a strong test of this proposed relationship remains to be defined. Other important questions remain unaddressed: such as the role of the reward positivity in predicting future behavioral adjustments that maximize reward. To answer these questions, a three-armed bandit task was used to investigate the role of positive prediction errors during trial-by-trial exploration and task-set based exploitation. The feedback-locked reward positivity was characterized by delta band activities, and these related EEG features scaled with the degree of a computationally derived positive prediction error. However, these phenomena were also dissociated: the computational model predicted exploitative action selection and related response time speeding whereas the feedback-locked EEG features did not. Compellingly, delta band dynamics time-locked to the subsequent bandit (the P3) successfully predicted these behaviors. These bandit-locked findings included an enhanced parietal to motor cortex delta phase lag that correlated with the degree of response time speeding, suggesting a mechanistic role for delta band activities in motivating action selection. This dissociation in feedback vs. bandit locked EEG signals is interpreted as a differentiation in hierarchically distinct types of prediction error, yielding novel predictions about these dissociable delta band phenomena during reinforcement learning and decision making.

  10. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.

  11. Quantitative vapor-phase IR intensities and DFT computations to predict absolute IR spectra based on molecular structure: I. Alkanes

    NASA Astrophysics Data System (ADS)

    Williams, Stephen D.; Johnson, Timothy J.; Sharpe, Steven W.; Yavelak, Veronica; Oates, R. P.; Brauer, Carolyn S.

    2013-11-01

    Recently recorded quantitative IR spectra of a variety of gas-phase alkanes are shown to have integrated intensities in both the C-H stretching and C-H bending regions that depend linearly on the molecular size, i.e. the number of C-H bonds. This result is well predicted from CH4 to C15H32 by density functional theory (DFT) computations of IR spectra using Becke's three-parameter functional (B3LYP/6-31+G(d,p)). Using the experimental data, a simple model predicting the absolute IR band intensities of alkanes based only on structural formula is proposed: For the C-H stretching band envelope centered near 2930 cm-1 this is given by (km/mol) CH_str = (34±1) × CH - (41±23), where CH is the number of C-H bonds in the alkane. The linearity is explained in terms of coordinated motion of methylene groups rather than the summed intensities of autonomous -CH2- units. The effect of alkyl chain length on the intensity of a C-H bending mode is explored and interpreted in terms of conformer distribution. The relative intensity contribution of a methyl mode compared to the total C-H stretch intensity is shown to be linear in the number of methyl groups in the alkane, and can be used to predict quantitative spectra a priori based on structure alone.
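    The quoted relation, CH_str = (34±1) × CH − (41±23) km/mol with CH the number of C-H bonds, can be applied directly; the example alkanes below are illustrative.

```python
# Applying the abstract's empirical relation (central values only,
# uncertainties dropped); CH is the number of C-H bonds in the alkane.

def ch_stretch_intensity(n_ch_bonds):
    """Predicted integrated C-H stretch intensity in km/mol."""
    return 34 * n_ch_bonds - 41

# n-alkanes CnH(2n+2) have 2n + 2 C-H bonds:
for name, n_c in [("methane", 1), ("pentane", 5), ("decane", 10)]:
    print(name, ch_stretch_intensity(2 * n_c + 2))
# prints: methane 95, pentane 367, decane 707 (one per line)
```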

  12. Optimal Threshold and Time of Absolute Lymphocyte Count Assessment for Outcome Prediction after Bone Marrow Transplantation.

    PubMed

    Bayraktar, Ulas D; Milton, Denái R; Guindani, Michele; Rondon, Gabriela; Chen, Julianne; Al-Atrash, Gheath; Rezvani, Katayoun; Champlin, Richard; Ciurea, Stefan O

    2016-03-01

    The recovery pace of absolute lymphocyte count (ALC) is prognostic after hematopoietic stem cell transplantation. Previous studies have evaluated a wide range of ALC cutoffs and time points for predicting outcomes. We aimed to determine the optimal ALC value for outcome prediction after bone marrow transplantation (BMT). A total of 518 patients who underwent BMT for acute leukemia or myelodysplastic syndrome between 1999 and 2010 were divided into a training set and a test set to assess the prognostic value of ALC on days 30, 60, 90, 120, 180, as well as the first post-transplantation day of an ALC of 100, 200, 300, 400, 500, and 1000/μL. In the training set, the best predictor of overall survival (OS), relapse-free survival (RFS), and nonrelapse mortality (NRM) was ALC on day 60. In the entire patient cohort, multivariable analyses demonstrated significantly better OS, RFS, and NRM and lower incidence of graft-versus-host disease (GVHD) in patients with an ALC >300/μL on day 60 post-BMT, both including and excluding patients who developed GVHD before day 60. Among the patient-, disease-, and transplant-related factors assessed, only busulfan-based conditioning was significantly associated with higher ALC values on day 60 in both cohorts. The optimal ALC cutoff for predicting outcomes after BMT is 300/μL on day 60 post-transplantation.

  13. Quantitative Vapor-phase IR Intensities and DFT Computations to Predict Absolute IR Spectra based on Molecular Structure: I. Alkanes

    SciTech Connect

    Williams, Stephen D.; Johnson, Timothy J.; Sharpe, Steven W.; Yavelak, Veronica; Oats, R. P.; Brauer, Carolyn S.

    2013-11-13

    Recently recorded quantitative IR spectra of a variety of gas-phase alkanes are shown to have integrated intensities in both the C-H stretching and C-H bending regions that depend linearly on the molecular size, i.e. the number of C-H bonds. This result is well predicted from CH4 to C15H32 by DFT computations of IR spectra at the B3LYP/6-31+G(d,p) level of DFT theory. A simple model predicting the absolute IR band intensities of alkanes based only on structural formula is proposed: For the C-H stretching band near 2930 cm-1 this is given by (in km/mol): CH_str = (34±3)*CH – (41±60), where CH is the number of C-H bonds in the alkane. The linearity is explained in terms of coordinated motion of methylene groups rather than the summed intensities of autonomous -CH2- units. The effect of alkyl chain length on the intensity of a C-H bending mode is explored and interpreted in terms of conformer distribution. The relative intensity contribution of a methyl mode compared to the total C-H stretch intensity is shown to be linear in the number of terminal methyl groups in the alkane, and can be used to predict quantitative spectra a priori based on structure alone.

  14. Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.

    PubMed

    Limongi, Roberto; Silva, Angélica M

    2016-11-01

    The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.

  15. Drivers of coupled model ENSO error dynamics and the spring predictability barrier

    NASA Astrophysics Data System (ADS)

    Larson, Sarah M.; Kirtman, Ben P.

    2016-07-01

    Despite recent improvements in ENSO simulations, ENSO predictions ultimately remain limited by error growth and model inadequacies. Determining the accompanying dynamical processes that drive the growth of certain types of errors may help the community better recognize which error sources provide an intrinsic limit to predictability. This study applies a dynamical analysis to previously developed CCSM4 error ensemble experiments that have been used to model noise-driven error growth. Analysis reveals that ENSO-independent error growth is instigated via a coupled instability mechanism. Daily error fields indicate that persistent stochastic zonal wind stress perturbations (τx′) near the equatorial dateline activate the coupled instability, first driving local SST and anomalous zonal current changes that then induce upwelling anomalies and a clear thermocline response. In particular, March presents a window of opportunity for stochastic τx′ to impose a lasting influence on the evolution of eastern Pacific SST through December, suggesting that stochastic τx′ is an important contributor to the spring predictability barrier. Stochastic winds occurring in other months only temporarily affect eastern Pacific SST for 2-3 months. Comparison of a control simulation with an ENSO cycle and the ENSO-independent error ensemble experiments reveals that once the instability is initiated, the subsequent error growth is modulated via an ENSO-like mechanism, namely the seasonal strength of the Bjerknes feedback. Furthermore, unlike ENSO events that exhibit growth through the fall, the growth of ENSO-independent SST errors terminates once the seasonal strength of the Bjerknes feedback weakens in fall. Results imply that the heat content supplied by the subsurface precursor preceding the onset of an ENSO event is paramount to maintaining the growth of the instability (or event) through fall.

  16. Quantifying the Effect of Lidar Turbulence Error on Wind Power Prediction

    SciTech Connect

    Newman, Jennifer F.; Clifton, Andrew

    2016-01-01

    Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST.
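The "standard binning method" named above is the method-of-bins power curve: average the measured power within fixed wind-speed bins. A minimal sketch (the 0.5 m/s bin width and the bin-edge convention are illustrative assumptions, not the study's exact settings):

```python
import numpy as np

def binned_power_curve(wind_speed, power, bin_width=0.5):
    """Method-of-bins power curve: mean power per fixed wind-speed bin.
    Returns bin centers and the mean power in each non-empty bin."""
    edges = np.arange(0.0, wind_speed.max() + bin_width, bin_width)
    idx = np.digitize(wind_speed, edges)  # bin index for each sample
    centers, means = [], []
    for i in range(1, len(edges) + 1):
        mask = idx == i
        if mask.any():
            centers.append(edges[i - 1] + bin_width / 2)
            means.append(power[mask].mean())
    return np.array(centers), np.array(means)
```

A turbulence measurement error shifts samples between bins (and, in more refined models, changes the power predicted within a bin), which is how lidar turbulence error propagates into power prediction error.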

  17. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  18. Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models

    NASA Astrophysics Data System (ADS)

    Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng

    2014-03-01

    Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between the data fitting capability and the sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).
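For context, Boore-Joyner-Fumal-style attenuation relationships take a standard functional form in magnitude, distance and a site term. A minimal sketch of that form (the coefficients and pseudo-depth below are placeholders for illustration, not the fitted values from this study or from Boore, Joyner and Fumal):

```python
import math

def log_pga(M, d, site_term, b, h):
    """Generic attenuation-relationship form:
        log10(PGA) = b1 + b2*(M - 6) + b3*(M - 6)**2 + b4*log10(R) + b5*S
    with effective distance R = sqrt(d**2 + h**2), where M is moment
    magnitude, d the site-to-fault distance, S a site-class term, and
    (b1..b5, h) are fitted coefficients (placeholders here)."""
    b1, b2, b3, b4, b5 = b
    R = math.sqrt(d**2 + h**2)
    return b1 + b2 * (M - 6) + b3 * (M - 6)**2 + b4 * math.log10(R) + b5 * site_term
```

In the Bayesian model class selection described above, candidate variants of this formula (with homogeneous or heterogeneous prediction-error variance) are ranked by their plausibility given the recorded data.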

  19. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    PubMed

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward
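The prediction error at the heart of the reinforcement-learning account above is conventionally computed as a temporal-difference error. A minimal sketch (the learning rate and discount factor are illustrative, not parameters from the study):

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """One temporal-difference update of a state-value table V.
    The prediction error delta is the discrepancy between the received
    reward (plus discounted next-state value) and the predicted value."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta
```

Early in learning, an unexpected reward yields a large delta (the feedback-locked signal); as V converges, delta at feedback shrinks and the informative signal shifts to the earlier, choice-predicting event, mirroring the propagation the authors report.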

  20. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  1. Midbrain dopamine neurons compute inferred and cached value prediction errors in a common framework

    PubMed Central

    Sadacca, Brian F; Jones, Joshua L; Schoenbaum, Geoffrey

    2016-01-01

    Midbrain dopamine neurons have been proposed to signal reward prediction errors as defined in temporal difference (TD) learning algorithms. While these models have been extremely powerful in interpreting dopamine activity, they typically do not use value derived through inference in computing errors. This is important because much real world behavior – and thus many opportunities for error-driven learning – is based on such predictions. Here, we show that error-signaling rat dopamine neurons respond to the inferred, model-based value of cues that have not been paired with reward and do so in the same framework as they track the putative cached value of cues previously paired with reward. This suggests that dopamine neurons access a wider variety of information than contemplated by standard TD models and that, while their firing conforms to predictions of TD models in some cases, they may not be restricted to signaling errors from TD predictions. DOI: http://dx.doi.org/10.7554/eLife.13665.001 PMID:26949249

  2. Cognitive strategies regulate fictive, but not reward prediction error signals in a sequential investment task.

    PubMed

    Gu, Xiaosi; Kirk, Ulrich; Lohrenz, Terry M; Montague, P Read

    2014-08-01

    Computational models of reward processing suggest that foregone or fictive outcomes serve as important information sources for learning and augment those generated by experienced rewards (e.g. reward prediction errors). An outstanding question is how these learning signals interact with top-down cognitive influences, such as cognitive reappraisal strategies. Using a sequential investment task and functional magnetic resonance imaging, we show that the reappraisal strategy selectively attenuates the influence of fictive, but not reward prediction error signals on investment behavior; such behavioral effect is accompanied by changes in neural activity and connectivity in the anterior insular cortex, a brain region thought to integrate subjective feelings with high-order cognition. Furthermore, individuals differ in the extent to which their behaviors are driven by fictive errors versus reward prediction errors, and the reappraisal strategy interacts with such individual differences; a finding also accompanied by distinct underlying neural mechanisms. These findings suggest that the variable interaction of cognitive strategies with two important classes of computational learning signals (fictive, reward prediction error) represent one contributing substrate for the variable capacity of individuals to control their behavior based on foregone rewards. These findings also expose important possibilities for understanding the lack of control in addiction based on possibly foregone rewarding outcomes.
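The two learning signals contrasted above can be made concrete for a single investment round. A minimal sketch; the exact definitions and the hindsight-best rule are illustrative assumptions, not the authors' model:

```python
def round_errors(bet_fraction, market_return, expected_return=0.0):
    """Sketch of two learning signals in a sequential investment round:
    - reward prediction error: obtained gain minus expected gain
    - fictive error: gain of the best allocation in hindsight
      (all-in if the market rose, nothing if it fell) minus obtained gain."""
    gained = bet_fraction * market_return
    reward_pe = gained - expected_return * bet_fraction
    best_gain = max(market_return, 0.0)
    fictive = best_gain - gained
    return reward_pe, fictive
```

Note that a fictive error can be large even when the obtained outcome was good (e.g. a half bet in a rising market), which is why the two signals can dissociate behaviorally and neurally.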

  3. Predicting Human Error in Air Traffic Control Decision Support Tools and Free Flight Concepts

    NASA Technical Reports Server (NTRS)

    Mogford, Richard; Kopardekar, Parimal

    2001-01-01

    The document is a set of briefing slides summarizing the work the Advanced Air Transportation Technologies (AATT) Project is doing on predicting air traffic controller and airline pilot human error when using new decision support software tools and when involved in testing new air traffic control concepts. Previous work in this area is reviewed as well as research being done jointly with the FAA. Plans for error prediction work in the AATT Project are discussed. The audience is human factors researchers and aviation psychologists from government and industry.

  4. Predictive error detection in pianists: a combined ERP and motion capture study

    PubMed Central

    Maidhof, Clemens; Pitkäniemi, Anni; Tervaniemi, Mari

    2013-01-01

    Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported pre-error negativity already occurring approximately 70–100 ms before the error had been committed and audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error-monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an upcoming error. PMID

  5. Prediction Errors but Not Sharpened Signals Simulate Multivoxel fMRI Patterns during Speech Perception

    PubMed Central

    Davis, Matthew H.

    2016-01-01

    Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. The interaction between prior

  6. Neural Activities Underlying the Feedback Express Salience Prediction Errors for Appetitive and Aversive Stimuli

    PubMed Central

    Gu, Yan; Hu, Xueping; Pan, Weigang; Yang, Chun; Wang, Lijun; Li, Yiyuan; Chen, Antao

    2016-01-01

    Feedback information is essential for us to adapt appropriately to the environment. The feedback-related negativity (FRN), a frontocentral negative deflection after the delivery of feedback, has been found to be larger for outcomes that are worse than expected, and it reflects a reward prediction error derived from the midbrain dopaminergic projections to the anterior cingulate cortex (ACC), as stated in reinforcement learning theory. In contrast, the prediction of response-outcome (PRO) model claims that the neural activity in the mediofrontal cortex (mPFC), especially the ACC, is sensitive to the violation of expectancy, irrespective of the valence of feedback. Additionally, increasing evidence has demonstrated significant activities in the striatum, anterior insula and occipital lobe for unexpected outcomes independently of their valence. Thus, the neural mechanism of the feedback remains under dispute. Here, we investigated the feedback with monetary reward and electrical pain shock in one task via functional magnetic resonance imaging. The results revealed significant prediction-error-related activities in the bilateral fusiform gyrus, right middle frontal gyrus and left cingulate gyrus for both money and pain. This implies that some regions underlying the feedback may signal a salience prediction error rather than a reward prediction error. PMID:27694920

  7. Standard Deviation and Intra Prediction Mode Based Adaptive Spatial Error Concealment (SEC) in H.264/AVC

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Wang, Lei; Ikenaga, Takeshi; Goto, Satoshi

    Transmission of compressed video over error prone channels may result in packet losses or errors, which can significantly degrade the image quality. Therefore an error concealment scheme is applied at the video receiver side to mask the damaged video. Considering there are 3 types of MBs (Macro Blocks) in natural video frame, i.e., Textural MB, Edged MB, and Smooth MB, this paper proposes an adaptive spatial error concealment which can choose 3 different methods for these 3 different MBs. For criteria of choosing appropriate method, 2 factors are taken into consideration. Firstly, standard deviation of our proposed edge statistical model is exploited. Secondly, some new features of latest video compression standard H.264/AVC, i.e., intra prediction mode is also considered for criterion formulation. Compared with previous works, which are only based on deterministic measurement, proposed method achieves the best image recovery. Subjective and objective image quality evaluations in experiments confirmed this.
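The three-way macroblock classification above implies a simple selection rule. A minimal sketch of such a dispatcher (the threshold value and the way neighbouring intra prediction modes are summarized are illustrative assumptions, not the paper's exact criteria):

```python
def choose_concealment(neighbor_std, neighbor_intra_modes, edge_thr=20.0):
    """Pick a spatial error concealment method for a lost macroblock from
    two cues available at the decoder: the standard deviation of the edge
    statistical model over neighbouring MBs, and the intra prediction
    modes of those neighbours (H.264/AVC)."""
    if neighbor_std < edge_thr:
        return "smooth"    # low activity: plain (e.g. bilinear) interpolation
    if len(set(neighbor_intra_modes)) == 1:
        return "edged"     # one consistent direction: directional interpolation
    return "textural"      # mixed directions: texture-oriented concealment
```

Each branch then invokes the concealment method suited to that MB type, which is the adaptivity the abstract credits for the improved recovery over single-method schemes.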

  8. Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans.

    PubMed

    Pessiglione, Mathias; Seymour, Ben; Flandin, Guillaume; Dolan, Raymond J; Frith, Chris D

    2006-08-31

    Theories of instrumental learning are centred on understanding how success and failure are used to improve future decisions. These theories highlight a central role for reward prediction errors in updating the values associated with available actions. In animals, substantial evidence indicates that the neurotransmitter dopamine might have a key function in this type of learning, through its ability to modulate cortico-striatal synaptic efficacy. However, no direct evidence links dopamine, striatal activity and behavioural choice in humans. Here we show that, during instrumental learning, the magnitude of reward prediction error expressed in the striatum is modulated by the administration of drugs enhancing (3,4-dihydroxy-L-phenylalanine; L-DOPA) or reducing (haloperidol) dopaminergic function. Accordingly, subjects treated with L-DOPA have a greater propensity to choose the most rewarding action relative to subjects treated with haloperidol. Furthermore, incorporating the magnitude of the prediction errors into a standard action-value learning algorithm accurately reproduced subjects' behavioural choices under the different drug conditions. We conclude that dopamine-dependent modulation of striatal activity can account for how the human brain uses reward prediction errors to improve future decisions.

  9. Prediction Error Demarcates the Transition from Retrieval, to Reconsolidation, to New Learning

    ERIC Educational Resources Information Center

    Sevenster, Dieuwke; Beckers, Tom; Kindt, Merel

    2014-01-01

    Although disrupting reconsolidation is promising in targeting emotional memories, the conditions under which memory becomes labile are still unclear. The current study showed that post-retrieval changes in expectancy as an index for prediction error may serve as a read-out for the underlying processes engaged by memory reactivation. Minor…

  10. The effect of prediction error correlation on optimal sensor placement in structural dynamics

    NASA Astrophysics Data System (ADS)

    Papadimitriou, Costas; Lombaert, Geert

    2012-04-01

    The problem of estimating the optimal sensor locations for parameter estimation in structural dynamics is re-visited. The effect of spatially correlated prediction errors on the optimal sensor placement is investigated. The information entropy is used as a performance measure of the sensor configuration. The optimal sensor location is formulated as an optimization problem involving discrete-valued variables, which is solved using computationally efficient sequential sensor placement algorithms. Asymptotic estimates for the information entropy are used to develop useful properties that provide insight into the dependence of the information entropy on the number and location of sensors. A theoretical analysis shows that the spatial correlation length of the prediction errors controls the minimum distance between the sensors and should be taken into account when designing optimal sensor locations with potential sensor distances up to the order of the characteristic length of the dynamic problem considered. Implementation issues for modal identification and structural-related model parameter estimation are addressed. Theoretical and computational developments are illustrated by designing the optimal sensor configurations for a continuous beam model, a discrete chain-like stiffness-mass model and a finite element model of a footbridge in Wetteren (Belgium). Results point out the crucial effect the spatial correlation of the prediction errors has on the design of optimal sensor locations for structural dynamics applications, revealing simultaneously potential inadequacies of spatially uncorrelated prediction error models.
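A sequential (greedy forward) sensor placement algorithm of the kind referenced above can be sketched as follows, assuming uncorrelated prediction errors so that minimizing the information entropy reduces to maximizing the determinant of the Fisher information built from the candidate mode-shape matrix. The regularization constant is an implementation convenience, not part of the method:

```python
import numpy as np

def sequential_placement(Phi, n_sensors, eps=1e-9):
    """Greedy forward placement: Phi is (n_candidates x n_parameters),
    each row the modal/sensitivity vector at a candidate DOF. At each
    step, add the candidate maximizing log det of Phi_s^T Phi_s (a small
    eps*I keeps the determinant defined before the matrix gains full rank)."""
    chosen = []
    remaining = list(range(Phi.shape[0]))
    for _ in range(n_sensors):
        def score(c):
            rows = Phi[chosen + [c]]
            Q = rows.T @ rows + eps * np.eye(Phi.shape[1])
            return np.linalg.slogdet(Q)[1]
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

With spatially correlated prediction errors, the information matrix would instead involve the inverse error covariance, which is what enforces the minimum inter-sensor distance the paper analyzes.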

  11. EEG Error Prediction as a Solution for Combining the Advantages of Retrieval Practice and Errorless Learning

    PubMed Central

    Riley, Ellyn A.; McFarland, Dennis J.

    2017-01-01

    Given the frequency of naming errors in aphasia, a common aim of speech and language rehabilitation is the improvement of naming. Based on evidence of significant word recall improvements in patients with memory impairments, errorless learning methods have been successfully applied to naming therapy in aphasia; however, other evidence suggests that although errorless learning can lead to better performance during treatment sessions, retrieval practice may be the key to lasting improvements. Task performance may vary with brain state (e.g., level of arousal, degree of task focus), and changes in brain state can be detected using EEG. With the ultimate goal of designing a system that monitors patient brain state in real time during therapy, we sought to determine whether errors could be predicted using spectral features obtained from an analysis of EEG. Thus, this study aimed to investigate the use of individual EEG responses to predict error production in aphasia. Eight participants with aphasia each completed 900 object-naming trials across three sessions while EEG was recorded and response accuracy scored for each trial. Analysis of the EEG response for seven of the eight participants showed significant correlations between EEG features and response accuracy (correct vs. incorrect) and error correction (correct, self-corrected, incorrect). Furthermore, upon combining the training data for the first two sessions, the model generalized to predict accuracy for performance in the third session for seven participants when accuracy was used as a predictor, and for five participants when error correction category was used as a predictor. With such ability to predict errors during therapy, it may be possible to use this information to intervene with errorless learning strategies only when necessary, thereby allowing patients to benefit from both the high within-session success of errorless learning as well as the longer-term improvements associated with retrieval practice.

  12. Multipollutant measurement error in air pollution epidemiology studies arising from predicting exposures with penalized regression splines.

    PubMed

    Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D; Szpiro, Adam A

    2016-11-01

    Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals.

  13. Working Memory Capacity Predicts Selection and Identification Errors in Visual Search.

    PubMed

    Peltier, Chad; Becker, Mark W

    2016-11-17

    As public safety relies on the ability of professionals, such as radiologists and baggage screeners, to detect rare targets, it could be useful to identify predictors of visual search performance. Schwark, Sandry, and Dolgov found that working memory capacity (WMC) predicts hit rate and reaction time in low prevalence searches. This link was attributed to higher WMC individuals exhibiting a higher quitting threshold and increasing the probability of finding the target before terminating search in low prevalence search. These conclusions were limited based on the methods; without eye tracking, the researchers could not differentiate between an increase in accuracy due to fewer identification errors (failing to identify a fixated target), selection errors (failing to fixate a target), or a combination of both. Here, we measure WMC and correlate it with reaction time and accuracy in a visual search task. We replicate the finding that WMC predicts reaction time and hit rate. However, our analysis shows that it does so through both a reduction in selection and identification errors. The correlation between WMC and selection errors is attributable to increased quitting thresholds in those with high WMC. The correlation between WMC and identification errors is less clear, though potentially attributable to increased item inspection times in those with higher WMC. In addition, unlike Schwark and coworkers, we find that these WMC effects are fairly consistent across prevalence rates rather than being specific to low-prevalence searches.

  14. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    SciTech Connect

    Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank; Morin-Ducote, Garnetta; Hudson, Kathleen B.

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.

  15. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    SciTech Connect

    Voisin, Sophie; Pinto, Frank M; Morin-Ducote, Garnetta; Hudson, Kathy; Tourassi, Georgia

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC = 0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best-performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists' gaze behavior and image content.
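
The two records above describe the same pipeline: concatenate gaze-derived and image-derived features, train a classifier to predict reader error, and score it by AUC. A minimal sketch of that pipeline on synthetic data (the feature counts, the logistic-regression learner, and all numbers are illustrative assumptions, not the papers' actual method or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-case features: gaze behavior (e.g. dwell time, fixation
# count) and image texture descriptors; labels mark diagnostic errors.
n_cases = 200
gaze = rng.normal(size=(n_cases, 3))
image = rng.normal(size=(n_cases, 4))
X = np.hstack([gaze, image])                 # merged feature vector
w_true = rng.normal(size=X.shape[1])
y = (X @ w_true + rng.normal(scale=0.5, size=n_cases) > 0).astype(float)

# Logistic regression by gradient descent, a simple stand-in for the
# unspecified "machine learning algorithms" of the abstracts.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n_cases

# AUC via the rank-sum formulation: fraction of (positive, negative)
# pairs ranked correctly by the model's score.
scores = X @ w
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(f"training AUC = {auc:.3f}")
```

Group models would pool cases from all readers into one `X`; personalized models would fit the same learner per reader.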

  16. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  17. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between
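
The weak correlations reported above are ordinary Pearson r values between per-beam passing rates and anatomy-based dose errors. A small sketch of the computation on made-up numbers (hypothetical passing rates and dose errors, not the study's data):

```python
import numpy as np

# Hypothetical per-plan data: gamma passing rates (%) and the corresponding
# absolute error in a clinical dose metric (e.g. parotid mean dose, Gy).
passing_rate = np.array([99.2, 97.5, 95.1, 92.8, 90.3, 88.6, 85.0, 82.4])
dose_error   = np.array([1.8,  0.4,  2.1,  0.9,  1.5,  0.7,  2.3,  1.1])

# Pearson's r: covariance normalized by the product of standard deviations.
r = np.corrcoef(passing_rate, dose_error)[0, 1]
print(f"Pearson r = {r:.3f}")
```

A |r| near zero, as in this toy example, is exactly the situation the study describes: the passing rate carries little information about the clinically relevant dose error.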

  18. Effects of optimal initial errors on predicting the seasonal reduction of the upstream Kuroshio transport

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Wang, Qiang; Mu, Mu; Liang, Peng

    2016-10-01

    With the Regional Ocean Modeling System (ROMS), we realistically simulated the transport variations of the upstream Kuroshio (referring to the Kuroshio from its origin to the south of Taiwan), particularly the seasonal transport reduction. Then, we investigated the effects of the optimal initial errors, estimated by the conditional nonlinear optimal perturbation (CNOP) approach, on predicting the seasonal transport reduction. Two transport reduction events (denoted as Event 1 and Event 2) were chosen, and CNOP1 and CNOP2 were obtained for each event. By examining the spatial structures of the two types of CNOPs, we found that the dominant amplitudes are located around (128°E, 17°N) horizontally and in the upper 1000 m vertically. For each event, the two CNOPs caused large prediction errors. Specifically, at the prediction time, CNOP1 (CNOP2) develops into an anticyclonic (cyclonic) eddy-like structure centered around 124°E, leading to an increase (decrease) of the upstream Kuroshio transport. By investigating the time evolution of the CNOPs in Event 1, we found that the eddy-like structures originating from east of Luzon gradually grow and simultaneously propagate westward. The eddy-energetics analysis indicated that the errors obtain energy from the background state through barotropic and baroclinic instabilities and that the latter plays the more important role. These results suggest that improving the initial conditions east of Luzon could lead to better prediction of the upstream Kuroshio transport variation.

  19. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action.

    PubMed

    Bissonette, Gregory B; Roesch, Matthew R

    2016-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum.

  20. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action

    PubMed Central

    Roesch, Matthew R.

    2017-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum. PMID:26276036

  1. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    NASA Astrophysics Data System (ADS)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD  =  1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volumetric histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
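
A toy version of the idea above: fit a regression from plan-derived leaf-motion features (position, velocity, motion direction) to the delivered-minus-planned position difference. All data below are synthetic, and the ordinary-least-squares model is an illustrative stand-in for the paper's machine-learning models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data extracted from plan files: per-control-point
# leaf position (mm), leaf speed (mm/s), and motion direction (+1 toward
# the isocenter, -1 away). The target is the DynaLog-reported positional
# error (delivered minus planned).
n = 500
position  = rng.uniform(-100, 100, n)
speed     = rng.uniform(0, 25, n)
direction = rng.choice([-1.0, 1.0], n)
# Synthetic ground truth: error grows with speed and flips with direction.
error = 0.02 * speed * direction + rng.normal(scale=0.05, size=n)

# Ordinary least squares as a minimal stand-in for the trained model.
X = np.column_stack([position, speed, speed * direction, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, error, rcond=None)

predicted = X @ coef
rmse = np.sqrt(np.mean((predicted - error) ** 2))
print(f"RMSE of predicted leaf positional error: {rmse:.3f} mm")
```

In the paper's workflow, `predicted` would be added to the planned positions before recomputing dose in the TPS, giving the more realistic delivery estimate described above.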

  2. Towards a model to predict macular dichromats' naming errors: effects of CIE saturation and dichromatism type.

    PubMed

    Lillo, J; Vitini, I; Caballero, A; Moreira, H

    2001-05-01

    Thirty macular dichromat children (12 protanopes + 18 deuteranopes) and 29 controls, between 5 and 9 years old, participated in a monolexemic denomination task. Their clinical status was determined after a repeated application of a chromatic test set (Ishihara, CUCVT, and TIDA). The stimuli to be named were 12 tiles from the Color-Aid set belonging to the green, blue, and purple basic categories. Results showed that: (a) Dichromats made more naming errors when low saturation stimuli were used; (b) protanopes made more errors that deuteranopes; and (c) pseudoisochromatic lines predicted accurately the type of most frequent naming errors but they underestimated macular dichromats' functional capacity to name colors. Results are consistent with a model of macular dichromats' vision that hypothesizes a residual third type of cone in the periphery of the retina. Implications of this fact for everyday use of colors by macular dichromats' and for the validity of standard clinical diagnoses are discussed.

  3. Pupillary response predicts multiple object tracking load, error rate, and conscientiousness, but not inattentional blindness.

    PubMed

    Wright, Timothy J; Boot, Walter R; Morgan, Chelsea S

    2013-09-01

    Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB.

  4. Dopamine prediction errors in reward learning and addiction: from theory to neural circuitry

    PubMed Central

    Keiflin, Ronald; Janak, Patricia H.

    2015-01-01

    Midbrain dopamine (DA) neurons are proposed to signal reward prediction error (RPE), a fundamental parameter in associative learning models. This RPE hypothesis provides a compelling theoretical framework for understanding DA function in reward learning and addiction. New studies support a causal role for DA-mediated RPE activity in promoting learning about natural reward; however, this question has not been explicitly tested in the context of drug addiction. In this review, we integrate theoretical models with experimental findings on the activity of DA systems, and on the causal role of specific neuronal projections and cell types, to provide a circuit-based framework for probing DA-RPE function in addiction. By examining error-encoding DA neurons in the neural network in which they are embedded, hypotheses regarding circuit-level adaptations that possibly contribute to pathological error-signaling and addiction can be formulated and tested. PMID:26494275
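
The RPE at the core of these associative learning models is simply the difference between obtained and expected reward, which then updates the expectation. A minimal Rescorla-Wagner-style sketch with illustrative values:

```python
# Reward prediction error (RPE) learning rule: delta = r - V, then
# V <- V + alpha * delta. Learning rate and reward schedule are illustrative.
alpha = 0.1           # learning rate
V = 0.0               # learned value of the cue
rewards = [1.0] * 20  # cue reliably followed by reward

deltas = []
for r in rewards:
    delta = r - V     # prediction error: obtained minus expected
    V += alpha * delta
    deltas.append(delta)

# As learning proceeds, the reward becomes predicted and the RPE shrinks,
# mirroring the decline of phasic DA responses to fully predicted rewards.
print(f"first RPE = {deltas[0]:.2f}, last RPE = {deltas[-1]:.2f}, V = {V:.2f}")
```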

  5. Integrating a calibrated groundwater flow model with error-correcting data-driven models to improve predictions

    NASA Astrophysics Data System (ADS)

    Demissie, Yonas K.; Valocchi, Albert J.; Minsker, Barbara S.; Bailey, Barbara A.

    2009-01-01

    Physically-based groundwater models (PBMs), such as MODFLOW, contain numerous parameters which are usually estimated using statistically-based methods, which assume that the underlying error is white noise. However, because of the practical difficulties of representing all the natural subsurface complexity, numerical simulations are often prone to large uncertainties that can result in both random and systematic model error. The systematic errors can be attributed to conceptual, parameter, and measurement uncertainty, and most often it can be difficult to determine their physical cause. In this paper, we have developed a framework to handle systematic error in physically-based groundwater flow model applications that uses error-correcting data-driven models (DDMs) in a complementary fashion. The data-driven models are separately developed to predict the MODFLOW head prediction errors, which were subsequently used to update the head predictions at existing and proposed observation wells. The framework is evaluated using a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study includes structural, parameter, and measurement uncertainties. In terms of bias and prediction uncertainty range, the complementary modeling framework has shown substantial improvements (up to 64% reduction in RMSE and prediction error ranges) over the original MODFLOW model, in both the calibration and the verification periods. Moreover, the spatial and temporal correlations of the prediction errors are significantly reduced, thus resulting in reduced local biases and structures in the model prediction errors.
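
The complementary framework above can be caricatured in a few lines: fit a data-driven model to the physical model's residuals, then add its output back as a correction. The quadratic "physics" and the polynomial DDM below are illustrative stand-ins, not the paper's MODFLOW setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hydraulic heads: the truth has a systematic trend that the
# physical model misses, plus random measurement noise.
x = np.linspace(0, 10, 50)
truth = 100 - 2 * x + 0.15 * x**2
pbm_pred = 100 - 2 * x                       # physical model: misses curvature
obs = truth + rng.normal(scale=0.1, size=x.size)

# Data-driven model learns the residuals (systematic + random error) and
# corrects the physical model's predictions.
residuals = obs - pbm_pred
ddm = np.polynomial.Polynomial.fit(x, residuals, 2)
corrected = pbm_pred + ddm(x)

rmse_before = np.sqrt(np.mean((pbm_pred - truth) ** 2))
rmse_after = np.sqrt(np.mean((corrected - truth) ** 2))
print(f"RMSE: {rmse_before:.3f} -> {rmse_after:.3f}")
```

The correction removes the systematic structure in the residuals, which is the mechanism behind the RMSE and bias reductions reported in the abstract.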

  6. Light induced fluorescence for predicting API content in tablets: sampling and error.

    PubMed

    Domike, Reuben; Ngai, Samuel; Cooney, Charles L

    2010-05-31

    The use of a light induced fluorescence (LIF) instrument to estimate the total content of fluorescent active pharmaceutical ingredient in a tablet from surface sampling was demonstrated. Different LIF sampling strategies were compared to a total-tablet ultraviolet (UV) absorbance test for each tablet. Testing was completed on tablets with triamterene as the active ingredient and on tablets with caffeine as the active ingredient, each with a range of concentrations. The LIF instrument estimated the active ingredient content to within 10% of the total-tablet test result more than 95% of the time. The largest error amongst all of the tablets tested was 13%. The RMSEP between the techniques was in the range of 4.4-7.9%. A theory of the error associated with surface sampling was developed and found to accurately predict the experimental error. This theory uses one empirically determined parameter: the deviation of estimations at different locations on the tablet surface. As this empirical parameter can be found rapidly, correct use of this prediction of error may reduce the effort required for calibration and validation studies of non-destructive surface measurement techniques, and thereby rapidly determine appropriate analytical techniques for estimating content uniformity in tablets.

  7. Mediofrontal event-related potentials in response to positive, negative and unsigned prediction errors.

    PubMed

    Sambrook, Thomas D; Goslin, Jeremy

    2014-08-01

    Reinforcement learning models make use of reward prediction errors (RPEs), the difference between an expected and obtained reward. There is evidence that the brain computes RPEs, but an outstanding question is whether positive RPEs ("better than expected") and negative RPEs ("worse than expected") are represented in a single integrated system. An electrophysiological component, the feedback-related negativity, has been claimed to encode an RPE, but its relative sensitivity to the utility of positive and negative RPEs remains unclear. This study explored the question by varying the utility of positive and negative RPEs in a design that controlled for other closely related properties of feedback and could distinguish utility from salience. It revealed a mediofrontal sensitivity to utility, for positive RPEs at 275-310 ms and for negative RPEs at 310-390 ms. These effects were preceded and succeeded by a response consistent with an unsigned prediction error, or "salience" coding.

  8. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
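
For context, the conventional information-criterion weighting that the study critiques assigns model i the weight exp(-dIC_i/2), normalized over all models, where dIC_i is the difference from the smallest criterion value. This is what drives near-100% weights when one dIC is large. A sketch with hypothetical AIC values:

```python
import numpy as np

# Hypothetical AIC values for three alternative conceptual models.
aic = np.array([230.4, 232.1, 241.8])

# Akaike-style model averaging weights: w_i proportional to exp(-dAIC_i/2).
d = aic - aic.min()
weights = np.exp(-0.5 * d)
weights /= weights.sum()
print(weights.round(3))
```

Because the weights decay exponentially in dAIC, a model only ~11 units worse already receives essentially zero weight, illustrating why ignoring correlated total errors in the likelihood can make the best model's weight collapse toward 100%.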

  9. Effects of modeling errors on trajectory predictions in air traffic control automation

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda

    1996-01-01

    Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performances and weather conditions. In contrast, actual trajectories are flown in real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed trajectories and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors by developing a linear time varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.

  10. Predicting Pilot Error in NextGen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and on the relevance of the modeling and validation efforts to NextGen technology and procedures.

  11. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum

    PubMed Central

    Ziauddeen, Hisham; Vestergaard, Martin D.; Spencer, Tom

    2017-01-01

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support for a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. 
Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that

  12. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.

    PubMed

    Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C

    2017-02-15

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support for a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect.
Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that
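
Adaptive prediction error coding, as described in the two records above, amounts to scaling the raw error by the spread of the reward distribution. An illustrative sketch (divisive normalization by the sample SD is one simple way to express the idea, not necessarily the authors' exact model; all numbers are made up):

```python
import numpy as np

# The same prediction faces rewards drawn from a narrow and a wide
# distribution; raw errors differ by context, scaled errors do not.
prediction = 10.0
rewards_low_sd  = np.array([9.0, 11.0, 10.5, 9.5])   # small-SD context
rewards_high_sd = np.array([2.0, 18.0, 14.0, 6.0])   # large-SD context

def adaptive_rpe(rewards, prediction):
    raw = rewards - prediction
    return raw / rewards.std()   # divisively normalized prediction error

low = adaptive_rpe(rewards_low_sd, prediction)
high = adaptive_rpe(rewards_high_sd, prediction)
# After scaling, errors from both contexts span a comparable range, which
# is the "coding relative to reward variability" the abstracts describe.
print(low.round(2), high.round(2))
```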

  13. Belief about nicotine selectively modulates value and reward prediction error signals in smokers.

    PubMed

    Gu, Xiaosi; Lohrenz, Terry; Salas, Ramiro; Baldwin, Philip R; Soltani, Alireza; Kirk, Ulrich; Cinciripini, Paul M; Montague, P Read

    2015-02-24

    Little is known about how prior beliefs impact biophysically described processes in the presence of neuroactive drugs, which presents a profound challenge to the understanding of the mechanisms and treatments of addiction. We engineered smokers' prior beliefs about the presence of nicotine in a cigarette smoked before a functional magnetic resonance imaging session where subjects carried out a sequential choice task. Using a model-based approach, we show that smokers' beliefs about nicotine specifically modulated learning signals (value and reward prediction error) defined by a computational model of mesolimbic dopamine systems. Belief of "no nicotine in cigarette" (compared with "nicotine in cigarette") strongly diminished neural responses in the striatum to value and reward prediction errors and reduced the impact of both on smokers' choices. These effects of belief could not be explained by global changes in visual attention and were specific to value and reward prediction errors. Thus, by modulating the expression of computationally explicit signals important for valuation and choice, beliefs can override the physical presence of a potent neuroactive compound like nicotine. These selective effects of belief demonstrate that belief can modulate model-based parameters important for learning. The implications of these findings may be far ranging because belief-dependent effects on learning signals could impact a host of other behaviors in addiction as well as in other mental health problems.

  14. Prediction error as a linear function of reward probability is coded in human nucleus accumbens.

    PubMed

    Abler, Birgit; Walter, Henrik; Erk, Susanne; Kammerer, Hannes; Spitzer, Manfred

    2006-06-01

    Reward probability has been shown to be coded by dopamine neurons in monkeys. Phasic neuronal activation not only increased linearly with reward probability upon expectation of reward, but also varied monotonically across the range of probabilities upon omission or receipt of rewards, therefore modeling discrepancies between expected and received rewards. Such a discrete coding of prediction error has been suggested to be one of the basic principles of learning. We used functional magnetic resonance imaging (fMRI) to show that the human dopamine system codes reward probability and prediction error in a similar way. We used a simple delayed incentive task with a discrete range of reward probabilities from 0% to 100%. Activity in the nucleus accumbens of human subjects strongly resembled the phasic responses found in monkey neurons. First, during the expectation period of the task, the fMRI signal in the human nucleus accumbens (NAc) increased linearly with the probability of the reward. Second, during the outcome phase, activity in the NAc coded the prediction error as a linear function of reward probabilities. Third, we found that the NAc signal was correlated with individual differences in sensation seeking and novelty seeking, indicating a link between individual fMRI activation of the dopamine system in a probabilistic paradigm and personality traits previously suggested to be linked with reward processing. We therefore identify two different covariates that model activity in the NAc: specific properties of a psychological task and individual character traits.
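
The linear coding reported above follows directly from defining the prediction error as outcome minus cued probability; a tiny illustration (values chosen for clarity):

```python
# For a reward coded 1 (received) or 0 (omitted) and an expectation equal
# to the cued probability p, the prediction error is outcome - p.
probabilities = [0.0, 0.25, 0.5, 0.75, 1.0]

pe_on_reward   = [1.0 - p for p in probabilities]  # decreases linearly in p
pe_on_omission = [0.0 - p for p in probabilities]  # also linear in p

print(pe_on_reward)    # [1.0, 0.75, 0.5, 0.25, 0.0]
print(pe_on_omission)  # [0.0, -0.25, -0.5, -0.75, -1.0]
```

A fully predicted reward (p = 1) yields no error on receipt, while an unlikely reward (p near 0) yields the largest positive error, matching the monotonic outcome-phase responses described in the abstract.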

  15. Ventral striatal prediction error signaling is associated with dopamine synthesis capacity and fluid intelligence.

    PubMed

    Schlagenhauf, Florian; Rapp, Michael A; Huys, Quentin J M; Beck, Anne; Wüstenberg, Torsten; Deserno, Lorenz; Buchholz, Hans-Georg; Kalbitzer, Jan; Buchert, Ralph; Bauer, Michael; Kienast, Thorsten; Cumming, Paul; Plotkin, Michail; Kumakura, Yoshitaka; Grace, Anthony A; Dolan, Raymond J; Heinz, Andreas

    2013-06-01

    Fluid intelligence represents the capacity for flexible problem solving and rapid behavioral adaptation. Rewards drive flexible behavioral adaptation, in part via a teaching signal expressed as reward prediction errors in the ventral striatum, which has been associated with phasic dopamine release in animal studies. We examined a sample of 28 healthy male adults using multimodal imaging and biological parametric mapping with (1) functional magnetic resonance imaging during a reversal learning task and (2) in a subsample of 17 subjects also with positron emission tomography using 6-[18F]fluoro-L-DOPA to assess dopamine synthesis capacity. Fluid intelligence was measured using a battery of nine standard neuropsychological tests. Ventral striatal BOLD correlates of reward prediction errors were positively correlated with fluid intelligence and, in the right ventral striatum, also inversely correlated with dopamine synthesis capacity (FDOPA K_in^app). When exploring aspects of fluid intelligence, we observed that prediction error signaling correlates with complex attention and reasoning. These findings indicate that individual differences in the capacity for flexible problem solving relate to ventral striatal activation during reward-related learning, which in turn proved to be inversely associated with ventral striatal dopamine synthesis capacity.

  16. Dissociable neural representations of reinforcement and belief prediction errors underlie strategic learning.

    PubMed

    Zhu, Lusha; Mathewson, Kyle E; Hsu, Ming

    2012-01-31

    Decision-making in the presence of other competitive intelligent agents is fundamental for social and economic behavior. Such decisions require agents to behave strategically, where in addition to learning about the rewards and punishments available in the environment, they also need to anticipate and respond to actions of others competing for the same rewards. However, whereas we know much about strategic learning at both theoretical and behavioral levels, we know relatively little about the underlying neural mechanisms. Here, we show using a multi-strategy competitive learning paradigm that strategic choices can be characterized by extending the reinforcement learning (RL) framework to incorporate agents' beliefs about the actions of their opponents. Furthermore, using this characterization to generate putative internal values, we used model-based functional magnetic resonance imaging to investigate neural computations underlying strategic learning. We found that the distinct notions of prediction errors derived from our computational model are processed in a partially overlapping but distinct set of brain regions. Specifically, we found that the RL prediction error was correlated with activity in the ventral striatum. In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning. These results suggest a model of strategic behavior where learning arises from interaction of dissociable reinforcement and belief-based inputs.
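The two dissociable error signals this study describes can be sketched with a generic update rule: a reinforcement prediction error compares the received payoff with the learned value of one's own action, while a belief prediction error compares the opponent's observed action with one's current belief about it. The learning rates and update rules below are illustrative, not the authors' fitted model:

```python
# Minimal sketch of reinforcement vs. belief prediction errors in a
# two-action competitive game (illustration only, not the paper's code).

def update(q, belief, my_action, opp_action, payoff, alpha=0.2, beta=0.2):
    rl_pe = payoff - q[my_action]            # reinforcement prediction error
    q[my_action] += alpha * rl_pe
    # belief PE per action: observed indicator (1 if opponent played it) minus belief
    belief_pe = [(1.0 if a == opp_action else 0.0) - b for a, b in enumerate(belief)]
    for a in range(len(belief)):
        belief[a] += beta * belief_pe[a]
    return rl_pe, belief_pe

q = [0.0, 0.0]        # learned values of my two actions
belief = [0.5, 0.5]   # probability assigned to each opponent action
rl_pe, belief_pe = update(q, belief, my_action=0, opp_action=1, payoff=1.0)
```

After one round, the chosen action's value moves toward the payoff, and belief shifts toward the action the opponent actually played.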

  17. Intrinsic absolute bioavailability prediction in rats based on in situ absorption rate constants and/or in vitro partition coefficients: 6-fluoroquinolones.

    PubMed

    Sánchez-Castaño, G; Ruíz-García, A; Bañón, N; Bermejo, M; Merino, V; Freixas, J; Garrigues, T M; Plá-Delfina, J M

    2000-11-01

    A preliminary study attempting to predict the intrinsic absolute bioavailability of a group of antibacterial 6-fluoroquinolones-including true and imperfect homologues as well as heterologues-was carried out. The intrinsic absolute bioavailability of the test compounds, F, was assessed on permanently cannulated conscious rats by comparing the trapezoidal normalized areas under the plasma concentration-time curves obtained by intravenous and oral routes (n = 8-12). The high-performance liquid chromatography analytical methods used for plasma samples are described. Prediction of the absolute bioavailability of the compounds was based on their intrinsic rat gut in situ absorption rate constant, k(a). The working equation was: where T represents the mean absorbing time. A T value of 0.93 (+/-0.06) h provides the best correlation between predicted and experimentally obtained bioavailabilities (F' and F, respectively) when k(a) values are used (slope a = 1.10; intercept b = -0.05; r = 0.991). The k(a) values can also be expressed as a function of the in vitro partition coefficients, P, between n-octanol and a phosphate buffer. In this case, theoretical k(a) values can be determined with the parameters of a standard k(a)/P correlation previously established for a group of model compounds. When P values are taken instead of k(a) values, reliable bioavailability predictions can also be made. These and other relevant features of the method are discussed.
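The working equation is elided in this record. The quantities reported (a mean absorbing time T and a first-order absorption rate constant k(a)) are consistent with the standard first-order absorption relation F' = 1 - exp(-k(a)·T); treat that reconstruction, and the k(a) value used below, as assumptions for illustration:

```python
import math

# Hedged sketch: predicted intrinsic absolute bioavailability from a
# first-order absorption rate constant, assuming F' = 1 - exp(-ka * T).
# T = 0.93 h is the mean absorbing time reported in the abstract;
# the ka value used below is hypothetical.

def predicted_bioavailability(ka_per_hour: float, T_hours: float = 0.93) -> float:
    """Predicted fraction absorbed under a first-order absorption model."""
    return 1.0 - math.exp(-ka_per_hour * T_hours)

f_pred = predicted_bioavailability(2.0)  # hypothetical ka = 2.0 h^-1
```

The prediction rises monotonically with k(a) and saturates at 1, matching the intuition that rapidly absorbed compounds approach complete bioavailability.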

  18. Standard error of inverse prediction for dose-response relationship: approximate and exact statistical inference.

    PubMed

    Demidenko, Eugene; Williams, Benjamin B; Flood, Ann Barry; Swartz, Harold M

    2013-05-30

    This paper develops a new metric, the standard error of inverse prediction (SEIP), for a dose-response relationship (calibration curve) when dose is estimated from response via inverse regression. SEIP can be viewed as a generalization of the coefficient of variation to the regression problem in which x is predicted from the y-value. We employ nonstandard statistical methods to treat the inverse prediction, which has an infinite mean and variance due to the presence of a normally distributed variable in the denominator. We develop confidence intervals and hypothesis testing for SEIP on the basis of the normal approximation and using the exact statistical inference based on the noncentral t-distribution. We derive the power functions for both approaches and test them via statistical simulations. The theoretical SEIP, as the ratio of the regression standard error to the slope, is viewed as the reciprocal of the signal-to-noise ratio, a popular measure in signal processing. The SEIP, as a figure of merit for inverse prediction, can be used for comparison of calibration curves with different dependent variables and slopes. We illustrate our theory with electron paramagnetic resonance tooth dosimetry for a rapid estimation of the radiation dose received in the event of nuclear terrorism.
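The theoretical SEIP described above, the regression standard error divided by the slope, is easy to compute from a fitted calibration line. The dose-response data below are hypothetical, for illustration only:

```python
# Sketch: SEIP = residual standard error / |slope| of a linear calibration
# curve, i.e. the reciprocal of the signal-to-noise ratio (our illustration).

def seip(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    # residual standard error with n - 2 degrees of freedom
    rss = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    s = (rss / (n - 2)) ** 0.5
    return s / abs(slope)

doses = [0.0, 1.0, 2.0, 3.0, 4.0]          # hypothetical calibration doses
responses = [0.1, 1.9, 4.1, 5.9, 8.0]      # roughly linear, slope ~ 2
print(seip(doses, responses))
```

A steeper slope or a tighter fit both shrink SEIP, which is what makes it useful for comparing calibration curves with different response scales.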

  19. A Method for Selecting between Fisher's Linear Classification Functions and Least Absolute Deviation in Predictive Discriminant Analysis.

    ERIC Educational Resources Information Center

    Meshbane, Alice; Morris, John D.

    A method for comparing the cross-validated classification accuracy of Fisher's linear classification functions (FLCFs) and the least absolute deviation is presented under varying data conditions for the two-group classification problem. With this method, separate-group as well as total-sample proportions of correct classifications can be compared…

  20. Filtering and Predicting Complex Nonlinear Turbulent Dynamical Systems with Model Error

    NASA Astrophysics Data System (ADS)

    Chen, Nan

    This dissertation includes five topics in filtering and predicting complex turbulent systems with model error from noisy partial observations. An efficient and accurate model calibration is the prerequisite of filtering and prediction. The first topic involves adopting Bayesian inference that incorporates data augmentation in a Markov chain Monte Carlo algorithm to estimate the parameters in a reduced model that describes nature with hidden instability. A novel pre-estimation of hidden processes greatly enhances the efficiency of the algorithm. The model equipped with the estimated parameters succeeds in predicting the extreme events in nature. The filtering and prediction of the Madden-Julian oscillation (MJO) and relevant tropical waves have significant implications for extended range forecasting. A physics-constrained low-order nonlinear stochastic model involving correlated multiplicative noise defined through energy conserving nonlinear interaction is developed to predict two MJO indices with different features. The special structure of the model allows efficient data assimilation and ensemble initialization algorithms for the hidden variables. Utilizing an information-theoretic framework for model calibration, the model has significant skill for determining the predictability limits of the MJO. Filtering the stochastic skeleton model for the MJO with noisy partial observations is another central topic. A nonlinear filter, which captures the inherent nonlinearity of the system, is proposed and judicious model error is included. An effectively balanced reduced filter involving a simple fast-wave averaging strategy is developed, which facilitates filtering the moisture and other fast-oscillating modes and enhances the total computational efficiency. Both filters succeed in filtering the MJO and other large-scale features. The last two topics focus on filtering complex turbulent systems within a conditional Gaussian framework. Despite the conditional Gaussianity

  1. Putting reward in art: A tentative prediction error account of visual art

    PubMed Central

    Van de Cruys, Sander; Wagemans, Johan

    2011-01-01

    The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations. PMID:23145260

  2. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  3. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
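The Mean Absolute Percentage Error used for the comparison above is straightforward to compute. The TBW values and method names below are hypothetical, not the study's data:

```python
# Sketch: MAPE between predicted and reference total body water values
# (hypothetical numbers for illustration only).

def mape(predicted, reference):
    """Mean absolute percentage error between predictions and reference values."""
    return 100.0 * sum(abs(p - r) / abs(r)
                       for p, r in zip(predicted, reference)) / len(reference)

reference_tbw = [35.0, 42.0, 50.0]   # litres, hypothetical reference measurements
single_freq   = [33.0, 44.5, 52.5]   # hypothetical single-frequency predictions
bis_method    = [34.2, 42.9, 51.0]   # hypothetical BIS predictions
print(mape(single_freq, reference_tbw), mape(bis_method, reference_tbw))
```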

  4. Short-term predictions by statistical methods in regions of varying dynamical error growth in a chaotic system

    NASA Astrophysics Data System (ADS)

    Mittal, A. K.; Singh, U. P.; Tiwari, A.; Dwivedi, S.; Joshi, M. K.; Tripathi, K. C.

    2015-08-01

    In a nonlinear, chaotic dynamical system, there are typically regions in which an infinitesimal error grows and regions in which it decays. If the observer does not know the evolution law, recourse is taken to non-dynamical methods, which use the past values of the observables to fit an approximate evolution law. This fitting can be local, based on past values in the neighborhood of the present value as in the case of Farmer-Sidorowich (FS) technique, or it can be global, based on all past values, as in the case of Artificial Neural Networks (ANN). Short-term predictions are then made using the approximate local or global mapping so obtained. In this study, the dependence of statistical prediction errors on dynamical error growth rates is explored using the Lorenz-63 model. The regions of dynamical error growth and error decay are identified by the bred vector growth rates or by the eigenvalues of the symmetric Jacobian matrix. The prediction errors by the FS and ANN techniques in these two regions are compared. It is found that the prediction errors by statistical methods do not depend on the dynamical error growth rate. This suggests that errors using statistical methods are independent of the dynamical situation and the statistical methods may be potentially advantageous over dynamical methods in regions of low dynamical predictability.

  5. Error estimates for density-functional theory predictions of surface energy and work function

    NASA Astrophysics Data System (ADS)

    De Waele, Sam; Lejaeghere, Kurt; Sluydts, Michael; Cottenier, Stefaan

    2016-12-01

    Density-functional theory (DFT) predictions of materials properties are becoming ever more widespread. With increased use comes the demand for estimates of the accuracy of DFT results. In view of the importance of reliable surface properties, this work calculates surface energies and work functions for a large and diverse test set of crystalline solids. They are compared to experimental values by performing a linear regression, which results in a measure of the predictable and material-specific error of the theoretical result. Two of the most prevalent functionals, the local density approximation (LDA) and the Perdew-Burke-Ernzerhof parametrization of the generalized gradient approximation (PBE-GGA), are evaluated and compared. Both LDA and PBE-GGA are found to yield accurate work functions with error bars below 0.3 eV, rivaling the experimental precision. LDA also provides satisfactory estimates for the surface energy with error bars smaller than 10%, but PBE-GGA significantly underestimates the surface energy for materials with a large correlation energy.

  6. Visual Prediction Error Spreads Across Object Features in Human Visual Cortex.

    PubMed

    Jiang, Jiefeng; Summerfield, Christopher; Egner, Tobias

    2016-12-14

    Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might "spread" from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision.

  7. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions

    PubMed Central

    Potter, Gail E.; Smieszek, Timo; Sailer, Kerstin

    2015-01-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0–5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models. PMID:26634122

  8. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg

    2015-07-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.

  9. The role of nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" for ENSO

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng; Hu, Junya; Xu, Hui

    2016-12-01

    With the Zebiak-Cane model, the present study investigates the role of model errors represented by the nonlinear forcing singular vector (NFSV) in the "spring predictability barrier" (SPB) phenomenon in ENSO prediction. The NFSV-related model errors are found to have the largest negative effect on the uncertainties of El Niño prediction and they can be classified into two types: the first is featured with a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite to the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in spring and/or summer; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate an SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve

  10. Hierarchy of prediction errors for auditory events in human temporal and frontal cortex

    PubMed Central

    Dürschmid, Stefan; Edwards, Erik; Reichert, Christoph; Dewar, Callum; Hinrichs, Hermann; Heinze, Hans-Jochen; Kirsch, Heidi E.; Dalal, Sarang S.; Deouell, Leon Y.; Knight, Robert T.

    2016-01-01

    Predictive coding theories posit that neural networks learn statistical regularities in the environment for comparison with actual outcomes, signaling a prediction error (PE) when sensory deviation occurs. PE studies in audition have capitalized on low-frequency event-related potentials (LF-ERPs), such as the mismatch negativity. However, local cortical activity is well-indexed by higher-frequency bands [high-γ band (Hγ): 80–150 Hz]. We compared patterns of human Hγ and LF-ERPs in deviance detection using electrocorticographic recordings from subdural electrodes over frontal and temporal cortices. Patients listened to trains of task-irrelevant tones in two conditions differing in the predictability of a deviation from repetitive background stimuli (fully predictable vs. unpredictable deviants). We found deviance-related responses in both frequency bands over lateral temporal and inferior frontal cortex, with an earlier latency for Hγ than for LF-ERPs. Critically, frontal Hγ activity but not LF-ERPs discriminated between fully predictable and unpredictable changes, with frontal cortex sensitive to unpredictable events. The results highlight the role of frontal cortex and Hγ activity in deviance detection and PE generation. PMID:27247381

  11. Measured and predicted root-mean-square errors in square and triangular antenna mesh facets

    NASA Technical Reports Server (NTRS)

    Fichter, W. B.

    1989-01-01

    Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot knit wire mesh antennas.

  12. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    NASA Astrophysics Data System (ADS)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to argue that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  13. Absolute Measurements of Macrophage Migration Inhibitory Factor and Interleukin-1-β mRNA Levels Accurately Predict Treatment Response in Depressed Patients

    PubMed Central

    Ferrari, Clarissa; Uher, Rudolf; Bocchio-Chiavetto, Luisella; Riva, Marco Andrea; Pariante, Carmine M.

    2016-01-01

    Background: Increased levels of inflammation have been associated with a poorer response to antidepressants in several clinical samples, but these findings have been limited by low reproducibility of biomarker assays across laboratories, difficulty in predicting response probability on an individual basis, and unclear molecular mechanisms. Methods: Here we measured absolute mRNA values (a reliable quantitation of number of molecules) of Macrophage Migration Inhibitory Factor and interleukin-1β in a previously published sample from a randomized controlled trial comparing escitalopram vs nortriptyline (GENDEP) as well as in an independent, naturalistic replication sample. We then used linear discriminant analysis to calculate mRNA values cutoffs that best discriminated between responders and nonresponders after 12 weeks of antidepressants. As Macrophage Migration Inhibitory Factor and interleukin-1β might be involved in different pathways, we constructed a protein-protein interaction network by the Search Tool for the Retrieval of Interacting Genes/Proteins. Results: We identified cutoff values for the absolute mRNA measures that accurately predicted response probability on an individual basis, with positive predictive values and specificity for nonresponders of 100% in both samples (negative predictive value=82% to 85%, sensitivity=52% to 61%). Using network analysis, we identified different clusters of targets for these 2 cytokines, with Macrophage Migration Inhibitory Factor interacting predominantly with pathways involved in neurogenesis, neuroplasticity, and cell proliferation, and interleukin-1β interacting predominantly with pathways involved in the inflammasome complex, oxidative stress, and neurodegeneration. Conclusion: We believe that these data provide a clinically suitable approach to the personalization of antidepressant therapy: patients who have absolute mRNA values above the suggested cutoffs could be directed toward earlier access to more

  14. Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain.

    PubMed

    Niv, Yael; Edlund, Jeffrey A; Dayan, Peter; O'Doherty, John P

    2012-01-11

    Humans and animals are exquisitely, though idiosyncratically, sensitive to risk or variance in the outcomes of their actions. Economic, psychological, and neural aspects of this are well studied when information about risk is provided explicitly. However, we must normally learn about outcomes from experience, through trial and error. Traditional models of such reinforcement learning focus on learning about the mean reward value of cues and ignore higher order moments such as variance. We used fMRI to test whether the neural correlates of human reinforcement learning are sensitive to experienced risk. Our analysis focused on anatomically delineated regions of a priori interest in the nucleus accumbens, where blood oxygenation level-dependent (BOLD) signals have been suggested as correlating with quantities derived from reinforcement learning. We first provide unbiased evidence that the raw BOLD signal in these regions corresponds closely to a reward prediction error. We then derive from this signal the learned values of cues that predict rewards of equal mean but different variance and show that these values are indeed modulated by experienced risk. Moreover, a close neurometric-psychometric coupling exists between the fluctuations of the experience-based evaluations of risky options that we measured neurally and the fluctuations in behavioral risk aversion. This suggests that risk sensitivity is integral to human learning, illuminating economic models of choice, neuroscientific models of affective learning, and the workings of the underlying neural mechanisms.
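    Risk sensitivity of the kind described above is often modelled with asymmetric learning rates for positive and negative prediction errors, so that high-variance cues end up devalued relative to safe cues of equal mean. A minimal sketch (the learning rates and outcome sequences are illustrative assumptions, not the paper's fitted parameters):

```python
def risk_sensitive_value(outcomes, alpha_pos=0.1, alpha_neg=0.2, v0=0.0):
    """Update a cue value with asymmetric learning rates: negative
    prediction errors are weighted more heavily than positive ones."""
    v = v0
    for r in outcomes:
        pe = r - v  # reward prediction error
        v += (alpha_pos if pe > 0 else alpha_neg) * pe
    return v

# Equal mean (20), different variance:
v_risky = risk_sensitive_value([0, 40] * 100)  # alternating 0/40
v_safe = risk_sensitive_value([20] * 200)      # constant 20
```

Because losses (negative prediction errors) update the value more strongly, `v_risky` converges well below the risky cue's true mean of 20, while `v_safe` converges to 20: a risk-averse valuation learned purely from experience.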

  15. Prediction error and trace dominance determine the fate of fear memories after post-training manipulations

    PubMed Central

    Alfei, Joaquín M.; Ferrer Monti, Roque I.; Molina, Victor A.; Bueno, Adrián M.

    2015-01-01

    Different mnemonic outcomes have been observed when associative memories are reactivated by CS exposure and followed by amnestics. These outcomes include mere retrieval, destabilization–reconsolidation, a transitional period (which is insensitive to amnestics), and extinction learning. However, little is known about the interaction between initial learning conditions and these outcomes during a reinforced or nonreinforced reactivation. Here we systematically combined temporally specific memories with different reactivation parameters to observe whether these four outcomes are determined by the conditions established during training. First, we validated two training regimens with different temporal expectations about US arrival. Then, using Midazolam (MDZ) as an amnestic agent, fear memories in both learning conditions were submitted to retraining under parameters either identical or different to the original training. Destabilization (i.e., susceptibility to MDZ) occurred when reactivation was reinforced, provided the occurrence of a temporal prediction error about US arrival. In subsequent experiments, both treatments were systematically reactivated by nonreinforced context exposure of different lengths, which allowed us to explore the interaction between training and reactivation lengths. These results suggest that temporal prediction error and trace dominance determine the extent to which reactivation produces the different outcomes. PMID:26179232

  16. Delusions and prediction error: clarifying the roles of behavioural and brain responses

    PubMed Central

    Corlett, Philip Robert; Fletcher, Paul Charles

    2015-01-01

    Griffiths and colleagues provided a clear and thoughtful review of the prediction error model of delusion formation [Cognitive Neuropsychiatry, 2014 April 4 (Epub ahead of print)]. As well as reviewing the central ideas and concluding that the existing evidence base is broadly supportive of the model, they provide a detailed critique of some of the experiments that we have performed to study it. Though they conclude that the shortcomings that they identify in these experiments do not fundamentally challenge the prediction error model, we nevertheless respond to these criticisms. We begin by providing a more detailed outline of the model itself as there are certain important aspects of it that were not covered in their review. We then respond to their specific criticisms of the empirical evidence. We defend the neuroimaging contrasts that we used to explore this model of psychosis arguing that, while any single contrast entails some ambiguity, our assumptions have been justified by our extensive background work before and since. PMID:25559871

  17. Episodic memory encoding interferes with reward learning and decreases striatal prediction errors.

    PubMed

    Wimmer, G Elliott; Braun, Erin Kendall; Daw, Nathaniel D; Shohamy, Daphna

    2014-11-05

    Learning is essential for adaptive decision making. The striatum and its dopaminergic inputs are known to support incremental reward-based learning, while the hippocampus is known to support encoding of single events (episodic memory). Although traditionally studied separately, in even simple experiences, these two types of learning are likely to co-occur and may interact. Here we sought to understand the nature of this interaction by examining how incremental reward learning is related to concurrent episodic memory encoding. During the experiment, human participants made choices between two options (colored squares), each associated with a drifting probability of reward, with the goal of earning as much money as possible. Incidental, trial-unique object pictures, unrelated to the choice, were overlaid on each option. The next day, participants were given a surprise memory test for these pictures. We found that better episodic memory was related to a decreased influence of recent reward experience on choice, both within and across participants. fMRI analyses further revealed that during learning the canonical striatal reward prediction error signal was significantly weaker when episodic memory was stronger. This decrease in reward prediction error signals in the striatum was associated with enhanced functional connectivity between the hippocampus and striatum at the time of choice. Our results suggest a mechanism by which memory encoding may compete for striatal processing and provide insight into how interactions between different forms of learning guide reward-based decision making.

  18. Absolute Zero

    NASA Astrophysics Data System (ADS)

    Donnelly, Russell J.; Sheibley, D.; Belloni, M.; Stamper-Kurn, D.; Vinen, W. F.

    2006-12-01

    Absolute Zero is a two-hour PBS special attempting to bring to the general public some of the advances made in 400 years of thermodynamics. It is based on the book “Absolute Zero and the Conquest of Cold” by Tom Shachtman. Absolute Zero will call long-overdue attention to the remarkable strides that have been made in low-temperature physics, a field that has produced 27 Nobel Prizes. It will explore the ongoing interplay between science and technology through historical examples including refrigerators, ice machines, frozen foods, liquid oxygen and nitrogen as well as much colder fluids such as liquid hydrogen and liquid helium. A website has been established to promote the series: www.absolutezerocampaign.org. It contains information on the series, aimed primarily at students at the middle school level. There is a wealth of material here and we hope interested teachers will draw their students’ attention to this website and its substantial contents, which have been carefully vetted for accuracy.

  19. A two-dimensional matrix correction for off-axis portal dose prediction errors

    SciTech Connect

    Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.

    2013-05-15

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As in

  20. Prediction error and somatosensory insula activation in women recovered from anorexia nervosa

    PubMed Central

    Frank, Guido K.W.; Collier, Shaleise; Shott, Megan E.; O’Reilly, Randall C.

    2016-01-01

    Background Previous research in patients with anorexia nervosa showed heightened brain response during a taste reward conditioning task and heightened sensitivity to rewarding and punishing stimuli. Here we tested the hypothesis that individuals recovered from anorexia nervosa would also experience greater brain activation during this task as well as higher sensitivity to salient stimuli than controls. Methods Women recovered from restricting-type anorexia nervosa and healthy control women underwent fMRI during application of a prediction error taste reward learning paradigm. Results Twenty-four women recovered from anorexia nervosa (mean age 30.3 ± 8.1 yr) and 24 control women (mean age 27.4 ± 6.3 yr) took part in this study. The recovered anorexia nervosa group showed greater left posterior insula activation for the prediction error model analysis than the control group (family-wise error– and small volume–corrected p < 0.05). A group × condition analysis found greater posterior insula response in women recovered from anorexia nervosa than controls for unexpected stimulus omission, but not for unexpected receipt. Sensitivity to punishment was elevated in women recovered from anorexia nervosa. Limitations This was a cross-sectional study, and the sample size was modest. Conclusion Anorexia nervosa after recovery is associated with heightened prediction error–related brain response in the posterior insula as well as greater response to unexpected reward stimulus omission. This finding, together with behaviourally increased sensitivity to punishment, could indicate that individuals recovered from anorexia nervosa are particularly responsive to punishment. The posterior insula processes somatosensory stimuli, including unexpected bodily states, and greater response could indicate altered perception or integration of unexpected or maybe unwanted bodily feelings. Whether those findings develop during the ill state or whether they are biological traits requires

  1. Speech intelligibility index predictions for young and old listeners in automobile noise: Can the index be improved by incorporating factors other than absolute threshold?

    NASA Astrophysics Data System (ADS)

    Saweikis, Meghan; Surprenant, Aimée M.; Davies, Patricia; Gallant, Don

    2003-10-01

    While young and old subjects with comparable audiograms tend to perform comparably on speech recognition tasks in quiet environments, the older subjects have more difficulty than the younger subjects with recognition tasks in degraded listening conditions. This suggests that factors other than an absolute threshold may account for some of the difficulty older listeners have on recognition tasks in noisy environments. Many metrics, including the Speech Intelligibility Index (SII), used to measure speech intelligibility, only consider an absolute threshold when accounting for age related hearing loss. Therefore these metrics tend to overestimate the performance for elderly listeners in noisy environments [Tobias et al., J. Acoust. Soc. Am. 83, 859-895 (1988)]. The present studies examine the predictive capabilities of the SII in an environment with automobile noise present. This is of interest because people's evaluation of the automobile interior sound is closely linked to their ability to carry on conversations with their fellow passengers. The four studies examine whether, for subjects with age related hearing loss, the accuracy of the SII can be improved by incorporating factors other than an absolute threshold into the model. [Work supported by Ford Motor Company.]

  2. The feedback-related negativity reflects “more or less” prediction error in appetitive and aversive conditions

    PubMed Central

    Huang, Yi; Yu, Rongjun

    2014-01-01

    Humans make predictions and use feedback to update their subsequent predictions. The feedback-related negativity (FRN) has been found to be sensitive to negative feedback as well as negative prediction error, such that the FRN is larger for outcomes that are worse than expected. The present study examined prediction errors in both appetitive and aversive conditions. We found that the FRN was more negative for reward omission vs. wins and for loss omission vs. losses, suggesting that the FRN might classify outcomes in a “more-or-less than expected” fashion rather than in the “better-or-worse than expected” dimension. Our findings challenge the previous notion that the FRN only encodes negative feedback and “worse than expected” negative prediction error. PMID:24904254

  3. Testing alternative uses of electromagnetic data to reduce the prediction error of groundwater models

    NASA Astrophysics Data System (ADS)

    Kruse Christensen, Nikolaj; Christensen, Steen; Ferre, Ty Paul A.

    2016-05-01

    Although geophysical methods are increasingly used, it is often unclear how and when the integration of geophysical data and models can best improve the construction and predictive capability of groundwater models. This paper uses a newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software, to demonstrate alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits with typical hydraulic conductivities and electrical resistivities covering impermeable bedrock with low resistivity (clay). The synthetic 3-D reference system is designed so that there is a perfect relationship between hydraulic conductivity and electrical resistivity. For this system it is investigated to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by (in most cases) geophysics-based regularization. For the studied system and inversion approaches it is found that resistivities estimated by sequential hydrogeophysical inversion (SHI) or joint hydrogeophysical inversion (JHI) should be used with caution as estimators of hydraulic conductivity or as regularization means for subsequent hydrological inversion. The limited groundwater model improvement obtained by using the geophysical data probably mainly arises from the way these data are used here: the alternative inversion approaches propagate geophysical estimation errors into the hydrologic model parameters. It was expected that JHI would compensate for this, but the hydrologic data were apparently insufficient to secure such compensation.
With respect to reducing model prediction error, it depends on the type

  4. Temporal dynamics of prediction error processing during reward-based decision making.

    PubMed

    Philiastides, Marios G; Biele, Guido; Vavatzanidis, Niki; Kazzer, Philipp; Heekeren, Hauke R

    2010-10-15

    Adaptive decision making depends on the accurate representation of rewards associated with potential choices. These representations can be acquired with reinforcement learning (RL) mechanisms, which use the prediction error (PE, the difference between expected and received rewards) as a learning signal to update reward expectations. While EEG experiments have highlighted the role of feedback-related potentials during performance monitoring, important questions about the temporal sequence of feedback processing and the specific function of feedback-related potentials during reward-based decision making remain. Here, we hypothesized that feedback processing starts with a qualitative evaluation of outcome valence, which is subsequently complemented by a quantitative representation of PE magnitude. Results of a model-based single-trial analysis of EEG data collected during a reversal learning task showed that around 220 ms after feedback, outcomes are initially evaluated categorically with respect to their valence (positive vs. negative). Around 300 ms, and parallel to the maintained valence evaluation, the brain also represents quantitative information about PE magnitude, thus providing the complete information needed to update reward expectations and to guide adaptive decision making. Importantly, our single-trial EEG analysis based on PEs from an RL model showed that the feedback-related potentials do not merely reflect error awareness, but rather quantitative information crucial for learning reward contingencies.

  5. Predicting the geographic distribution of a species from presence-only data subject to detection errors

    USGS Publications Warehouse

    Dorazio, Robert M.

    2012-01-01

    Several models have been developed to predict the geographic distribution of a species by combining measurements of covariates of occurrence at locations where the species is known to be present with measurements of the same covariates at other locations where species occurrence status (presence or absence) is unknown. In the absence of species detection errors, spatial point-process models and binary-regression models for case-augmented surveys provide consistent estimators of a species’ geographic distribution without prior knowledge of species prevalence. In addition, these regression models can be modified to produce estimators of species abundance that are asymptotically equivalent to those of the spatial point-process models. However, if species presence locations are subject to detection errors, neither class of models provides a consistent estimator of covariate effects unless the covariates of species abundance are distinct and independently distributed from the covariates of species detection probability. These analytical results are illustrated using simulation studies of data sets that contain a wide range of presence-only sample sizes. Analyses of presence-only data of three avian species observed in a survey of landbirds in western Montana and northern Idaho are compared with site-occupancy analyses of detections and nondetections of these species.

  6. The modulation of savouring by prediction error and its effects on choice

    PubMed Central

    Iigaya, Kiyohito; Story, Giles W; Kurth-Nelson, Zeb; Dolan, Raymond J; Dayan, Peter

    2016-01-01

    When people anticipate uncertain future outcomes, they often prefer to know their fate in advance. Inspired by an idea in behavioral economics that the anticipation of rewards is itself attractive, we hypothesized that this preference for advance information arises because reward prediction errors carried by such information can boost the level of anticipation. We designed new empirical behavioral studies to test this proposal, and confirmed that subjects preferred advance reward information more strongly when they had to wait longer for rewards. We formulated our proposal in a reinforcement-learning model, and we showed that our model could account for a wide range of existing neuronal and behavioral data, without appealing to ambiguous notions such as an explicit value for information. We suggest that such boosted anticipation significantly drives risk-seeking behaviors, most pertinently in gambling. DOI: http://dx.doi.org/10.7554/eLife.13747.001 PMID:27101365

  7. Prediction and standard error estimation for a finite universe total when a stratum is not sampled

    SciTech Connect

    Wright, T.

    1994-01-01

    In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
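    The idea of predicting an unsampled stratum's contribution from an earlier occasion can be illustrated with a simple ratio predictor. This is a hypothetical sketch, not Wright's exact estimator: the strata totals and the common-growth-ratio assumption are invented for illustration.

```python
def predicted_universe_total(current_sampled, prior_sampled, prior_unsampled):
    """Predict the current universe total when one stratum is unsampled,
    by scaling its prior-occasion total with the growth ratio observed
    in the sampled strata (assumes all strata grow at a common rate)."""
    growth = sum(current_sampled) / sum(prior_sampled)
    return sum(current_sampled) + growth * prior_unsampled

# Two sampled strata grew from 100 and 200 to 110 and 220 (ratio 1.1);
# the unsampled stratum had a prior total of 50.
total = predicted_universe_total([110, 220], [100, 200], 50)  # 330 + 55 = 385
```

A standard error for such a predictor would have to account for both sampling variability and the uncertainty in the growth-ratio assumption, which is the focus of the paper's methodology.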

  8. On the development and application of a continuous-discrete recursive prediction error algorithm.

    PubMed

    Stigter, J D; Beck, M B

    2004-10-01

    Recursive state and parameter reconstruction is a well-established field in control theory. In the current paper we derive a continuous-discrete version of a recursive prediction error algorithm and apply the filter in an environmental and biological setting as a possible alternative to the well-known extended Kalman filter. The framework from which the derivation is started is the so-called 'innovations format' of the (continuous time) system model, including (discrete time) measurements. After the algorithm has been motivated and derived, it is subsequently applied to hypothetical and 'real-life' case studies including reconstruction of biokinetic parameters and parameters characterizing the dynamics of a river in the United Kingdom. Advantages and characteristics of the method are discussed.
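    The core of a recursive prediction error scheme can be sketched in its simplest discrete-time, scalar-parameter form, which is closely related to recursive least squares. This is a generic illustration under invented data, not the continuous-discrete algorithm derived in the paper.

```python
def rpe_step(theta, P, phi, y, lam=1.0):
    """One recursive prediction-error update for a scalar parameter.
    theta: parameter estimate, P: estimation covariance,
    phi: regressor, y: new measurement, lam: forgetting factor."""
    eps = y - phi * theta                  # prediction error (innovation)
    k = P * phi / (lam + phi * P * phi)    # gain
    theta = theta + k * eps                # correct the estimate
    P = (P - k * phi * P) / lam            # update the covariance
    return theta, P

# Noiseless toy data y = 2 * phi: the estimate converges to 2.
theta, P = 0.0, 100.0
for phi in range(1, 11):
    theta, P = rpe_step(theta, P, float(phi), 2.0 * phi)
```

The continuous-discrete version in the paper integrates the model between measurement times and applies an update of this form at each discrete observation.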

  9. Prediction error minimization: Implications for Embodied Cognition and the Extended Mind Hypothesis.

    PubMed

    de Bruin, Leon; Michael, John

    2017-03-01

    Over the past few years, the prediction error minimization (PEM) framework has increasingly been gaining ground throughout the cognitive sciences. A key issue dividing proponents of PEM is how we should conceptualize the relation between brain, body and environment. Clark advocates a version of PEM which retains, at least to a certain extent, his prior commitments to Embodied Cognition and to the Extended Mind Hypothesis. Hohwy, by contrast, presents a sustained argument that PEM actually rules out at least some versions of Embodied and Extended cognition. The aim of this paper is to facilitate a constructive debate between these two competing alternatives by explicating the different theoretical motivations underlying them, and by homing in on the relevant issues that may help to adjudicate between them.

  10. Improving filtering and prediction of spatially extended turbulent systems with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J. Majda, A.J.

    2010-01-01

    The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is a stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transition from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effect. 
Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates

  11. Quantifying uncertainty for predictions with model error in non-Gaussian systems with intermittency

    NASA Astrophysics Data System (ADS)

    Branicki, Michal; Majda, Andrew J.

    2012-09-01

    This paper discusses a range of important mathematical issues arising in applications of a newly emerging stochastic-statistical framework for quantifying and mitigating uncertainties associated with prediction of partially observed and imperfectly modelled complex turbulent dynamical systems. The need for such a framework is particularly severe in climate science where the true climate system is vastly more complicated than any conceivable model; however, applications in other areas, such as neural networks and materials science, are just as important. The mathematical tools employed here rely on empirical information theory and fluctuation-dissipation theorems (FDTs) and it is shown that they seamlessly combine into a concise systematic framework for measuring and optimizing consistency and sensitivity of imperfect models. Here, we utilize a simple statistically exactly solvable ‘perfect’ system with intermittent hidden instabilities and with time-periodic features to address a number of important issues encountered in prediction of much more complex dynamical systems. These problems include the role and mitigation of model error due to coarse-graining, moment closure approximations, and the memory of initial conditions in producing short, medium and long-range predictions. Importantly, based on a suite of increasingly complex imperfect models of the perfect test system, we show that the predictive skill of the imperfect models and their sensitivity to external perturbations is improved by ensuring their consistency on the statistical attractor (i.e. the climate) with the perfect system. Furthermore, the discussed link between climate fidelity and sensitivity via the FDT opens up an enticing prospect of developing techniques for improving imperfect model sensitivity based on specific tests carried out in the training phase of the unperturbed statistical equilibrium/climate.

  12. Absolute Photometry

    NASA Astrophysics Data System (ADS)

    Hartig, George

    1990-12-01

    The absolute sensitivity of the FOS will be determined in SV by observing 2 stars at 3 epochs, first in 3 apertures (1.0", 0.5", and 0.3" circular) and then in 1 aperture (1.0" circular). In cycle 1, one star, BD+28D4211 will be observed in the 1.0" aperture to establish the stability of the sensitivity and flat field characteristics and improve the accuracy obtained in SV. This star will also be observed through the paired apertures since these are not calibrated in SV. The stars will be observed in most detector/grating combinations. The data will be averaged to form the inverse sensitivity functions required by RSDP.

  13. Absolute neutrino mass scale

    NASA Astrophysics Data System (ADS)

    Capelli, Silvia; Di Bari, Pasquale

    2013-04-01

    Neutrino oscillation experiments firmly established non-vanishing neutrino masses, a result that can be regarded as a strong motivation to extend the Standard Model. In spite of being the lightest massive particles, neutrinos likely represent an important bridge to new physics at very high energies and offer new opportunities to address some of the current cosmological puzzles, such as the matter-antimatter asymmetry of the Universe and Dark Matter. In this context, the determination of the absolute neutrino mass scale is a key issue within modern High Energy Physics. The talks in this parallel session describe the current exciting experimental activity aiming to determine the absolute neutrino mass scale, and offer an overview of a few models beyond the Standard Model that have been proposed to explain the neutrino masses, giving a prediction for the absolute neutrino mass scale and addressing the cosmological puzzles.

  14. The fate of memory: Reconsolidation and the case of Prediction Error.

    PubMed

    Fernández, Rodrigo S; Boccia, Mariano M; Pedreira, María E

    2016-09-01

    The ability to make predictions based on stored information is a general coding strategy. A Prediction Error (PE) is a mismatch between expected and current events. It was proposed as the process by which memories are acquired. But our memories, like ourselves, are subject to change. Thus, an acquired memory can become active and update its content or strength by a labilization-reconsolidation process. Within the reconsolidation framework, PE drives the updating of consolidated memories. Moreover, memory features, such as strength and age, are crucial boundary conditions that limit the initiation of the reconsolidation process. In order to disentangle these boundary conditions, we review the role of surprise, classical models of conditioning, and their neural correlates. Several forms of PE were found to be capable of inducing memory labilization-reconsolidation. Notably, many of the PE findings mirror those of memory reconsolidation, suggesting a strong link between these signals and memory processing. Altogether, the aim of the present work is to integrate a psychological and neuroscientific analysis of PE into a general framework for memory reconsolidation.

  15. The Method for Calculating Atmospheric Drag Coefficient Based on the Characteristics of Along-track Error in LEO Orbit Prediction

    NASA Astrophysics Data System (ADS)

    Wang, H. B.; Zhao, C. Y.; Liu, Z. G.; Zhang, W.

    2016-07-01

    The errors of the atmospheric density model and the drag coefficient are the major factors limiting the accuracy of orbit prediction for LEO (Low Earth Orbit) objects, which adversely affects space missions that need a high-precision orbit. This paper presents a new method for calculating the drag coefficient based on the divergence laws of the prediction error's along-track component. First, we deduce the expression of the along-track error in LEO orbit prediction, revealing the combined effect of the initial orbit and model errors in the along-track direction. According to this expression, we work out a suitable drag coefficient for the prediction step on the basis of information from the orbit determination step, which limits the growth rate of the along-track error and reduces the largest error in this direction, thus improving the accuracy of orbit prediction. To verify the method's accuracy and success rate in practical orbit prediction, we use full-arc, high-precision position data from the GPS receiver on GRACE-A. The result shows that the new method improves the prediction accuracy by about 45%, with a success rate of about 71% and an effective rate of about 86%, relative to the classical method, which directly uses the drag coefficient fitted in the orbit determination step. Furthermore, the new method shows good application value, as it is effective for low, moderate, and high solar radiation levels, as well as quiet and moderate geomagnetic activity conditions.

  16. Impact of Parameter Uncertainty, Variability, and Conceptual Model Errors on Predictions of Flow Through Fractured Porous Media

    SciTech Connect

    S. Finsterle

    2000-09-07

    Model predictions are affected by uncertainty in input parameters, stochastic variability in formation properties, computational roundoff and cancellation errors, and errors in the conceptual model. The source, nature, and relative magnitude of these errors vary considerably, depending on the physical processes involved, the quality and amount of available characterization data, and the overall objective of the study. We examined various types of uncertainties and their propagation with a predictive model that simulates a water pulse flowing through an unsaturated, fractured porous medium. The propagation of the water pulse depends not only on the hydraulic properties of the fracture network, but also on the strength of fracture-matrix interactions and the storage capacity of the matrix. Different predicted variables (such as local saturation changes, total amount of water retarded in the matrix, or first arrival of water at a certain depth) depend on different parameters and thus show different uncertainty structures. The strong nonlinearities inherent in such a system require the use of Monte Carlo simulations. These simulations investigate the spread of model predictions as a result of changes in spatial variability and uncertainty in key input parameters. We also discuss the role of conceptual-model formulation and parameter estimation in the development of reliable prediction models. We observe that systematic errors in the conceptual model often render probabilistic uncertainty analyses meaningless if not misleading. Nevertheless, sensitivity analyses provide useful insight into the system behavior and help design experiments that eventually would reduce prediction uncertainties.
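
    A hedged sketch of the Monte Carlo approach the abstract describes: sample the uncertain input parameters from assumed distributions, push each sample through a (here, toy) model, and read the prediction uncertainty off the spread of outputs. The model and all distributions below are made up for illustration.

    ```python
    # Toy Monte Carlo uncertainty propagation: the surrogate model and the
    # parameter distributions are illustrative, not from the study.
    import random

    def arrival_time(permeability, storage):
        # toy surrogate for "first arrival of water at a certain depth"
        return storage / permeability

    random.seed(0)
    samples = []
    for _ in range(1000):
        k = random.lognormvariate(mu=0.0, sigma=0.5)   # uncertain permeability
        s = random.gauss(mu=2.0, sigma=0.2)            # uncertain matrix storage
        samples.append(arrival_time(k, s))

    samples.sort()
    p05, p95 = samples[50], samples[949]               # ~90% prediction interval
    ```

    The interval (p05, p95) is the kind of "spread of model predictions" the abstract refers to; a systematic conceptual-model error would shift the whole ensemble and is not captured by this procedure, which is the abstract's caveat.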

  17. Retention time prediction in temperature-programmed, comprehensive two-dimensional gas chromatography: modeling and error assessment.

    PubMed

    Barcaru, Andrei; Anroedh-Sampat, Andjoe; Janssen, Hans-Gerd; Vivó-Truyols, Gabriel

    2014-11-14

    In this paper we present a model relating experimental factors (column lengths, diameters, and thicknesses; modulation times; pressures; and temperature programs) to retention times. Unfortunately, an analytical solution for retention in temperature-programmed GC × GC is impossible, thus making numerical integration necessary. We present a computational physical model of GC × GC capable of predicting retention times in both dimensions with high accuracy. Once fitted (i.e., calibrated), the model is used to make predictions, which are always subject to error. In this way, the prediction results in a probability distribution of retention times rather than a single (most likely) value. One of the most common problems when fitting unknown parameters to experimental data is overfitting. In order to detect overfitting and assess the error, the K-fold cross-validation technique was applied. Another error-assessment technique proposed in this article is error propagation using Jacobians. This method estimates the accuracy of the model from the partial derivatives of the predicted retention time with respect to the fitted parameters (in this case, the entropy and enthalpy of each component) under a given set of conditions. By treating the predictions of the model as intervals rather than precise values, it is possible to considerably increase the robustness of any optimization algorithm.
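
    An illustrative sketch of the Jacobian-based error propagation mentioned above: for a prediction t = f(params), the first-order variance is J · Cov · Jᵀ, where J holds the partial derivatives with respect to the fitted parameters and Cov is their covariance from the fit. The retention-time model and all numbers below are stand-ins, not the paper's.

    ```python
    # First-order (Jacobian) error propagation with numerical derivatives.
    # predict_tr and the covariance matrix are toy stand-ins for the paper's
    # fitted entropy/enthalpy parameters.

    def predict_tr(dH, dS):
        return 10.0 + 0.5 * dH - 0.2 * dS    # toy retention-time model

    def jacobian(f, params, eps=1e-6):
        base = f(*params)
        J = []
        for i, p in enumerate(params):
            bumped = list(params)
            bumped[i] = p + eps
            J.append((f(*bumped) - base) / eps)
        return J

    params = (2.0, 3.0)
    J = jacobian(predict_tr, params)
    cov = [[0.01, 0.0], [0.0, 0.04]]          # parameter covariance from the fit
    var_tr = sum(J[i] * cov[i][j] * J[j] for i in range(2) for j in range(2))
    ```

    The square root of `var_tr` gives the standard deviation of the predicted retention time, which is what turns a point prediction into the interval the abstract advocates.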

  18. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of model-based approaches for toroidal plasmas have shown better control performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. Such a model can be obtained empirically through a systematic procedure called system identification, and it is used in this work to design a model predictive controller that stabilizes multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that optimizes the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model: estimating the error field in EXTRAP T2R. Two potential methods for estimating the error field are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.

  19. Toward a better understanding on the role of prediction error on memory processes: From bench to clinic.

    PubMed

    Krawczyk, María C; Fernández, Rodrigo S; Pedreira, María E; Boccia, Mariano M

    2016-12-23

    Experimental psychology defines Prediction Error (PE) as a mismatch between expected and current events. It represents a unifying concept within the memory field, as it is the driving force of memory acquisition and updating. Prediction error induces updating of consolidated memories, in strength or content, through memory reconsolidation. This process has two distinct neurobiological phases: the destabilization (labilization) of a consolidated memory followed by its restabilization. The aim of this work is to emphasize the functional role of PE in the neurobiology of learning and memory, integrating and discussing different research areas: behavioral, neurobiological, computational, and clinical psychiatry.

  20. Inferring reward prediction errors in patients with schizophrenia: a dynamic reward task for reinforcement learning.

    PubMed

    Li, Chia-Tzu; Lai, Wen-Sung; Liu, Chih-Min; Hsu, Yung-Fong

    2014-01-01

    Abnormalities in the dopamine system have long been implicated in explanations of reinforcement learning and psychosis. The updated reward prediction error (RPE)-a discrepancy between the predicted and actual rewards-is thought to be encoded by dopaminergic neurons. Dysregulation of dopamine systems could alter the appraisal of stimuli and eventually lead to schizophrenia. Accordingly, the measurement of RPE provides a potential behavioral index for the evaluation of brain dopamine activity and psychotic symptoms. Here, we assess two features potentially crucial to the RPE process, namely belief formation and belief perseveration, via a probability learning task and reinforcement-learning modeling. Forty-five patients with schizophrenia [26 high-psychosis and 19 low-psychosis, based on their p1 and p3 scores in the positive-symptom subscales of the Positive and Negative Syndrome Scale (PANSS)] and 24 controls were tested in a feedback-based dynamic reward task for their RPE-related decision making. While task scores across the three groups were similar, matching law analysis revealed that the reward sensitivities of both psychosis groups were lower than that of controls. Trial-by-trial data were further fit with a reinforcement learning model using the Bayesian estimation approach. Model fitting results indicated that both psychosis groups tend to update their reward values more rapidly than controls. Moreover, among the three groups, high-psychosis patients had the lowest degree of choice perseveration. Lumping patients' data together, we also found that patients' perseveration appears to be negatively correlated (p = 0.09, trending toward significance) with their PANSS p1 + p3 scores. Our method provides an alternative for investigating reward-related learning and decision making in basic and clinical settings.
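
    A sketch of the kind of reinforcement-learning model fit in studies like this one (illustrative, not the authors' exact model): option values are updated by an RPE scaled by a learning rate α, and choices follow a softmax over the values. A larger α corresponds to the "more rapid value updating" reported for the patient groups.

    ```python
    # Basic RPE-driven value update plus softmax choice rule for a
    # two-option task. Parameter values are illustrative.
    import math

    def softmax_prob(values, beta=3.0):
        """Choice probabilities; beta controls choice determinism."""
        exps = [math.exp(beta * v) for v in values]
        z = sum(exps)
        return [e / z for e in exps]

    def rl_step(values, choice, reward, alpha):
        rpe = reward - values[choice]      # reward prediction error
        values[choice] += alpha * rpe      # delta-rule value update
        return rpe

    values = [0.0, 0.0]
    rpe = rl_step(values, choice=0, reward=1.0, alpha=0.5)
    probs = softmax_prob(values)           # rewarded option now preferred
    ```

    Fitting α (and a perseveration weight, when included) per participant, e.g. by Bayesian estimation over trial-by-trial choices, yields the group comparisons the abstract reports.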

  1. Inferring reward prediction errors in patients with schizophrenia: a dynamic reward task for reinforcement learning

    PubMed Central

    Li, Chia-Tzu; Lai, Wen-Sung; Liu, Chih-Min; Hsu, Yung-Fong

    2014-01-01

    Abnormalities in the dopamine system have long been implicated in explanations of reinforcement learning and psychosis. The updated reward prediction error (RPE)—a discrepancy between the predicted and actual rewards—is thought to be encoded by dopaminergic neurons. Dysregulation of dopamine systems could alter the appraisal of stimuli and eventually lead to schizophrenia. Accordingly, the measurement of RPE provides a potential behavioral index for the evaluation of brain dopamine activity and psychotic symptoms. Here, we assess two features potentially crucial to the RPE process, namely belief formation and belief perseveration, via a probability learning task and reinforcement-learning modeling. Forty-five patients with schizophrenia [26 high-psychosis and 19 low-psychosis, based on their p1 and p3 scores in the positive-symptom subscales of the Positive and Negative Syndrome Scale (PANSS)] and 24 controls were tested in a feedback-based dynamic reward task for their RPE-related decision making. While task scores across the three groups were similar, matching law analysis revealed that the reward sensitivities of both psychosis groups were lower than that of controls. Trial-by-trial data were further fit with a reinforcement learning model using the Bayesian estimation approach. Model fitting results indicated that both psychosis groups tend to update their reward values more rapidly than controls. Moreover, among the three groups, high-psychosis patients had the lowest degree of choice perseveration. Lumping patients' data together, we also found that patients' perseveration appears to be negatively correlated (p = 0.09, trending toward significance) with their PANSS p1 + p3 scores. Our method provides an alternative for investigating reward-related learning and decision making in basic and clinical settings. PMID:25426091

  2. Altered neural reward and loss processing and prediction error signalling in depression.

    PubMed

    Ubl, Bettina; Kuehner, Christine; Kirsch, Peter; Ruttorf, Michaela; Diener, Carsten; Flor, Herta

    2015-08-01

    Dysfunctional processing of reward and punishment may play an important role in depression. However, functional magnetic resonance imaging (fMRI) studies have shown heterogeneous results for reward processing in fronto-striatal regions. We examined neural responsivity associated with the processing of reward and loss during anticipation and receipt of incentives and related prediction error (PE) signalling in depressed individuals. Thirty medication-free depressed persons and 28 healthy controls performed an fMRI reward paradigm. Regions of interest analyses focused on neural responses during anticipation and receipt of gains and losses and related PE-signals. Additionally, we assessed the relationship between neural responsivity during gain/loss processing and hedonic capacity. When compared with healthy controls, depressed individuals showed reduced fronto-striatal activity during anticipation of gains and losses. The groups did not significantly differ in response to reward and loss outcomes. In depressed individuals, activity increases in the orbitofrontal cortex and nucleus accumbens during reward anticipation were associated with hedonic capacity. Depressed individuals showed an absence of reward-related PEs but encoded loss-related PEs in the ventral striatum. Depression seems to be linked to blunted responsivity in fronto-striatal regions associated with limited motivational responses for rewards and losses. Alterations in PE encoding might mirror blunted reward- and enhanced loss-related associative learning in depression.

  3. Individual differences in reward-prediction-error: extraversion and feedback-related negativity.

    PubMed

    Smillie, Luke D; Cooper, Andrew J; Pickering, Alan D

    2011-10-01

    Medial frontal scalp-recorded negativity occurring ∼200-300 ms post-stimulus [known as feedback-related negativity (FRN)] is attenuated following unpredicted reward and potentiated following unpredicted non-reward. This encourages the view that FRN may partly reflect dopaminergic 'reward-prediction-error' signalling. We examined the influence of a putatively dopamine-based personality trait, extraversion (N = 30), and a dopamine-related gene polymorphism, DRD2/ANKK1 (N = 24), on FRN during an associative reward-learning paradigm. FRN was most negative following unpredicted non-reward and least-negative following unpredicted reward. A difference wave contrasting these conditions was significantly more pronounced for extraverted participants than for introverts, with a similar but non-significant trend for participants carrying at least one copy of the A1 allele of the DRD2/ANKK1 gene compared with those without the allele. Extraversion was also significantly higher in A1 allele carriers. Results have broad relevance to neuroscience and personality research concerning reward processing and dopamine function.

  4. Scaling of perceptual errors can predict the shape of neural tuning curves.

    PubMed

    Shouval, Harel Z; Agarwal, Animesh; Gavornik, Jeffrey P

    2013-04-19

    Weber's law, first characterized in the 19th century, states that errors estimating the magnitude of perceptual stimuli scale linearly with stimulus intensity. This linear relationship is found in most sensory modalities, generalizes to temporal interval estimation, and even applies to some abstract variables. Despite its generality and long experimental history, the neural basis of Weber's law remains unknown. This work presents a simple theory explaining the conditions under which Weber's law can result from neural variability and predicts that the tuning curves of neural populations which adhere to Weber's law will have a log-power form with parameters that depend on spike-count statistics. The prevalence of Weber's law suggests that it might be optimal in some sense. We examine this possibility, using variational calculus, and show that Weber's law is optimal only when observed real-world variables exhibit power-law statistics with a specific exponent. Our theory explains how physiology gives rise to the behaviorally characterized Weber's law and may represent a general governing principle relating perception to neural activity.
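
    A toy numerical illustration of Weber's law as stated above (not the paper's neural model): if the standard deviation of magnitude estimates grows linearly with stimulus intensity, the coefficient of variation (the Weber fraction) is constant across intensities.

    ```python
    # Weber's law demo: noise scales linearly with intensity, so the
    # coefficient of variation (Weber fraction) is the same at every level.
    # The Weber fraction of 0.1 is an arbitrary illustrative choice.
    import random
    import statistics

    def estimate(intensity, weber_fraction=0.1, n=5000, rng=random.Random(1)):
        # simulated magnitude estimates with intensity-proportional noise
        return [rng.gauss(intensity, weber_fraction * intensity) for _ in range(n)]

    cv = []
    for intensity in (10.0, 100.0):
        est = estimate(intensity)
        cv.append(statistics.stdev(est) / statistics.fmean(est))
    # cv is roughly 0.1 at both intensities
    ```

    The paper's contribution is to derive which neural tuning curves (log-power form) make population codes produce exactly this intensity-proportional error scaling.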

  5. Scaling of Perceptual Errors Can Predict the Shape of Neural Tuning Curves

    PubMed Central

    Shouval, Harel Z.; Agarwal, Animesh; Gavornik, Jeffrey P.

    2014-01-01

    Weber’s law, first characterized in the 19th century, states that errors estimating the magnitude of perceptual stimuli scale linearly with stimulus intensity. This linear relationship is found in most sensory modalities, generalizes to temporal interval estimation, and even applies to some abstract variables. Despite its generality and long experimental history, the neural basis of Weber’s law remains unknown. This work presents a simple theory explaining the conditions under which Weber’s law can result from neural variability and predicts that the tuning curves of neural populations which adhere to Weber’s law will have a log-power form with parameters that depend on spike-count statistics. The prevalence of Weber’s law suggests that it might be optimal in some sense. We examine this possibility, using variational calculus, and show that Weber’s law is optimal only when observed real-world variables exhibit power-law statistics with a specific exponent. Our theory explains how physiology gives rise to the behaviorally characterized Weber’s law and may represent a general governing principle relating perception to neural activity. PMID:23679640

  6. Prediction errors to emotional expressions: the roles of the amygdala in social referencing.

    PubMed

    Meffert, Harma; Brislin, Sarah J; White, Stuart F; Blair, James R

    2015-04-01

    Social referencing paradigms in humans and observational learning paradigms in animals suggest that emotional expressions are important for communicating valence. It has been proposed that these expressions initiate stimulus-reinforcement learning. Relatively little is known about the role of emotional expressions in reinforcement learning, particularly in the context of social referencing. In this study, we examined object valence learning in the context of a social referencing paradigm. Participants viewed objects and faces that turned toward the objects and displayed a fearful, happy or neutral reaction to them, while judging the gender of these faces. Notably, amygdala activation was larger when the expressions following an object were less expected. Moreover, when asked, participants were both more likely to want to approach, and showed stronger amygdala responses to, objects associated with happy relative to objects associated with fearful expressions. This suggests that the amygdala plays two roles in social referencing: (i) initiating learning regarding the valence of an object as a function of prediction errors to expressions displayed toward this object and (ii) orchestrating an emotional response to the object when value judgments are being made regarding this object.

  7. Scaling of Perceptual Errors Can Predict the Shape of Neural Tuning Curves

    NASA Astrophysics Data System (ADS)

    Shouval, Harel Z.; Agarwal, Animesh; Gavornik, Jeffrey P.

    2013-04-01

    Weber’s law, first characterized in the 19th century, states that errors estimating the magnitude of perceptual stimuli scale linearly with stimulus intensity. This linear relationship is found in most sensory modalities, generalizes to temporal interval estimation, and even applies to some abstract variables. Despite its generality and long experimental history, the neural basis of Weber’s law remains unknown. This work presents a simple theory explaining the conditions under which Weber’s law can result from neural variability and predicts that the tuning curves of neural populations which adhere to Weber’s law will have a log-power form with parameters that depend on spike-count statistics. The prevalence of Weber’s law suggests that it might be optimal in some sense. We examine this possibility, using variational calculus, and show that Weber’s law is optimal only when observed real-world variables exhibit power-law statistics with a specific exponent. Our theory explains how physiology gives rise to the behaviorally characterized Weber’s law and may represent a general governing principle relating perception to neural activity.

  8. Altered neural reward and loss processing and prediction error signalling in depression

    PubMed Central

    Ubl, Bettina; Kuehner, Christine; Kirsch, Peter; Ruttorf, Michaela

    2015-01-01

    Dysfunctional processing of reward and punishment may play an important role in depression. However, functional magnetic resonance imaging (fMRI) studies have shown heterogeneous results for reward processing in fronto-striatal regions. We examined neural responsivity associated with the processing of reward and loss during anticipation and receipt of incentives and related prediction error (PE) signalling in depressed individuals. Thirty medication-free depressed persons and 28 healthy controls performed an fMRI reward paradigm. Regions of interest analyses focused on neural responses during anticipation and receipt of gains and losses and related PE-signals. Additionally, we assessed the relationship between neural responsivity during gain/loss processing and hedonic capacity. When compared with healthy controls, depressed individuals showed reduced fronto-striatal activity during anticipation of gains and losses. The groups did not significantly differ in response to reward and loss outcomes. In depressed individuals, activity increases in the orbitofrontal cortex and nucleus accumbens during reward anticipation were associated with hedonic capacity. Depressed individuals showed an absence of reward-related PEs but encoded loss-related PEs in the ventral striatum. Depression seems to be linked to blunted responsivity in fronto-striatal regions associated with limited motivational responses for rewards and losses. Alterations in PE encoding might mirror blunted reward- and enhanced loss-related associative learning in depression. PMID:25567763

  9. Human Dorsal Striatum Encodes Prediction Errors during Observational Learning of Instrumental Actions

    PubMed Central

    Cooper, Jeffrey C.; Dunne, Simon; Furey, Teresa; O’Doherty, John P.

    2013-01-01

    The dorsal striatum plays a key role in the learning and expression of instrumental reward associations that are acquired through direct experience. However, not all learning about instrumental actions requires direct experience. Instead, humans and other animals are also capable of acquiring instrumental actions by observing the experiences of others. In this study, we investigated the extent to which the human dorsal striatum is involved in observational as well as experiential instrumental reward learning. Human participants were scanned with fMRI while they observed a confederate over a live video performing an instrumental conditioning task to obtain liquid juice rewards. Participants also performed a similar instrumental task for their own rewards. Using a computational model-based analysis, we found reward prediction errors in the dorsal striatum not only during the experiential learning condition but also during observational learning. These results suggest a key role for the dorsal striatum in learning instrumental associations, even when those associations are acquired purely by observing others. PMID:21812568

  10. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
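
    A hedged sketch of Latin Hypercube Sampling, the sampling scheme REPTool builds on (this is a generic stdlib illustration, not REPTool's implementation): each variable's range is divided into n equal-probability strata, one draw is taken per stratum, and the strata are shuffled independently per variable so every marginal is evenly covered.

    ```python
    # Generic Latin Hypercube Sampling on the unit hypercube.
    import random

    def latin_hypercube(n_samples, n_vars, rng=random.Random(42)):
        columns = []
        for _ in range(n_vars):
            # one uniform draw inside each of the n equal-width strata
            col = [(i + rng.random()) / n_samples for i in range(n_samples)]
            rng.shuffle(col)               # decorrelate strata across variables
            columns.append(col)
        return list(zip(*columns))         # rows are points in the unit hypercube

    points = latin_hypercube(10, 2)
    ```

    Each unit-interval coordinate would then be mapped through the inverse CDF of the user-specified error distribution for that raster or coefficient; compared with plain Monte Carlo, LHS covers the input space more evenly for the same number of model runs.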

  11. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
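
    The arithmetic of the two-measurement scheme above can be sketched as follows (toy values, not real phase maps): the first measurement contains the combined errors of the flat and the auxiliary optic, the second contains only the auxiliary optic, so their difference isolates the absolute error of the flat.

    ```python
    # Two-measurement subtraction: recover the flat's phase error by
    # removing the auxiliary optic's contribution. Values are illustrative
    # per-pixel phase errors.
    flat_error = [0.5, -0.2, 0.1]       # unknown phase error of the flat
    aux_error = [0.05, 0.03, -0.01]     # phase error of the auxiliary optic

    measurement_1 = [f + a for f, a in zip(flat_error, aux_error)]  # flat in place
    measurement_2 = aux_error[:]                                    # flat removed
    recovered = [m1 - m2 for m1, m2 in zip(measurement_1, measurement_2)]
    # recovered equals flat_error: the auxiliary optic's error cancels
    ```

    Because the subtraction cancels the auxiliary optic exactly, no external flatness reference is needed, which is what makes the calibration "absolute."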

  12. Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System

    NASA Astrophysics Data System (ADS)

    Wang, Hongcui; Kawahara, Tatsuya

    CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or to the ASR grammar network. However, this approach easily runs into a trade-off between error coverage and increased perplexity. To solve this problem, we propose a method based on a decision tree that learns to predict the errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, that achieves both better error coverage and smaller perplexity, resulting in a significant improvement in ASR accuracy.

  13. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    NASA Astrophysics Data System (ADS)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. According to the results of experimental studies and mathematical modelling, it was established that in highly porous, heat-resistant materials for aerospace applications the thermocouple errors are determined by two competing mechanisms, with the errors correlated with the difference between the radiative and conductive heat fluxes. A comparative analysis was carried out, and features of the methodical error formation related to the distance from the heated surface were established.

  14. Computer program to minimize prediction error in models from experiments with 16 hypercube points and 0 to 6 center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1982-01-01

    A previous report described a backward-deletion procedure of model selection that was optimized for minimum prediction error and used a multiparameter combination of the F-distribution and an order-statistics distribution of Cochran's. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.

  15. Preschool Speech Error Patterns Predict Articulation and Phonological Awareness Outcomes in Children with Histories of Speech Sound Disorders

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2013-01-01

    Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…

  16. The Vertical Error Characteristics of GOES-derived Winds: Description and Impact on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the error of the GOES winds ranges from approximately 3 to 10 m/s and generally increases with height. However, taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate the characteristics of the clear-air water vapor winds, they are stratified into two categories depending on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A

  17. Online Visual Feedback during Error-Free Channel Trials Leads to Active Unlearning of Movement Dynamics: Evidence for Adaptation to Trajectory Prediction Errors

    PubMed Central

    Lago-Rodriguez, Angel; Miall, R. Chris

    2016-01-01

    Prolonged exposure to movement perturbations leads to creation of motor memories which decay towards previous states when the perturbations are removed. However, it remains unclear whether this decay is due only to a spontaneous and passive recovery of the previous state. It has recently been reported that activation of reinforcement-based learning mechanisms delays the onset of the decay. This raises the question whether other motor learning mechanisms may also contribute to the retention and/or decay of the motor memory. Therefore, we aimed to test whether mechanisms of error-based motor adaptation are active during the decay of the motor memory. Forty-five right-handed participants performed point-to-point reaching movements under an external dynamic perturbation. We measured the expression of the motor memory through error-clamped (EC) trials, in which lateral forces constrained movements to a straight line towards the target. We found greater and faster decay of the motor memory for participants who had access to full online visual feedback during these EC trials (Cursor group), when compared with participants who had no EC feedback regarding movement trajectory (Arc group). Importantly, we did not find between-group differences in adaptation to the external perturbation. In addition, we found greater decay of the motor memory when we artificially increased feedback errors through the manipulation of visual feedback (Augmented-Error group). Our results then support the notion of an active decay of the motor memory, suggesting that adaptive mechanisms are involved in correcting for the mismatch between predicted movement trajectories and actual sensory feedback, which leads to greater and faster decay of the motor memory. PMID:27721748

  18. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features

    SciTech Connect

    Grimm, Lars J.; Ghate, Sujata V.; Yoon, Sora C.; Kim, Connie; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-03-15

    Purpose: The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Methods: Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprised of bilateral mediolateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy-proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Results: Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502–0.739, 95% Confidence Interval: 0.543–0.680, p < 0.002). Conclusions: Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.
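
    The per-trainee models above are scored with AUC. A short sketch of that metric via the Mann-Whitney statistic; the scores and labels are hypothetical, not the study's data:

```python
# AUC via the Mann-Whitney statistic: the probability that a randomly chosen
# error case is scored higher than a randomly chosen correct case
# (ties count 0.5).
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-case error probabilities from one trainee's model
# (1 = the trainee missed the mass, 0 = the trainee detected it).
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc(scores, labels), 3))  # 0.889
```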

  19. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and the comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
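
    The minimum-entropy route described above can be sketched as: for each candidate parameter, fit the model, then score the residuals with a Parzen-window estimate of Renyi's quadratic entropy and keep the parameter with the lowest entropy. The simple ridge fit, data, and kernel width below are toy assumptions standing in for the paper's LS-SVM setup:

```python
import math

# Parzen-window estimate of Renyi's quadratic entropy,
# H2 = -log( (1/n^2) * sum_ij G(e_i - e_j; 2*sigma^2) ); smaller H2 means a
# more sharply concentrated modeling-error distribution.
def renyi_quadratic_entropy(residuals, sigma=0.5):
    n = len(residuals)
    s2 = 2.0 * sigma * sigma
    v = sum(math.exp(-(a - b) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)
            for a in residuals for b in residuals) / (n * n)
    return -math.log(v)

def fit_slope(xs, ys, lam):
    # One-parameter ridge fit y ~ w*x with regularization lam (a stand-in
    # for the paper's LS-SVM; the data below are invented)
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.9]
results = []
for lam in (0.0, 0.1, 1.0, 10.0):
    w = fit_slope(xs, ys, lam)
    residuals = [y - w * x for x, y in zip(xs, ys)]
    results.append((renyi_quadratic_entropy(residuals), lam))
best_entropy, best_lam = min(results)
print(best_lam)  # the candidate whose residuals have minimum entropy
```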

  20. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  1. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
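
    The core adjoint-weighted-residual idea above has a compact linear-algebra analogue: for a linear problem A u = f with output J(u) = g·u, solve the adjoint A^T ψ = g, then the functional error of an approximate solution u_h is ψ·(f − A u_h). The 2x2 system and perturbation below are illustrative only (for linear problems the estimate is exact, which makes it easy to check):

```python
# Adjoint-based error estimate for a linear model problem A u = f with output
# functional J(u) = g.u: solve the adjoint A^T psi = g, then the functional
# error of an approximate solution u_h is psi . (f - A u_h).
A = [[4.0, 1.0], [2.0, 3.0]]
f = [1.0, 2.0]
g = [1.0, 0.0]

def solve2(M, b):
    # Cramer's rule for a 2x2 system M x = b
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (b[1] * M[0][0] - b[0] * M[1][0]) / det]

u_exact = solve2(A, f)
u_h = [u_exact[0] + 0.05, u_exact[1] - 0.02]      # perturbed "coarse" solution
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]     # transpose of A
psi = solve2(At, g)                                # adjoint solution
residual = [f[i] - sum(A[i][j] * u_h[j] for j in range(2)) for i in range(2)]
dJ_est = sum(p * r for p, r in zip(psi, residual))
dJ_true = sum(gi * (ue - uh) for gi, ue, uh in zip(g, u_exact, u_h))
print(round(dJ_est, 6), round(dJ_true, 6))  # both -0.05
```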

  2. Prediction of absolute risk of fragility fracture at 10 years in a Spanish population: validation of the WHO FRAX ™ tool in Spain

    PubMed Central

    2011-01-01

    Background Age-related bone loss is asymptomatic, and the morbidity of osteoporosis is secondary to the fractures that occur. Common sites of fracture include the spine, hip, forearm and proximal humerus. Fractures at the hip incur the greatest morbidity and mortality and give rise to the highest direct costs for health services. Their incidence increases exponentially with age. Independently of changes in population demography, the age- and sex-specific incidence of osteoporotic fractures appears to be increasing in developing and developed countries. This could mean more than double the expected burden of osteoporotic fractures in the next 50 years. Methods/Design To assess the predictive power of the WHO FRAX™ tool to identify the subjects with the highest absolute risk of fragility fracture at 10 years in a Spanish population, a predictive validation study of the tool will be carried out. For this purpose, the participants recruited by 1999 will be assessed. These were referred to the scan-DXA Department from primary healthcare centres, non-hospital and hospital consultations. Study population: Patients attended in the national health services integrated into a FRIDEX cohort with at least one Dual-energy X-ray absorptiometry (DXA) measurement and one extensive questionnaire related to fracture risk factors. Measurements: At baseline bone mineral density measurement using DXA, clinical fracture risk factors questionnaire, dietary calcium intake assessment, history of previous fractures, and related drugs. Follow up by telephone interview to know fragility fractures in the 10 years with verification in electronic medical records and also to know the number of falls in the last year. The absolute risk of fracture will be estimated using the FRAX™ tool from the official web site. Discussion For more than 10 years, numerous publications have recognised the importance of other risk factors for new osteoporotic fractures in addition to low BMD. The extension of a

  3. Design of a predictive targeting error simulator for MRI-guided prostate biopsy.

    PubMed

    Avni, Shachar; Vikal, Siddharth; Fichtinger, Gabor

    2010-02-23

    Multi-parametric MRI is a new imaging modality superior in quality to ultrasound (US), which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes is a simple alternative to side-step the challenges involved with deformable registration. However, segmentation errors inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial purpose of validating this procedure, we introduce a fully interactive and customizable simulator which determines the resulting targeting errors of simulated registrations between prostate volumes given user-provided parameters for organ deformation, segmentation, and targeting. We present the workflow executed by the simulator in detail and discuss the parameters involved. We also present a segmentation error introduction algorithm, based on polar curves and natural cubic spline interpolation, which introduces statistically realistic contouring errors. One simulation, including all I/O and preparation for rendering, takes approximately 1 minute and 40 seconds to complete on a system with 3 GB of RAM and four Intel Core 2 Quad CPUs each with a speed of 2.40 GHz. Preliminary results of our simulation suggest the maximum tolerable segmentation error given the presence of a 5.0 mm wide small tumor is between 4 and 5 mm. We intend to validate these results via clinical trials as part of our ongoing work.

  4. Design of a predictive targeting error simulator for MRI-guided prostate biopsy

    NASA Astrophysics Data System (ADS)

    Avni, Shachar; Vikal, Siddharth; Fichtinger, Gabor

    2010-02-01

    Multi-parametric MRI is a new imaging modality superior in quality to ultrasound (US), which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes is a simple alternative to side-step the challenges involved with deformable registration. However, segmentation errors inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial purpose of validating this procedure, we introduce a fully interactive and customizable simulator which determines the resulting targeting errors of simulated registrations between prostate volumes given user-provided parameters for organ deformation, segmentation, and targeting. We present the workflow executed by the simulator in detail and discuss the parameters involved. We also present a segmentation error introduction algorithm, based on polar curves and natural cubic spline interpolation, which introduces statistically realistic contouring errors. One simulation, including all I/O and preparation for rendering, takes approximately 1 minute and 40 seconds to complete on a system with 3 GB of RAM and four Intel Core 2 Quad CPUs each with a speed of 2.40 GHz. Preliminary results of our simulation suggest the maximum tolerable segmentation error given the presence of a 5.0 mm wide small tumor is between 4 and 5 mm. We intend to validate these results via clinical trials as part of our ongoing work.

  5. Exploring the Fundamental Dynamics of Error-Based Motor Learning Using a Stationary Predictive-Saccade Task

    PubMed Central

    Wong, Aaron L.; Shelhamer, Mark

    2011-01-01

    The maintenance of movement accuracy uses prior performance errors to correct future motor plans; this motor-learning process ensures that movements remain quick and accurate. The control of predictive saccades, in which anticipatory movements are made to future targets before visual stimulus information becomes available, serves as an ideal paradigm to analyze how the motor system utilizes prior errors to drive movements to a desired goal. Predictive saccades constitute a stationary process (the mean and to a rough approximation the variability of the data do not vary over time, unlike a typical motor adaptation paradigm). This enables us to study inter-trial correlations, both on a trial-by-trial basis and across long blocks of trials. Saccade errors are found to be corrected on a trial-by-trial basis in a direction-specific manner (the next saccade made in the same direction will reflect a correction for errors made on the current saccade). Additionally, there is evidence for a second, modulating process that exhibits long memory. That is, performance information, as measured via inter-trial correlations, is strongly retained across a large number of saccades (about 100 trials). Together, this evidence indicates that the dynamics of motor learning exhibit complexities that must be carefully considered, as they cannot be fully described with current state-space (ARMA) modeling efforts. PMID:21966462
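
    The trial-by-trial, direction-specific correction described above is often modeled as a first-order process: a fixed fraction of the current error is subtracted from the next movement. A toy sketch (the gain and target amplitude are assumed values, not parameters fitted in the study):

```python
# First-order trial-by-trial correction: each saccade's error is partially
# corrected on the next movement in the same direction. Gain and target are
# assumed values, not parameters fitted in the study.
TARGET = 10.0

def run_trials(n_trials, start_amplitude, gain):
    amp, amps = start_amplitude, []
    for _ in range(n_trials):
        err = amp - TARGET        # error on the current saccade
        amp = amp - gain * err    # correction applied to the next saccade
        amps.append(amp)
    return amps

amps = run_trials(10, 8.0, 0.3)
print(round(amps[-1], 3))  # 9.944 -- converging toward the 10.0 target
```

    Note this single-gain form is exactly the kind of first-order (ARMA-like) description the abstract argues is incomplete: it cannot capture the long-memory component seen across ~100 trials.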

  6. States versus Rewards: Dissociable neural prediction error signals underlying model-based and model-free reinforcement learning

    PubMed Central

    Gläscher, Jan; Daw, Nathaniel; Dayan, Peter; O’Doherty, John P.

    2010-01-01

    Reinforcement learning (RL) uses sequential experience with situations (“states”) and outcomes to assess actions. Whereas model-free RL uses this experience directly, in the form of a reward prediction error (RPE), model-based RL uses it indirectly, building a model of the state transition and outcome structure of the environment, and evaluating actions by searching this model. A state prediction error (SPE) plays a central role, reporting discrepancies between the current model and the observed state transitions. Using functional magnetic resonance imaging in humans solving a probabilistic Markov decision task we found the neural signature of an SPE in the intraparietal sulcus and lateral prefrontal cortex, in addition to the previously well-characterized RPE in the ventral striatum. This finding supports the existence of two unique forms of learning signal in humans, which may form the basis of distinct computational strategies for guiding behavior. PMID:20510862
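
    The two learning signals can be sketched side by side: a model-free reward prediction error updating a value table, and a model-based state prediction error updating a transition model. The learning rates, discount factor, and two-state task below are illustrative assumptions:

```python
# Model-free reward prediction error (RPE) updating state values, and
# model-based state prediction error (SPE) updating a transition model.
V = {"s0": 0.0, "s1": 0.0}          # state values (model-free)
T = {("s0", "a", "s1"): 0.5}        # estimated transition probability (model-based)
ALPHA, ETA, GAMMA = 0.1, 0.2, 0.9   # learning rates and discount factor

def rpe_update(s, reward, s_next):
    delta = reward + GAMMA * V[s_next] - V[s]   # reward prediction error
    V[s] += ALPHA * delta
    return delta

def spe_update(s, a, s_next):
    spe = 1.0 - T[(s, a, s_next)]               # state prediction error
    T[(s, a, s_next)] += ETA * spe
    return spe

print(rpe_update("s0", 1.0, "s1"))  # 1.0
print(spe_update("s0", "a", "s1"))  # 0.5
```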

  7. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors often exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is proposed based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of being able to deal with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors compared with the traditional unscented Kalman filter (UKF).

  8. Direct absolute pKa predictions and proton transfer mechanisms of small molecules in aqueous solution by QM/MM-MD.

    PubMed

    Uddin, Nizam; Choi, Tae Hoon; Choi, Cheol Ho

    2013-05-23

    The pKa values of HF, HCOOH, CH3COOH, CH3CH2COOH, H2CO3, HOCl, NH4(+), CH3NH3(+), H2O2, and CH3CH2OH in aqueous solution were predicted by QM/MM-MD in combination with umbrella samplings adopting the flexible asymmetric coordinate (FAC). This unique combination yielded remarkably accurate values with maximum and root-mean-square errors of 0.45 and 0.22 pKa units, respectively, without any numerical or experimental adjustments. The stability of the initially formed Coulomb pair rather than the proton transfer stage turned out to be the rate-determining step, implying that the stabilizations of the created ions require a large free energy increase. A remarkable correlation between DWR (degree of water rearrangements) and pKa was observed. As such, the large pKa of ethanol can be, in part, attributed to the large water rearrangement, strongly suggesting that proper samplings of water dynamics at dissociated regions are critical for accurate predictions of pKa. Current results exhibit a promising protocol for direct and accurate predictions of pKa. The significant variations in the gas phase deprotonation energies with the level of theory appear to be mostly canceled by similar changes in the averaged solute-solvent interactions, yielding accurate results.
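
    The link between a computed deprotonation free energy and pKa is the standard relation ΔG = RT ln(10) · pKa. A minimal numeric sketch (the 27 kJ/mol input is hypothetical, chosen only to illustrate the conversion, not a value from the paper):

```python
import math

# Relation between a solution-phase deprotonation free energy and pKa:
# dG = R * T * ln(10) * pKa, so pKa = dG / (R * T * ln(10)).
R = 8.314462618e-3   # gas constant in kJ/(mol*K)
T = 298.15           # temperature in K

def pka_from_free_energy(dG_kJ_per_mol):
    return dG_kJ_per_mol / (R * T * math.log(10.0))

# Hypothetical deprotonation free energy of 27 kJ/mol
print(round(pka_from_free_energy(27.0), 2))  # 4.73
```

    The sub-0.5-unit errors quoted above thus correspond to free-energy errors of only a few kJ/mol, which is why careful sampling matters.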

  9. A neural reward prediction error revealed by a meta-analysis of ERPs using great grand averages.

    PubMed

    Sambrook, Thomas D; Goslin, Jeremy

    2015-01-01

    Economic approaches to decision making assume that people attach values to prospective goods and act to maximize their obtained value. Neuroeconomics strives to observe these values directly in the brain. A widely used valuation term in formal learning and decision-making models is the reward prediction error: the value of an outcome relative to its expected value. An influential theory (Holroyd & Coles, 2002) claims that an electrophysiological component, feedback related negativity (FRN), codes a reward prediction error in the human brain. Such a component should be sensitive to both the prior likelihood of reward and its magnitude on receipt. A number of studies have found the FRN to be insensitive to reward magnitude, thus questioning the Holroyd and Coles account. However, because of marked inconsistencies in how the FRN is measured, a meaningful synthesis of this evidence is highly problematic. We conducted a meta-analysis of the FRN's response to both reward magnitude and likelihood using a novel method in which published effect sizes were disregarded in favor of direct measurement of the published waveforms themselves, with these waveforms then averaged to produce "great grand averages." Under this standardized measure, the meta-analysis revealed strong effects of magnitude and likelihood on the FRN, consistent with it encoding a reward prediction error. In addition, it revealed strong main effects of reward magnitude and likelihood across much of the waveform, indicating sensitivity to unsigned prediction errors or "salience." The great grand average technique is proposed as a general method for meta-analysis of event-related potential (ERP).

  10. Methodology to predict long-term cancer survival from short-term data using Tobacco Cancer Risk and Absolute Cancer Cure models

    NASA Astrophysics Data System (ADS)

    Mould, R. F.; Lederman, M.; Tai, P.; Wong, J. K. M.

    2002-11-01

    Three parametric statistical models have been fully validated for cancer of the larynx for the prediction of long-term 15, 20 and 25 year cancer-specific survival fractions when short-term follow-up data was available for just 1-2 years after the end of treatment of the last patient. In all groups of cases the treatment period was only 5 years. Three disease stage groups were studied, T1N0, T2N0 and T3N0. The models are the Standard Lognormal (SLN), first proposed by Boag (1949 J. R. Stat. Soc. Series B 11 15-53) but only ever fully validated for cancer of the cervix, Mould and Boag (1975 Br. J. Cancer 32 529-50), and two new models which have been termed Tobacco Cancer Risk (TCR) and Absolute Cancer Cure (ACC). In each, the frequency distribution of survival times of defined groups of cancer deaths is lognormally distributed: larynx only (SLN), larynx and lung (TCR) and all cancers (ACC). The models each have three unknown parameters, but it was possible to estimate a value for the lognormal parameter S a priori. By reduction to two unknown parameters the model stability has been improved. The material used to validate the methodology consisted of case histories of 965 patients, all treated during the period 1944-1968 by Dr Manuel Lederman of the Royal Marsden Hospital, London, with follow-up to 1988. This provided a follow-up range of 20-44 years and enabled predicted long-term survival fractions to be compared with the actual survival fractions, calculated by the Kaplan and Meier (1958 J. Am. Stat. Assoc. 53 457-82) method. The TCR and ACC models are better than the SLN model, and for a maximum short-term follow-up of 6 years, the 20 and 25 year survival fractions could be predicted. Therefore the numbers of follow-up years saved are respectively 14 years and 19 years. Clinical trial results using the TCR and ACC models can thus be analysed much earlier than currently possible.
Absolute cure from cancer was also studied, using not only the prediction models which
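
    The SLN/TCR/ACC family shares the same mixture structure: a cured fraction survives indefinitely, while the uncured have lognormally distributed survival times. A sketch of the predicted survival fraction under assumed parameters (not the fitted larynx-cancer values from the paper):

```python
import math

# Boag-type mixture model: a cured fraction C survives indefinitely; the
# uncured have lognormally distributed survival times with parameters
# (mu, sigma). S(t) = C + (1 - C) * (1 - Phi((ln t - mu) / sigma)).
def lognormal_cdf(t, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def predicted_survival(t, cured, mu, sigma):
    return cured + (1.0 - cured) * (1.0 - lognormal_cdf(t, mu, sigma))

# Assumed parameters: 40% cured fraction, lognormal mu = 0.5, sigma = 1.0
for t in (5.0, 15.0, 25.0):  # years after treatment
    print(t, round(predicted_survival(t, 0.40, 0.5, 1.0), 3))
```

    The long-term survival fraction flattens onto the cured fraction C, which is why short-term follow-up can pin down the 20- and 25-year values once the two remaining parameters are estimated.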

  11. Real-time prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and error analysis

    NASA Astrophysics Data System (ADS)

    BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.

    2013-09-01

    The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
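
    The FTLE fields discussed above come from the flow-map gradient: FTLE = (1/|T|) ln √λ_max(C), where C is the Cauchy-Green tensor of the flow map. A sketch on an analytic saddle flow, where the answer is the known stretching rate (the finite-difference gradient here is a toy stand-in for gradients of trajectories advected through forecast wind fields):

```python
import math

# FTLE from the flow-map gradient: FTLE = (1/|T|) * ln(sqrt(lambda_max(C))),
# where C = (dPhi)^T (dPhi) is the Cauchy-Green tensor.
def ftle(flow_map, x, y, horizon, h=1e-4):
    x1p, y1p = flow_map(x + h, y); x1m, y1m = flow_map(x - h, y)
    x2p, y2p = flow_map(x, y + h); x2m, y2m = flow_map(x, y - h)
    a = (x1p - x1m) / (2 * h); b = (x2p - x2m) / (2 * h)   # d(x')/dx, d(x')/dy
    c = (y1p - y1m) / (2 * h); d = (y2p - y2m) / (2 * h)   # d(y')/dx, d(y')/dy
    c11 = a * a + c * c; c12 = a * b + c * d; c22 = b * b + d * d
    lam_max = 0.5 * (c11 + c22 + math.sqrt((c11 - c22) ** 2 + 4 * c12 ** 2))
    return math.log(math.sqrt(lam_max)) / abs(horizon)

HORIZON = 2.0
# Saddle flow x' = x, y' = -y integrated for time T: (x, y) -> (x e^T, y e^-T)
phi = lambda x, y: (x * math.exp(HORIZON), y * math.exp(-HORIZON))
print(round(ftle(phi, 0.3, 0.2, HORIZON), 3))  # 1.0
```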

  12. Disrupted Expected Value and Prediction Error Signaling in Youths With Disruptive Behavior Disorders During a Passive Avoidance Task

    PubMed Central

    White, Stuart F.; Pope, Kayla; Sinclair, Stephen; Fowler, Katherine A.; Brislin, Sarah J.; Williams, W. Craig; Pine, Daniel S.; Blair, R. James R.

    2014-01-01

    Objective Youths with disruptive behavior disorders, including conduct disorder and oppositional defiant disorder, show major impairments in reinforcement-based decision making. However, the neural basis of these difficulties remains poorly understood. This partly reflects previous failures to differentiate responses during decision making and feedback processing and to take advantage of computational model-based functional MRI (fMRI). Method Participants were 38 community youths ages 10–18 (20 had disruptive behavior disorders, and 18 were healthy comparison youths). Model-based fMRI was used to assess the computational processes involved in decision making and feedback processing in the ventromedial prefrontal cortex, insula, and caudate. Results Youths with disruptive behavior disorders showed reduced use of expected value information within the ventromedial prefrontal cortex when choosing to respond and within the anterior insula when choosing not to respond. In addition, they showed reduced responsiveness to positive prediction errors and increased responsiveness to negative prediction errors within the caudate during feedback. Conclusions This study is the first to determine impairments in the use of expected value within the ventromedial prefrontal cortex and insula during choice and in prediction error-signaling within the caudate during feedback in youths with disruptive behavior disorders. PMID:23450288

  13. Absolute-structure reports.

    PubMed

    Flack, Howard D

    2013-08-01

    All 139 noncentrosymmetric crystal structures published in Acta Crystallographica Section C between January 2011 and November 2012 inclusive have been used as the basis of a detailed study of the reporting of absolute structure. These structure determinations cover a wide range of space groups, chemical composition and resonant-scattering contribution. Defining A and D as the average and difference of the intensities of Friedel opposites, their level of fit has been examined using 2AD and selected-D plots. It was found, regardless of the expected resonant-scattering contribution to Friedel opposites, that the Friedel-difference intensities are often dominated by random uncertainty and systematic error. An analysis of data-collection strategy is provided. It is found that crystal-structure determinations resulting in a Flack parameter close to 0.5 may not necessarily be from crystals twinned by inversion. Friedifstat is shown to be a robust estimator of the resonant-scattering contribution to Friedel opposites, little affected either by the particular space group of a structure or by the occupation of special positions. There is considerable confusion in the text of papers presenting achiral noncentrosymmetric crystal structures. Recommendations are provided for the optimal way of treating noncentrosymmetric crystal structures for which the experimenter has no interest in determining the absolute structure.
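
    The A/D bookkeeping has simple closed forms: A = (I+ + I−)/2, D = I+ − I−, and a least-squares Flack parameter follows from D_obs ≈ (1 − 2x)·D_model. A sketch with invented intensities (not data from the surveyed structures):

```python
# Friedel-pair bookkeeping: A = (I+ + I-)/2, D = I+ - I-, and a least-squares
# Flack parameter x from D_obs ~ (1 - 2x) * D_model. All intensities and
# model differences below are invented for illustration.
pairs_obs = [(105.0, 95.0), (48.0, 52.0), (201.0, 199.0)]  # (I(hkl), I(-h-k-l))
d_model = [10.2, -4.1, 2.0]  # model Friedel differences for x = 0

A = [(ip + im) / 2.0 for ip, im in pairs_obs]  # averages of Friedel opposites
D_obs = [ip - im for ip, im in pairs_obs]      # observed Friedel differences

num = sum(do * dm for do, dm in zip(D_obs, d_model))
den = sum(dm * dm for dm in d_model)
x = (1.0 - num / den) / 2.0
print(round(x, 4))  # near 0: consistent with the reference absolute structure
```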

  14. Mitigation of Atmospheric Delay in SAR Absolute Ranging Using Global Numerical Weather Prediction Data: Corner Reflector Experiments at 3 Different Test Sites

    NASA Astrophysics Data System (ADS)

    Cong, Xiaoying; Balss, Ulrich; Eineder, Michael

    2015-04-01

    The atmospheric delay due to vertical stratification, the so-called stratified atmospheric delay, has a great impact on both interferometric and absolute range measurements. In our current research [1][2][3], centimeter-range accuracy has been proven for Corner Reflector (CR) based measurements by applying atmospheric delay correction using the Zenith Path Delay (ZPD) corrections derived from nearby Global Positioning System (GPS) stations. For global usage, an effective method has been introduced to estimate the stratified delay based on global 4-dimensional Numerical Weather Prediction (NWP) products: the direct integration method [4][5]. Two products, ERA-Interim and operational data, provided by the European Centre for Medium-Range Weather Forecasts (ECMWF), are used to integrate the stratified delay. In order to assess the integration accuracy, a validation approach is investigated based on ZPD derived from six permanent GPS stations located in different meteorological conditions. Range accuracy at the centimeter level is demonstrated using both ECMWF products. Further experiments have been carried out in order to determine the best interpolation method by analyzing the temporal and spatial correlation of atmospheric delay using both ECMWF and GPS ZPD. Finally, the integrated atmospheric delays in the slant direction (Slant Path Delay, SPD) have been applied instead of the GPS ZPD for CR experiments at three different test sites with more than 200 TerraSAR-X High Resolution SpotLight (HRSL) images. The delay accuracy is around 1-3 cm, depending on the location of the test site, due to local water vapor variation and the acquisition time/date. [1] Eineder M., Minet C., Steigenberger P., et al. Imaging geodesy - Toward centimeter-level ranging accuracy with TerraSAR-X. Geoscience and Remote Sensing, IEEE Transactions on, 2011, 49(2): 661-671. [2] Balss U., Gisinger C., Cong X. Y., et al. Precise Measurements on the Absolute Localization Accuracy of TerraSAR-X on the
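
    The stratified zenith delay is the vertical integral of refractivity, ZPD = 10⁻⁶ ∫ N(h) dh. A sketch with a toy exponential refractivity profile; the direct integration method uses real NWP pressure, temperature, and humidity levels, not this closed form, and the surface refractivity and scale height below are assumed:

```python
import math

# Zenith path delay as the vertical integral of refractivity:
# ZPD = 1e-6 * integral of N(h) dh, with a toy exponential profile
# N(h) = N0 * exp(-h / H). N0 and H are assumed values, not NWP data.
N0 = 320.0       # surface refractivity (N-units), assumed
H = 8000.0       # refractivity scale height in meters, assumed
H_TOP = 80000.0  # top of the integration column in meters

def zpd_numeric(dh=10.0):
    # Midpoint-rule integration, mimicking layer-by-layer NWP integration
    total, h = 0.0, 0.0
    while h < H_TOP:
        total += N0 * math.exp(-(h + dh / 2.0) / H) * dh
        h += dh
    return 1e-6 * total

def zpd_closed_form():
    # Analytic integral of the exponential profile, for checking
    return 1e-6 * N0 * H * (1.0 - math.exp(-H_TOP / H))

print(round(zpd_numeric(), 4), round(zpd_closed_form(), 4))  # both ~2.56 m
```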

  15. Individual Differences in Working Memory Capacity Predict Action Monitoring and the Error-Related Negativity

    ERIC Educational Resources Information Center

    Miller, A. Eve; Watson, Jason M.; Strayer, David L.

    2012-01-01

    Neuroscience suggests that the anterior cingulate cortex (ACC) is responsible for conflict monitoring and the detection of errors in cognitive tasks, thereby contributing to the implementation of attentional control. Though individual differences in frontally mediated goal maintenance have clearly been shown to influence outward behavior in…

  16. Prior knowledge is more predictive of error correction than subjective confidence.

    PubMed

    Sitzman, Danielle M; Rhodes, Matthew G; Tauber, Sarah K

    2014-01-01

    Previous research has demonstrated that, when given feedback, participants are more likely to correct confidently-held errors, as compared with errors held with lower levels of confidence, a finding termed the hypercorrection effect. Accounts of hypercorrection suggest that confidence modifies attention to feedback; alternatively, hypercorrection may reflect prior domain knowledge, with confidence ratings simply correlated with this prior knowledge. In the present experiments, we attempted to adjudicate among these explanations of the hypercorrection effect. In Experiments 1a and 1b, participants answered general knowledge questions, rated their confidence, and received feedback either immediately after rating their confidence or after a delay of several minutes. Although memory for confidence judgments should have been poorer at a delay, the hypercorrection effect was equivalent for both feedback timings. Experiment 2 showed that hypercorrection remained unchanged even when the delay to feedback was increased. In addition, measures of recall for prior confidence judgments showed that memory for confidence was indeed poorer after a delay. Experiment 3 directly compared estimates of domain knowledge with confidence ratings, showing that such prior knowledge was related to error correction, whereas the unique role of confidence was small. Overall, our results suggest that prior knowledge likely plays a primary role in error correction, while confidence may play a small role or merely serve as a proxy for prior knowledge.

  17. Do Cognitive Patterns of Strengths and Weaknesses Differentially Predict Errors on Reading, Writing, and Spelling?

    ERIC Educational Resources Information Center

    Liu, Xiaochen; Marchis, Lavinia; DeBiase, Emily; Breaux, Kristina C.; Courville, Troy; Pan, Xingyu; Hatcher, Ryan C.; Koriakin, Taylor; Choi, Dowon; Kaufman, Alan S.

    2017-01-01

    This study investigated the relationship between specific cognitive patterns of strengths and weaknesses (PSWs) and the errors children make in reading, writing, and spelling tests from the Kaufman Test of Educational Achievement-Third Edition (KTEA-3). Participants were selected from the KTEA-3 standardization sample based on five cognitive…

  18. Predicting First Grade Achievement from Form Errors in Printing at the Start of Pre-Kindergarten.

    ERIC Educational Resources Information Center

    Simner, Marvin L.

    A 3-year longitudinal investigation indicated that form errors in printing that children make can aid in the identification of at-risk or failure-prone pupils as early as the start of prekindergarten. Two samples were selected, one consisting of 104 and the other of 63 prekindergarten children. Mean age of the samples was 52 months. Item analysis…

  19. Sensorimotor feedback based on task-relevant error robustly predicts temporal recruitment and multidirectional tuning of muscle synergies

    PubMed Central

    Safavynia, Seyed A.

    2013-01-01

    We hypothesized that motor outputs are hierarchically organized such that descending temporal commands based on desired task-level goals flexibly recruit muscle synergies that specify the spatial patterns of muscle coordination that allow the task to be achieved. According to this hypothesis, it should be possible to predict the patterns of muscle synergy recruitment based on task-level goals. We demonstrated that the temporal recruitment of muscle synergies during standing balance control was robustly predicted across multiple perturbation directions based on delayed sensorimotor feedback of center of mass (CoM) kinematics (displacement, velocity, and acceleration). The modulation of a muscle synergy's recruitment amplitude across perturbation directions was predicted by the projection of CoM kinematic variables along the preferred tuning direction(s), generating cosine tuning functions. Moreover, these findings were robust in biphasic perturbations that initially imposed a perturbation in the sagittal plane and then, before sagittal balance was recovered, perturbed the body in multiple directions. Therefore, biphasic perturbations caused the initial state of the CoM to differ from the desired state, and muscle synergy recruitment was predicted based on the error between the actual and desired upright state of the CoM. These results demonstrate that temporal motor commands to muscle synergies reflect task-relevant error as opposed to sensory inflow. The proposed hierarchical framework may represent a common principle of motor control across motor tasks and levels of the nervous system, allowing motor intentions to be transformed into motor actions. PMID:23100133
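    The feedback law described in this abstract can be illustrated with a minimal sketch. All gains, delays, and signal values below are hypothetical placeholders, not values from the study: recruitment is modeled as a delayed, weighted sum of CoM displacement, velocity, and acceleration, half-wave rectified (muscle activity is non-negative), and directional modulation follows cosine tuning (projection of a planar kinematic vector onto a preferred direction).

    ```python
    import numpy as np

    def synergy_recruitment(com_disp, com_vel, com_acc, gains, delay, t):
        """Predict one synergy's recruitment at time step t from delayed
        CoM kinematics. gains = (k_d, k_v, k_a) are hypothetical feedback
        gains; delay is the sensorimotor delay in samples."""
        k_d, k_v, k_a = gains
        td = t - delay
        drive = k_d * com_disp[td] + k_v * com_vel[td] + k_a * com_acc[td]
        return max(0.0, drive)  # half-wave rectification

    def directional_tuning(kinematic_vec, preferred_dir):
        """Cosine tuning: the projection of a planar CoM kinematic vector
        onto the synergy's preferred direction (a unit vector)."""
        return float(np.dot(kinematic_vec, preferred_dir))
    ```

    With a unit-length preferred direction, `directional_tuning` varies as the cosine of the angle between the perturbation and the preferred direction, which is the cosine-tuning shape the abstract reports.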

  20. Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2013-09-30

    Structural Instability and Model Error Andrew J. Majda New York University Courant Institute of Mathematical Sciences 251 Mercer Street New York, NY...Majda and his DRI post doc Sapsis have achieved a potential major breakthrough with a new class of methods for UQ. Turbulent dynamical systems are...uncertain initial data. These key physical quantities are often characterized by the degrees of freedom which carry the largest energy or variance and

  1. Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2012-09-30

    Instability and Model Error Principal Investigator: Andrew J. Majda Institution: New York University Courant Institute of Mathematical ...NUMBER 5e. TASK NUMBER 5f. WORK UNIT NUMBER 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) New York University, Courant Institute of Mathematical ...for the Special Volume of Communications on Pure and Applied Mathematics for 75th Anniversary of the Courant Institute, April 12, 2012, doi: 10.1002

  2. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  3. Prediction Errors in Learning Drug Response from Gene Expression Data – Influence of Labeling, Sample Size, and Machine Learning Algorithm

    PubMed Central

    Bayer, Immanuel; Groth, Philip; Schneckener, Sebastian

    2013-01-01

    Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collections, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e. response could be predicted for some but not for all. The choice of sample collection plays a major role towards lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the other and there was no significant difference between regression and two- or three class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment. PMID:23894636

  4. Prediction errors in learning drug response from gene expression data - influence of labeling, sample size, and machine learning algorithm.

    PubMed

    Bayer, Immanuel; Groth, Philip; Schneckener, Sebastian

    2013-01-01

    Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collections, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e. response could be predicted for some but not for all. The choice of sample collection plays a major role towards lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the other and there was no significant difference between regression and two- or three class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment.
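    The null-model comparison described in the two records above can be sketched as follows. This is a toy illustration, not the authors' pipeline: a linear model is fit to features and responses (standing in for expression data and IC50 labels), and its error is compared against "identically generated" null models obtained by permuting the response labels.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_mse(X, y):
        """Least-squares linear model; returns the mean squared prediction
        error on the training data (in-sample, for brevity of the sketch)."""
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(np.mean((X @ coef - y) ** 2))

    def null_model_pvalue(X, y, n_perm=200):
        """Compare the real model's error to a label-permutation null model:
        the p-value is the fraction of null fits at least as good as the
        real fit."""
        observed = fit_mse(X, y)
        null_errors = [fit_mse(X, rng.permutation(y)) for _ in range(n_perm)]
        return float(np.mean([e <= observed for e in null_errors]))
    ```

    A small p-value indicates that the predictability is not an artifact of the modeling choices, which is the role the null model plays in the study.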

  5. Space debris orbit prediction errors using bi-static laser observations. Case study: ENVISAT

    NASA Astrophysics Data System (ADS)

    Wirnsberger, Harald; Baur, Oliver; Kirchner, Georg

    2015-06-01

    Large and massive space debris objects - such as abandoned satellites and upper stages - in the Low Earth Orbit (LEO) segment pose an increasing threat to all space faring nations. For collision avoidance measures or the removal of these objects, the quality of orbit predictions is one of the most relevant issues. Laser ranging has the potential to significantly contribute to the reliability and accuracy of orbit predictions. The benefit of "conventional" two-way laser ranges for this purpose has recently been demonstrated. For the first time, in this contribution we focus on bi-static laser observations - a new observation type for orbit determination and prediction. Our investigations deal with orbit predictions of the defunct ENVISAT satellite. In order to compensate for the sparseness of "conventional" tracking data, we found that the concept of bi-static laser observations improves the prediction accuracy by one order of magnitude compared to the results based on two-way laser ranges only.

  6. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    PubMed

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts.
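    The simulation-extrapolation idea underlying SIMEX can be sketched in its classical (non-spatial) form; the paper's contribution is a spatial variant, which is not reproduced here. The sketch below, with invented noise levels, adds extra measurement error at increasing multiples of the known error variance, records how the regression slope attenuates, and extrapolates the slope-versus-lambda curve back to lambda = -1 (the error-free case).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200):
        """Classical SIMEX sketch: inflate the measurement-error variance of
        the error-prone covariate w by a factor (1 + lam), average the
        attenuated OLS slope over simulations, then extrapolate to lam = -1."""
        def ols_slope(x, y):
            x = x - x.mean()
            return float(np.sum(x * (y - y.mean())) / np.sum(x * x))

        lams = [0.0] + list(lambdas)
        slopes = []
        for lam in lams:
            if lam > 0:
                sims = [ols_slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, len(w)), y)
                        for _ in range(n_sim)]
            else:
                sims = [ols_slope(w, y)]
            slopes.append(np.mean(sims))
        # quadratic extrapolation of slope(lambda) back to lambda = -1
        coeffs = np.polyfit(lams, slopes, 2)
        return float(np.polyval(coeffs, -1.0))
    ```

    The quadratic extrapolant is the conventional default; the corrected slope moves back toward the true value from the attenuated naive estimate.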

  7. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models

    NASA Astrophysics Data System (ADS)

    Christiansen, Bo

    2015-04-01

    Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first-order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as a starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
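    The marginalization step this abstract describes can be illustrated in a stripped-down form. Assume known error standard deviations sx and sy and a flat prior on the unobserved true x-values; integrating them out gives residuals y - a - b*x that are Gaussian with variance sy^2 + b^2*sx^2. The grid search and profiled intercept below are illustrative simplifications, not the paper's full treatment.

    ```python
    import numpy as np

    def eiv_log_marglik(b, x, y, sx, sy):
        """Log marginal likelihood of slope b for a simple errors-in-variables
        model, with the true x-values integrated out under a flat prior:
        residuals y - a - b*x are N(0, sy^2 + b^2 * sx^2)."""
        a = np.mean(y - b * x)              # profile the intercept
        s2 = sy**2 + b**2 * sx**2           # error-propagated variance
        r = y - a - b * x
        n = len(x)
        return float(-0.5 * n * np.log(2 * np.pi * s2) - 0.5 * np.sum(r**2) / s2)

    def eiv_slope(x, y, sx, sy, grid=np.linspace(-5.0, 5.0, 2001)):
        """Mode of the marginal likelihood of the slope over a grid
        (flat prior on b)."""
        ll = [eiv_log_marglik(b, x, y, sx, sy) for b in grid]
        return float(grid[int(np.argmax(ll))])
    ```

    Note the b-dependence of the residual variance: this is what distinguishes the errors-in-variables likelihood from ordinary least squares, where the variance is constant in b.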

  8. Intelligibility Performance of Narrowband Linear Predictive Vocoders in the Presence of Bit Errors

    DTIC Science & Technology

    1977-11-01

    BPS 61 APPENDIX C. INTELLIGIBILITY DATA FOR SUSTENTION FEATURE C.l. DRT test words for sustention 62 C.2. Data table: Sustention intelligibility...scores for LPC and PLPC processors 63 C.3. Analysis of variance summaries: C.3.1. Sustention (Total) 66 C.3.2. Sustention (voiced) 67 C.3.3... Sustention (unvoiced) 68 C.4. Cumulative distributions: DRT scores for sustention C.4.1. LPC-10 at 2400 BPS with bit errors 69 C.4.2. PLPC at

  9. Clock time is absolute and universal

    NASA Astrophysics Data System (ADS)

    Shen, Xinhang

    2015-09-01

    A critical error is found in the Special Theory of Relativity (STR): mixing up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation of physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time; it can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the product of the abstract time and the frequency of oscillation. After a Lorentz Transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, and the resulting product, i.e. the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of the Lorentz Transformation. The Lorentz invariance of the displayed times of clocks can further prove, within the framework of STR, that our earth-based standard physical time is absolute, universal and independent of inertial reference frames, as confirmed both by the physical fact of the universal synchronization of clocks on the GPS satellites and clocks on the earth, and by the theoretical existence of the absolute and universal Galilean time in STR, which shows that time dilation and space contraction are pure illusions of STR. The existence of the absolute and universal time in STR directly denies that the reference-frame-dependent abstract time of STR is the physical time; therefore, STR is wrong and all its predictions can never happen in the physical world.

  10. Temporal and spatial localization of prediction-error signals in the visual brain.

    PubMed

    Johnston, Patrick; Robinson, Jonathan; Kokkinakis, Athanasios; Ridgeway, Samuel; Simpson, Michael; Johnson, Sam; Kaufman, Jordy; Young, Andrew W

    2017-02-28

    It has been suggested that the brain pre-empts changes in the environment through generating predictions, although real-time electrophysiological evidence of prediction violations in the domain of visual perception remains elusive. In a series of experiments we showed participants sequences of images that followed a predictable implied sequence or whose final image violated the implied sequence. Through careful design we were able to use the same final image transitions across predictable and unpredictable conditions, ensuring that any differences in neural responses were due only to preceding context and not to the images themselves. EEG and MEG recordings showed that early (N170) and mid-latency (N300) visual evoked potentials were robustly modulated by images that violated the implied sequence across a range of types of image change (expression deformations, rigid rotations and visual field location). This modulation occurred irrespective of stimulus object category. Although the stimuli were static images, MEG source reconstruction of the early latency signal (N/M170) localized expectancy violation signals to brain areas associated with motion perception. Our findings suggest that the N/M170 can index mismatches between predicted and actual visual inputs in a system that predicts trajectories based on ongoing context. More generally we suggest that the N/M170 may reflect a "family" of brain signals generated across widespread regions of the visual brain indexing the resolution of top-down influences and incoming sensory data. This has important implications for understanding the N/M170 and investigating how the brain represents context to generate perceptual predictions.

  11. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    EPA Science Inventory

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  12. Information theory, model error, and predictive skill of stochastic models for complex nonlinear systems

    NASA Astrophysics Data System (ADS)

    Giannakis, Dimitrios; Majda, Andrew J.; Horenko, Illia

    2012-10-01

    Many problems in complex dynamical systems involve metastable regimes despite nearly Gaussian statistics with underlying dynamics that is very different from the more familiar flows of molecular dynamics. There is significant theoretical and applied interest in developing systematic coarse-grained descriptions of the dynamics, as well as assessing their skill for both short- and long-range prediction. Clustering algorithms, combined with finite-state processes for the regime transitions, are a natural way to build such models objectively from data generated by either the true model or an imperfect model. The main theme of this paper is the development of new practical criteria to assess the predictability of regimes and the predictive skill of such coarse-grained approximations through empirical information theory in stationary and periodically-forced environments. These criteria are tested on instructive idealized stochastic models utilizing K-means clustering in conjunction with running-average smoothing of the training and initial data for forecasts. A perspective on these clustering algorithms is explored here with independent interest, where improvement in the information content of finite-state partitions of phase space is a natural outcome of low-pass filtering through running averages. In applications with time-periodic equilibrium statistics, recently developed finite-element, bounded-variation algorithms for nonstationary autoregressive models are shown to substantially improve predictive skill beyond standard autoregressive models.
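    The data-processing chain described here, running-average (low-pass) smoothing followed by a K-means partition of the series into finite-state regimes, can be sketched minimally. The window size, cluster count, and deterministic quantile initialization below are invented for illustration; the paper's information-theoretic skill criteria are not reproduced.

    ```python
    import numpy as np

    def running_average(x, window):
        """Low-pass filter the series with a simple running mean."""
        kernel = np.ones(window) / window
        return np.convolve(x, kernel, mode="valid")

    def kmeans_1d(x, k=2, n_iter=50):
        """Minimal Lloyd's algorithm partitioning a 1-D series into k regimes.
        Centers start at evenly spaced quantiles for determinism."""
        centers = np.quantile(x, np.linspace(0.0, 1.0, k))
        for _ in range(n_iter):
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            centers = np.array([x[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return labels, centers
    ```

    Smoothing before clustering is the point the abstract emphasizes: low-pass filtering improves the information content of the resulting finite-state partition of phase space.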

  13. Absolute transition probabilities of phosphorus.

    NASA Technical Reports Server (NTRS)

    Miller, M. H.; Roig, R. A.; Bengtson, R. D.

    1971-01-01

    Use of a gas-driven shock tube to measure the absolute strengths of 21 P I lines and 126 P II lines (from 3300 to 6900 A). Accuracy for prominent, isolated neutral and ionic lines is estimated to be 28 to 40% and 18 to 30%, respectively. The data and the corresponding theoretical predictions are examined for conformity with the sum rules.

  14. Different dimensions of the prediction error as a decisive factor for the triggering of the reconsolidation process.

    PubMed

    Agustina López, M; Jimena Santos, M; Cortasa, Santiago; Fernández, Rodrigo S; Carbó Tano, Martin; Pedreira, María E

    2016-12-01

    The reconsolidation process is the mechanism by which the strength and/or content of consolidated memories are updated. Prediction error (PE) is the difference between the prediction made and current events. It is proposed as a necessary condition to trigger the reconsolidation process. Here we analyzed in depth the role of the PE in associative memory reconsolidation in the crab Neohelice granulata. An incongruence with the learned temporal relationship between the conditioned and unconditioned stimuli (CS-US) was enough to trigger the reconsolidation process. Moreover, after partially reinforced training, a PE of 50% opened the possibility of labilizing the consolidated memory with a reminder that either included or omitted the US. Further, during extinction training a small PE in the first interval between CSs was enough to trigger reconsolidation. Overall, we highlight the relation between training history and the different reactivation conditions that recruit the process responsible for memory updating.

  15. Surprise signals in anterior cingulate cortex: Neuronal encoding of unsigned reward prediction errors driving adjustment in behavior

    PubMed Central

    Hayden, Benjamin Y.; Heilbronner, Sarah R.; Pearson, John M.; Platt, Michael L.

    2011-01-01

    In attentional models of learning, associations between actions and subsequent rewards are stronger when outcomes are surprising, regardless of their valence. Despite the behavioral evidence that surprising outcomes drive learning, neural correlates of unsigned reward prediction errors remain elusive. Here we show that in a probabilistic choice task, trial-to-trial variations in preference track outcome surprisingness. Concordant with this behavioral pattern, responses of neurons in macaque (Macaca mulatta) dorsal anterior cingulate cortex (dACC) to both large and small rewards were enhanced when the outcome was surprising. Moreover, when, on some trials, probabilities were hidden, neuronal responses to rewards were reduced, consistent with the idea that the absence of clear expectations diminishes surprise. These patterns are inconsistent with the idea that dACC neurons track signed errors in reward prediction, as dopamine neurons do. Our results also indicate that dACC neurons do not signal conflict. In the context of other studies of dACC function, these results suggest a link between reward-related modulations in dACC activity and attention and motor control processes involved in behavioral adjustment. More speculatively, these data point to a harmonious integration between reward and learning accounts of ACC function on one hand, and attention and cognitive control accounts on the other. PMID:21411658

  16. Cognitive flexibility in adolescence: neural and behavioral mechanisms of reward prediction error processing in adaptive decision making during development.

    PubMed

    Hauser, Tobias U; Iannaccone, Reto; Walitza, Susanne; Brandeis, Daniel; Brem, Silvia

    2015-01-01

    Adolescence is associated with quickly changing environmental demands which require excellent adaptive skills and high cognitive flexibility. Feedback-guided adaptive learning and cognitive flexibility are driven by reward prediction error (RPE) signals, which indicate the accuracy of expectations and can be estimated using computational models. Despite the importance of cognitive flexibility during adolescence, only little is known about how RPE processing in cognitive flexibility deviates between adolescence and adulthood. In this study, we investigated the developmental aspects of cognitive flexibility by means of computational models and functional magnetic resonance imaging (fMRI). We compared the neural and behavioral correlates of cognitive flexibility in healthy adolescents (12-16 years) to adults performing a probabilistic reversal learning task. Using a modified risk-sensitive reinforcement learning model, we found that adolescents learned faster from negative RPEs than adults. The fMRI analysis revealed that within the RPE network, the adolescents had a significantly altered RPE-response in the anterior insula. This effect seemed to be mainly driven by increased responses to negative prediction errors. In summary, our findings indicate that decision making in adolescence goes beyond merely increased reward-seeking behavior and provides a developmental perspective to the behavioral and neural mechanisms underlying cognitive flexibility in the context of reinforcement learning.
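    The core of a risk-sensitive reinforcement learning model like the one this study used can be sketched in a few lines: the reward prediction error is the difference between the obtained reward and the current value estimate, and it is weighted by a different learning rate depending on its sign. The parameter values below are hypothetical, chosen only to make the asymmetry visible.

    ```python
    def risk_sensitive_update(q, reward, alpha_pos, alpha_neg):
        """One trial of a risk-sensitive RL update: the reward prediction
        error (RPE) is scaled by alpha_pos when positive and alpha_neg when
        negative, returning the new value estimate and the RPE."""
        rpe = reward - q
        alpha = alpha_pos if rpe >= 0 else alpha_neg
        return q + alpha * rpe, rpe
    ```

    "Learning faster from negative RPEs", as reported for the adolescents, corresponds to alpha_neg exceeding alpha_pos in such a model.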

  17. Nonlinear Scale Interaction: A possible mechanism of up-scale error transport attributing to the inadequate predictability of Intra-seasonal Oscillations

    NASA Astrophysics Data System (ADS)

    De, Saumyendu; Sahai, Atul Kumar; Nath Goswami, Bhupendra

    2013-04-01

    One of the fundamental science questions raised by the Year of Tropical Convection (YOTC) group was: under what circumstances, and via what mechanisms, are water vapor, energy and momentum transferred across scales ranging from the meso-scale to the large (or planetary) scale (The YOTC Science Plan, 2008)? This study partially addresses that broad question by exploring a probable mechanism of error energy transfer across scales in relation to predictability studies of Intra-seasonal Oscillations (ISOs). The predictability of ISOs, which reside in the dominant planetary scales of wavenumbers 1-4, is restricted by the rapid growth and large accumulation of errors in these planetary / ultra-long waves in almost all medium-range forecast models (Baumhefner et al. 1978, Krishnamurti et al. 1990). Understanding the rapid growth and enormous build-up of error is, therefore, imperative for improving the forecast of ISOs. It is revealed that while the initial errors lie largely on the small scales, the maximum errors appear in the ultra-long waves (around the tropical convergence zone) within 3-5 days of the forecast. The wavenumber distribution of error with forecast lead time shows that the initial error in the small scales has already attained its saturation value within a 6-hr forecast lead, whereas that in the ultra-long scales is about two orders of magnitude smaller than its saturation value. Such a large error increase in the planetary waves cannot be explained simply as growth of the initial error unless error has been transported from smaller scales. Hence, it is proposed that the fast growth of errors in the planetary waves is due to the continuous generation of errors in the small scales, attributable to inadequacies in representing physical processes such as the formulation of cumulus clouds in the model, and to the upscale propagation of these errors through scale interactions. Basic systematic error kinetic…

  18. The disparity mutagenesis model predicts rescue of living things from catastrophic errors

    PubMed Central

    Furusawa, Mitsuru

    2014-01-01

    In animals, including humans, mutation rates per generation exceed a perceived threshold, and excess mutations increase genetic load. Despite this, animals have survived without extinction. This perplexing problem for animal and human genetics arose at the end of the last century and to date still lacks a fully satisfactory explanation. Shortly after we proposed the disparity theory of evolution in 1992, the disparity mutagenesis model was proposed, which forms the basis of an explanation for the acceleration of evolution and species survival. This model predicts a significant increase in the mutation threshold values if the fidelity difference in replication between the lagging and leading strands is high enough. When applied to biological evolution, the model predicts that living things, including humans, might overcome the lethal effect of accumulated deleterious mutations and be able to survive. Artificially derived mutator strains of microorganisms, in which enhanced lagging-strand-biased mutagenesis was introduced, showed unexpectedly high adaptability to severe environments. The implications of the striking behaviors shown by these disparity mutators will be discussed in relation to how living things with high mutation rates can avoid the self-defeating risk of excess mutations. PMID:25538731

  19. The information value of early career productivity in mathematics: a ROC analysis of prediction errors in bibliometrically informed decision making.

    PubMed

    Lindahl, Jonas; Danell, Rickard

    2016-01-01

    The aim of this study was to provide a framework to evaluate bibliometric indicators as decision support tools from a decision making perspective and to examine the information value of early career publication rate as a predictor of future productivity. We used ROC analysis to evaluate a bibliometric indicator as a tool for binary decision making. The dataset consisted of 451 early career researchers in the mathematical sub-field of number theory. We investigated the effect of three different definitions of top performance groups - top 10, top 25, and top 50 %; the consequences of using different thresholds in the prediction models; and the added prediction value of information on early career research collaboration and publications in prestige journals. We conclude that early career productivity has information value in all tested decision scenarios, but future performance is more predictable if the definition of a high performance group is more exclusive. Estimated optimal decision thresholds using the Youden index indicated that the top 10 % decision scenario should use 7 articles, the top 25 % scenario should use 7 articles, and the top 50 % scenario should use 5 articles to minimize prediction errors. A comparative analysis between the decision thresholds provided by the Youden index, which take consequences into consideration, and a method commonly used in evaluative bibliometrics, which does not, indicated that the differences are trivial for the top 25 and top 50 % groups. However, a statistically significant difference between the methods was found for the top 10 % group. Information on early career collaboration and publication strategies did not add any prediction value to the bibliometric indicator publication rate in any of the models. The key contributions of this research are the focus on consequences in terms of prediction errors and the notion of transforming uncertainty…
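    The Youden-index threshold selection described above can be sketched directly: scan candidate decision thresholds (article counts here), compute sensitivity and the false-positive rate at each, and keep the threshold maximizing J = sensitivity + specificity - 1. The toy scores and labels in the usage test are invented, not the study's data.

    ```python
    import numpy as np

    def youden_threshold(scores, labels):
        """Pick the decision threshold maximizing the Youden index
        J = TPR - FPR, scanning each observed score as a candidate cut."""
        best_j, best_t = -1.0, None
        for t in np.unique(scores):
            pred = scores >= t
            tpr = np.mean(pred[labels == 1])   # sensitivity
            fpr = np.mean(pred[labels == 0])   # 1 - specificity
            j = tpr - fpr
            if j > best_j:
                best_j, best_t = j, float(t)
        return best_t, best_j
    ```

    Unlike simply maximizing accuracy, the Youden criterion weights the two error types (false positives and false negatives) explicitly, which is the "consequences" framing the abstract emphasizes.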

  20. A Physiologically Based Pharmacokinetic Model to Predict the Pharmacokinetics of Highly Protein-Bound Drugs and Impact of Errors in Plasma Protein Binding

    PubMed Central

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2015-01-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data was often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding, and blood: plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0–t, 95.4%), clearance (CLh, 95.4%), mean retention time (MRT, 95.4%), and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. PMID:26531057

  1. A physiologically based pharmacokinetic model to predict the pharmacokinetics of highly protein-bound drugs and the impact of errors in plasma protein binding.

    PubMed

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2016-04-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding and the blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate the model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for the terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0-t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%) and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. Copyright © 2016 John Wiley & Sons, Ltd.

  2. A state-space approach to predict stream temperatures and quantify model error: Application on the Sacramento River, California

    NASA Astrophysics Data System (ADS)

    Pike, A.; Danner, E.; Lindley, S.; Melton, F. S.; Nemani, R. R.; Hashimoto, H.

    2010-12-01

    In the Central Valley of California, river water temperature is a critical indicator of habitat quality for endangered salmonid species and affects re-licensing of major water projects and dam operations worth billions of dollars. There is consequently strong interest in modeling water temperature dynamics in such regulated rivers. However, the accuracy of current stream temperature models is limited by the lack of spatially detailed meteorological forecasts, and few models quantify error due to uncertainty in model inputs. To address these issues, we developed a high-resolution deterministic 1-dimensional stream temperature model (sub-hourly time step, sub-kilometer spatial resolution) in a state-space framework, and applied this model to the Upper Sacramento River. The model uses a physically-based heat budget to calculate the rate of heat transfer to/from the river. We consider heat transfer at the air-water interface using atmospheric variables provided by the TOPS-WRF (Terrestrial Observation and Prediction System - Weather Research and Forecasting) model—a high-resolution assimilation of satellite-derived meteorological observations and numerical weather simulations—as inputs. The TOPS-WRF framework allows us to improve the spatial and temporal resolution of stream temperature predictions. The hydrodynamics of the river (flow velocity and channel geometry) are characterized using densely-spaced channel cross-sections and flow data. Water temperatures are calculated by considering the hydrologic and thermal characteristics of the river and solving the advection-diffusion equation for heat transport in a mixed Eulerian-Lagrangian framework. We recast the advection-diffusion equation into a state-space formulation, which linearizes the highly non-linear numerical system for rapid calculation using finite-difference techniques. We then implement a Kalman filter to assimilate measurement data from a series of five temperature gages in our study region. This
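    The assimilation step in this record can be illustrated with a minimal scalar Kalman update that blends a forecast temperature with a gauge observation. This is a sketch of the general technique, not the authors' implementation, and the numbers in the example are hypothetical.

    ```python
    def kalman_update(x_prior, p_prior, z, r):
        """Scalar Kalman update: combine a model forecast x_prior (error variance
        p_prior) with a gauge observation z (error variance r)."""
        k = p_prior / (p_prior + r)           # Kalman gain: weight given to the observation
        x_post = x_prior + k * (z - x_prior)  # analysis state
        p_post = (1.0 - k) * p_prior          # analysis error variance (always <= prior)
        return x_post, p_post

    # Forecast 15.0 °C (variance 1.0) meets a gauge reading of 16.0 °C (variance 1.0):
    x, p = kalman_update(15.0, 1.0, 16.0, 1.0)
    ```

    With equal forecast and observation variances, the analysis lands halfway between the two and its variance halves, which is the basic mechanism the five-gauge assimilation exploits at every update step.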

  3. Absolutely classical spin states

    NASA Astrophysics Data System (ADS)

    Bohnet-Waldraff, F.; Giraud, O.; Braun, D.

    2017-01-01

    We introduce the concept of "absolutely classical" spin states, in analogy to absolutely separable states of bipartite quantum systems. Absolutely classical states are states that remain classical (i.e., a convex sum of projectors on coherent states of a spin j ) under any unitary transformation applied to them. We investigate the maximal size of the ball of absolutely classical states centered on the maximally mixed state and derive a lower bound for its radius as a function of the total spin quantum number. We also obtain a numerical estimate of this maximal radius and compare it to the case of absolutely separable states.

  4. Formulation of a general technique for predicting pneumatic attenuation errors in airborne pressure sensing devices

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.

    1988-01-01

    Presented is a mathematical model, derived from the Navier-Stokes equations of momentum and continuity, which may be accurately used to predict the behavior of conventionally mounted pneumatic sensing systems subject to arbitrary pressure inputs. Numerical techniques for solving the general model are developed. Both step and frequency response lab tests were performed. These data are compared against solutions of the mathematical model. The comparisons show excellent agreement. The procedures used to obtain the lab data are described. In-flight step and frequency response data were obtained. Comparisons with numerical solutions of the mathematical model show good agreement. Procedures used to obtain the flight data are described. Difficulties encountered with obtaining the flight data are discussed.
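    As a much-reduced illustration of pneumatic attenuation (the paper's model is the full Navier-Stokes-based formulation), a sensing line plus transducer volume is often approximated by a second-order lag; its step response shows the delay and overshoot such a system imposes on a pressure input. The natural frequency and damping values below are placeholders.

    ```python
    import numpy as np

    def step_response(wn, zeta, t):
        """Normalized step response of an underdamped (zeta < 1) second-order lag,
        a common reduced model for a pneumatic tube-and-volume sensing system."""
        wd = wn * np.sqrt(1.0 - zeta**2)  # damped natural frequency
        return 1.0 - np.exp(-zeta * wn * t) * (
            np.cos(wd * t) + (zeta * wn / wd) * np.sin(wd * t)
        )
    ```

    The response starts at 0, rings at the damped frequency, and settles to the applied pressure; slower (longer, narrower) tubing lowers wn and worsens both attenuation and lag.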

  5. Formulation of a General Technique for Predicting Pneumatic Attenuation Errors in Airborne Pressure Sensing Devices

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.

    1988-01-01

    Presented is a mathematical model derived from the Navier-Stokes equations of momentum and continuity, which may be accurately used to predict the behavior of conventionally mounted pneumatic sensing systems subject to arbitrary pressure inputs. Numerical techniques for solving the general model are developed. Both step and frequency response lab tests were performed. These data are compared with solutions of the mathematical model and show excellent agreement. The procedures used to obtain the lab data are described. In-flight step and frequency response data were obtained. Comparisons with numerical solutions of the math model show good agreement. Procedures used to obtain the flight data are described. Difficulties encountered with obtaining the flight data are discussed.

  6. Absolute Radiometric Calibration of EUNIS-06

    NASA Technical Reports Server (NTRS)

    Thomas, R. J.; Rabin, D. M.; Kent, B. J.; Paustian, W.

    2007-01-01

    The Extreme-Ultraviolet Normal-Incidence Spectrometer (EUNIS) is a sounding-rocket payload that obtains imaged high-resolution spectra of individual solar features, providing information about the Sun's corona and upper transition region. Shortly after its successful initial flight last year, a complete end-to-end calibration was carried out to determine the instrument's absolute radiometric response over its longwave bandpass of 300-370 Å. The measurements were done at the Rutherford Appleton Laboratory (RAL) in England, using the same vacuum facility and EUV radiation source used in the pre-flight calibrations of both SOHO/CDS and Hinode/EIS, as well as in three post-flight calibrations of our SERTS sounding rocket payload, the precursor to EUNIS. The unique radiation source provided by the Physikalisch-Technische Bundesanstalt (PTB) had been calibrated to an absolute accuracy of 7% (1-sigma) at 12 wavelengths covering our bandpass directly against the Berlin electron storage ring BESSY, which is itself a primary radiometric source standard. Scans of the EUNIS aperture were made to determine the instrument's absolute spectral sensitivity to ±25%, considering all sources of error, and demonstrate that EUNIS-06 was the most sensitive solar EUV spectrometer yet flown. The results will be matched against prior calibrations which relied on combining measurements of individual optical components, and on comparisons with theoretically predicted 'insensitive' line ratios. Coordinated observations were made during the EUNIS-06 flight by SOHO/CDS and EIT that will allow re-calibrations of those instruments as well. In addition, future EUNIS flights will provide similar calibration updates for TRACE, Hinode/EIS, and STEREO/SECCHI/EUVI.

  7. Season-dependent dynamics of nonlinear optimal error growth and El Niño-Southern Oscillation predictability in a theoretical model

    NASA Astrophysics Data System (ADS)

    Mu, Mu; Duan, Wansuo; Wang, Bin

    2007-05-01

    Most state-of-the-art climate models have difficulty in the prediction of El Niño-Southern Oscillation (ENSO) starting from preboreal spring seasons. The causes of this spring predictability barrier (SPB) remain elusive. With a theoretical ENSO system model, we investigate this controversial issue by tracing the evolution of conditional nonlinear optimal perturbation (CNOP) and by analyzing the behavior of initial error growth. The CNOPs are the errors in the initial states of ENSO events, which have the biggest impact on the uncertainties at the prediction time under proper physical constraints. We show that the evolution of CNOP-type errors associated with El Niño episodes depends remarkably on season with the fastest growth occurring during boreal spring in the onset phase. There also exist other kinds of initial errors, which have either somewhat smaller growth rates or neutral ones during spring. However, for La Niña events, even if initial errors are of CNOP-type, the errors grow without significant seasonal dependence. These findings suggest that the SPB in this model results from combined effects of three factors: the annual cycle of the mean state, the structure of El Niño, and the pattern of the initial errors. On the basis of the error tendency equations derived from the model, we addressed how the combination of the three factors causes the SPB and proposed a mechanism responsible for the error growth in the model ENSO events. Our results help in clarifying the role of the initial error pattern in SPB, which may provide a clue for explaining why SPB can be eliminated by improving initial conditions. The results also illustrate a theoretical basis for improving data assimilation in ENSO prediction.

  8. From prediction error to incentive salience: mesolimbic computation of reward motivation

    PubMed Central

    Berridge, Kent C.

    2011-01-01

    Reward contains separable psychological components of learning, incentive motivation and pleasure. Most computational models have focused only on the learning component of reward, but the motivational component is equally important in reward circuitry, and even more directly controls behavior. Modeling the motivational component requires recognition of additional control factors besides learning. Here I will discuss how mesocorticolimbic mechanisms generate the motivation component of incentive salience. Incentive salience takes Pavlovian learning and memory as one input and as an equally important input takes neurobiological state factors (e.g., drug states, appetite states, satiety states) that can vary independently of learning. Neurobiological state changes can produce unlearned fluctuations or even reversals in the ability of a previously-learned reward cue to trigger motivation. Such fluctuations in cue-triggered motivation can dramatically depart from all previously learned values about the associated reward outcome. Thus a consequence of the difference between incentive salience and learning can be to decouple cue-triggered motivation of the moment from previously learned values of how good the associated reward has been in the past. Another consequence can be to produce irrationally strong motivation urges that are not justified by any memories of previous reward values (and without distorting associative predictions of future reward value). Such irrationally strong motivation may be especially problematic in addiction. To comprehend these phenomena, future models of mesocorticolimbic reward function should address the neurobiological state factors that participate to control generation of incentive salience. PMID:22487042

  9. Cloud Condensation Nuclei Prediction Error from Application of Kohler Theory: Importance for the Aerosol Indirect Effect

    NASA Technical Reports Server (NTRS)

    Sotiropoulou, Rafaella-Eleni P.; Nenes, Athanasios; Adams, Peter J.; Seinfeld, John H.

    2007-01-01

    In situ observations of aerosol and cloud condensation nuclei (CCN) and the GISS GCM Model II' with an online aerosol simulation and explicit aerosol-cloud interactions are used to quantify the uncertainty in radiative forcing and autoconversion rate from application of Köhler theory. Simulations suggest that application of Köhler theory introduces a 10-20% uncertainty in global average indirect forcing and 2-11% uncertainty in autoconversion. Regionally, the uncertainty in indirect forcing ranges between 10-20%, and 5-50% for autoconversion. These results are insensitive to the range of updraft velocity and water vapor uptake coefficient considered. This study suggests that Köhler theory (as implemented in climate models) is not a significant source of uncertainty for aerosol indirect forcing but can be substantial for assessments of aerosol effects on the hydrological cycle in climatically sensitive regions of the globe. This implies that improvements in the representation of GCM subgrid processes and aerosol size distribution will mostly benefit indirect forcing assessments. Predictions of autoconversion, by nature, will be subject to considerable uncertainty; its reduction may require explicit representation of size-resolved aerosol composition and mixing state.
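    For reference, the CCN activation quantities that such parameterizations compute follow from the Köhler curve S(D) ≈ A/D - B/D³, whose maximum gives the critical supersaturation and critical droplet diameter. A minimal sketch, where the coefficients A (Kelvin/curvature term) and B (Raoult/solute term) are placeholders to be computed from temperature and solute properties:

    ```python
    import math

    def critical_supersaturation(a_kelvin, b_raoult):
        """Maximum of the approximate Köhler curve S(D) = A/D - B/D**3:
        s_c = sqrt(4 A^3 / (27 B)) at D_c = sqrt(3 B / A)."""
        s_c = math.sqrt(4.0 * a_kelvin**3 / (27.0 * b_raoult))
        d_c = math.sqrt(3.0 * b_raoult / a_kelvin)
        return s_c, d_c
    ```

    The closed form can be checked by evaluating the curve at the critical diameter: S(D_c) reproduces s_c exactly, so a particle activates once ambient supersaturation exceeds s_c.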

  10. Influence of physical and chemical properties of HTSXT-FTIR samples on the quality of prediction models developed to determine absolute concentrations of total proteins, carbohydrates and triglycerides: a preliminary study on the determination of their absolute concentrations in fresh microalgal biomass.

    PubMed

    Serrano León, Esteban; Coat, Rémy; Moutel, Benjamin; Pruvost, Jérémy; Legrand, Jack; Gonçalves, Olivier

    2014-11-01

    Absolute concentrations of total macromolecules (triglycerides, proteins and carbohydrates) in microorganisms can be rapidly measured by FTIR spectroscopy, but caution is needed to avoid non-specific experimental bias. Here, we assess the limits within which this approach can be used on model solutions of macromolecules of interest. We used the Bruker HTSXT-FTIR system. Our results show that the solid deposits obtained after the sampling procedure present physical and chemical properties that influence the quality of the absolute concentration prediction models (univariate and multivariate). The accuracy of the models was degraded by a factor of 2 or 3 outside the recommended concentration interval of 0.5-35 µg spot(-1). Change occurred notably in the sample hydrogen bond network, which could, however, be controlled using an internal probe (pseudohalide anion). We also demonstrate that for aqueous solutions, accurate prediction of total carbohydrate quantities (in glucose equivalent) could not be made unless a constant amount of protein was added to the model solution (BSA). The results of the prediction model for more complex solutions, here with two components: glucose and BSA, were very encouraging, suggesting that this FTIR approach could be used as a rapid quantification method for mixtures of molecules of interest, provided the limits of use of the HTSXT-FTIR method are precisely known and respected. This last finding opens the way to direct quantification of total molecules of interest in more complex matrices.

  11. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors.

    PubMed

    Bányai, László; Patthy, László

    2016-08-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.

  12. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors

    PubMed Central

    Bányai, László; Patthy, László

    2016-01-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation. PMID:27476717

  13. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and
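    The regression step described here can be sketched with ordinary least squares on synthetic stand-ins for the image metrics. All variable names, coefficients, and data below are hypothetical illustrations, not the study's measures or results.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical image-quality predictors (standardized).
    intensity = rng.normal(size=n)
    contrast = rng.normal(size=n)
    area = rng.normal(size=n)

    # Hypothetical difficulty score generated from the predictors plus noise.
    difficulty = 0.8 * intensity - 0.5 * contrast + 0.3 * area + rng.normal(scale=0.1, size=n)

    # Design matrix with an intercept column; solve the least-squares problem.
    X = np.column_stack([np.ones(n), intensity, contrast, area])
    coef, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
    ```

    Cross-validation, as used in the study, would then refit on a subset and score predictions on held-out print pairs to guard against overfitting.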

  14. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  15. Effect of model error on precipitation forecasts in the high-resolution limited area ensemble prediction system of the Korea Meteorological Administration

    NASA Astrophysics Data System (ADS)

    Kim, SeHyun; Kim, Hyun Mee

    2015-04-01

    In numerical weather prediction using convective-scale model resolution, forecast uncertainties are caused by initial condition error, boundary condition error, and model error. Because convective-scale forecasts are influenced by subgrid scale processes which cannot be resolved easily, the model error becomes more important than the initial and boundary condition errors. To consider the model error, multi-model and multi-physics methods use several models and physics schemes, and the stochastic physics method uses random numbers to create a noise term in the model equations (e.g. Stochastic Perturbed Parameterization Tendency (SPPT), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Convective Vorticity (SCV), and Random Parameters (RP)). In this study, the RP method was used to consider the model error in the high-resolution limited area ensemble prediction system (EPS) of the Korea Meteorological Administration (KMA). The EPS has 12 ensemble members with 3 km horizontal resolution which generate 48 h forecasts. The initial and boundary conditions were provided by the global EPS of the KMA. The RP method was applied to microphysics and boundary layer schemes, and the ensemble forecasts using RP were compared with those without RP during July 2013. Both the Root Mean Square Error (RMSE) and the spread of 10-m wind, verified against surface Automatic Weather System (AWS) observations, decreased when using RP. However, for 1 hour accumulated precipitation, the spread increased with RP and the Equitable Threat Score (ETS) showed different results for each rainfall event.
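    The ETS used for the precipitation verification is computed from the 2×2 contingency table of forecast versus observed rain events; a standard sketch (not tied to the KMA system's implementation):

    ```python
    def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
        """ETS = (hits - hits_random) / (hits + misses + false_alarms - hits_random),
        where hits_random is the number of hits expected by chance."""
        total = hits + misses + false_alarms + correct_negatives
        hits_random = (hits + misses) * (hits + false_alarms) / total
        return (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    ```

    ETS is 1 for a perfect forecast, 0 for a forecast no better than random, and can go slightly negative for forecasts worse than chance, which is why it is preferred over the plain threat score for rare events.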

  16. A framework for testing the use of electric and electromagnetic data to reduce the prediction error of groundwater models

    NASA Astrophysics Data System (ADS)

    Christensen, N. K.; Christensen, S.; Ferre, T. P. A.

    2015-09-01

    Although geophysics is being used increasingly, it is still unclear how and when the integration of geophysical data improves the construction and predictive capability of groundwater models. Therefore, this paper presents a newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software wrapped to make a platform for generation and consideration of multi-modal data for objective hydrologic analysis. It is intentionally flexible to allow for simple or sophisticated treatments of geophysical responses, hydrologic processes, parameterization, and inversion approaches. It can also be used to discover potential errors that can be introduced through petrophysical models and approaches to correlating geophysical and hydrologic parameters. With HYTEB we study alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits with typical hydraulic conductivities and electrical resistivities covering impermeable bedrock with low resistivity. It is investigated to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by regularization. For purely hydrologic inversion (HI, only using hydrologic data) we used Tikhonov regularization combined with singular value decomposition. For joint hydrogeophysical inversion (JHI) and sequential hydrogeophysical inversion (SHI) the resistivity estimates from TEM are used together with a petrophysical relationship to formulate the regularization term. In all cases, the regularization stabilizes the inversion, but neither the HI nor the JHI objective function could be minimized uniquely. SHI or JHI with

  17. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  18. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  19. Accurate determination of pyridine-poly(amidoamine) dendrimer absolute binding constants with the OPLS-AA force field and direct integration of radial distribution functions.

    PubMed

    Peng, Yong; Kaminski, George A

    2005-08-11

    OPLS-AA force field and direct integration of intermolecular radial distribution functions (RDF) were employed to calculate absolute binding constants of pyridine molecules to amino group (NH2) and amide group hydrogen atoms in zeroth and first generation poly(amidoamine) dendrimers in chloroform. The average errors in the absolute and relative association constants, as predicted with the calculations, are 14.1% and 10.8%, respectively, which translate into ca. 0.08 and 0.06 kcal/mol errors in the absolute and relative binding free energies. We believe that this level of accuracy proves the applicability of the OPLS-AA force field, in combination with the direct RDF integration, to reproducing and predicting absolute intermolecular association constants of low magnitudes (ca. 0.2-2.0 range).
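    The RDF-integration step can be sketched as a volume integral of g(r) over the bound-state peak, K ∝ 4π ∫ g(r) r² dr (up to standard-state unit conversion). This is an illustrative sketch of the general technique, not the authors' code; the cutoff and grid below are hypothetical.

    ```python
    import numpy as np

    def binding_constant(r, g, r_cut):
        """Integrate 4*pi*g(r)*r**2 over r <= r_cut (the bound-state well) with
        the trapezoidal rule; result has units of r**3 (convert to a standard
        state, e.g. multiply by c° in matching units, to get a dimensionless K)."""
        mask = r <= r_cut
        rr = r[mask]
        integrand = 4.0 * np.pi * g[mask] * rr**2
        # manual trapezoid to avoid relying on np.trapz (removed in NumPy 2.0)
        return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rr)))
    ```

    Choosing r_cut at the first minimum of g(r) separates bound pairs from the bulk; the low K magnitudes quoted in the abstract make the result sensitive to that choice.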

  20. Accurate Determination of Pyridine -- Poly (Amidoamine) Dendrimer Absolute Binding Constants with the OPLS-AA Force Field and Direct Integration of Radial Distribution Functions

    NASA Astrophysics Data System (ADS)

    Peng, Yong; Kaminski, George

    2006-03-01

    OPLS-AA force field and direct integration of intermolecular radial distribution functions (RDF) were employed to calculate absolute binding constants of pyridine molecules to NH2 and amide group hydrogen atoms in 0th and 1st generation poly(amidoamine) dendrimers in chloroform. The average errors in the absolute and relative association constants, as predicted with the calculations, are 14.1% and 10.8%, respectively, which translate into ca. 0.08 kcal/mol and 0.06 kcal/mol errors in the absolute and relative binding free energies. We believe that this level of accuracy proves the applicability of the OPLS-AA force field, in combination with the direct RDF integration, to reproducing and predicting absolute intermolecular association constants of low magnitudes (ca. 0.2-2.0 range).

  1. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia

    PubMed Central

    Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-01-01

    The Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days with finest 5 × 20 m spatial resolution. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of the C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; these are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson's correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed a very high level of agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted within accuracy of 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded within accuracy of 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have
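    The two comparison metrics used here are standard, and under the simplifying assumption that the two datasets' errors are independent, a predicted RMSE follows by combining the individual error estimates in quadrature (the paper's prediction also incorporates dataset variances; this is a reduced sketch):

    ```python
    import numpy as np

    def rmse(a, b):
        """Root mean square error between two soil moisture series."""
        return float(np.sqrt(np.mean((a - b) ** 2)))

    def pearson_r(a, b):
        """Pearson correlation coefficient between two series."""
        return float(np.corrcoef(a, b)[0, 1])

    def predicted_rmse(err_a, err_b):
        """Expected RMSE between two datasets whose errors are independent,
        given each dataset's own error estimate."""
        return float(np.sqrt(err_a**2 + err_b**2))
    ```

    Comparing the quadrature prediction against the directly computed RMSE, as done per grid cell in this study, is what validates the retrieval error model.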

  2. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia.

    PubMed

    Doubková, Marcela; Van Dijk, Albert I J M; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-05-15

    Sentinel-1 will carry a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days, with a spatial resolution as fine as 5 × 20 m. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; such estimates are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed very strong agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted to within 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded to within 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and the derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have
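    The error-propagation idea above — predicting the RMSE and correlation between two datasets from their separate error variances — can be sketched as follows, assuming zero-mean, mutually independent errors (the function and variable names are illustrative, not from the paper):

```python
import math

def predicted_rmsd_and_r(var_signal, var_err_a, var_err_b):
    """Predict the RMSD and Pearson correlation between two noisy,
    independent estimates of the same signal from their error variances."""
    # Independent zero-mean errors add in quadrature.
    rmsd = math.sqrt(var_err_a + var_err_b)
    # Correlation is the signal variance normalised by the total variances.
    r = var_signal / math.sqrt((var_signal + var_err_a) * (var_signal + var_err_b))
    return rmsd, r
```

    A large predicted RMSD relative to the signal variance flags regions where the retrieval carries little independent information, which is the kind of spatial comparison the study performs.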

  3. Keeping up with the Joneses: Interpersonal Prediction Errors and the Correlation of Behavior in a Tandem Sequential Choice Task

    PubMed Central

    Apple, Nathan; Montague, P. Read

    2013-01-01

    In many settings, copying, learning from or assigning value to group behavior is rational because such behavior can often act as a proxy for valuable returns. However, such herd behavior can also be pathologically misleading by coaxing individuals into behaviors that are otherwise irrational, and it may be one source of the irrational behaviors underlying market bubbles and crashes. Using a two-person tandem investment game, we sought to examine the neural and behavioral correlates of herd instincts in situations stripped of the incentive to be influenced by the choices of one's partner. We show that the investments of the two subjects correlate over time if they are made aware of their partner's choices, even though these choices have no impact on either player's earnings. We computed an “interpersonal prediction error”, the difference between the investment decisions of the two subjects after each choice. BOLD responses in the striatum, implicated in valuation and action selection, were highly correlated with this interpersonal prediction error. The revelation of the partner's investment occurred after all useful information about the market had already been revealed. This effect was confirmed in two separate experiments in which the impact of the time of revelation of the partner's choice was tested at 2 seconds and 6 seconds after a subject's choice; however, the effect was absent in a control condition with a computer partner. These findings strongly support the existence of mechanisms that drive correlated behavior even in contexts where there is no explicit advantage to do so. PMID:24204226

  4. Predictability of the Arctic sea ice edge

    NASA Astrophysics Data System (ADS)

    Goessling, H. F.; Tietsche, S.; Day, J. J.; Hawkins, E.; Jung, T.

    2016-02-01

    Skillful sea ice forecasts from days to years ahead are becoming increasingly important for the operation and planning of human activities in the Arctic. Here we analyze the potential predictability of the Arctic sea ice edge in six climate models. We introduce the integrated ice-edge error (IIEE), a user-relevant verification metric defined as the area where the forecast and the "truth" disagree on the ice concentration being above or below 15%. The IIEE lends itself to decomposition into an absolute extent error, corresponding to the common sea ice extent error, and a misplacement error. We find that the often-neglected misplacement error makes up more than half of the climatological IIEE. In idealized forecast ensembles initialized on 1 July, the IIEE grows faster than the absolute extent error. This means that the Arctic sea ice edge is less predictable than sea ice extent, particularly in September, with implications for the potential skill of end-user relevant forecasts.
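    The IIEE and its decomposition as defined above are straightforward to compute on a concentration grid; a minimal sketch (the function name is illustrative, and equal-area grid cells are assumed):

```python
def iiee_decomposition(forecast, truth, cell_area=1.0, threshold=0.15):
    """Integrated ice-edge error (IIEE) and its decomposition: the area
    where forecast and observation disagree on the concentration being
    above or below the 15% threshold.

    forecast, truth: iterables of sea-ice concentration (0..1) per grid cell.
    cell_area: area of one grid cell (all cells assumed equal here).
    """
    over = under = 0  # ice forecast but none observed, and vice versa
    for f, t in zip(forecast, truth):
        f_ice, t_ice = f >= threshold, t >= threshold
        if f_ice and not t_ice:
            over += 1
        elif t_ice and not f_ice:
            under += 1
    iiee = (over + under) * cell_area
    aee = abs(over - under) * cell_area   # absolute extent error
    misplacement = iiee - aee             # the often-neglected remainder
    return iiee, aee, misplacement
```

    When over- and underestimated areas cancel, the extent error vanishes but the misplacement term does not, which is why the IIEE grows faster than the absolute extent error in the study's forecast ensembles.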

  5. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning.

    PubMed

    Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated the EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error) at the time when visual feedback (hand appearance) became available elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition, with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed that was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge, this is the first study in which the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations, putatively generated by anterior cingulate cortex activation, are implicated in error processing in semi-naturalistic motor

  6. Subjective and model-estimated reward prediction: association with the feedback-related negativity (FRN) and reward prediction error in a reinforcement learning task.

    PubMed

    Ichikawa, Naho; Siegle, Greg J; Dombrovski, Alexandre; Ohira, Hideki

    2010-12-01

    In this study, we examined whether the feedback-related negativity (FRN) is associated with both subjective and objective (model-estimated) reward prediction errors (RPE) per trial in a reinforcement learning task in healthy adults (n=25). The level of RPE was assessed 1) by subjective ratings per trial and 2) by a computational model of reinforcement learning. Model-estimated RPE was highly correlated with subjective RPE (r=.82), and the grand-averaged ERP waves based on the trials with high and low model-estimated RPE differed significantly only in the time period of the FRN component (p<.05). Regardless of the time course of learning, the FRN was associated with both subjective and model-estimated RPEs within subjects (r=.47, p<.001; r=.40, p<.05) and between subjects (r=.33, p<.05; r=.41, p<.005) only in the Learnable condition, where the internal reward prediction varied enough with the behavior-reward contingency.
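    Model-estimated RPEs of the kind used above are commonly obtained from a delta-rule (Rescorla-Wagner-style) learner. The sketch below is a generic illustration of that idea, not the study's exact model; the parameter names and defaults are assumptions:

```python
def delta_rule_rpes(rewards, alpha=0.3, v0=0.5):
    """Trial-by-trial reward prediction errors from a simple delta-rule
    (Rescorla-Wagner-style) learner.

    rewards: obtained outcomes per trial (e.g. 1 = win, 0 = loss).
    alpha: learning rate; v0: initial reward expectation.
    """
    v = v0
    rpes = []
    for r in rewards:
        rpe = r - v          # prediction error: outcome minus expectation
        rpes.append(rpe)
        v += alpha * rpe     # update expectation toward the outcome
    return rpes
```

    Sorting ERP epochs by the sign or magnitude of these per-trial values is what allows FRN amplitude to be related to model-estimated RPE.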

  7. Comparison of the initial errors most likely to cause a spring predictability barrier for two types of El Niño events

    NASA Astrophysics Data System (ADS)

    Tian, Ben; Duan, Wansuo

    2016-08-01

    In this paper, the spring predictability barrier (SPB) problem for two types of El Niño events is investigated by tracing the evolution of a conditional nonlinear optimal perturbation (CNOP) that acts as the initial error with the largest negative effect on El Niño predictions. We show that the CNOP-type errors for central Pacific El Niño (CP-El Niño) events can be classified into two types. The first are CP-type-1 errors, possessing a sea surface temperature anomaly (SSTA) pattern with negative anomalies in the equatorial central western Pacific and positive anomalies in the equatorial eastern Pacific, accompanied by a thermocline depth anomaly pattern with positive anomalies along the equator. The second are CP-type-2 errors, presenting an SSTA pattern in the central eastern equatorial Pacific with a dipole structure of negative anomalies in the east and positive anomalies in the west, and a thermocline depth anomaly pattern with a slight deepening along the equator. CP-type-1 errors grow in a manner similar to an eastern Pacific El Niño (EP-El Niño) event and grow significantly during boreal spring, leading to a significant SPB for the CP-El Niño. CP-type-2 errors initially present as a process similar to a La Niña-like decay before transitioning into a growth phase of an EP-El Niño-like event, but they fail to cause an SPB. For the EP-El Niño events, the CNOP-type errors are also classified into two types: EP-type-1 and EP-type-2 errors. The former is similar to a CP-type-1 error, while the latter presents an almost opposite pattern. Both EP-type-1 and EP-type-2 errors yield a significant SPB for EP-El Niño events. For both CP- and EP-El Niño, the CNOP-type errors that cause a prominent SPB are concentrated in the central and eastern tropical Pacific. This may indicate that the prediction uncertainties of both types of El Niño events are sensitive to the initial errors in this region. The region may represent a common

  8. Why Don't We Learn to Accurately Forecast Feelings? How Misremembering Our Predictions Blinds Us to Past Forecasting Errors

    ERIC Educational Resources Information Center

    Meyvis, Tom; Ratner, Rebecca K.; Levav, Jonathan

    2010-01-01

    Why do affective forecasting errors persist in the face of repeated disconfirming evidence? Five studies demonstrate that people misremember their forecasts as consistent with their experience and thus fail to perceive the extent of their forecasting error. As a result, people do not learn from past forecasting errors and fail to adjust subsequent…

  9. Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation

    PubMed Central

    Cacciapaglia, Fabio; Wightman, R. Mark; Carelli, Regina M.

    2015-01-01

    Mesolimbic dopamine (DA) is phasically released during appetitive behaviors, though there is substantive disagreement about the specific purpose of these DA signals. For example, prediction error (PE) models suggest a role in learning, while incentive salience (IS) models argue that the DA signal imbues stimuli with value and thereby stimulates motivated behavior. However, within the nucleus accumbens (NAc), patterns of DA release can strikingly differ between subregions, and as such, it is possible that these patterns differentially contribute to aspects of PE and IS. To assess this, we measured DA release in subregions of the NAc during a behavioral task that spatiotemporally separated sequential goal-directed stimuli. Electrochemical methods were used to measure subsecond NAc dopamine release in the core and shell during a well learned instrumental chain schedule in which rats were trained to press one lever (seeking; SL) to gain access to a second lever (taking; TL) linked with food delivery, and again during extinction. In the core, phasic DA release was greatest following initial SL presentation, but minimal for the subsequent TL and reward events. In contrast, phasic shell DA showed robust release at all task events. Signaling decreased between the beginning and end of sessions in the shell, but not the core. During extinction, peak DA release in the core showed a graded decrease for the SL and pauses in release during omitted expected rewards, whereas shell DA release decreased predominantly during the TL. These release dynamics suggest parallel DA signals capable of supporting distinct theories of appetitive behavior. SIGNIFICANCE STATEMENT Dopamine signaling in the brain is important for a variety of cognitive functions, such as learning and motivation. Typically, it is assumed that a single dopamine signal is sufficient to support these cognitive functions, though competing theories disagree on how dopamine contributes to reward-based behaviors. Here, we have

  10. The effects of methylphenidate on cerebral responses to conflict anticipation and unsigned prediction error in a stop-signal task.

    PubMed

    Manza, Peter; Hu, Sien; Ide, Jaime S; Farr, Olivia M; Zhang, Sheng; Leung, Hoi-Chung; Li, Chiang-shan R

    2016-03-01

    To adapt flexibly to a rapidly changing environment, humans must anticipate conflict and respond to surprising, unexpected events. To this end, the brain estimates upcoming conflict on the basis of prior experience and computes unsigned prediction error (UPE). Although much work implicates catecholamines in cognitive control, little is known about how pharmacological manipulation of catecholamines affects the neural processes underlying conflict anticipation and UPE computation. We addressed this issue by imaging 24 healthy young adults who received a 45 mg oral dose of methylphenidate (MPH) and 62 matched controls who did not receive MPH prior to performing the stop-signal task. We used a Bayesian Dynamic Belief Model to make trial-by-trial estimates of conflict and UPE during task performance. Replicating previous research, the control group showed anticipation-related activation in the presupplementary motor area and deactivation in the ventromedial prefrontal cortex and parahippocampal gyrus, as well as UPE-related activations in the dorsal anterior cingulate, insula, and inferior parietal lobule. In group comparison, MPH increased anticipation activity in the bilateral caudate head and decreased UPE activity in each of the aforementioned regions. These findings highlight distinct effects of catecholamines on the neural mechanisms underlying conflict anticipation and UPE, signals critical to learning and adaptive behavior.
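    The Bayesian Dynamic Belief Model itself is more elaborate, but the notion of an unsigned prediction error can be illustrated with a simple leaky-integrator belief about stop-trial frequency (a rough approximation only; the function name, parameters, and the simplification are all assumptions, not the study's model):

```python
def unsigned_prediction_errors(stop_trials, alpha=0.8, p0=0.5):
    """UPE per trial under a leaky-integrator belief about stop-trial frequency.

    stop_trials: sequence of 0/1 outcomes (1 = stop signal occurred).
    alpha: memory of the belief; p0: initial belief.
    """
    p = p0
    upes = []
    for outcome in stop_trials:
        upes.append(abs(outcome - p))          # surprise: |outcome - expectation|
        p = alpha * p + (1 - alpha) * outcome  # leaky belief update
    return upes
```

    The per-trial surprise values produced this way are the kind of regressor used to locate UPE-related activations in the imaging analysis.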

  11. The effects of methylphenidate on cerebral responses to conflict anticipation and unsigned prediction error in a stop-signal task

    PubMed Central

    Manza, Peter; Hu, Sien; Ide, Jaime S; Farr, Olivia M; Zhang, Sheng; Leung, Hoi-Chung; Li, Chiang-shan R

    2016-01-01

    To adapt flexibly to a rapidly changing environment, humans must anticipate conflict and respond to surprising, unexpected events. To this end, the brain estimates upcoming conflict on the basis of prior experience and computes unsigned prediction error (UPE). Although much work implicates catecholamines in cognitive control, little is known about how pharmacological manipulation of catecholamines affects the neural processes underlying conflict anticipation and UPE computation. We addressed this issue by imaging 24 healthy young adults who received a 45 mg oral dose of methylphenidate (MPH) and 62 matched controls who did not receive MPH prior to performing the stop-signal task. We used a Bayesian Dynamic Belief Model to make trial-by-trial estimates of conflict and UPE during task performance. Replicating previous research, the control group showed anticipation-related activation in the presupplementary motor area and deactivation in the ventromedial prefrontal cortex and parahippocampal gyrus, as well as UPE-related activations in the dorsal anterior cingulate, insula, and inferior parietal lobule. In group comparison, MPH increased anticipation activity in the bilateral caudate head and decreased UPE activity in each of the aforementioned regions. These findings highlight distinct effects of catecholamines on the neural mechanisms underlying conflict anticipation and UPE, signals critical to learning and adaptive behavior. PMID:26755547

  12. Modeling dopaminergic and other processes involved in learning from reward prediction error: contributions from an individual differences perspective.

    PubMed

    Pickering, Alan D; Pesola, Francesca

    2014-01-01

    Phasic firing changes of midbrain dopamine neurons have been widely characterized as reflecting a reward prediction error (RPE). Major personality traits (e.g., extraversion) have been linked to inter-individual variations in dopaminergic neurotransmission. Consistent with these two claims, recent research (Smillie et al., 2011; Cooper et al., 2014) found that extraverts exhibited larger RPEs than introverts, as reflected in feedback related negativity (FRN) effects in EEG recordings. Using an established, biologically-localized RPE computational model, we successfully simulated dopaminergic cell firing changes which are thought to modulate the FRN. We introduced simulated individual differences into the model: parameters were systematically varied, with stable values for each simulated individual. We explored whether a model parameter might be responsible for the observed covariance between extraversion and the FRN changes in real data, and argued that a parameter is a plausible source of such covariance if parameter variance, across simulated individuals, correlated almost perfectly with the size of the simulated dopaminergic FRN modulation, and created as much variance as possible in this simulated output. Several model parameters met these criteria, while others did not. In particular, variations in the strength of connections carrying excitatory reward drive inputs to midbrain dopaminergic cells were considered plausible candidates, along with variations in a parameter which scales the effects of dopamine cell firing bursts on synaptic modification in ventral striatum. We suggest possible neurotransmitter mechanisms underpinning these model parameters. Finally, the limitations and possible extensions of our general approach are discussed.

  13. Physics of negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Abraham, Eitan; Penrose, Oliver

    2017-01-01

    Negative absolute temperatures were introduced into experimental physics by Purcell and Pound, who successfully applied this concept to nuclear spins; nevertheless, the concept has proved controversial: a recent article aroused considerable interest by its claim, based on a classical entropy formula (the "volume entropy") due to Gibbs, that negative temperatures violated basic principles of statistical thermodynamics. Here we give a thermodynamic analysis that confirms the negative-temperature interpretation of the Purcell-Pound experiments. We also examine the principal arguments that have been advanced against the negative temperature concept; we find that these arguments are not logically compelling, and moreover that the underlying "volume" entropy formula leads to predictions inconsistent with existing experimental results on nuclear spins. We conclude that, despite the counterarguments, negative absolute temperatures make good theoretical sense and did occur in the experiments designed to produce them.

  14. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
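    The two skill scores named above can be computed from a 2 × 2 contingency table once the probabilistic forecast is dichotomized at a critical threshold; a minimal sketch (the function name is illustrative):

```python
def forecast_scores(probs, observed, threshold=0.5):
    """Percent correct (PC) and Hanssen-Kuipers discriminant (HKD) for a
    probabilistic forecast turned into a yes/no forecast at `threshold`."""
    tp = fp = fn = tn = 0
    for p, o in zip(probs, observed):
        pred = p >= threshold
        if pred and o:
            tp += 1       # hit
        elif pred and not o:
            fp += 1       # false alarm
        elif o:
            fn += 1       # miss
        else:
            tn += 1       # correct negative
    pc = (tp + tn) / (tp + fp + fn + tn)
    hkd = tp / (tp + fn) - fp / (fp + tn)  # hit rate minus false-alarm rate
    return pc, hkd
```

    Passing the climatological frequency of contrail occurrence as `threshold` instead of 0.5 reproduces the paper's comparison of the two critical probability thresholds.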

  15. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    PubMed

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced, as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated-)errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
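    For reference, the classical Bland-Altman agreement interval ("limits of agreement") that the paper takes as its starting point can be sketched as follows (the tolerance and predictive intervals the paper proposes are more involved and are not reproduced here):

```python
import statistics

def limits_of_agreement(x, y, z=1.96):
    """Classical Bland-Altman 95% limits of agreement for paired
    measurements from two methods (computed on the differences y - x)."""
    diffs = [b - a for a, b in zip(x, y)]
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)      # sample SD (n - 1 denominator)
    return mean_d - z * sd_d, mean_d + z * sd_d
```

    Two methods are then judged interchangeable if this interval lies within clinically acceptable bounds, which is the criterion the tolerance-interval approach refines.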

  16. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    NASA Astrophysics Data System (ADS)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.

  17. Assessing the predictive performance of risk-based water quality criteria using decision error estimates from receiver operating characteristics (ROC) analysis.

    PubMed

    McLaughlin, Douglas B

    2012-10-01

    Field data relating aquatic ecosystem responses with water quality constituents that are potential ecosystem stressors are being used increasingly in the United States in the derivation of water quality criteria to protect aquatic life. In light of this trend, there is a need for transparent quantitative methods to assess the performance of models that predict ecological conditions using a stressor-response relationship, a response variable threshold, and a stressor variable criterion. Analysis of receiver operating characteristics (ROC analysis) has a considerable history of successful use in medical diagnostic, industrial, and other fields for similarly structured decision problems, but its use for informing water quality management decisions involving risk-based environmental criteria is less common. In this article, ROC analysis is used to evaluate predictions of ecological response variable status for 3 water quality stressor-response data sets. Information on error rates is emphasized due in part to their common use in environmental studies to describe uncertainty. One data set is comprised of simulated data, and 2 involve field measurements described previously in the literature. These data sets are also analyzed using linear regression and conditional probability analysis for comparison. Results indicate that of the methods studied, ROC analysis provides the most comprehensive characterization of prediction error rates including false positive, false negative, positive predictive, and negative predictive errors. This information may be used along with other data analysis procedures to set quality objectives for and assess the predictive performance of risk-based criteria to support water quality management decisions.
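    The four error rates emphasized above follow directly from the 2 × 2 table induced by a stressor criterion and a response-variable threshold; a minimal sketch (the names and the direction of the impairment threshold are illustrative assumptions):

```python
def decision_error_rates(stressor, response, criterion, impair_threshold):
    """Decision error rates for a risk-based water quality criterion:
    predict impairment when stressor >= criterion, and call a site truly
    impaired when its response variable >= impair_threshold (one possible
    convention; some response variables decline with stress instead)."""
    tp = fp = fn = tn = 0
    for s, r in zip(stressor, response):
        pred, obs = s >= criterion, r >= impair_threshold
        if pred and obs:
            tp += 1
        elif pred:
            fp += 1
        elif obs:
            fn += 1
        else:
            tn += 1
    return {
        "false_positive_rate": fp / (fp + tn),        # unimpaired sites flagged
        "false_negative_rate": fn / (fn + tp),        # impaired sites missed
        "positive_predictive_error": fp / (tp + fp),  # wrong "impaired" calls
        "negative_predictive_error": fn / (tn + fn),  # wrong "unimpaired" calls
    }
```

    Sweeping `criterion` over candidate values and plotting hit rate against false-positive rate yields the ROC curve used to set quality objectives for the criterion.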

  18. Processing of action- but not stimulus-related prediction errors differs between active and observational feedback learning.

    PubMed

    Kobza, Stefan; Bellebaum, Christian

    2015-01-01

    Learning of stimulus-response-outcome associations is driven by outcome prediction errors (PEs). Previous studies have shown larger PE-dependent activity in the striatum for learning from own as compared to observed actions and the following outcomes despite comparable learning rates. We hypothesised that this finding relates primarily to a stronger integration of action and outcome information in active learners. Using functional magnetic resonance imaging, we investigated brain activations related to action-dependent PEs, reflecting the deviation between action values and obtained outcomes, and action-independent PEs, reflecting the deviation between subjective values of response-preceding cues and obtained outcomes. To this end, 16 active and 15 observational learners engaged in a probabilistic learning card-guessing paradigm. On each trial, active learners saw one out of five cues and pressed either a left or right response button to receive feedback (monetary win or loss). Each observational learner observed exactly those cues, responses and outcomes of one active learner. Learning performance was assessed in active test trials without feedback and did not differ between groups. For both types of PEs, activations were found in the globus pallidus, putamen, cerebellum, and insula in active learners. However, only for action-dependent PEs, activations in these structures and the anterior cingulate were increased in active relative to observational learners. Thus, PE-related activity in the reward system is not generally enhanced in active relative to observational learning but only for action-dependent PEs. For the cerebellum, additional activations were found across groups for cue-related uncertainty, thereby emphasising the cerebellum's role in stimulus-outcome learning.

  19. Trial-by-Trial Modulation of Associative Memory Formation by Reward Prediction Error and Reward Anticipation as Revealed by a Biologically Plausible Computational Model.

    PubMed

    Aberg, Kristoffer C; Müller, Julia; Schwartz, Sophie

    2017-01-01

    Anticipation and delivery of rewards improves memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward-delivery and reward-anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h. 
In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of

  20. Trial-by-Trial Modulation of Associative Memory Formation by Reward Prediction Error and Reward Anticipation as Revealed by a Biologically Plausible Computational Model

    PubMed Central

    Aberg, Kristoffer C.; Müller, Julia; Schwartz, Sophie

    2017-01-01

    Anticipation and delivery of rewards improve memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward-delivery and reward-anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h. In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of
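
    The trial-by-trial expected values and prediction errors described above can be generated by a simple delta-rule learner. A minimal Rescorla-Wagner sketch with an illustrative learning rate and reward sequence (the paper's actual model and parameters may differ):

```python
# Delta-rule (Rescorla-Wagner) learner producing per-trial expected value
# (EV, anticipation) and prediction error (PE, delivery) regressors.
# alpha and the reward sequence are illustrative, not the paper's values.

def rescorla_wagner(rewards, alpha=0.3, v0=0.0):
    """Return per-trial expected values and prediction errors."""
    v = v0
    evs, pes = [], []
    for r in rewards:
        evs.append(v)      # EV held before the outcome (anticipation)
        pe = r - v         # signed PE at reward delivery
        pes.append(pe)
        v += alpha * pe    # delta-rule update
    return evs, pes

evs, pes = rescorla_wagner([1, 1, 0, 1])
```

    The magnitudes of `evs` and `pes` are what would enter the trial-by-trial memory analysis.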

  1. How the credit assignment problems in motor control could be solved after the cerebellum predicts increases in error

    PubMed Central

    Verduzco-Flores, Sergio O.; O'Reilly, Randall C.

    2015-01-01

    We present a cerebellar architecture with two main characteristics. The first one is that complex spikes respond to increases in sensory errors. The second one is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations. PMID:25852535
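
    The architecture's two characteristics can be caricatured in a few lines: a "complex spike" fires when the sensory error increases, and the module then associates the current context with a corrective command. This is a toy sketch under our own naming and data types, not the authors' implementation:

```python
# Toy sketch: principle 1 - a complex spike signals an error increase;
# principle 2 - the module stores a corrective command for that context.

def step(context, error, prev_error, memory, correction):
    complex_spike = abs(error) > abs(prev_error)   # error is increasing
    if complex_spike:
        memory[context] = correction               # learn the correction
    return memory.get(context)                     # recalled command, if any

memory = {}
step("ctx_a", error=0.5, prev_error=0.2, memory=memory, correction="extend")
```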

  2. Absolute and relative blindsight.

    PubMed

    Balsdon, Tarryn; Azzopardi, Paul

    2015-03-01

    The concept of relative blindsight, referring to a difference in conscious awareness between conditions otherwise matched for performance, was introduced by Lau and Passingham (2006) as a way of identifying the neural correlates of consciousness (NCC) in fMRI experiments. By analogy, absolute blindsight refers to a difference between performance and awareness regardless of whether it is possible to match performance across conditions. Here, we address the question of whether relative and absolute blindsight in normal observers can be accounted for by response bias. In our replication of Lau and Passingham's experiment, the relative blindsight effect was abolished when performance was assessed by means of a bias-free 2AFC task or when the criterion for awareness was varied. Furthermore, there was no evidence of either relative or absolute blindsight when both performance and awareness were assessed with bias-free measures derived from confidence ratings using signal detection theory. This suggests that both relative and absolute blindsight in normal observers amount to no more than variations in response bias in the assessment of performance and awareness. Consideration of the properties of psychometric functions reveals a number of ways in which relative and absolute blindsight could arise trivially and elucidates a basis for the distinction between Type 1 and Type 2 blindsight.
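
    The bias-free sensitivity measure the abstract contrasts with criterion-dependent awareness reports is the signal-detection d′, computed from hit and false-alarm rates. A stdlib-only sketch (the rates below are illustrative):

```python
# d' = z(hit rate) - z(false-alarm rate): sensitivity independent of the
# response criterion, unlike raw "aware/unaware" report rates.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Equal hit and false-alarm rates mean zero sensitivity, whatever the bias.
```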

  3. Standardized Software for Wind Load Forecast Error Analyses and Predictions Based on Wavelet-ARIMA Models - Applications at Multiple Geographically Distributed Wind Farms

    SciTech Connect

    Hou, Zhangshuan; Makarov, Yuri V.; Samaan, Nader A.; Etingov, Pavel V.

    2013-03-19

    Given the multi-scale variability and uncertainty of wind generation and forecast errors, it is a natural choice to use a time-frequency representation (TFR) as a view of the corresponding time series over both time and frequency. Here we use the wavelet transform (WT) to expand the signal in terms of wavelet functions which are localized in both time and frequency. Each WT component is more stationary and has a consistent auto-correlation pattern. We combined wavelet analyses with time series forecast approaches such as ARIMA, and tested the approach at three wind farms located far away from each other. The prediction capability is satisfactory -- the day-ahead predictions of errors match the original error values very well, including the patterns. The observations are well located within the predictive intervals. Integrating our wavelet-ARIMA (‘stochastic’) model with the weather forecast model (‘deterministic’) will significantly improve our ability to predict wind power generation and reduce predictive uncertainty.
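
    The decompose-then-forecast idea can be sketched in pure Python: a one-level Haar transform splits the error series into a smooth and a detail component, and each (more stationary) component gets its own forecast. An AR(1) fit stands in here for the paper's ARIMA models, purely to keep the example self-contained:

```python
# One-level Haar wavelet split + per-component AR(1) forecast.
# AR(1) is a stand-in for ARIMA; the data are made up.
import math

def haar_level1(x):
    s = [(a + b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]  # smooth
    d = [(a - b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]  # detail
    return s, d

def ar1_forecast(x):
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1]) or 1.0
    phi = num / den            # least-squares AR(1) coefficient
    return phi * x[-1]         # one-step-ahead forecast

errors = [0.2, -0.1, 0.4, 0.0, 0.3, -0.2, 0.5, 0.1]
smooth, detail = haar_level1(errors)
next_smooth = ar1_forecast(smooth)
```

    In the full method each wavelet level is forecast separately and the forecasts are recombined by the inverse transform.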

  4. Predicting sex offender recidivism. I. Correcting for item overselection and accuracy overestimation in scale development. II. Sampling error-induced attenuation of predictive validity over base rate information.

    PubMed

    Vrieze, Scott I; Grove, William M

    2008-06-01

    The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using data (N = 256) of Epperson et al. [2003 Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. Validity (area under receiver operating characteristic curve) reported by Epperson et al. was .77 with 16 items selected. The present analysis yielded an asymptotically unbiased estimator AUC = .58. The present article also focused on the degree to which sampling error renders estimated cutting scores (appropriate to local [varying] recidivism base rates) nonoptimal, so that the long-run performance (measured by correct fraction, the total proportion of correct classifications) of these estimated cutting scores is poor, when they are applied to their parent populations (having assumed values for AUC and recidivism rate). This was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except for AUC values higher than have ever been cross-validated, in combination with recidivism base rates severalfold higher than the literature average [Hanson and Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis. (User report 2004-02.). Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument similar in performance to the MnSOST-R cannot expect to achieve correct fraction performance notably in excess of what is achievable from knowing the population recidivism rate alone.
The authors discuss the legal implications of their findings for procedural and substantive due process in
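
    The bias-correction idea is that an AUC computed on the same sample that selected the items is optimistic (here, .77 vs. the unbiased .58); scoring only out-of-bag cases of each bootstrap replicate removes much of that optimism. A sketch of the out-of-bag evaluation loop only, with made-up data; the full procedure would also re-run item selection inside each replicate:

```python
# Rank-based AUC plus a bootstrap out-of-bag evaluation loop (sketch).
import random

def auc(scores, labels):
    """Probability that a positive case outscores a negative one."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def oob_auc(scores, labels, n_boot=200, seed=0):
    """Average AUC over out-of-bag cases of each bootstrap replicate."""
    rng = random.Random(seed)
    n, aucs = len(scores), []
    for _ in range(n_boot):
        in_bag = {rng.randrange(n) for _ in range(n)}
        oob = [i for i in range(n) if i not in in_bag]
        ys = [labels[i] for i in oob]
        if any(ys) and not all(ys):   # need both classes out of bag
            aucs.append(auc([scores[i] for i in oob], ys))
    return sum(aucs) / len(aucs)

est = oob_auc([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.6, 0.55, 0.3, 0.9],
              [0, 0, 1, 1, 1, 0, 1, 0, 0, 1])
```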

  5. Absolute calibration of optical tweezers

    SciTech Connect

    Viana, N.B.; Mazolli, A.; Maia Neto, P.A.; Nussenzveig, H.M.; Rocha, M.S.; Mesquita, O.N.

    2006-03-27

    As a step toward absolute calibration of optical tweezers, a first-principles theory of trapping forces with no adjustable parameters, corrected for spherical aberration, is experimentally tested. Employing two very different setups, we find generally very good agreement for the transverse trap stiffness as a function of microsphere radius for a broad range of radii, including the values employed in practice, and at different sample chamber depths. The domain of validity of the WKB ('geometrical optics') approximation to the theory is verified. Theoretical predictions for the trapping threshold, peak position, depth variation, multiple equilibria, and 'jump' effects are also confirmed.

  6. Phasic dopamine as a prediction error of intrinsic and extrinsic reinforcements driving both action acquisition and reward maximization: a simulated robotic study.

    PubMed

    Mirolli, Marco; Santucci, Vieri G; Baldassarre, Gianluca

    2013-03-01

    An important issue of recent neuroscientific research is to understand the functional role of the phasic release of dopamine in the striatum, and in particular its relation to reinforcement learning. The literature is split between two alternative hypotheses: one considers phasic dopamine as a reward prediction error similar to the computational TD-error, whose function is to guide an animal to maximize future rewards; the other holds that phasic dopamine is a sensory prediction error signal that lets the animal discover and acquire novel actions. In this paper we propose an original hypothesis that integrates these two contrasting positions: according to our view phasic dopamine represents a TD-like reinforcement prediction error learning signal determined by both unexpected changes in the environment (temporary, intrinsic reinforcements) and biological rewards (permanent, extrinsic reinforcements). Accordingly, dopamine plays the functional role of driving both the discovery and acquisition of novel actions and the maximization of future rewards. To validate our hypothesis we perform a series of experiments with a simulated robotic system that has to learn different skills in order to get rewards. We compare different versions of the system in which we vary the composition of the learning signal. The results show that only the system reinforced by both extrinsic and intrinsic reinforcements is able to reach high performance in sufficiently complex conditions.
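
    The integrated hypothesis reduces to a TD error driven by the sum of extrinsic reward and an intrinsic reward for unexpected sensory change. A minimal sketch with our own names and constants (the paper's robotic system is far richer):

```python
# TD-like reinforcement prediction error with a combined reward signal:
# extrinsic (permanent, biological) plus intrinsic (temporary, surprise-
# driven, decaying as the event becomes predicted).

def td_error(v_s, v_next, extrinsic, intrinsic, gamma=0.9):
    r = extrinsic + intrinsic          # combined reinforcement
    return r + gamma * v_next - v_s    # standard TD error

# An unexpected event with no biological reward still teaches:
surprise_only = td_error(0.0, 0.0, extrinsic=0.0, intrinsic=1.0)
```

    Once the event is fully predicted (`intrinsic` decays to 0), the signal is the ordinary reward-maximizing TD error.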

  7. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning

    PubMed Central

    Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated the EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) becomes available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor

  8. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    SciTech Connect

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  9. Temporal Uncertainty and Temporal Estimation Errors Affect Insular Activity and the Frontostriatal Indirect Pathway during Action Update: A Predictive Coding Study

    PubMed Central

    Limongi, Roberto; Pérez, Francisco J.; Modroño, Cristián; González-Mora, José L.

    2016-01-01

    Action update, substituting a prepotent behavior with a new action, allows the organism to counteract surprising environmental demands. However, action update fails when the organism is uncertain about when to release the substituting behavior, when it faces temporal uncertainty. Predictive coding states that accurate perception demands minimization of precise prediction errors. Activity of the right anterior insula (rAI) is associated with temporal uncertainty. Therefore, we hypothesize that temporal uncertainty during action update would cause the AI to decrease the sensitivity to ascending prediction errors. Moreover, action update requires response inhibition which recruits the frontostriatal indirect pathway associated with motor control. Therefore, we also hypothesize that temporal estimation errors modulate frontostriatal connections. To test these hypotheses, we collected fMRI data when participants performed an action-update paradigm within the context of temporal estimation. We fit dynamic causal models to the imaging data. Competing models comprised the inferior occipital gyrus (IOG), right supramarginal gyrus (rSMG), rAI, right presupplementary motor area (rPreSMA), and the right striatum (rSTR). The winning model showed that temporal uncertainty drove activity into the rAI and decreased insular sensitivity to ascending prediction errors, as shown by weak connectivity strength of rSMG→rAI connections. Moreover, temporal estimation errors weakened rPreSMA→rSTR connections and also modulated rAI→rSTR connections, causing the disruption of action update. Results provide information about the neurophysiological implementation of the so-called horse-race model of action control. We suggest that, contrary to what might be believed, unsuccessful action update could be a homeostatic process that represents a Bayes optimal encoding of uncertainty. PMID:27445737

  10. Temporal Uncertainty and Temporal Estimation Errors Affect Insular Activity and the Frontostriatal Indirect Pathway during Action Update: A Predictive Coding Study.

    PubMed

    Limongi, Roberto; Pérez, Francisco J; Modroño, Cristián; González-Mora, José L

    2016-01-01

    Action update, substituting a prepotent behavior with a new action, allows the organism to counteract surprising environmental demands. However, action update fails when the organism is uncertain about when to release the substituting behavior, when it faces temporal uncertainty. Predictive coding states that accurate perception demands minimization of precise prediction errors. Activity of the right anterior insula (rAI) is associated with temporal uncertainty. Therefore, we hypothesize that temporal uncertainty during action update would cause the AI to decrease the sensitivity to ascending prediction errors. Moreover, action update requires response inhibition which recruits the frontostriatal indirect pathway associated with motor control. Therefore, we also hypothesize that temporal estimation errors modulate frontostriatal connections. To test these hypotheses, we collected fMRI data when participants performed an action-update paradigm within the context of temporal estimation. We fit dynamic causal models to the imaging data. Competing models comprised the inferior occipital gyrus (IOG), right supramarginal gyrus (rSMG), rAI, right presupplementary motor area (rPreSMA), and the right striatum (rSTR). The winning model showed that temporal uncertainty drove activity into the rAI and decreased insular sensitivity to ascending prediction errors, as shown by weak connectivity strength of rSMG→rAI connections. Moreover, temporal estimation errors weakened rPreSMA→rSTR connections and also modulated rAI→rSTR connections, causing the disruption of action update. Results provide information about the neurophysiological implementation of the so-called horse-race model of action control. We suggest that, contrary to what might be believed, unsuccessful action update could be a homeostatic process that represents a Bayes optimal encoding of uncertainty.

  11. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  12. Comparisons of prediction abilities of augmented classical least squares and partial least squares with realistic simulated data : effects of uncorrelated and correlated errors with nonlinearities.

    SciTech Connect

    Haaland, David Michael; Melgaard, David Kennett

    2003-06-01

    A manuscript describing this work summarized below has been submitted to Applied Spectroscopy. Comparisons of prediction models from the new ACLS and PLS multivariate spectral analysis methods were conducted using simulated data with deviations from the idealized model. Simulated uncorrelated concentration errors, and uncorrelated and correlated spectral noise were included to evaluate the methods on situations representative of experimental data. The simulations were based on pure spectral components derived from real near-infrared spectra of multicomponent dilute aqueous solutions containing glucose, urea, ethanol, and NaCl in the concentration range from 0-500 mg/dL. The statistical significance of differences was evaluated using the Wilcoxon signed rank test. The prediction abilities with nonlinearities present were similar for both calibration methods although concentration noise, number of samples, and spectral noise distribution sometimes affected one method more than the other. In the case of ideal errors and in the presence of nonlinear spectral responses, the differences between the standard error of predictions of the two methods were sometimes statistically significant, but the differences were always small in magnitude. Importantly, SRACLS was found to be competitive with PLS when component concentrations were only known for a single component. Thus, SRACLS has a distinct advantage over standard CLS methods that require that all spectral components be included in the model. In contrast to simulations with ideal error, SRACLS often generated models with superior prediction performance relative to PLS when the simulations were more realistic and included either non-uniform errors and/or correlated errors. Since the generalized ACLS algorithm is compatible with the PACLS method that allows rapid updating of models during prediction, the powerful combination of PACLS with ACLS is very promising for rapidly maintaining and transferring models for system

  13. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    PubMed

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this

  14. Prediction and error growth in the daily forecast of precipitation from the NCEP CFSv2 over the subdivisions of Indian subcontinent

    NASA Astrophysics Data System (ADS)

    Pandey, Dhruva Kumar; Rai, Shailendra; Sahai, A. K.; Abhilash, S.; Shahi, N. K.

    2016-02-01

    This study investigates the forecast skill and predictability of various indices of the south Asian monsoon, as well as of subdivisions of the Indian subcontinent, during the JJAS season for the period 2001-2013 using NCEP CFSv2 output. The daily mean climatology of precipitation over the land points of India is underestimated in the model forecast compared to observation. The monthly model bias of precipitation shows a dry bias over the land points of India and over the Bay of Bengal, whereas the Himalayan and Arabian Sea regions show a wet bias. We divided the Indian landmass into five subdivisions, namely central India, southern India, the Western Ghats, the northeast, and the southern Bay of Bengal, based on the spatial variation of observed mean precipitation in the JJAS season. The underestimation over the land points of India during the mature phase originated from the central India, southern Bay of Bengal, southern India and Western Ghats regions. The error growth in the June forecast is slower than in the July forecast in all regions. The predictability error also grows more slowly in the June forecast than in the July forecast in most regions. The doubling time of the predictability error was estimated to be in the range of 3-5 days for all regions. Southern India and the Western Ghats are more predictable in the July forecast than in the June forecast, whereas the IMR, northeast, central India and southern Bay of Bengal regions show the opposite behavior.
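
    A doubling-time estimate of this kind follows from assuming early exponential error growth, E(t) ≈ E0·exp(λt), so that t_double = ln 2 / λ, with λ obtained from the slope of log-error against forecast day. A sketch with synthetic data (not the paper's):

```python
# Estimate the error doubling time from a least-squares slope of
# log(error) vs. forecast day; the series below doubles every 4 days.
import math

def doubling_time(days, errors):
    logs = [math.log(e) for e in errors]
    n = len(days)
    mx, my = sum(days) / n, sum(logs) / n
    lam = (sum((d - mx) * (l - my) for d, l in zip(days, logs))
           / sum((d - mx) ** 2 for d in days))
    return math.log(2) / lam

days = [1, 2, 3, 4]
errs = [2 ** (d / 4) for d in days]   # synthetic: doubles every 4 days
```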

  15. Effects of nonlinearities and uncorrelated or correlated errors in realistic simulated data on the prediction abilities of augmented classical least squares and partial least squares.

    PubMed

    Melgaard, David K; Haaland, David M

    2004-09-01

    Comparisons of prediction models from the new augmented classical least squares (ACLS) and partial least squares (PLS) multivariate spectral analysis methods were conducted using simulated data containing deviations from the idealized model. The simulated data were based on pure spectral components derived from real near-infrared spectra of multicomponent dilute aqueous solutions. Simulated uncorrelated concentration errors, uncorrelated and correlated spectral noise, and nonlinear spectral responses were included to evaluate the methods on situations representative of experimental data. The statistical significance of differences in prediction ability was evaluated using the Wilcoxon signed rank test. The prediction differences were found to be dependent on the type of noise added, the numbers of calibration samples, and the component being predicted. For analyses applied to simulated spectra with noise-free nonlinear response, PLS was shown to be statistically superior to ACLS for most of the cases. With added uncorrelated spectral noise, both methods performed comparably. Using 50 calibration samples with simulated correlated spectral noise, PLS showed an advantage in 3 out of 9 cases, but the advantage dropped to 1 out of 9 cases with 25 calibration samples. For cases with different noise distributions between calibration and validation, ACLS predictions were statistically better than PLS for two of the four components. Also, when experimentally derived correlated spectral error was added, ACLS gave better predictions that were statistically significant in 15 out of 24 cases simulated. On data sets with nonuniform noise, neither method was statistically better, although ACLS usually had smaller standard errors of prediction (SEPs). The varying results emphasize the need to use realistic simulations when making comparisons between various multivariate calibration methods. Even when the differences between the standard error of predictions were statistically

  16. Evaluating the performance of the LPC (Linear Predictive Coding) 2.4 kbps (kilobits per second) processor with bit errors using a sentence verification task

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, Astrid; Kallman, Howard J.

    1987-11-01

    The comprehension of narrowband digital speech with bit errors was tested by using a sentence verification task. The use of predicates that were either strongly or weakly related to the subjects (e.g., A toad has warts./ A toad has eyes.) varied the difficulty of the verification task. The test conditions included unprocessed and processed speech using a 2.4 kb/s (kilobits per second) linear predictive coding (LPC) voice processing algorithm with random bit error rates of 0 percent, 2 percent, and 5 percent. In general, response accuracy decreased and reaction time increased with LPC processing and with increasing bit error rates. Weakly related true sentences and strongly related false sentences were more difficult than their counterparts. Interactions between sentence type and speech processing conditions are discussed.

  17. Errors in Representing Regional Acid Deposition with Spatially Sparse Monitoring: Case Studies of the Eastern US Using Model Predictions

    EPA Science Inventory

    The current study uses case studies of model-estimated regional precipitation and wet ion deposition to estimate errors in corresponding regional values derived from the means of site-specific values within regions of interest located in the eastern US. The mean of model-estimate...

  18. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    PubMed

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.
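
    The second account, accumulation of absolute values subject to lateral inhibition, can be sketched deterministically (noise omitted; parameters are illustrative, not the authors'). Raising both inputs while keeping their difference fixed shortens the decision time, i.e. sensitivity to task-irrelevant absolute values:

```python
# Two accumulators integrate the raw (absolute) inputs and inhibit each
# other; the decision is made when either crosses a threshold.

def decision_time(i1, i2, beta=0.4, threshold=5.0, dt=0.1, max_steps=10_000):
    x1 = x2 = 0.0
    for step in range(1, max_steps + 1):
        x1, x2 = (x1 + dt * (i1 - beta * x2),   # mutual (lateral) inhibition
                  x2 + dt * (i2 - beta * x1))
        if max(x1, x2) >= threshold:
            return step
    return None
```

    A pure difference accumulator would give identical times for (1.0, 0.8) and (2.0, 1.8); here the brighter pair decides faster.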

  19. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visuo-motor control to compensate for the delay of the sensory-motor system. In a previous study, “proactive control” was discussed as one example of a human predictive function, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible and target-invisible regions. The main results were the following. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened by more than 10%. This shortening of the period accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  20. Absolute and Convective Instability of a Liquid Jet

    NASA Technical Reports Server (NTRS)

    Lin, S. P.; Hudman, M.; Chen, J. N.

    1999-01-01

    The existence of absolute instability in a liquid jet has been predicted for some time. The disturbance grows in time and propagates both upstream and downstream in an absolutely unstable liquid jet. The image of absolute instability is captured in the NASA 2.2 sec drop tower and reported here. The transition from convective to absolute instability is observed experimentally. The experimental results are compared with the theoretical predictions on the transition Weber number as functions of the Reynolds number. The role of interfacial shear relative to all other relevant forces which cause the onset of jet breakup is explained.

  1. I know what is missing here: electrophysiological prediction error signals elicited by omissions of predicted "what" but not "when"

    PubMed Central

    SanMiguel, Iria; Saupe, Katja; Schröger, Erich

    2013-01-01

    In the present study we investigated the neural code of sensory predictions. Grounded on a variety of empirical findings, we set out from the proposal that sensory predictions are coded via the top-down modulation of the sensory units whose response properties match the specific characteristics of the predicted stimulus (Albright, 2012; Arnal and Giraud, 2012). From this proposal, we derive the hypothesis that when the specific physical characteristics of the predicted stimulus cannot be advanced, the sensory system should not be able to formulate such predictions, as it would lack the means to represent them. In different conditions, participants' self-paced button presses predicted either only the precise time when a random sound would be presented (random sound condition) or both the timing and the identity of the sound (single sound condition). To isolate prediction-related activity, we inspected the event-related potential (ERP) elicited by rare omissions of the sounds following the button press (see SanMiguel et al., 2013). As expected, in the single sound condition, omissions elicited a complex response in the ERP, reflecting the presence of sound prediction and the violation of this prediction. In contrast, in the random sound condition, sound omissions were not followed by any significant responses in the ERP. These results confirmed our hypothesis and provide support for current proposals advocating that sensory systems rely on the top-down modulation of stimulus-specific sensory representations as the neural code for prediction. In light of these findings, we discuss the significance of the omission ERP as an electrophysiological marker of predictive processing and address the paradox that no indicators of violations of temporal prediction alone were found in the present paradigm. PMID:23908618

  2. Absolute Plate Velocities from Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Kreemer, Corné; Zheng, Lin; Gordon, Richard

    2015-04-01

    The orientation of seismic anisotropy inferred beneath plate interiors may provide a means to estimate the motions of the plate relative to the sub-asthenospheric mantle. Here we analyze two global sets of shear-wave splitting data, that of Kreemer [2009] and an updated and expanded data set, to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. We also explore the effect of using geologically current plate velocities (i.e., the MORVEL set of angular velocities [DeMets et al. 2010]) compared with geodetically current plate velocities (i.e., the GSRM v1.2 angular velocities [Kreemer et al. 2014]). We demonstrate that the errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. The SKS-MORVEL absolute plate angular velocities (based on the Kreemer [2009] data set) are determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2° ) differs insignificantly from that for continental lithosphere (σ=21.6° ). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4° ) than for continental

  3. Electronic Absolute Cartesian Autocollimator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2006-01-01

    An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. The direction of the displacement is perpendicular to the axis about which the

  4. Absolute airborne gravimetry

    NASA Astrophysics Data System (ADS)

    Baumann, Henri

    This work consists of a feasibility study of a first-stage prototype airborne absolute gravimeter system. In contrast to relative systems, which use spring gravimeters, the measurements acquired by absolute systems are uncorrelated, and the instrument does not suffer from problems such as instrumental drift, the frequency response of the spring, and possible variation of the calibration factor. The major problem we had to resolve was reducing the influence of the non-gravitational accelerations included in the measurements. We studied two different approaches: direct mechanical filtering, and post-processing digital compensation. The first part of the work describes in detail the different passive mechanical vibration filters, which were studied and tested in the laboratory and later in a small truck in motion. For these tests, as well as for the airborne measurements, an FG5-L absolute gravimeter from Micro-G Ltd was used together with a Litton-200 inertial navigation system, an EpiSensor vertical accelerometer, and GPS receivers for positioning. These tests showed that only the use of an optical table gives acceptable results; however, it is unable to compensate for the effects of the accelerations of the drag-free chamber. The second part describes the strategy of the data processing, which is based on modeling the perturbing accelerations by means of GPS, EpiSensor and INS data. In the third part, the airborne experiment is described in detail, from the mounting in the aircraft and the data processing to the different problems encountered during the evaluation of the quality and accuracy of the results. The different steps conducted from the raw apparent gravity data and the trajectories to the estimation of the true gravity are explained. 
A comparison between the estimated airborne data and those obtained by ground upward continuation at flight altitude allows us to state that airborne absolute gravimetry is feasible and

  5. Study of Uncertainties of Predicting Space Shuttle Thermal Environment. [impact of heating rate prediction errors on weight of thermal protection system

    NASA Technical Reports Server (NTRS)

    Fehrman, A. L.; Masek, R. V.

    1972-01-01

    Quantitative estimates of the uncertainty in predicting aerodynamic heating rates for a fully reusable space shuttle system are developed and the impact of these uncertainties on Thermal Protection System (TPS) weight are discussed. The study approach consisted of statistical evaluations of the scatter of heating data on shuttle configurations about state-of-the-art heating prediction methods to define the uncertainty in these heating predictions. The uncertainties were then applied as heating rate increments to the nominal predicted heating rate to define the uncertainty in TPS weight. Separate evaluations were made for the booster and orbiter, for trajectories which included boost through reentry and touchdown. For purposes of analysis, the vehicle configuration is divided into areas in which a given prediction method is expected to apply, and separate uncertainty factors and corresponding uncertainty in TPS weight derived for each area.

  6. Prospects for the Moon as an SI-Traceable Absolute Spectroradiometric Standard for Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Cramer, C. E.; Stone, T. C.; Lykke, K.; Woodward, J. T.

    2015-12-01

    The Earth's Moon has many physical properties that make it suitable for use as a reference light source for radiometric calibration of remote sensing satellite instruments. Lunar calibration has been successfully applied to many imagers in orbit, including both MODIS instruments and NPP-VIIRS, using the USGS ROLO model to predict the reference exoatmospheric lunar irradiance. Sensor response trending was developed for SeaWIFS with a relative accuracy better than 0.1 % per year with lunar calibration techniques. However, the Moon rarely is used as an absolute reference for on-orbit calibration, primarily due to uncertainties in the ROLO model absolute scale of 5%-10%. But this limitation lies only with the models - the Moon itself is radiometrically stable, and development of a high-accuracy absolute lunar reference is inherently feasible. A program has been undertaken by NIST to collect absolute measurements of the lunar spectral irradiance with absolute accuracy <1 % (k=2), traceable to SI radiometric units. Initial Moon observations were acquired from the Whipple Observatory on Mt. Hopkins, Arizona, elevation 2367 meters, with continuous spectral coverage from 380 nm to 1040 nm at ~3 nm resolution. The lunar spectrometer acquired calibration measurements several times each observing night by pointing to a calibrated integrating sphere source. The lunar spectral irradiance at the top of the atmosphere was derived from a time series of ground-based measurements by a Langley analysis that incorporated measured atmospheric conditions and ROLO model predictions for the change in irradiance resulting from the changing Sun-Moon-Observer geometry throughout each night. Two nights were selected for further study. An extensive error analysis, which includes instrument calibration and atmospheric correction terms, shows a combined standard uncertainty under 1 % over most of the spectral range. 
Comparison of these two nights' spectral irradiance measurements with predictions

  7. Determining what caused the error in the prediction of the December 1st, 2013 snow storm using the Weather Research and Forecasting Model

    NASA Astrophysics Data System (ADS)

    Prajapati, Nikunjkumar; Trout, Joseph

    2014-03-01

    The severity of snow events in the northeast United States depends on the position of the pressure systems and the fronts. Although numerical models have improved greatly as computer power has increased, forecasts of the pressure systems and fronts can occasionally have large errors. One example is the snow storm that passed over the northeast coast during the week of December 1, 2013, and proved to be much more severe than predicted. In this research, the Weather Research and Forecasting Model (WRF) is used to model the December 1, 2013 storm, and multiple simulations using nested, high-resolution grids are compared.

  8. Development and application of an empirical probability distribution for the prediction error of re-entry body maximum dynamic pressure

    NASA Technical Reports Server (NTRS)

    Lanzi, R. James; Vincent, Brett T.

    1993-01-01

    The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of these data, with observed sounding rocket re-entry body damage characteristics, to assess the probabilities of sustaining various levels of heating damage. The results bridge the gap in sounding rocket re-entry analysis between the known damage level/flight environment relationships and the predicted flight environment.

  9. Absolute Equilibrium Entropy

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1997-01-01

    The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium, while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.

  10. Absolute multilateration between spheres

    NASA Astrophysics Data System (ADS)

    Muelaner, Jody; Wadsworth, William; Azini, Maria; Mullineux, Glen; Hughes, Ben; Reichold, Armin

    2017-04-01

    Environmental effects typically limit the accuracy of large scale coordinate measurements in applications such as aircraft production and particle accelerator alignment. This paper presents an initial design for a novel measurement technique with analysis and simulation showing that it could overcome the environmental limitations to provide a step change in large scale coordinate measurement accuracy. Referred to as absolute multilateration between spheres (AMS), it involves using absolute distance interferometry to directly measure the distances between pairs of plain steel spheres. A large portion of each sphere remains accessible as a reference datum, while the laser path can be shielded from environmental disturbances. As a single scale bar this can provide accurate scale information to be used for instrument verification or network measurement scaling. Since spheres can be simultaneously measured from multiple directions, it also allows highly accurate multilateration-based coordinate measurements to act as a large scale datum structure for localized measurements, or to be integrated within assembly tooling, coordinate measurement machines or robotic machinery. Analysis and simulation show that AMS can be self-aligned to achieve a theoretical combined standard uncertainty for the independent uncertainties of an individual 1 m scale bar of approximately 0.49 µm. It is also shown that combined with a 1 µm m‑1 standard uncertainty in the central reference system this could result in coordinate standard uncertainty magnitudes of 42 µm over a slender 1 m by 20 m network. This would be a sufficient step change in accuracy to enable next generation aerospace structures with natural laminar flow and part-to-part interchangeability.

  11. Verbal Paradata and Survey Error: Respondent Speech, Voice, and Question-Answering Behavior Can Predict Income Item Nonresponse

    ERIC Educational Resources Information Center

    Jans, Matthew E.

    2010-01-01

    Income nonresponse is a significant problem in survey data, with rates as high as 50%, yet we know little about why it occurs. It is plausible that the way respondents answer survey questions (e.g., their voice and speech characteristics, and their question- answering behavior) can predict whether they will provide income data, and will reflect…

  12. Predictions of the Reliability Coefficients and Standard Errors of Measurement Using the Test Information Function and Its Modifications.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    Because the test information function and its two modified formulas provide useful information, the reliability coefficient of a test is no longer necessary in modern mental test theory. Yet it is interesting to know how to predict the coefficient using the test information function and its modifications, tailored for each separate population of…

  13. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    SciTech Connect

    Morley, Steven Karl

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
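    The two recommended metrics can be written compactly. The following sketch assumes the standard definitions based on the log accuracy ratio Q = ln(predicted/observed), with the median symmetric accuracy expressed as a percentage; the function names are illustrative.

```python
import numpy as np

def median_log_accuracy_ratio(pred, obs):
    """Median of ln(pred/obs): a bias measure.  Zero means unbiased;
    over- and under-prediction by the same factor cancel symmetrically,
    unlike percentage errors such as MAPE."""
    q = np.log(np.asarray(pred, float) / np.asarray(obs, float))
    return float(np.median(q))

def median_symmetric_accuracy(pred, obs):
    """100 * (exp(median(|ln(pred/obs)|)) - 1): an accuracy measure that
    penalizes a factor-of-x over-prediction and under-prediction equally,
    and is robust to outliers via the median."""
    q = np.abs(np.log(np.asarray(pred, float) / np.asarray(obs, float)))
    return float(100.0 * (np.exp(np.median(q)) - 1.0))
```

    For example, a model that consistently over-predicts by a factor of two has a median log accuracy ratio of ln 2 ≈ 0.693 and a median symmetric accuracy of 100%.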

  14. Absolute measurement of length with nanometric resolution

    NASA Astrophysics Data System (ADS)

    Apostol, D.; Garoi, F.; Timcu, A.; Damian, V.; Logofatu, P. C.; Nascov, V.

    2005-08-01

    Laser interferometer displacement measuring transducers have a well-defined traceability route to the definition of the meter, and the laser interferometer is the de facto length scale for applications in micro- and nanotechnologies. However, its physical unit, half a wavelength, is too large for nanometric resolution. Fringe interpolation, the usual technique for improving resolution, lacks reproducibility; this can be avoided by using the principles of absolute distance measurement. Absolute distance refers to the use of interferometric techniques for determining the position of an object without the necessity of measuring continuous displacements between points. The interference pattern produced by two point-like coherent sources is fitted to a geometric model so as to determine the longitudinal location of the target by minimizing least-squares error. The longitudinal coordinate of the target was measured with an accuracy better than 1 nm, over a target position range of 0.4 μm.

  15. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  16. Constraint on Absolute Accuracy of Metacomprehension Assessments: The Anchoring and Adjustment Model vs. the Standards Model

    ERIC Educational Resources Information Center

    Kwon, Heekyung

    2011-01-01

    The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…

  17. An absolute radius scale for Saturn's rings

    NASA Technical Reports Server (NTRS)

    Nicholson, Philip D.; Cooke, Maren L.; Pelton, Emily

    1990-01-01

    Radio and stellar occultation observations of Saturn's rings made by the Voyager spacecraft are discussed. The data reveal systematic discrepancies of almost 10 km in some parts of the rings, limiting some of the investigations. A revised solution for Saturn's rotation pole has been proposed which removes the discrepancies between the stellar and radio occultation profiles. Corrections to previously published radii vary from -2 to -10 km for the radio occultation, and +5 to -6 km for the stellar occultation. An examination of spiral density waves in the outer A Ring indicates that the revised absolute radii are in error by no more than 2 km.

  18. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. ?? 1959 The American Institute of Physics.

  19. Estimating Absolute Site Effects

    SciTech Connect

    Malagnini, L; Mayeda, K M; Akinci, A; Bragato, P L

    2004-07-15

    The authors use previously determined direct-wave attenuation functions as well as stable, coda-derived source excitation spectra to isolate the absolute S-wave site effect for the horizontal and vertical components of weak ground motion. They used selected stations in the seismic network of the eastern Alps, and find the following: (1) all "hard rock" sites exhibited deamplification phenomena due to absorption at frequencies ranging between 0.5 and 12 Hz (the available bandwidth), on both the horizontal and vertical components; (2) "hard rock" site transfer functions showed large variability at high frequency; (3) vertical-motion site transfer functions show strong frequency dependence; (4) H/V spectral ratios do not reproduce the characteristics of the true horizontal site transfer functions; and (5) traditional, relative site terms obtained by using reference "rock sites" can be misleading in inferring the behavior of true site transfer functions, since most rock sites have non-flat responses due to shallow heterogeneities resulting from varying degrees of weathering. They also use their stable source spectra to estimate the total radiated seismic energy and compare against previous results. They find that the earthquakes in this region exhibit non-constant dynamic stress-drop scaling, which gives further support for a fundamental difference in rupture dynamics between small and large earthquakes. To correct the vertical and horizontal S-wave spectra for attenuation, they used detailed regional attenuation functions derived by Malagnini et al. (2002), who determined frequency-dependent geometrical spreading and Q for the region. These corrections account for the gross path effects (i.e., all distance-dependent effects), although the source and site effects are still present in the distance-corrected spectra. 
The main goal of this study is to isolate the absolute site effect (as a function of frequency) by removing the source spectrum (moment-rate spectrum) from

  20. Hemispheric Asymmetries in Striatal Reward Responses Relate to Approach-Avoidance Learning and Encoding of Positive-Negative Prediction Errors in Dopaminergic Midbrain Regions.

    PubMed

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2015-10-28

    Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach-avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world.
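    The reinforcement-learning mechanism described above (value increased by positive prediction errors, decreased by negative ones) is the standard Rescorla-Wagner update. The sketch below adds separate learning rates for positive and negative prediction errors, a common way to model approach-avoidance biases; it is an illustration of that general technique, not the paper's fitted model, and the rate values shown are hypothetical.

```python
def update_value(value, reward, alpha_pos=0.3, alpha_neg=0.3):
    """One Rescorla-Wagner-style value update with asymmetric learning
    rates for positive vs negative prediction errors (PEs).

    pe > 0: outcome better than predicted -> value increases.
    pe < 0: outcome worse than predicted -> value decreases.
    """
    pe = reward - value                       # prediction error
    alpha = alpha_pos if pe > 0 else alpha_neg
    return value + alpha * pe
```

    A learner with alpha_pos > alpha_neg updates more strongly from better-than-expected outcomes, producing the approach-learning bias the abstract links to stronger encoding of positive PEs.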

  1. Using Air Temperature to Quantitatively Predict the MODIS Fractional Snow Cover Retrieval Errors over the Continental US (CONUS)

    NASA Technical Reports Server (NTRS)

    Dong, Jiarui; Ek, Mike; Hall, Dorothy K.; Peters-Lidard, Christa; Cosgrove, Brian; Miller, Jeff; Riggs, George A.; Xia, Youlong

    2013-01-01

    In the middle to high latitude and alpine regions, the seasonal snow pack can dominate the surface energy and water budgets due to its high albedo, low thermal conductivity, high emissivity, considerable spatial and temporal variability, and ability to store and then later release a winter's cumulative snowfall (Cohen, 1994; Hall, 1998). With this in mind, the snow drought across the U.S. has raised questions about impacts on water supply, ski resorts and agriculture. Knowledge of various snow pack properties is crucial for short-term weather forecasts, climate change prediction, and hydrologic forecasting for producing reliable daily to seasonal forecasts. One potential source of this information is the multi-institution North American Land Data Assimilation System (NLDAS) project (Mitchell et al., 2004). Real-time NLDAS products are used for drought monitoring to support the National Integrated Drought Information System (NIDIS) and as initial conditions for a future NCEP drought forecast system. Additionally, efforts are currently underway to assimilate remotely-sensed estimates of land-surface states such as snowpack information into NLDAS. It is believed that this assimilation will not only produce improved snowpack states that better represent evolving snow conditions, but will directly improve the monitoring of drought.

  2. Major Source of Error in QSPR Prediction of Intrinsic Thermodynamic Solubility of Drugs: Solid vs Nonsolid State Contributions?

    PubMed

    Abramov, Yuriy A

    2015-06-01

    The main purpose of this study is to define the major limiting factor in the accuracy of quantitative structure-property relationship (QSPR) models of the thermodynamic intrinsic aqueous solubility of drug-like compounds. To do this, the thermodynamic intrinsic aqueous solubility was indirectly "measured" from the contributions of a solid state property, ΔGfus, and a nonsolid state property, ΔGmix, each estimated by a corresponding QSPR model. The QSPR models of the ΔGfus and ΔGmix properties were built on a set of drug-like compounds with accurate measurements of fusion and thermodynamic solubility properties. For consistency, the ΔGfus and ΔGmix models were developed using similar algorithms and descriptor sets, and validated against similar test compounds. Analysis of the relative performance of these two QSPR models clearly demonstrates that the solid state contribution is the limiting factor in the accuracy and predictive power of QSPR models of thermodynamic intrinsic solubility. This analysis highlights the need to develop new descriptor sets for an accurate description of the long-range order (periodicity) phenomenon in the crystalline state. The proposed approach to analyzing limitations, and the suggestions for improving QSPR-type models, may be generalized to other applications in the pharmaceutical industry.

  3. Possible sources of forecast errors generated by the global/regional assimilation and prediction system for landfalling tropical cyclones. Part I: Initial uncertainties

    NASA Astrophysics Data System (ADS)

    Zhou, Feifan; Yamaguchi, Munehiko; Qin, Xiaohao

    2016-07-01

    This paper investigates the possible sources of errors associated with tropical cyclone (TC) tracks forecasted using the Global/Regional Assimilation and Prediction System (GRAPES). The GRAPES forecasts were made for 16 landfalling TCs in the western North Pacific basin during the 2008 and 2009 seasons, with a forecast length of 72 hours, and using the default initial conditions ("initials", hereafter), which are from the NCEP-FNL dataset, as well as ECMWF initials. The forecasts are compared with ECMWF forecasts. The results show that in most TCs, the GRAPES forecasts are improved when using the ECMWF initials compared with the default initials. Compared with the ECMWF initials, the default initials produce lower intensity TCs and a lower intensity subtropical high, but a higher intensity South Asia high and monsoon trough, as well as a higher temperature but lower specific humidity at the TC center. Replacement of the geopotential height and wind fields with the ECMWF initials in and around the TC center at the initial time was found to be the most efficient way to improve the forecasts. In addition, TCs that showed the greatest improvement in forecast accuracy usually had the largest initial uncertainties in TC intensity and were usually in the intensifying phase. The results demonstrate the importance of the initial intensity for TC track forecasts made using GRAPES, and indicate the model is better in describing the intensifying phase than the decaying phase of TCs. Finally, the limit of the improvement indicates that the model error associated with GRAPES forecasts may be the main cause of poor forecasts of landfalling TCs. Thus, further examinations of the model errors are required.

  4. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE(ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  5. Machine learning in the prediction of cardiac epicardial and mediastinal fat volumes.

    PubMed

    Rodrigues, É O; Pinheiro, V H A; Liatsis, P; Conci, A

    2017-02-24

    We propose a methodology to predict the cardiac epicardial and mediastinal fat volumes in computed tomography images using regression algorithms. The obtained results indicate that it is feasible to predict these fats with a high degree of correlation, thus alleviating the requirement for manual or automatic segmentation of both fat volumes. Instead, segmenting just one of them suffices, while the volume of the other may be predicted fairly precisely. The correlation coefficient obtained by the Rotation Forest algorithm using MLP Regressor for predicting the mediastinal fat based on the epicardial fat was 0.9876, with a relative absolute error of 14.4% and a root relative squared error of 15.7%. The best correlation coefficient obtained in the prediction of the epicardial fat based on the mediastinal was 0.9683, with a relative absolute error of 19.6% and a root relative squared error of 24.9%. Moreover, we analysed the feasibility of using linear regressors, which provide an intuitive interpretation of the underlying approximations. In this case, the obtained correlation coefficient was 0.9534 for predicting the mediastinal fat based on the epicardial, with a relative absolute error of 31.6% and a root relative squared error of 30.1%. On the prediction of the epicardial fat based on the mediastinal fat, the correlation coefficient was 0.8531, with a relative absolute error of 50.43% and a root relative squared error of 52.06%. In summary, this prediction approach makes it possible to speed up general medical analyses and some of the segmentation and quantification methods currently employed in the state-of-the-art, which reduces costs and thereby enables preventive treatments that may lead to a reduction of health problems.
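
    The two figures of merit quoted above have standard definitions in which the model's errors are normalised by those of a predict-the-mean baseline. A minimal sketch (the sample data below are made up for illustration, not taken from the study):

```python
# Relative absolute error (RAE) and root relative squared error (RRSE),
# in their conventional definitions: the model's errors divided by the
# errors of a baseline that always predicts the mean of the actuals.

def relative_absolute_error(actual, predicted):
    mean_a = sum(actual) / len(actual)
    num = sum(abs(p - a) for p, a in zip(predicted, actual))
    den = sum(abs(a - mean_a) for a in actual)
    return num / den

def root_relative_squared_error(actual, predicted):
    mean_a = sum(actual) / len(actual)
    num = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    den = sum((a - mean_a) ** 2 for a in actual)
    return (num / den) ** 0.5

actual = [10.0, 12.0, 14.0, 16.0]      # illustrative values
predicted = [10.5, 11.5, 14.5, 15.5]
print(f"RAE  = {relative_absolute_error(actual, predicted):.1%}")   # 25.0%
print(f"RRSE = {root_relative_squared_error(actual, predicted):.1%}")  # 22.4%
```

    A value below 100 % means the model beats the mean-value baseline, which is why the percentages quoted in the abstract are directly interpretable.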

  6. Isotherms and thermodynamics by linear and non-linear regression analysis for the sorption of methylene blue onto activated carbon: comparison of various error functions.

    PubMed

    Kumar, K Vasanth; Porkodi, K; Rocha, F

    2008-03-01

    Linear and non-linear regression methods for selecting the optimum isotherm were compared using experimental equilibrium data for methylene blue sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to estimate the parameters of the two- and three-parameter isotherms and to identify the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and the predicted isotherms. For the three-parameter isotherms, r² was found to be the best error function for minimizing the error distribution between the experimental equilibrium data and the theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified against the experimental data when selecting the optimum isotherm. A coefficient of non-determination, K², was introduced and found to be very useful in identifying the best error function when selecting the optimum isotherm.
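
    The five non-linear error functions named in this abstract have standard forms in the sorption literature; a sketch of those definitions (n data points, p isotherm parameters; the sample values below are illustrative, not from the study):

```python
import math

# Standard sorption-literature error functions: qe = experimental uptake,
# qc = calculated (isotherm-predicted) uptake, p = number of isotherm
# parameters, n = number of data points.

def errsq(qe, qc):                      # sum of the errors squared
    return sum((c - e) ** 2 for e, c in zip(qe, qc))

def eabs(qe, qc):                       # sum of the absolute errors
    return sum(abs(c - e) for e, c in zip(qe, qc))

def are(qe, qc):                        # average relative error, %
    n = len(qe)
    return 100.0 / n * sum(abs((e - c) / e) for e, c in zip(qe, qc))

def hybrid(qe, qc, p):                  # hybrid fractional error function
    n = len(qe)
    return 100.0 / (n - p) * sum((e - c) ** 2 / e for e, c in zip(qe, qc))

def mpsd(qe, qc, p):                    # Marquardt's percent standard deviation
    n = len(qe)
    return 100.0 * math.sqrt(sum(((e - c) / e) ** 2 for e, c in zip(qe, qc)) / (n - p))

qe = [1.0, 2.0, 4.0]   # illustrative equilibrium data
qc = [1.1, 1.9, 4.2]
print(mpsd(qe, qc, p=2))
```

    Because HYBRID and MPSD divide by the experimental value, they weight the low-uptake points more heavily than ERRSQ does, which is one reason the different functions can favour different isotherms.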

  7. The effect of non-Gaussianity on error predictions for the Epoch of Reionization (EoR) 21-cm power spectrum

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman; Bera, Apurba; Acharyya, Ayan

    2015-04-01

    The Epoch of Reionization (EoR) 21-cm signal is expected to become increasingly non-Gaussian as reionization proceeds. We have used seminumerical simulations to study how this affects the error predictions for the EoR 21-cm power spectrum. We expect SNR = √N_k for a Gaussian random field, where N_k is the number of Fourier modes in each k bin. We find that non-Gaussianity is important at high SNR, where it imposes an upper limit [SNR]_l. For a fixed volume V, it is not possible to achieve SNR > [SNR]_l even if N_k is increased. The value of [SNR]_l falls as reionization proceeds, dropping from ~500 at x̄_HI = 0.8-0.9 to ~10 at x̄_HI = 0.15 for a [150.08 Mpc]³ simulation. We show that it is possible to interpret [SNR]_l in terms of the trispectrum, and we expect [SNR]_l ∝ √V if the volume is increased. For SNR ≪ [SNR]_l we find SNR = √N_k/A with A ~ 0.95-1.75, roughly consistent with the Gaussian prediction. We present a fitting formula for the SNR as a function of N_k, with two parameters A and [SNR]_l that have to be determined using simulations. Our results are relevant for predicting the sensitivity of different instruments to measure the EoR 21-cm power spectrum, which to date has largely been based on the Gaussian assumption.
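
    A fitting formula with the two quoted limits, Gaussian behaviour SNR ≈ √N_k/A at small N_k and saturation at [SNR]_l at large N_k, can be sketched as below. This particular functional form is an illustrative smooth interpolation between the two limits, not necessarily the exact fit presented in the paper:

```python
import math

# Smooth interpolation between the Gaussian limit SNR = sqrt(N_k)/A
# (small N_k) and the non-Gaussian ceiling [SNR]_l (large N_k).

def snr_model(n_k, a, snr_l):
    return snr_l * math.sqrt(n_k) / math.sqrt(a ** 2 * snr_l ** 2 + n_k)
```

    For N_k ≪ A²[SNR]_l² this reduces to √N_k/A, and for N_k → ∞ it saturates at [SNR]_l, reproducing the behaviour described in the abstract.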

  8. Absolute radiometric calibration of advanced remote sensing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1982-01-01

    The distinction between the uses of relative and absolute spectroradiometric calibration of remote sensing systems is discussed. The advantages of detector-based absolute calibration are described, and the categories of relative and absolute system calibrations are listed. The limitations and problems associated with three common methods used for the absolute calibration of remote sensing systems are addressed. Two methods are proposed for the in-flight absolute calibration of advanced multispectral linear array systems. One makes use of a sun-illuminated panel in front of the sensor, the radiance of which is monitored by a spectrally flat pyroelectric radiometer. The other uses a large, uniform, high-radiance reference ground surface. The ground and atmospheric measurements required as input to a radiative transfer program to predict the radiance level at the entrance pupil of the orbital sensor are discussed, and the ground instrumentation is described.

  9. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-square scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e. here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500 m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of significant noise levels are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement errors are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
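
    A minimal GA of the kind described, minimizing the squared pseudo-range residual cost, can be sketched as follows. The receiver clock error is neglected here, the satellite geometry is synthetic, and the selection and annealing details are illustrative choices rather than those of the paper; only the crossover probability and mutation rate follow the quoted ranges:

```python
import math
import random

def cost(x, sats, rhos):
    # non-linear cost: sum of squared pseudo-range residuals
    # (receiver clock error is neglected in this sketch)
    return sum((math.dist(x, s) - r) ** 2 for s, r in zip(sats, rhos))

def genetic_position(sats, rhos, n_pop=200, p_cross=0.65, p_mut=0.35,
                     span=3.0e7, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-span, span) for _ in range(3)] for _ in range(n_pop)]
    sigma = span * 0.05
    for _ in range(generations):
        pop.sort(key=lambda x: cost(x, sats, rhos))
        elite = pop[: n_pop // 4]                    # keep the best quarter
        children = []
        while len(children) < n_pop - len(elite):
            a, b = rng.sample(elite, 2)
            if rng.random() < p_cross:               # uniform crossover
                child = [ai if rng.random() < 0.5 else bi
                         for ai, bi in zip(a, b)]
            else:
                child = a[:]
            if rng.random() < p_mut:                 # Gaussian mutation
                child = [c + rng.gauss(0.0, sigma) for c in child]
            children.append(child)
        pop = elite + children
        sigma *= 0.95                                # anneal the step size
    return min(pop, key=lambda x: cost(x, sats, rhos))
```

    Because the cost is evaluated directly on the non-linear range equations, no linearization or minimum number of observations is required, which is the property the abstract exploits for the three-pseudo-range case.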

  10. Cryogenic, Absolute, High Pressure Sensor

    NASA Technical Reports Server (NTRS)

    Chapman, John J. (Inventor); Shams, Qamar A. (Inventor); Powers, William T. (Inventor)

    2001-01-01

    A pressure sensor is provided for cryogenic, high pressure applications. A highly doped silicon piezoresistive pressure sensor is bonded to a silicon substrate in an absolute pressure sensing configuration. The absolute pressure sensor is bonded to an aluminum nitride substrate. Aluminum nitride has an appropriate coefficient of thermal expansion for use with highly doped silicon at cryogenic temperatures. A group of sensors, either two sensors on two substrates or four sensors on a single substrate, is packaged in a pressure vessel.

  11. Predicting Air Permeability of Handloom Fabrics: A Comparative Analysis of Regression and Artificial Neural Network Models

    NASA Astrophysics Data System (ADS)

    Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya

    2013-03-01

    This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters namely ends per inch, picks per inch, warp count and weft count have been used as inputs for artificial neural network (ANN) and regression models. Out of the four regression models tried, interaction model showed very good prediction performance with a meager mean absolute error of 2.017 %. However, ANN models demonstrated superiority over the regression models both in terms of correlation coefficient and mean absolute error. The ANN model with 10 nodes in the single hidden layer showed very good correlation coefficient of 0.982 and 0.929 and mean absolute error of only 0.923 and 2.043 % for training and testing data respectively.
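
    The "interaction" regression model referred to above can be read as ordinary least squares on the four constructional parameters plus their pairwise products. A sketch with synthetic data (the coefficients, ranges and fabric values are invented for illustration, not taken from the study):

```python
import numpy as np

def interaction_features(X):
    # intercept, the four parameters, and all pairwise interaction terms
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

# synthetic "fabric" data: ends/inch, picks/inch, warp count, weft count
rng = np.random.default_rng(0)
X = rng.uniform(20, 80, size=(60, 4))
y = 50.0 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.01 * X[:, 0] * X[:, 2]

coef, *_ = np.linalg.lstsq(interaction_features(X), y, rcond=None)
pred = interaction_features(X) @ coef
mae_percent = 100.0 * np.mean(np.abs(pred - y) / y)
print(f"mean absolute error: {mae_percent:.3f} %")
```

    With four inputs this gives 11 fitted coefficients (1 intercept + 4 linear + 6 interaction terms), which is why the interaction model can outperform a purely linear one while remaining interpretable.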

  12. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an auto-regressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95 % confidence limits and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides an excellent fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture and industrial use.
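
    The shape labels used above follow from the excess kurtosis: platykurtic means flatter than a normal distribution (excess kurtosis below 0), leptokurtic more peaked or heavier-tailed (above 0). A plain moment-based estimate (the sample data are illustrative):

```python
def excess_kurtosis(xs):
    # population moment estimate: m4 / m2^2 - 3
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

def shape(xs):
    return "leptokurtic" if excess_kurtosis(xs) > 0 else "platykurtic"

print(shape(list(range(1, 11))))   # uniform-like data: platykurtic
print(shape([0] * 8 + [10]))       # outlier-dominated data: leptokurtic
```
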

  13. Medication Errors

    MedlinePlus


  14. Absolute Gravity Measurements with the FG5#215 in Czech Republic, Slovakia and Hungary

    NASA Astrophysics Data System (ADS)

    Pálinkás, V.; Kostelecký, J.; Lederer, M.

    2009-04-01

    Since 2001, the absolute gravimeter FG5#215 has been used for the modernization of the national gravity networks in the Czech Republic, Slovakia and Hungary. Altogether, 37 absolute sites were measured at least once. In the case of 29 sites, the absolute gravity had been determined prior to the FG5#215 by other accurate absolute meters (FG5 or JILA-g). Differences between gravity results, which reach up to 25 microgal, are caused by random and systematic measurement errors, variations of environmental effects (mainly hydrological effects) and by geodynamics. The set of achieved differences is analyzed for potential hydrological effects based on global hydrology models and for systematic errors of instrumental origin. Systematic instrumental errors are evaluated in the context of the international comparison measurements of absolute gravimeters in Sèvres and Walferdange, organized by the Bureau International des Poids et Mesures and the European Center for Geodynamics and Seismology, respectively.

  15. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary precision arithmetics or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
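
    A one-line illustration of such rounding errors: 0.1 has no exact binary representation, so repeatedly adding it drifts from the exact result, while a compensated summation recovers it:

```python
import math

# Ten additions of 0.1 accumulate rounding error; math.fsum tracks the
# partial sums with extended precision and returns the correctly
# rounded result.
naive = sum([0.1] * 10)
print(naive)                          # 0.9999999999999999, not 1.0
print(math.fsum([0.1] * 10) == 1.0)  # True
```
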

  16. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure gradient within the interface tube: the pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes at higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account only for large elevation differences. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
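
    The correction described above can be sketched from the hydrostatic relation Δp = ρgh, with the air density ρ = pM/(RT) taken from the ideal gas law; this also shows why tubes at higher pressure have larger absolute errors. The example numbers are illustrative, not from the paper:

```python
# Hydrostatic elevation correction for an air-filled interface tube:
# dp = rho * g * h, with rho = p * M / (R * T) from the ideal gas law,
# so the correction is proportional to the tube pressure.

R = 8.314462618      # J/(mol K), gas constant
M_AIR = 0.0289647    # kg/mol, molar mass of dry air
G = 9.80665          # m/s^2, standard gravity

def elevation_correction_pa(pressure_pa, height_m, temp_k=293.15):
    rho = pressure_pa * M_AIR / (R * temp_k)   # air density in the tube
    return rho * G * height_m                  # pressure offset over height h

# e.g. a 2 m elevation difference at 1 atm and 20 degC:
print(f"{elevation_correction_pa(101325.0, 2.0):.1f} Pa")
```

    At around 24 Pa for a 2 m column at atmospheric pressure, the offset is negligible for coarse instruments but significant against the sub-pascal accuracies of modern multi-port systems.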

  17. Effective approach for calculations of absolute stability of proteins using focused dielectric constants.

    PubMed

    Vicatos, Spyridon; Roca, Maite; Warshel, Arieh

    2009-11-15

    The ability to predict the absolute stability of proteins based on their corresponding sequence and structure is a problem of great fundamental and practical importance. In this work, we report an extensive refinement and validation of our recent approach (Roca et al., FEBS Lett 2007;581:2065-2071) for predicting absolute values of the protein stability ΔG_fold. This approach employs the semimacroscopic protein dipole Langevin dipole method in its linear response approximation version (PDLD/S-LRA), using the best fitted values of the dielectric constants ε′_p and ε′_eff for the self-energy and charge-charge interactions, respectively. The method is validated on a diverse set of 45 proteins. It is found that the best fitted values of both dielectric constants are around 40. However, the self-energy of internal residues and the charge-charge interactions of Lys have to be treated with care, using somewhat lower values of ε′_p and ε′_eff. The predictions of ΔG_fold reported here have an average error of only 1.8 kcal/mol compared to the observed values, making our method very promising for estimating protein stability. It also provides valuable insight into the complex electrostatic phenomena taking place in folded proteins.

  18. Predicting Conversion to Dementia of the Alzheimer Type in a Healthy Control Sample: The Power of Errors in Stroop Color Naming

    PubMed Central

    Balota, David A.; Tse, Chi-Shing; Hutchison, Keith A.; Spieler, Daniel H.; Duchek, Janet M.; Morris, John C.

    2009-01-01

    The present study investigates which cognitive functions in older adults at time A are predictive of conversion to dementia of the Alzheimer type (DAT) at time B. Forty-seven healthy individuals were initially tested in 1992–1994 on a trial-by-trial computerized Stroop task along with a battery of psychometric measures that tap general knowledge, declarative memory, visual spatial processing, and processing speed. Twelve of these individuals subsequently developed DAT. The errors on the color incongruent trials (along with the difference between congruent and incongruent trials), and changes in the reaction time distributions were the strongest predictors of conversion to DAT, consistent with recent arguments regarding the sensitivity of these measures. Notably in the psychometric measures, there was little evidence of a difference in declarative memory between converters and nonconverters, but there was some evidence of changes in visual-spatial processing. Discussion focuses on the accumulating evidence suggesting a role of attentional control mechanisms as an early marker for the transition from healthy cognitive aging to DAT. PMID:20230140

  19. Database application for absolute spectrophotometry

    NASA Astrophysics Data System (ADS)

    Bochkov, Valery V.; Shumko, Sergiy

    2002-12-01

    A 32-bit database application with a multi-document interface for Windows has been developed to calculate absolute energy distributions of observed spectra. The original database contains wavelength-calibrated observed spectra which have already passed through apparatus reductions such as flat-fielding, background and apparatus noise subtraction. Absolute energy distributions of observed spectra are defined on a unique scale by registering them simultaneously with an artificial intensity standard. Observations of a sequence of spectrophotometric standards are used to define the absolute energy of the artificial standard. Observations of spectrophotometric standards are used to define the optical extinction at selected moments. An FFT algorithm implemented in the application allows convolution (deconvolution) of spectra with a user-defined PSF. The object-oriented interface has been created using facilities of C++ libraries. A client/server model with Windows Socket functionality based on the TCP/IP protocol is used to develop the application. It supports Dynamic Data Exchange conversation in server mode and uses Microsoft Exchange communication facilities.

  20. An estimate of global absolute dynamic topography

    NASA Technical Reports Server (NTRS)

    Tai, C.-K.; Wunsch, C.

    1984-01-01

    The absolute dynamic topography of the world ocean is estimated from the largest scales to a short-wavelength cutoff of about 6700 km for the period July through September, 1978. The data base consisted of the time-averaged sea-surface topography determined by Seasat and geoid estimates made at the Goddard Space Flight Center. The issues are those of accuracy and resolution. Use of the altimetric surface as a geoid estimate beyond the short-wavelength cutoff reduces the spectral leakage in the estimated dynamic topography from erroneous small-scale geoid estimates without contaminating the low wavenumbers. Comparison of the result with a similarly filtered version of Levitus' (1982) historical average dynamic topography shows good qualitative agreement. There is quantitative disagreement, but it is within the estimated errors of both methods of calculation.

  1. Absolute classification with unsupervised clustering

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, D. A.

    1992-01-01

    An absolute classification algorithm is proposed in which the class definition through training samples or otherwise is required only for a particular class of interest. The absolute classification is considered as a problem of unsupervised clustering when one cluster is known initially. The definitions and statistics of the other classes are automatically developed through the weighted unsupervised clustering procedure, which is developed to keep the cluster corresponding to the class of interest from losing its identity as the class of interest. Once all the classes are developed, a conventional relative classifier such as the maximum-likelihood classifier is used in the classification.
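
    The idea can be sketched as a k-means-style loop in which one cluster is seeded from training samples of the single class of interest, and its centroid is pulled back toward them by a weight w so that it does not lose its identity. This is a simplified stand-in for the paper's weighted clustering procedure; the farthest-point heuristic for seeding the remaining clusters is an assumption for the sketch, not from the paper:

```python
import numpy as np

def anchored_kmeans(X, train, k=2, w=0.5, iters=20):
    # cluster 0 is anchored to the class of interest via its training mean
    anchor = train.mean(axis=0)
    centers = [anchor]
    # seed the remaining clusters at points far from existing centers
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
        # pull cluster 0 back toward the training data so it keeps its identity
        centers[0] = w * anchor + (1 - w) * centers[0]
    return centers, labels
```

    After the clusters have stabilised, a conventional relative classifier (e.g. maximum likelihood on the per-cluster statistics) can be applied, as the abstract describes.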

  2. Relativistic Absolutism in Moral Education.

    ERIC Educational Resources Information Center

    Vogt, W. Paul

    1982-01-01

    Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)

  3. Absolute Standards for Climate Measurements

    NASA Astrophysics Data System (ADS)

    Leckey, J.

    2016-10-01

    In a world of changing climate, political uncertainty, and ever-changing budgets, the benefit of measurements traceable to SI standards increases by the day. To truly resolve climate change trends on a decadal time scale, on-orbit measurements need to be referenced to something that is both absolute and unchanging. One such mission is the Climate Absolute Radiance and Refractivity Observatory (CLARREO) that will measure a variety of climate variables with an unprecedented accuracy to definitively quantify climate change. In the CLARREO mission, we will utilize phase change cells in which a material is melted to calibrate the temperature of a blackbody that can then be observed by a spectrometer. A material's melting point is an unchanging physical constant that, through a series of transfers, can ultimately calibrate a spectrometer on an absolute scale. CLARREO consists of two primary instruments: an infrared (IR) spectrometer and a reflected solar (RS) spectrometer. The mission will contain orbiting radiometers with sufficient accuracy to calibrate other space-based instrumentation and thus transferring the absolute traceability. The status of various mission options will be presented.

  4. Impact of the glucocorticoid receptor BclI polymorphism on reward expectancy and prediction error related ventral striatal reactivity in depressed and healthy individuals.

    PubMed

    Ham, Byung-Joo; Greenberg, Tsafrir; Chase, Henry W; Phillips, Mary L

    2016-01-01

    There is evidence that reward-related neural reactivity is altered in depressive disorders. Glucocorticoids influence dopaminergic transmission, which is widely implicated in reward processing. However, no studies have examined the effect of glucocorticoid receptor gene polymorphisms on reward-related neural reactivity in depressed or healthy individuals. Fifty-nine depressed individuals with major depressive disorder (n=33) or bipolar disorder (n=26), and 32 healthy individuals were genotyped for the glucocorticoid receptor BclI G/C polymorphism, and underwent functional magnetic resonance imaging during a monetary reward task. We examined the effect of the glucocorticoid receptor BclI G/C polymorphism on reward expectancy (RE; expected outcome value) and prediction error (PE; discrepancy between expected and actual outcome) related ventral striatal reactivity. There was a significant interaction between reward condition and BclI genotype (p=0.007). C-allele carriers showed higher PE than RE-related right ventral striatal reactivity (p<0.001), whereas no such difference was observed in G/G homozygotes. Accordingly, C-allele carriers showed a greater difference between PE and RE-related right ventral striatal reactivity than G/G homozygotes (p<0.005), and also showed lower RE-related right ventral striatal reactivity than G/G homozygotes (p=0.011). These findings suggest a slowed transfer from PE to RE-related ventral striatal responses during reinforcement learning in C-allele carriers, regardless of diagnosis, possibly due to altered dopamine release associated with increased sensitivity to glucocorticoids.

  5. Absolute Stability Analysis of a Phase Plane Controlled Spacecraft

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Plummer, Michael; Bedrossian, Nazareth; Hall, Charles; Jackson, Mark; Spanos, Pol

    2010-01-01

    Many aerospace attitude control systems utilize phase plane control schemes that include nonlinear elements such as dead zone and ideal relay. To evaluate phase plane control robustness, stability margin prediction methods must be developed. Absolute stability is extended to predict stability margins and to define an abort condition. A constrained optimization approach is also used to design flex filters for roll control. The design goal is to optimize vehicle tracking performance while maintaining adequate stability margins. Absolute stability is shown to provide satisfactory stability constraints for the optimization.

  6. Absolute measurement of the extreme UV solar flux

    NASA Technical Reports Server (NTRS)

    Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.

    1984-01-01

    A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575 Å region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3%, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 Å), normalized to unit solar distance, was found to be (5.71 ± 0.42) × 10¹⁰ photons cm⁻² s⁻¹.

  7. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

  8. Mathematical Model for Absolute Magnetic Measuring Systems in Industrial Applications

    NASA Astrophysics Data System (ADS)

    Fügenschuh, Armin; Fügenschuh, Marzena; Ludszuweit, Marina; Mojsic, Aleksandar; Sokół, Joanna

    2015-09-01

    Scales for measuring systems are based on either incremental or absolute measuring methods. Incremental scales need to initialize a measurement cycle at a reference point. From there, the position is computed by counting increments of a periodic graduation. Absolute methods do not need reference points, since the position can be read directly from the scale. The positions on the complete scale are encoded using two incremental tracks with different graduations. We present a new method for absolute measurement using only one track for position encoding, down to the micrometre range. Instead of the common perpendicular magnetic areas, we use a pattern of trapezoidal magnetic areas to store more complex information. For positioning, we use the magnetic field, in which every position is characterized by a set of values measured by a Hall sensor array. We implement a method for reconstructing absolute positions from the set of unique measured values. We compare two patterns with respect to uniqueness, accuracy, stability and robustness of positioning. We discuss how stability and robustness are influenced by different errors during the measurement in real applications and how those errors can be compensated.

  9. The Absolute Spectrum Polarimeter (ASP)

    NASA Technical Reports Server (NTRS)

    Kogut, A. J.

    2010-01-01

    The Absolute Spectrum Polarimeter (ASP) is an Explorer-class mission to map the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10⁻³ and Compton distortion y < 10⁻⁶. We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  10. Optomechanics for absolute rotation detection

    NASA Astrophysics Data System (ADS)

    Davuluri, Sankar

    2016-07-01

    In this article, we present an application of an optomechanical cavity to absolute rotation detection. The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the optomechanical cavity. The change in cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to the frequency shift in the cavity mode is measured at the interferometer output to estimate the angular velocity of the absolute rotation. We derive an analytic expression for the minimum detectable rotation rate in our scheme for a given optomechanical cavity. The temperature dependence of the rotation detection sensitivity is also studied.
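
    The detection chain described above can be sketched as an order-of-magnitude calculation: a rotation at angular velocity Ω exerts a centrifugal force F = mΩ²r on the movable mirror (mass m, suspension stiffness κ, at radius r), displacing it by δL = F/κ and shifting the cavity mode by δν = ν·δL/L. All parameter values below are invented for illustration, not taken from the paper:

```python
def cavity_frequency_shift(omega, m=1e-6, r=0.1, kappa=100.0,
                           cavity_len=0.01, wavelength=1064e-9):
    """Cavity-mode frequency shift (Hz) for rotation rate omega (rad/s)."""
    nu = 299792458.0 / wavelength          # optical carrier frequency
    d_len = m * omega ** 2 * r / kappa     # centrifugal displacement dL = F/kappa
    return nu * d_len / cavity_len         # dnu = nu * dL / L

print(f"{cavity_frequency_shift(1.0):.3e} Hz")
```

    The quadratic dependence on Ω means the interferometric phase readout has to resolve very small shifts at low rotation rates, which is what sets the minimum detectable rotation rate derived in the paper.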

  11. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  12. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  13. Accuracy of devices for self-monitoring of blood glucose: A stochastic error model.

    PubMed

    Vettoretti, M; Facchinetti, A; Sparacino, G; Cobelli, C

    2015-01-01

    Self-monitoring of blood glucose (SMBG) devices are portable systems that allow measuring glucose concentration in a small drop of blood obtained via finger-prick. SMBG measurements are key in type 1 diabetes (T1D) management, e.g. for tuning insulin dosing. A reliable model of SMBG accuracy would be important in several applications, e.g. in in silico design and optimization of insulin therapy. In the literature, the most used model to describe SMBG error is the Gaussian distribution, which, however, is too simplistic to properly account for the observed variability. Here, a methodology to derive a stochastic model of SMBG accuracy is presented. The method consists of dividing the glucose range into zones in which the absolute/relative error presents constant standard deviation (SD) and, then, fitting by maximum likelihood a skew-normal distribution model to the absolute/relative error distribution in each zone. The method was tested on a database of SMBG measurements collected by the One Touch Ultra 2 (Lifescan Inc., Milpitas, CA). In particular, two zones were identified: zone 1 (BG ≤ 75 mg/dl) with constant-SD absolute error and zone 2 (BG > 75 mg/dl) with constant-SD relative error. The mean and SD of the identified skew-normal distributions are, respectively, 2.03 and 6.51 in zone 1, and 4.78% and 10.09% in zone 2. Visual predictive check validation showed that the derived two-zone model accurately reproduces the SMBG measurement error distribution, performing significantly better than the single-zone Gaussian model previously used in the literature. This stochastic model allows a more realistic SMBG scenario for in silico design and optimization of T1D insulin therapy.
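    The zone-wise maximum-likelihood fit described above can be sketched with SciPy's skew-normal distribution. Everything below is illustrative: the synthetic errors and their parameters are invented, and only the fitting step mirrors the paper's procedure.

```python
import numpy as np
from scipy.stats import skewnorm

# Synthetic relative errors (%) for a zone such as BG > 75 mg/dl,
# drawn from a skew-normal with made-up parameters.
true_shape, true_loc, true_scale = 4.0, -2.0, 10.0
errors = skewnorm.rvs(true_shape, loc=true_loc, scale=true_scale,
                      size=5000, random_state=0)

# Maximum-likelihood fit of a skew-normal model to the zone's errors.
shape, loc, scale = skewnorm.fit(errors)

# Mean and SD of the fitted distribution, analogous to the per-zone
# statistics reported in the abstract.
mean, var = skewnorm.stats(shape, loc=loc, scale=scale, moments="mv")
sd = float(np.sqrt(var))
```

    In the paper's two-zone scheme this fit would be repeated once per glucose zone, on absolute errors below the 75 mg/dl boundary and relative errors above it.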

  14. Individual Differences in Absolute and Relative Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Maki, Ruth H.; Shields, Micheal; Wheeler, Amanda Easton; Zacchilli, Tammy Lowery

    2005-01-01

    The authors investigated absolute and relative metacomprehension accuracy as a function of verbal ability in college students. Students read hard texts, revised texts, or a mixed set of texts. They then predicted their performance, took a multiple-choice test on the texts, and made posttest judgments about their performance. With hard texts,…

  15. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  16. Calibration method of absolute orientation of camera optical axis

    NASA Astrophysics Data System (ADS)

    Xu, Yong; Guo, Pengyu; Zhang, Xiaohu; Ding, Shaowen; Su, Ang; Li, Lichun

    2013-08-01

    Camera calibration is one of the most basic and important processes in the optical measurement field. Generally, the objective of camera calibration is to estimate the internal and external parameters of the cameras under study, while the orientation error of the optical axis is not included. The orientation error of the optical axis is an important factor that seriously affects measurement precision in high-precision applications, especially in distant aerospace measurements in which the object distance is much longer than the focal length, magnifying the orientation errors thousands of times. In order to eliminate the influence of the orientation error of the camera optical axis, the imaging model of the camera is analysed and established in this paper, and the calibration method is introduced. First, we analyse the causes of optical axis error and its influence. Then, we derive the model of optical axis orientation error and the camera imaging model based on its physical meaning. Furthermore, we derive a bundle adjustment algorithm that computes the internal and external camera parameters and the absolute orientation of the camera optical axis simultaneously at high precision. In a numerical simulation, we solve the camera parameters using the bundle adjustment optimization algorithm and then correct the image points according to the model of optical axis error; the simulation results show that our calibration model is reliable, effective and precise.

  17. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  18. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    SciTech Connect

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular those using adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities included: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.
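    In the linear algebraic setting, the adjoint-based error estimation mentioned above reduces to an exact identity: for Ax = b and a quantity of interest q(x) = c^T x, the QoI error of an approximate solution equals the adjoint solution dotted with the computable residual. A minimal numpy sketch (the matrix, right-hand side, and QoI vector are arbitrary illustrations, not from the report):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = np.eye(n) + 0.02 * rng.standard_normal((n, n))  # well-conditioned system
b = rng.standard_normal(n)
c = rng.standard_normal(n)                          # QoI: q(x) = c @ x

x_exact = np.linalg.solve(A, b)
x_h = x_exact + 1e-3 * rng.standard_normal(n)       # inexact "numerical" solution

# Adjoint problem A^T y = c; then c @ (x - x_h) = y @ (b - A @ x_h),
# so the QoI error is recovered from the residual without knowing x.
y = np.linalg.solve(A.T, c)
residual = b - A @ x_h
qoi_error_estimate = y @ residual
qoi_error_true = c @ (x_exact - x_h)
```

    For PDE discretizations the same structure appears with A as the discretized operator and y as the discrete adjoint solution; the estimate then becomes approximate rather than exact.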

  19. Absolute, Extreme-Ultraviolet, Solar Spectral Irradiance Monitor (AESSIM)

    NASA Technical Reports Server (NTRS)

    Huber, Martin C. E.; Smith, Peter L.; Parkinson, W. H.; Kuehne, M.; Kock, M.

    1988-01-01

    AESSIM, the Absolute, Extreme-Ultraviolet, Solar Spectral Irradiance Monitor, is designed to measure the absolute solar spectral irradiance at extreme-ultraviolet (EUV) wavelengths. The data are required for studies of the processes that occur in the earth's upper atmosphere and for predictions of atmospheric drag on space vehicles. AESSIM comprises sun-pointed spectrometers and newly developed secondary standards of spectral irradiance for the EUV. Use of the in-orbit standard sources will eliminate the uncertainties caused by changes in spectrometer efficiency that have plagued all previous measurements of the solar spectral EUV flux.

  20. Absolute intensity and polarization of rotational Raman scattering from N2, O2, and CO2

    NASA Technical Reports Server (NTRS)

    Penney, C. M.; St.peters, R. L.; Lapp, M.

    1973-01-01

    An experimental examination of the absolute intensity, polarization, and relative line intensities of rotational Raman scattering (RRS) from N2, O2, and CO2 is reported. The absolute scattering intensity for N2 is characterized by its differential cross section for backscattering of incident light at 647.1 nm, which is calculated from basic measured values. The ratio of the corresponding cross section for O2 to that for N2 is 2.50 plus or minus 5 percent. The intensity results for N2, O2, and CO2 are shown to compare favorably to values calculated from recent measurements of the depolarization of Rayleigh scattering plus RRS. Measured depolarizations of various RRS lines agree to within a few percent with the theoretical value of 3/4. Detailed error analyses are presented for intensity and depolarization measurements. Finally, extensive RRS spectra at nominal gas temperatures of 23 C, 75 C, and 125 C are presented and shown to compare favorably to theoretical predictions.

  1. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  2. Dynamic diagnostics of the error fields in tokamaks

    NASA Astrophysics Data System (ADS)

    Pustovitov, V. D.

    2007-07-01

    The error field diagnostics based on magnetic measurements outside the plasma is discussed. The analysed methods rely on measuring the plasma dynamic response to the finite-amplitude external magnetic perturbations, which are the error fields and the pre-programmed probing pulses. Such pulses can be created by the coils designed for static error field correction and for stabilization of the resistive wall modes, the technique developed and applied in several tokamaks, including DIII-D and JET. Here analysis is based on the theory predictions for the resonant field amplification (RFA). To achieve the desired level of the error field correction in tokamaks, the diagnostics must be sensitive to signals of several Gauss. Therefore, part of the measurements should be performed near the plasma stability boundary, where the RFA effect is stronger. While the proximity to the marginal stability is important, the absolute values of plasma parameters are not. This means that the necessary measurements can be done in the diagnostic discharges with parameters below the nominal operating regimes, with the stability boundary intentionally lowered. The estimates for ITER are presented. The discussed diagnostics can be tested in dedicated experiments in existing tokamaks. The diagnostics can be considered as an extension of the 'active MHD spectroscopy' used recently in the DIII-D tokamak and the EXTRAP T2R reversed field pinch.

  3. Error bounds in cascading regressions

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1985-01-01

    Cascading regressions is a technique for predicting a value of a dependent variable when no paired measurements exist to perform a standard regression analysis. Biases in the coefficients of a cascaded-regression line, as well as the error variance of points about the line, are functions of the correlation coefficient between the dependent and independent variables. Although this correlation cannot be computed because of the lack of paired data, bounds can be placed on the errors through the required properties of the correlation coefficient. The potential mean-squared error of a cascaded-regression prediction can be large, as illustrated through an example using geomorphologic data. © 1985 Plenum Publishing Corporation.
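    The bounding idea can be illustrated numerically. When the x-y and y-z correlations are known but no paired x-z data exist, positive semi-definiteness of the 3x3 correlation matrix constrains the unobservable x-z correlation, and hence the prediction error variance. This sketch is a simplified stand-in for the authors' exact formulation, with made-up correlation values:

```python
import numpy as np

def cascaded_correlation_bounds(r_xy, r_yz):
    """Range of the unobservable correlation r_xz that keeps the 3x3
    correlation matrix of (x, y, z) positive semi-definite."""
    spread = np.sqrt((1.0 - r_xy**2) * (1.0 - r_yz**2))
    return r_xy * r_yz - spread, r_xy * r_yz + spread

# Example: strong x-y and y-z correlations still leave a wide range for
# r_xz, and hence for the error of predicting z from x.
lo, hi = cascaded_correlation_bounds(0.9, 0.8)

# With unit variances, var(prediction error) = 1 - r_xz**2; the worst
# case uses the admissible r_xz closest to zero.
min_abs_r = 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))
max_error_var = 1.0 - min_abs_r**2
```

    Even with r_xy = 0.9 and r_yz = 0.8, the admissible r_xz spans roughly 0.46 to 0.98, which is why cascaded predictions can carry large mean-squared errors.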

  4. Absolute calibration of forces in optical tweezers

    NASA Astrophysics Data System (ADS)

    Dutra, R. S.; Viana, N. B.; Maia Neto, P. A.; Nussenzveig, H. M.

    2014-07-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past 15 years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spot, adapting frequently employed video microscopy techniques. Combined with interface spherical aberration, it reveals a previously unknown window of instability for trapping. Comparison with experimental data leads to an overall agreement within error bars, with no fitting, for a broad range of microsphere radii, from the Rayleigh regime to the ray optics one, for different polarizations and trapping heights, including all commonly employed parameter domains. Besides signaling full first-principles theoretical understanding of optical tweezers operation, the results may lead to improved instrument design and control over experiments, as well as to an extended domain of applicability, allowing reliable force measurements, in principle, from femtonewtons to nanonewtons.

  5. Cosmology with negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2016-08-01

    Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950s (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] has rekindled interest in negative temperatures and hinted at a possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < -1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.

  6. Toward a cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  7. Absolute bioavailability of quinine formulations in Nigeria.

    PubMed

    Babalola, C P; Bolaji, O O; Ogunbona, F A; Ezeomah, E

    2004-09-01

    This study compared the absolute bioavailability of quinine sulphate as capsule and as tablet against the intravenous (i.v.) infusion of the drug in twelve male volunteers. Six of the volunteers received intravenous infusion over 4 h as well as the capsule formulation of the drug in a cross-over manner, while the other six received the tablet formulation. Blood samples were taken at predetermined time intervals and plasma analysed for quinine (QN) using a reversed-phase HPLC method. QN was rapidly absorbed after the two oral formulations with an average t(max) of 2.67 h for both capsule and tablet. The mean elimination half-life of QN from the i.v. and oral dosage forms varied between 10 and 13.5 h and was not statistically different (P > 0.05). On the contrary, the maximum plasma concentration (C(max)) and area under the curve (AUC) from the capsule were comparable to those from i.v. (P > 0.05), while these values were markedly higher than values from the tablet formulation (P < 0.05). Therapeutic QN plasma levels were not achieved with the tablet formulation. The absolute bioavailability (F) was 73% (C.I., 53.3-92.4%) and 39% (C.I., 21.7-56.6%) for the capsule and tablet, respectively, and the difference was significant (P < 0.05). The subtherapeutic levels obtained from the tablet form used in this study may cause treatment failure during malaria, and caution should be taken when predictions are made from results obtained from different formulations of QN.
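    The absolute bioavailability figure reported above is the dose-normalized ratio of oral to intravenous AUC. A small sketch (the AUC and dose values are hypothetical, chosen only to reproduce the capsule's 73%):

```python
def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """F = dose-normalized AUC of the oral form relative to i.v. (fraction)."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)

# Hypothetical AUC values (mg*h/l) at equal oral and i.v. doses (mg).
f_capsule = absolute_bioavailability(73.0, 600.0, 100.0, 600.0)  # 0.73
```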

  8. The absolute radiometric calibration of the advanced very high resolution radiometer

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Teillet, P. M.; Ding, Y.

    1988-01-01

    The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.

  9. Automated absolute phase retrieval in across-track interferometry

    NASA Technical Reports Server (NTRS)

    Madsen, Soren N.; Zebker, Howard A.

    1992-01-01

    Discussed is a key element in the processing of topographic radar maps acquired by the NASA/JPL airborne synthetic aperture radar configured as an across-track interferometer (TOPSAR). TOPSAR utilizes a single transmit and two receive antennas; the three-dimensional target location is determined by triangulation based on a known baseline and two measured slant ranges. The slant range difference is determined very accurately from the phase difference between the signals received by the two antennas. This phase is measured modulo 2pi, whereas it is the absolute phase which relates directly to the difference in slant range. It is shown that splitting the range bandwidth into two subbands in the processor and processing each individually allows the absolute phase to be determined. The underlying principles and the system errors which must be considered are discussed, together with the implementation and results from processing data acquired during the summer of 1991.

  10. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
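    The core of such an algorithm — fitting in parallax space so that low-accuracy and even negative parallaxes contribute without bias — can be sketched as below. This is a deliberately simplified illustration: all stars share one true absolute magnitude, the selection-bias and censoring machinery of the actual procedure is omitted, and every number is synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
M_true, sigma_pi = 5.0, 0.010            # true absolute magnitude; 10 mas errors
d = rng.uniform(20.0, 100.0, size=500)   # synthetic distances in pc
m = M_true + 5.0 * np.log10(d) - 5.0     # apparent magnitudes
# Observed parallaxes (arcsec) with Gaussian noise; some come out negative,
# but they still carry information and are kept in the fit.
pi_obs = 1.0 / d + sigma_pi * rng.standard_normal(d.size)

def neg_log_like(M):
    # Predicted parallax for a star of apparent magnitude m if its
    # absolute magnitude is M: pi = 10**(0.2*(M - m) - 1) arcsec.
    pi_model = 10.0 ** (0.2 * (M - m) - 1.0)
    return np.sum((pi_obs - pi_model) ** 2) / (2.0 * sigma_pi**2)

# Maximum likelihood by grid search over candidate absolute magnitudes.
grid = np.linspace(3.0, 7.0, 801)
M_hat = grid[np.argmin([neg_log_like(M) for M in grid])]
```

    Because the likelihood is written for the observed parallax rather than the inferred distance, truncating or discarding negative parallaxes (which would bias the estimate) is unnecessary.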

  11. Single-track absolute position encoding method based on spatial frequency of stripes

    NASA Astrophysics Data System (ADS)

    Xiang, Xiansong; Lu, Yancong; Wei, Chunlong; Zhou, Changhe

    2014-11-01

    A new method of single-track absolute position encoding based on the spatial frequency of stripes is proposed. Instead of using pseudorandom-sequence arranged stripes as in conventional designs, this encoding method stores the location information in the frequency space of the stripes: the spatial frequency of the stripes varies with, and thereby indicates, position. The encoding method has strong fault tolerance against single-stripe detection errors. It can be applied to absolute linear encoders, absolute photoelectric angle encoders or two-dimensional absolute linear encoders. The measuring apparatus comprises a CCD image sensor and a microscope system, and the frequency code is decoded with an FFT-based algorithm. This method should be of high interest for practical applications of absolute position encoding.
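    The FFT-based decoding step — recovering the local spatial frequency of the stripes from a CCD window — can be sketched as follows; the sampling density, window size, and frequency value are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

fs = 1000.0  # assumed CCD sampling density, samples per mm

def stripe_window(freq, n=512):
    """Synthetic intensity profile of stripes at `freq` cycles/mm."""
    x = np.arange(n) / fs
    return 0.5 + 0.5 * np.cos(2.0 * np.pi * freq * x)

def decode_frequency(signal):
    """Dominant spatial frequency of the window, i.e. the position code."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    peak_bin = int(np.argmax(spectrum))
    return peak_bin * fs / signal.size

f_est = decode_frequency(stripe_window(62.5))  # stripes at 62.5 cycles/mm
```

    In a real encoder the peak location would be refined by interpolation between FFT bins, and the recovered frequency mapped back to an absolute position through the known frequency-versus-position code.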

  12. The EM-POGO: A simple, absolute velocity profiler

    NASA Astrophysics Data System (ADS)

    Terker, S. R.; Sanford, T. B.; Dunlap, J. H.; Girton, J. B.

    2013-01-01

    Electromagnetic current instrumentation has been added to the Bathy Systems, Inc. POGO transport sondes to produce a free-falling absolute velocity profiler called EM-POGO. The POGO is a free-fall profiler that measures a depth-averaged velocity using GPS fixes at the beginning and end of a round trip to the ocean floor (or a preset depth). The EM-POGO adds a velocity profile determined from measurements of motionally induced electric fields generated by the ocean current moving through the vertical component of the Earth's magnetic field. In addition to providing information about the vertical structure of the velocity, the depth-dependent measurements improve transport measurements by correcting for the non-constant fall-rate. Neglecting the variable fall rate results in errors of O(1 cm s-1). The transition from POGO to EM-POGO included electrically isolating the POGO and electric-field-measuring circuits, installing a functional GPS receiver, finding a pressure case that provided an optimal balance among crush-depth, price and size, and incorporating the electrodes, electrode collar, and the circuitry required for the electric field measurement. The first EM-POGO sea-trial was in July 1999. In August 2006 a refurbished EM-POGO collected 15 absolute velocity profiles; relative and absolute velocity uncertainty was ~1 cm s-1 and 0.5-5 cm s-1, respectively, at a vertical resolution of 25 m. Absolute velocities from the EM-POGO and shipboard ADCP measurements differed by ~1-2 cm s-1, comparable to the uncertainty in absolute velocity from the ADCP. The EM-POGO is thus a low-cost, accurate velocity profiler that is easy to deploy and recover.

  13. Ability of the planar spring-mass model to predict mechanical parameters in running humans.

    PubMed

    Bullimore, Sharon R; Burn, Jeremy F

    2007-10-21

    The planar spring-mass model is a simple mathematical model of bouncing gaits, such as running, trotting and hopping. Although this model has been widely used in the study of locomotion, its accuracy in predicting locomotor mechanics has not been systematically quantified. We determined the percent error of the model in predicting 10 locomotor parameters in running humans by comparing the model predictions to experimental data from humans running in normal gravity and simulated reduced gravity. We tested the hypotheses that the model would overestimate horizontal impulse and the change in mechanical energy of the centre of mass (COM) during stance. The model provided good predictions of stance time, vertical impulse, contact length, duty factor, relative stride length and relative peak force. All predictions of these parameters were within 20% of measured values and at least 90% of predictions of each parameter were within 10% of measured values (median absolute errors: <7%). This suggests that the model incorporates all features of running humans that have a significant influence upon these six parameters. As simulated gravity level decreased, the magnitude of the errors in predicting each of these parameters either decreased or stayed constant, indicating that this is a good model of running in simulated reduced gravity. As hypothesised, horizontal impulse and change in mechanical energy of the COM during stance were overestimated (median absolute errors: 43.6% and 26.2%, respectively). Aerial time and peak vertical COM displacement during stance were also systematically overestimated (median absolute errors: 17.7% and 22.9%, respectively). Care should be taken to ensure that the model is used only to investigate parameters which it can predict accurately. It would be useful to extend this analysis to other species and gaits.
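    The accuracy metric used throughout the study above, the median absolute percent error of model predictions against measurements, is straightforward to compute (the stance-time values here are invented examples, not data from the paper):

```python
import numpy as np

def median_abs_percent_error(predicted, measured):
    """Median of |prediction error| as a percentage of the measured value."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.median(np.abs((predicted - measured) / measured)) * 100.0)

# Hypothetical stance times (s): spring-mass model vs. experiment.
err = median_abs_percent_error([0.21, 0.19, 0.25], [0.20, 0.20, 0.24])
```

    Using the median rather than the mean keeps the summary robust to the occasional large outlier prediction, which matters when some parameters (like horizontal impulse) are systematically overestimated.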

  14. Hyponatremia: management errors.

    PubMed

    Seo, Jang Won; Park, Tae Jin

    2006-11-01

    Rapid correction of hyponatremia is frequently associated with increased morbidity and mortality. Therefore, it is important to estimate the proper volume and type of infusate required to increase the serum sodium concentration predictably. The major common management errors during the treatment of hyponatremia are inadequate investigation, treatment with fluid restriction for diuretic-induced hyponatremia and treatment with fluid restriction plus intravenous isotonic saline simultaneously. We present two cases of management errors. One is about the problem of rapid correction of hyponatremia in a patient with sepsis and acute renal failure during continuous renal replacement therapy in the intensive care unit. The other is the case of hypothyroidism in which hyponatremia was aggravated by intravenous infusion of dextrose water and isotonic saline infusion was erroneously used to increase serum sodium concentration.
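    For the predictable correction the abstract calls for, one widely used estimate is the Adrogué-Madias formula, which predicts the change in serum sodium per litre of infusate. A sketch with hypothetical patient values:

```python
def adrogue_madias_delta_na(serum_na, infusate_na, weight_kg, male=True):
    """Predicted change in serum Na (mmol/l) per 1 l of infusate
    (Adrogue-Madias); total body water approximated as 0.6 (men)
    or 0.5 (women) of body weight."""
    tbw = weight_kg * (0.6 if male else 0.5)
    return (infusate_na - serum_na) / (tbw + 1.0)

# Hypothetical: 70 kg man, serum Na 115 mmol/l, 3% saline (513 mmol/l).
delta = adrogue_madias_delta_na(115.0, 513.0, 70.0)
```

    Such estimates are starting points only; ongoing losses and renal water handling (as in the continuous renal replacement case above) can make the actual correction much faster than predicted, so frequent remeasurement remains essential.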

  15. On the Absolute Age of the Metal-rich Globular M71 (NGC 6838). I. Optical Photometry

    NASA Astrophysics Data System (ADS)

    Di Cecco, A.; Bono, G.; Prada Moroni, P. G.; Tognelli, E.; Allard, F.; Stetson, P. B.; Buonanno, R.; Ferraro, I.; Iannicola, G.; Monelli, M.; Nonino, M.; Pulone, L.

    2015-08-01

    We investigated the absolute age of the Galactic globular cluster M71 (NGC 6838) using optical ground-based images (u′, g′, r′, i′, z′) collected with the MegaCam camera at the Canada-France-Hawaii Telescope (CFHT). We performed a robust selection of field and cluster stars by applying a new method based on the 3D (r′, u′−g′, g′−r′) color-color-magnitude diagram. A comparison between the color-magnitude diagram (CMD) of the candidate cluster stars and a new set of isochrones at the locus of the main sequence turn-off (MSTO) suggests an absolute age of 12 ± 2 Gyr. The absolute age was also estimated using the difference in magnitude between the MSTO and the so-called main sequence knee (MSK), a well-defined bending occurring in the lower main sequence. This feature was originally detected in the near-infrared bands and explained as a consequence of an opacity mechanism (collisionally induced absorption of molecular hydrogen) in the atmosphere of cool low-mass stars. The same feature was also detected in the r′, u′−g′ and in the r′, g′−r′ CMDs, thus supporting previous theoretical predictions by Borysow et al. The key advantage of using the Δ(TO−Knee) magnitude difference as an age diagnostic is that it is independent of uncertainties affecting the distance, the reddening, and the photometric zero point. We found an absolute age of 12 ± 1 Gyr that agrees, within the errors, with similar age estimates, but the uncertainty is on average a factor of two smaller. We also found that the Δ(TO−Knee) is more sensitive to the metallicity than the MSTO, but the dependence vanishes when using the difference in color between the MSK and the MSTO.

  16. Pole coordinates data prediction by combination of least squares extrapolation and double autoregressive prediction

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw

    2016-04-01

    Future Earth Orientation Parameters data are needed to compute the real-time transformation between the celestial and terrestrial reference frames. This transformation is realized by predictions of x, y pole coordinates data, UT1-UTC data and a precession-nutation extrapolation model. This paper is focused on the prediction of pole coordinates data by a combination of the least-squares (LS) extrapolation and autoregressive (AR) prediction models (LS+AR). The AR prediction, which is applied to the LS extrapolation residuals of the pole coordinates data, is not able to predict all of their frequency bands and is mostly tuned to predict subseasonal oscillations. The absolute values of differences between pole coordinates data and their LS+AR predictions increase with prediction length and depend mostly on the starting prediction epochs; thus time series of these differences for 2, 4 and 8 weeks in the future were analyzed. Time-frequency spectra of these differences for different prediction lengths are very similar, showing some power in the frequency band corresponding to the prograde Chandler and annual oscillations, which means that the increase of prediction errors is caused by mismodelling of these oscillations by the LS extrapolation model. Thus, the LS+AR prediction method can be modified by taking into account an additional AR prediction correction computed from the time series of these prediction differences for different prediction lengths. This additional AR prediction is mostly tuned to the seasonal frequency band of pole coordinates data.
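
    The LS+AR combination described above can be sketched in miniature: fit a least-squares model (linear trend plus annual and Chandler sinusoids) to the pole-coordinate series, then apply an autoregressive model, estimated via the Yule-Walker equations, to the LS residuals. The period values, AR order, and function name below are illustrative assumptions, not the implementation used in the paper.

    ```python
    import numpy as np

    def ls_ar_predict(t, x, horizon, periods=(365.25, 433.0), ar_order=10):
        # --- LS extrapolation: linear trend plus sinusoids at the
        # annual and Chandler periods (assumed here, in days) ---
        cols = [np.ones_like(t), t]
        for p in periods:
            cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        resid = x - A @ coef

        # --- AR model of the LS residuals via the Yule-Walker equations ---
        n = len(resid)
        r = np.correlate(resid, resid, "full")[n - 1:] / n
        R = np.array([[r[abs(i - j)] for j in range(ar_order)]
                      for i in range(ar_order)])
        phi = np.linalg.solve(R, r[1:ar_order + 1])

        # --- combine: LS extrapolation plus AR prediction of residuals ---
        dt = t[1] - t[0]
        history = list(resid[-ar_order:])
        out = []
        for k in range(1, horizon + 1):
            e = float(np.dot(phi, history[::-1]))   # phi[0] multiplies lag 1
            history = history[1:] + [e]
            tk = t[-1] + k * dt
            row = [1.0, tk]
            for p in periods:
                row += [np.sin(2 * np.pi * tk / p), np.cos(2 * np.pi * tk / p)]
            out.append(float(np.dot(row, coef)) + e)
        return np.array(out)
    ```

    On a synthetic trend-plus-annual series the routine extrapolates both components a few epochs ahead; real pole-coordinate data would require tuning the AR order and the fitted frequency bands.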

  17. Drifting from Slow to "D'oh!": Working Memory Capacity and Mind Wandering Predict Extreme Reaction Times and Executive Control Errors

    ERIC Educational Resources Information Center

    McVay, Jennifer C.; Kane, Michael J.

    2012-01-01

    A combined experimental, individual-differences, and thought-sampling study tested the predictions of executive attention (e.g., Engle & Kane, 2004) and coordinative binding (e.g., Oberauer, Suss, Wilhelm, & Sander, 2007) theories of working memory capacity (WMC). We assessed 288 subjects' WMC and their performance and mind-wandering rates…

  18. Absolute configuration determination using enantiomeric pairs of molecularly imprinted polymers.

    PubMed

    Meador, Danielle S; Spivak, David A

    2014-03-07

    A new method for determination of absolute configuration (AC) is demonstrated using an enantiomeric pair of molecularly imprinted polymers, referred to as "DuoMIPs". The ratio of HPLC capacity factors (k') for the analyte on each of the DuoMIPs is defined as the γ factor and can be used to determine AC when above 1.2. A mnemonic based on the complementary binding geometry of the DuoMIPs was used to aid in understanding and prediction of AC.
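
    The γ-factor computation described above is simple enough to sketch. The retention times and the decision strings below are illustrative only, and a real AC assignment relies on the authors' binding-geometry mnemonic, not on the placeholder labels used here.

    ```python
    def capacity_factor(t_retention, t_dead):
        # HPLC capacity factor: k' = (t_R - t_0) / t_0
        return (t_retention - t_dead) / t_dead

    def assign_configuration(k_mip_a, k_mip_b, threshold=1.2):
        # gamma factor: ratio of the analyte's capacity factors on the
        # enantiomeric pair of imprinted polymers (the "DuoMIPs");
        # an assignment is attempted only when gamma exceeds the threshold
        gamma = max(k_mip_a, k_mip_b) / min(k_mip_a, k_mip_b)
        if gamma < threshold:
            return gamma, None  # below 1.2: no reliable assignment
        stronger = "MIP-A" if k_mip_a > k_mip_b else "MIP-B"
        return gamma, stronger  # which imprint retains the analyte more
    ```

    With k′ = 4.0 on one polymer and k′ = 2.0 on the other, γ = 2.0 > 1.2, so the mnemonic could then be consulted to read off the configuration.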

  19. The use of X-ray crystallography to determine absolute configuration.

    PubMed

    Flack, H D; Bernardinelli, G

    2008-05-15

    Essential background on the determination of absolute configuration by way of single-crystal X-ray diffraction (XRD) is presented. The use and limitations of an internal chiral reference are described. The physical model underlying the Flack parameter is explained. Absolute structure and absolute configuration are defined and their similarities and differences are highlighted. The necessary conditions on the Flack parameter for satisfactory absolute-structure determination are detailed. The symmetry and purity conditions for absolute-configuration determination are discussed. The physical basis of resonant scattering is briefly presented and the insights obtained from a complete derivation of a Bijvoet intensity ratio by way of the mean-square Friedel difference are exposed. The requirements on least-squares refinement are emphasized. The topics of right-handed axes, XRD intensity measurement, software, crystal-structure evaluation, errors in crystal structures, and compatibility of data in their relation to absolute-configuration determination are described. Characterization of the compounds and crystals by the physicochemical measurement of optical rotation, CD spectra, and enantioselective chromatography are presented. Some simple and some complex examples of absolute-configuration determination using combined XRD and CD measurements, using XRD and enantioselective chromatography, and in multiply-twinned crystals clarify the technique. The review concludes with comments on absolute-configuration determination from light-atom structures.

  20. Absolute optical metrology : nanometers to kilometers

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.

    2005-01-01

    We provide an overview of developments in the field of high-accuracy absolute optical metrology, with emphasis on space-based applications. Specific work on the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor is described along with novel applications of the sensor.

  1. ON A SUFFICIENT CONDITION FOR ABSOLUTE CONTINUITY.

    DTIC Science & Technology

    The formulation of a condition which yields absolute continuity when combined with continuity and bounded variation is the problem considered in the... Briefly, the formulation is achieved through a discussion which develops a proof by contradiction of a sufficiency theorem for absolute continuity which uses in its hypothesis the conditions of continuity and bounded variation.

  2. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
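
    As a concrete illustration of the statistic being advocated, here is a minimal sketch of the mean absolute deviation, together with a hypothetical "MAD effect size" (mean difference divided by the MAD of the combined groups, by analogy with Cohen's d; Gorard's exact definition may differ from this sketch).

    ```python
    def mean_absolute_deviation(values):
        # average absolute distance of each observation from the mean;
        # simpler than the standard deviation and less dominated by outliers
        m = sum(values) / len(values)
        return sum(abs(v - m) for v in values) / len(values)

    def mad_effect_size(group_a, group_b):
        # difference in group means scaled by the MAD of the pooled data
        # (an assumed analogue of Cohen's d, for illustration only)
        pooled = list(group_a) + list(group_b)
        mean_a = sum(group_a) / len(group_a)
        mean_b = sum(group_b) / len(group_b)
        return (mean_a - mean_b) / mean_absolute_deviation(pooled)
    ```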

  3. Monolithically integrated absolute frequency comb laser system

    SciTech Connect

    Wanke, Michael C.

    2016-07-12

    Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.

  4. Absolute instability of the Gaussian wake profile

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Aggarwal, Arun K.

    1987-01-01

    Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.

  5. A mathematical biologist's guide to absolute and convective instability.

    PubMed

    Sherratt, Jonathan A; Dagbovie, Ayawoa S; Hilker, Frank M

    2014-01-01

    Mathematical models have been highly successful at reproducing the complex spatiotemporal phenomena seen in many biological systems. However, the ability to numerically simulate such phenomena currently far outstrips detailed mathematical understanding. This paper reviews the theory of absolute and convective instability, which has the potential to redress this imbalance in some cases. In spatiotemporal systems, unstable steady states subdivide into two categories. Those that are absolutely unstable are not relevant in applications except as generators of spatial or spatiotemporal patterns, but convectively unstable steady states can occur as persistent features of solutions. The authors explain the concepts of absolute and convective instability, and also the related concepts of remnant and transient instability. They give examples of their use in explaining qualitative transitions in solution behaviour. They then describe how to distinguish different types of instability, focussing on the relatively new approach of the absolute spectrum. They also discuss the use of the theory for making quantitative predictions on how spatiotemporal solutions change with model parameters. The discussion is illustrated throughout by numerical simulations of a model for river-based predator-prey systems.

  6. Absolute length measurement using manually decided stereo correspondence for endoscopy

    NASA Astrophysics Data System (ADS)

    Sasaki, M.; Koishi, T.; Nakaguchi, T.; Tsumura, N.; Miyake, Y.

    2009-02-01

    In recent years, various kinds of endoscopes have been developed and are widely used for endoscopic biopsy, endoscopic operations and endoscopy. The size of an inflammatory part is important in determining a method of medical treatment. However, it is not easy to measure the absolute size of inflammatory parts such as ulcers, cancers and polyps from the endoscopic image. It is therefore necessary to measure the size of those parts during endoscopy. In this paper, we propose a new method to measure the absolute length in a straight line between two arbitrary points, based on photogrammetry, using an endoscope with a magnetic tracking sensor which gives the camera position and angle. In this method, the stereo-corresponding points between two endoscopic images are determined by the endoscopist without any apparatus of projection and calculation to find the stereo correspondences; the absolute length can then be calculated on the basis of photogrammetry. An evaluation experiment using a checkerboard showed that the errors of the measurements are less than 2% of the target length when the baseline is sufficiently long.
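
    The photogrammetric step can be sketched as follows, assuming the tracking sensor yields each camera centre and a viewing ray toward the manually selected corresponding point. The closest-approach midpoint method below is a standard triangulation choice, not necessarily the authors' exact formulation.

    ```python
    import numpy as np

    def triangulate(p1, d1, p2, d2):
        # closest-approach midpoint of two viewing rays p + t*d
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        w = p2 - p1
        # solve for the ray parameters (t, s) minimizing the gap
        A = np.array([[d1 @ d1, -(d1 @ d2)],
                      [d1 @ d2, -(d2 @ d2)]])
        t, s = np.linalg.solve(A, np.array([w @ d1, w @ d2]))
        return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

    def absolute_length(rays_a, rays_b):
        # distance between two 3D points, each triangulated
        # from a pair of camera positions and viewing rays
        a = triangulate(*rays_a)
        b = triangulate(*rays_b)
        return float(np.linalg.norm(a - b))
    ```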

  7. On the influence of the rotation of a corner cube reflector in absolute gravimetry

    NASA Astrophysics Data System (ADS)

    Rothleitner, Ch; Francis, O.

    2010-10-01

    Test masses of absolute gravimeters contain prism or hollow retroreflectors. A rotation of such a retroreflector during free-fall can cause a bias in the measured g-value. In particular, prism retroreflectors produce phase shifts, which cannot be eliminated. Such an error is small if the rotation occurs about the optical centre of the retroreflector; however, under certain initial conditions the error can reach the microgal level. The contribution from these rotation-induced accelerations is calculated.

  8. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids, including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
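
    The fitting machinery mentioned above, Tikhonov-regularized least squares plus a resampled ensemble whose spread serves as the error estimate, can be illustrated in miniature. The data, regularization strength, and ensemble size below are invented for the sketch and bear no relation to the actual BEEF training databases.

    ```python
    import numpy as np

    def tikhonov_fit(A, y, lam):
        # Tikhonov (ridge) regularized least squares:
        # solve (A^T A + lam * I) x = A^T y
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    def bootstrap_ensemble(A, y, lam, n_members=200, seed=0):
        # refit on bootstrap resamples of the training data; the spread
        # of the resulting ensemble plays the role of the BEEF-style
        # error estimate on predicted quantities
        rng = np.random.default_rng(seed)
        members = []
        for _ in range(n_members):
            idx = rng.integers(0, len(y), len(y))
            members.append(tikhonov_fit(A[idx], y[idx], lam))
        return np.array(members)
    ```

    The ensemble standard deviation of a predicted quantity, rather than a single best-fit value, is then what gets reported as the error bar.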

  9. Absolute efficiency measurements with the 10B based Jalousie detector

    NASA Astrophysics Data System (ADS)

    Modzel, G.; Henske, M.; Houben, A.; Klein, M.; Köhli, M.; Lennert, P.; Meven, M.; Schmidt, C. J.; Schmidt, U.; Schweika, W.

    2014-04-01

    The 10B based Jalousie detector is a replacement for 3He counter tubes, which are nowadays less affordable for large area detectors due to the 3He crisis. In this paper we investigate and verify the performance of the new 10B based detector concept and its adoption for the POWTEX diffractometer, which is designed for the detection of thermal neutrons with predicted detection efficiencies of 75-50% for neutron energies of 10-100 meV, respectively. The predicted detection efficiency has been verified by absolute measurements using neutrons with a wavelength of 1.17 Å (59 meV).

  10. The absolute energy flux envelopes of B type stars.

    NASA Technical Reports Server (NTRS)

    Underhill, A. B.

    1972-01-01

    Absolute energy flux envelopes covering the region of 1100 to 6000 A for main-sequence stars of types B3, B7 and A0 derived from published, ground-based observations and from spectrum scans with OAO-II are presented. These flux envelopes are compared with the predicted flux envelopes from lightly line-blanketed model atmospheres. The line blanketing at wavelengths shorter than 3000 A is severe, about one-half the predicted light being observed at 1600 A. These results demonstrate that a model which represents well the observed visible spectrum of a star may fail seriously for representing the ultraviolet spectrum.

  11. Automatic section thickness determination using an absolute gradient focus function.

    PubMed

    Elozory, D T; Kramer, K A; Chaudhuri, B; Bonam, O P; Goldgof, D B; Hall, L O; Mouton, P R

    2012-12-01

    Quantitative analysis of microstructures using computerized stereology systems is an essential tool in many disciplines of bioscience research. Section thickness determination in current nonautomated approaches requires manual location of the upper and lower surfaces of tissue sections. In contrast to conventional autofocus functions that locate the optimally focused optical plane using the global maximum on a focus curve, this study identified two sharp 'knees' on the focus curve as the transitions from unfocused to focused optical planes. Analysis of 14 grey-scale focus functions showed that the thresholded absolute gradient function was best for finding detectable bends that closely correspond to the bounding optical planes at the upper and lower tissue surfaces. Modifications to this function generated four novel functions that outperformed the original. The 'modified absolute gradient count' function outperformed all others, with an average error of 0.56 μm on a test set of images similar to the training set and an average error of 0.39 μm on a test set comprised of images captured from a different case, that is, different staining methods on a different brain region from a different subject rat. We describe a novel algorithm that allows automatic section thickness determination based on just-out-of-focus planes, a prerequisite for fully automatic computerized stereology.
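
    A minimal sketch of the thresholded absolute gradient focus function follows, plus a deliberately crude knee detector standing in for the paper's method of locating the two bends. The threshold, the cutoff fraction, and both function names are our assumptions.

    ```python
    def thresholded_absolute_gradient(image, threshold):
        # sum of horizontal neighbour differences |I(x+1, y) - I(x, y)|
        # that meet the threshold; larger values indicate sharper focus
        total = 0.0
        for row in image:
            for left, right in zip(row, row[1:]):
                diff = abs(right - left)
                if diff >= threshold:
                    total += diff
        return total

    def find_knees(focus_values, frac=0.5):
        # crude knee detector: first and last optical planes whose focus
        # score rises above a fraction of the curve's maximum; the span
        # between them approximates the in-focus (tissue) section
        cutoff = frac * max(focus_values)
        above = [i for i, v in enumerate(focus_values) if v >= cutoff]
        return above[0], above[-1]
    ```

    Multiplying the knee-to-knee plane count by the z-step of the stage would then give a section thickness estimate.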

  12. Absolute quantitation of protein posttranslational modification isoform.

    PubMed

    Yang, Zhu; Li, Ning

    2015-01-01

    Mass spectrometry has been widely applied in the characterization and quantification of proteins from complex biological samples. Because absolute amounts of proteins are needed in the construction of mathematical models for molecular systems of various biological phenotypes and phenomena, a number of quantitative proteomic methods have been adopted to measure absolute quantities of proteins using mass spectrometry. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) coupled with internal peptide standards, i.e., stable isotope-coded peptide dilution series, which originated in the field of analytical chemistry, has become a widely applied method in absolute quantitative proteomics research. This approach provides more and more absolute protein quantitation results of high confidence. As quantitative study of posttranslational modification (PTM), which modulates the biological activity of proteins, is crucial for biological science, and each isoform may contribute a unique biological function, degradation, and/or subcellular location, the absolute quantitation of protein PTM isoforms has become more relevant to its biological significance. In order to obtain the absolute cellular amount of a PTM isoform of a protein accurately, the impacts of protein fractionation, protein enrichment, and proteolytic digestion yield should be taken into consideration, and those effects before differentially stable isotope-coded PTM peptide standards are spiked into sample peptides have to be corrected. Assisted by stable isotope-labeled peptide standards, the absolute quantitation of isoforms of posttranslationally modified protein (AQUIP) method takes all these factors into account and determines the absolute amount of a protein PTM isoform from the absolute amount of the protein of interest and the PTM occupancy at the site of the protein. The absolute amount of the protein of interest is inferred by quantifying both the absolute amounts of a few PTM
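
    The final bookkeeping step described above, amount of isoform = amount of protein × site occupancy, reduces to a one-line calculation. The function below is a hedged sketch; the names, units, and the optional yield correction are ours, not the AQUIP method's actual workflow.

    ```python
    def ptm_isoform_amount(protein_amount_fmol, occupancy, recovery=1.0):
        # absolute amount of the modified isoform: total protein amount
        # times fractional PTM occupancy at the site; `recovery` is an
        # assumed correction factor for losses (fractionation, enrichment,
        # digestion yield) incurred before the standards are spiked in
        if not 0.0 <= occupancy <= 1.0:
            raise ValueError("occupancy must be a fraction between 0 and 1")
        return protein_amount_fmol * occupancy / recovery
    ```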

  13. Analysis of absolute flatness testing in sub-stitching interferometer

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Xu, Fuchao; Xie, Weimin; Xing, Tingwen

    2016-09-01

    Sub-aperture stitching is an effective way to extend the lateral and vertical dynamic range of a conventional interferometer. High test accuracy can be achieved by removing the error of the reference surface with an absolute testing method. When the testing accuracy (repeatability and reproducibility) approaches 1 nm, factors other than the reference surface also affect the measuring accuracy, such as the environment, zoom magnification, stitching precision, tooling and fixturing, the characteristics of the optical materials, and so on. In a class-1000 cleanroom, we established a well-controlled environment system with long-term stability: the temperature is controlled at 22 °C ± 0.02 °C, and the humidity and noise are kept within a certain range. We set up a stitching system in this cleanroom. A vibration testing system is used to monitor vibration, and an air pressure testing system is also used. In the motion system, we keep the tilt error to no more than 4 arcseconds to reduce the error. The angle error can be tested with an autocollimator and a double grating reading head.

  14. Absolute realization of low BRDF value

    NASA Astrophysics Data System (ADS)

    Liu, Zilong; Liao, Ningfang; Li, Ping; Wang, Yu

    2010-10-01

    Low BRDF values are widely used in many critical domains such as space and military affairs. These values are below 0.1 sr⁻¹, so their absolute realization is the most critical issue in the absolute measurement of BRDF. To develop an absolute-value realization theory of BRDF, we define an arithmetic operator of BRDF and derive an absolute measurement equation of BRDF based on radiance. This is a new theoretical method for solving the realization problem of low BRDF values. The method is realized on a self-designed common double-orientation structure in space. By designing an additional structure to extend the range of the measurement system, together with control and processing software, the absolute realization of low BRDF values is achieved. A material of low BRDF value was measured in this system, and the spectral BRDF values are shown for different angles over the whole space; all these values are below 0.4 sr⁻¹. This process is a representative procedure for the measurement of low BRDF values. A corresponding uncertainty analysis of the measurement data is given, based on the new theory of absolute realization and the performance of the measurement system. The relative expanded uncertainty of the measurement data is 0.078. This uncertainty analysis is suitable for all measurements using the new theory of absolute realization and the corresponding measurement system.

  15. Networks of Absolute Calibration Stars for SST, AKARI, and WISE

    NASA Astrophysics Data System (ADS)

    Cohen, M.

    2007-04-01

    I describe the Cohen-Walker-Witteborn (CWW) network of absolute calibration stars built to support ground-based, airborne, and space-based sensors, and how they are used to calibrate instruments on the Spitzer Space Telescope (SST) and Japan's AKARI (formerly ASTRO-F), and to support NASA's planned MidEx WISE (the Wide-field Infrared Survey Explorer). All missions using this common calibration share a self-consistent framework embracing photometry and low-resolution spectroscopy. CWW also underpins COBE/DIRBE, several instruments used on the Kuiper Airborne Observatory (KAO), the joint Japan-USA "IR Telescope in Space" (IRTS) near-IR and mid-IR spectrometers, the European Space Agency's IR Space Observatory (ISO), and the US Department of Defense's Midcourse Space eXperiment (MSX). This calibration now spans the far-UV to mid-infrared range with Sirius (one specific Kurucz synthetic spectrum) as basis, and zero magnitude defined from another Kurucz spectrum intended to represent an ideal Vega (not the actual star with its pole-on orientation and mid-infrared dust excess emission). Precision 4-29 μm radiometric measurements on MSX validate CWW's absolute Kurucz spectrum of Sirius, the primary, and a set of bright K/MIII secondary standards. Sirius is measured to be 1.0% higher than predicted. CWW's definitions of IR zero magnitudes lie within 1.1% absolute of MSX measurements. The US Air Force Research Laboratory's independent analysis of on-orbit MSX stellar observations compared with emissive reference spheres shows that CWW primary and empirical secondary spectra lie well within the ±1.45% absolute uncertainty associated with this 15-year effort. Our associated absolute calibration for the InfraRed Array Camera (IRAC) on the SST lies within ~2% of the recent extension of the calibration of the Hubble Space Telescope's STIS instrument to NICMOS (Bohlin, these Proceedings), showing the closeness of these two independent approaches to calibration.

  16. Absolute flatness testing of skip-flat interferometry by matrix analysis in polar coordinates.

    PubMed

    Han, Zhi-Gang; Yin, Lu; Chen, Lei; Zhu, Ri-Hong

    2016-03-20

    A new method utilizing matrix analysis in polar coordinates has been presented for absolute testing of skip-flat interferometry. The retrieval of the absolute profile mainly includes three steps: (1) transform the wavefront maps of the two cavity measurements into data in polar coordinates; (2) retrieve the profile of the reflective flat in polar coordinates by matrix analysis; and (3) transform the profile of the reflective flat back into data in Cartesian coordinates and retrieve the profile of the sample. Simulation of synthetic surface data has been provided, showing the capability of the approach to achieve an accuracy of the order of 0.01 nm RMS. The absolute profile can be retrieved by a set of closed mathematical formulas without polynomial fitting of wavefront maps or the iterative evaluation of an error function, making the new method more efficient for absolute testing.

  17. The importance and attainment of accurate absolute radiometric calibration

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1984-01-01

    The importance of accurate absolute radiometric calibration is discussed by reference to the needs of those wishing to validate or use models describing the interaction of electromagnetic radiation with the atmosphere and earth surface features. The in-flight calibration methods used for the Landsat Thematic Mapper (TM) and the Systeme Probatoire d'Observation de la Terre, Haute Resolution visible (SPOT/HRV) systems are described and their limitations discussed. The questionable stability of in-flight absolute calibration methods suggests the use of a radiative transfer program to predict the apparent radiance, at the entrance pupil of the sensor, of a ground site of measured reflectance imaged through a well characterized atmosphere. The uncertainties of such a method are discussed.

  18. A New Gimmick for Assigning Absolute Configuration.

    ERIC Educational Resources Information Center

    Ayorinde, F. O.

    1983-01-01

    A five-step procedure is provided to help students make the assignment of absolute configuration less bothersome. Examples for both single (2-butanol) and multiple chiral carbon (3-chloro-2-butanol) molecules are included. (JN)

  19. Magnifying absolute instruments for optically homogeneous regions

    SciTech Connect

    Tyc, Tomas

    2011-09-15

    We propose a class of magnifying absolute optical instruments with a positive isotropic refractive index. They create magnified stigmatic images, either virtual or real, of optically homogeneous three-dimensional spatial regions within geometrical optics.

  20. The Simplicity Argument and Absolute Morality

    ERIC Educational Resources Information Center

    Mijuskovic, Ben

    1975-01-01

    In this paper the author has maintained that there is a similarity of thought to be found in the writings of Cudworth, Emerson, and Husserl in his investigation of an absolute system of morality. (Author/RK)

  1. Absolute Radiometric Calibration of KOMPSAT-3A

    NASA Astrophysics Data System (ADS)

    Ahn, H. Y.; Shin, D. Y.; Kim, J. S.; Seo, D. C.; Choi, C. U.

    2016-06-01

    This paper presents a vicarious radiometric calibration of the Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) performed by the Korea Aerospace Research Institute (KARI) and the Pukyong National University Remote Sensing Group (PKNU RSG) in 2015. The primary stages of this study are summarized as follows: (1) A field campaign to determine radiometrically calibrated target fields was undertaken in Mongolia and South Korea. Surface reflectance data obtained in the campaign were input to a radiative transfer code that predicted at-sensor radiance. Through this process, equations and parameters were derived for the KOMPSAT-3A sensor to enable the conversion of calibrated DN to physical units, such as at-sensor radiance or TOA reflectance. (2) To validate the absolute calibration coefficients for the KOMPSAT-3A sensor, we performed a radiometric validation comparing KOMPSAT-3A and Landsat-8 TOA reflectance over one of the six PICS (Libya 4). Correlations between top-of-atmosphere (TOA) radiances and the spectral band responses of the KOMPSAT-3A sensors at the Zuunmod, Mongolia and Goheung, South Korea sites were significant for the multispectral bands. The average difference in TOA reflectance between KOMPSAT-3A and Landsat-8 images over the Libya 4 site in the red-green-blue (RGB) region was under 3%, whereas in the NIR band the TOA reflectance of KOMPSAT-3A was lower than that of Landsat-8 due to the difference in the band passes of the two sensors. The KOMPSAT-3A sensor includes a band pass near 940 nm that can be strongly absorbed by water vapor and therefore displayed low reflectance. To overcome this, we need to undertake a detailed analysis using rescale methods, such as the spectral bandwidth adjustment factor.
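
    Step (1) ends in a standard conversion of calibrated DN to physical units; a generic sketch follows, using the usual linear gain/offset model and the textbook TOA-reflectance formula. The gain, offset, and ESUN values in the example are invented placeholders, not KOMPSAT-3A coefficients.

    ```python
    import math

    def dn_to_radiance(dn, gain, offset):
        # linear conversion of calibrated digital numbers to at-sensor
        # spectral radiance: L = gain * DN + offset
        return gain * dn + offset

    def toa_reflectance(radiance, esun, sun_elevation_deg,
                        earth_sun_dist_au=1.0):
        # textbook TOA reflectance:
        # rho = pi * L * d^2 / (ESUN * cos(theta_s)),
        # with theta_s the solar zenith angle (90 deg - sun elevation)
        theta_s = math.radians(90.0 - sun_elevation_deg)
        return (math.pi * radiance * earth_sun_dist_au ** 2
                / (esun * math.cos(theta_s)))
    ```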

  2. Pantomime-Grasping: Advance Knowledge of Haptic Feedback Availability Supports an Absolute Visuo-Haptic Calibration

    PubMed Central

    Davarpanah Jazi, Shirin; Heath, Matthew

    2016-01-01

    An emerging issue in movement neurosciences is whether haptic feedback influences the nature of the information supporting a simulated grasping response (i.e., pantomime-grasping). In particular, recent work by our group contrasted pantomime-grasping responses performed with (i.e., PH+ trials) and without (i.e., PH− trials) terminal haptic feedback in separate blocks of trials. Results showed that PH− trials were mediated via relative visual information. In contrast, PH+ trials showed evidence of an absolute visuo-haptic calibration—a finding attributed to an error signal derived from a comparison between expected and actual haptic feedback (i.e., an internal forward model). The present study examined whether advanced knowledge of haptic feedback availability influences the aforementioned calibration process. To that end, PH− and PH+ trials were completed in separate blocks (i.e., the feedback schedule used in our group’s previous study) and a block wherein PH− and PH+ trials were randomly interleaved on a trial-by-trial basis (i.e., random feedback schedule). In other words, the random feedback schedule precluded participants from predicting whether haptic feedback would be available at the movement goal location. We computed just-noticeable-difference (JND) values to determine whether responses adhered to, or violated, the relative psychophysical principles of Weber’s law. Results for the blocked feedback schedule replicated our group’s previous work, whereas in the random feedback schedule PH− and PH+ trials were supported via relative visual information. Accordingly, we propose that a priori knowledge of haptic feedback is necessary to support an absolute visuo-haptic calibration. Moreover, our results demonstrate that the presence and expectancy of haptic feedback is an important consideration in contrasting the behavioral and neural properties of natural and simulated grasping. PMID:27199718

  3. Pantomime-Grasping: Advance Knowledge of Haptic Feedback Availability Supports an Absolute Visuo-Haptic Calibration.

    PubMed

    Davarpanah Jazi, Shirin; Heath, Matthew

    2016-01-01

    An emerging issue in movement neurosciences is whether haptic feedback influences the nature of the information supporting a simulated grasping response (i.e., pantomime-grasping). In particular, recent work by our group contrasted pantomime-grasping responses performed with (i.e., PH+ trials) and without (i.e., PH- trials) terminal haptic feedback in separate blocks of trials. Results showed that PH- trials were mediated via relative visual information. In contrast, PH+ trials showed evidence of an absolute visuo-haptic calibration-a finding attributed to an error signal derived from a comparison between expected and actual haptic feedback (i.e., an internal forward model). The present study examined whether advanced knowledge of haptic feedback availability influences the aforementioned calibration process. To that end, PH- and PH+ trials were completed in separate blocks (i.e., the feedback schedule used in our group's previous study) and a block wherein PH- and PH+ trials were randomly interleaved on a trial-by-trial basis (i.e., random feedback schedule). In other words, the random feedback schedule precluded participants from predicting whether haptic feedback would be available at the movement goal location. We computed just-noticeable-difference (JND) values to determine whether responses adhered to, or violated, the relative psychophysical principles of Weber's law. Results for the blocked feedback schedule replicated our group's previous work, whereas in the random feedback schedule PH- and PH+ trials were supported via relative visual information. Accordingly, we propose that a priori knowledge of haptic feedback is necessary to support an absolute visuo-haptic calibration. Moreover, our results demonstrate that the presence and expectancy of haptic feedback is an important consideration in contrasting the behavioral and neural properties of natural and simulated grasping.

  4. Absolute cross sections of compound nucleus reactions

    NASA Astrophysics Data System (ADS)

    Capurro, O. A.

    1993-11-01

    The program SEEF is a Fortran IV computer code for the extraction of absolute cross sections of compound nucleus reactions. When the evaporation residue is fed by its parents, only cumulative cross sections will be obtained from off-line gamma ray measurements. But, if one has the parent excitation function (experimental or calculated), this code will make it possible to determine absolute cross sections of any exit channel.
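
    The parent-feeding correction described above can be sketched as a subtraction of the parent's contribution from the cumulative yield. This is a simplified one-parent illustration with made-up numbers, not SEEF's actual Fortran interface:

```python
def independent_cross_section(sigma_cumulative, sigma_parent, feeding_fraction=1.0):
    """Remove the parent's feeding contribution from a cumulative cross
    section obtained from off-line gamma-ray measurements.
    feeding_fraction is the decay branching of the parent into the residue."""
    return sigma_cumulative - feeding_fraction * sigma_parent

# Hypothetical cross sections in millibarn:
print(independent_cross_section(120.0, 45.0))   # 75.0 mb
```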

  5. Kelvin and the absolute temperature scale

    NASA Astrophysics Data System (ADS)

    Erlichson, Herman

    2001-07-01

    This paper describes the absolute temperature scale of Kelvin (William Thomson). Kelvin found that Carnot's axiom that heat is a conserved quantity had to be abandoned; nevertheless, he found that Carnot's fundamental work on heat engines was correct. Using the concept of a Carnot engine, Kelvin found that Q1/Q2 = T1/T2. Absolute temperatures are not read directly from thermometers; they are calculated temperatures.
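
    In the modern formulation, the Carnot ratio fixes temperatures once a single reference value is assigned; with the triple point of water set to 273.16 K, a measured heat ratio yields the absolute temperature:

```latex
\frac{Q_1}{Q_2} = \frac{T_1}{T_2}
\qquad\Longrightarrow\qquad
T = 273.16\,\mathrm{K}\;\frac{Q}{Q_{\mathrm{tr}}}
```

    where $Q$ and $Q_{\mathrm{tr}}$ are the heats a reversible engine exchanges at $T$ and at the triple-point temperature.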

  6. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.

  7. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  8. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on the response level and checks it against global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  9. Landsat-7 ETM+ radiometric stability and absolute calibration

    USGS Publications Warehouse

    Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, P.

    2002-01-01

    Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% (95% C.I.) per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. ETM+ stability is also indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/(m² sr µm) soon after launch. This offset was corrected within the U.S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error, with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate for use as a reference source for radiometric cross-calibration of other land remote sensing satellite systems.

  10. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

    The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation of the upper bound of absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas, it increased with an increase in environmental integrated radiant flux density. Change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals: 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  11. Jasminum flexile flower absolute from India--a detailed comparison with three other jasmine absolutes.

    PubMed

    Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef

    2009-09-01

    Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed.

  12. Neurophysiological responses to gun-shooting errors.

    PubMed

    Xu, Xiaowen; Inzlicht, Michael

    2015-03-01

    The present study investigated the neural responses to errors in a shooting game - and how these neural responses may relate to behavioral performance - by examining the ERP components related to error detection (error-related negativity; ERN) and error awareness (error-related positivity; Pe). The participants completed a Shooter go/no-go task, which required them to shoot at armed targets using a gaming gun, and avoid shooting innocent non-targets. The amplitude of the ERN and Pe was greater for shooting errors than correct shooting responses. The ERN and Pe amplitudes elicited by incorrect shooting appeared to have good internal reliability. The ERN and Pe amplitudes elicited by shooting behaviors also predicted better behavioral sensitivity towards shoot/don't-shoot stimuli. These results suggest that it is possible to obtain online brain response measures to shooting responses and that neural responses to shooting are predictive of behavioral responses.

  13. Universal Cosmic Absolute and Modern Science

    NASA Astrophysics Data System (ADS)

    Kostro, Ludwik

    The official sciences, especially the natural sciences, observe in their research the principle of methodic naturalism, i.e., they consider all phenomena as entirely natural and therefore never adduce supernatural entities and forces in their scientific explanations. The purpose of this paper is to show that modern science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e., the entire Nature as a Whole, that justifies this scientific methodic naturalism. Since this Natural All-in Being is one and only, It should be considered the scientifically justified Natural Absolute of science and should be called, in my opinion, the Universal Cosmic Absolute of Modern Science. It will also be shown that the Universal Cosmic Absolute is ontologically enormously stratified and is, in its ultimate, i.e., most fundamental, stratum, trans-reistic and trans-personal. This means that in its basic stratum It is neither a Thing nor a Person, although It contains in Itself all things and persons, along with all other sentient and conscious individuals. At the turn of the 20th century, science began to look for a theory of everything, a final theory, a master theory. In my opinion, the natural Universal Cosmic Absolute will constitute in such a theory the radical, all-penetrating Ultimate Basic Reality and will step by step supplant the traditional supernatural personal Absolute.

  14. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections

    NASA Astrophysics Data System (ADS)

    Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi

    2017-02-01

    Various kinds of fringe order errors may occur in absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel by pixel with repeated searches, which is inefficient for applications. To improve the efficiency of correcting multiple successive fringe order errors, in this paper we propose a method that simplifies error detection and correction by exploiting the stepwise increasing property of the fringe order. In the proposed method, the numbers of pixels in each step are estimated to find the possible true fringe order values, so that the repeated searches required to detect multiple successive errors can be avoided and errors corrected efficiently. The effectiveness of our proposed method is validated by experimental results.
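
    The baseline fringe-order computation that such corrections refine can be sketched with generic two-frequency temporal unwrapping (variable names and the synthetic check are assumptions, not the paper's notation):

```python
import numpy as np

def absolute_phase(phi_high, phi_low_unwrapped, freq_ratio):
    """Recover the absolute high-frequency phase from its wrapped phase and
    an already-unwrapped low-frequency phase. The fringe order k is the
    integer that increases stepwise across the field; errors in k are what
    fringe-order correction methods repair."""
    k = np.round((freq_ratio * phi_low_unwrapped - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic check: true absolute phase 25.0 rad, frequency ratio 8.
true_phase = 25.0
phi_high = np.mod(true_phase, 2 * np.pi)      # wrapped measurement
phi_low = true_phase / 8                      # ideal low-frequency phase
print(absolute_phase(phi_high, phi_low, 8))   # ~25.0
```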

  15. On the convective-absolute nature of river bedform instabilities

    NASA Astrophysics Data System (ADS)

    Vesipa, Riccardo; Camporeale, Carlo; Ridolfi, Luca; Chomaz, Jean Marc

    2014-12-01

    River dunes and antidunes are induced by the morphological instability of stream-sediment boundary. Such bedforms raise a number of subtle theoretical questions and are crucial for many engineering and environmental problems. Despite their importance, the absolute/convective nature of the instability has never been addressed. The present work fills this gap as we demonstrate, by the cusp map method, that dune instability is convective for all values of the physical control parameters, while the antidune instability exhibits both behaviors. These theoretical predictions explain some previous experimental and numerical observations and are important to correctly plan flume experiments, numerical simulations, paleo-hydraulic reconstructions, and river works.

  16. Absolute Calibration of the Magnetic Field Measurement for Muon g-2

    NASA Astrophysics Data System (ADS)

    Farooq, Midhat; Chupp, Tim; Muon g-2 Collaboration

    2017-01-01

    The muon g-2 experiment at Fermilab (E989) investigates the >3σ discrepancy between the standard model prediction and the current experimental measurement of the muon magnetic moment anomaly, aμ = (g-2)/2. The effort requires a measurement of the 1.45 T magnetic field of the muon storage ring precise to 70 ppb. The final measurement will employ multiple absolute calibration probes: two water probes and a 3He probe. The 3He probe offers a cross-check of the water probes with different systematic corrections, adding a level of confidence to the measurement. A low-field 3He probe was developed at the Univ. of Michigan by employing a method called MEOP (metastability-exchange optical pumping) for the hyper-polarization of 3He gas, followed by NMR to determine the frequency proportional to the magnetic field in which the probe is placed. A modified probe design for operation under high fields will be tested at Argonne National Lab. Future development also involves the study of the systematic uncertainties to attain the error budget of <30 ppb for the calibration. Next, the calibration from the probes will be transferred to g-2 through several steps of a calibration chain, ending in the final step of calibrating the NMR probes which measure the field in the muon storage ring at Fermilab. NSF PHY-1506021.

  17. Quantitative standards for absolute linguistic universals.

    PubMed

    Piantadosi, Steven T; Gibson, Edward

    2014-01-01

    Absolute linguistic universals are often justified by cross-linguistic analysis: If all observed languages exhibit a property, the property is taken to be a likely universal, perhaps specified in the cognitive or linguistic systems of language learners and users. In many cases, these patterns are then taken to motivate linguistic theory. Here, we show that cross-linguistic analysis will very rarely be able to statistically justify absolute, inviolable patterns in language. We formalize two statistical methods--frequentist and Bayesian--and show that in both it is possible to find strict linguistic universals, but that the number of independent languages necessary to do so is generally unachievable. This suggests that methods other than typological statistics are necessary to establish absolute properties of human language, and thus that many of the purported universals in linguistics have not received sufficient empirical justification.
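
    The frequentist side of the argument can be illustrated with the standard zero-failure bound: if all N independent languages exhibit a property, the violation rate p is only constrained by (1 - p)^N ≤ 1 - confidence, so ruling out even rare exceptions requires very many languages. A sketch (my own illustration, not the authors' formalization):

```python
import math

def languages_needed(max_violation_rate, confidence=0.95):
    """Smallest number N of independent, property-exhibiting languages
    needed to bound the violation rate below max_violation_rate at the
    given confidence, from (1 - p)**N <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_violation_rate))

print(languages_needed(0.05))   # 59 languages to bound violations below 5%
print(languages_needed(0.01))   # 299 languages to bound violations below 1%
```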

  18. The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Walker, Eric L.

    2011-01-01

    The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind-tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing, which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable, with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
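
    The effect of the correlation coefficient on increment uncertainty follows from the variance of a difference: sigma_delta^2 = sigma_A^2 + sigma_B^2 - 2*rho*sigma_A*sigma_B. A small sketch with illustrative numbers:

```python
import math

def increment_sigma(sigma_a, sigma_b, rho):
    """Standard error of an increment A - B when the two results carry
    systematic errors with correlation rho. As rho -> 1 (with matched
    sigmas), the systematics subtract out and the uncertainty vanishes."""
    return math.sqrt(sigma_a**2 + sigma_b**2 - 2.0 * rho * sigma_a * sigma_b)

print(increment_sigma(1.0, 1.0, 0.0))    # ~1.414: uncorrelated errors add
print(increment_sigma(1.0, 1.0, 0.95))   # ~0.316: strong correlation cancels
```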

  19. Full field imaging based instantaneous hyperspectral absolute refractive index measurement

    SciTech Connect

    Baba, Justin S; Boudreaux, Philip R

    2012-01-01

    Multispectral refractometers typically measure refractive index (RI) at discrete monochromatic wavelengths via a serial process. We report on the demonstration of a white-light, full-field-imaging-based refractometer capable of instantaneous multispectral measurement of the absolute RI of clear liquid/gel samples across the entire visible light spectrum. The broad optical bandwidth refractometer is capable of hyperspectral measurement of RI in the range 1.30-1.70 between 400 nm and 700 nm, with a maximum error of 0.0036 units (0.24% of actual) at 414 nm for an RI = 1.50 sample. We present system design and calibration method details as well as results from a system validation sample.

  20. Absolute positioning using DORIS tracking of the SPOT-2 satellite

    NASA Technical Reports Server (NTRS)

    Watkins, M. M.; Ries, J. C.; Davis, G. W.

    1992-01-01

    The ability of the French DORIS system operating on the SPOT-2 satellite to provide absolute site positioning at the 20-30-centimeter level using 80 d of data is demonstrated. The accuracy of the vertical component is comparable to that of the horizontal components, indicating that residual troposphere error is not a limiting factor. The translation parameters indicate that the DORIS network realizes a geocentric frame to about 50 mm in each component. The considerable amount of data provided by the nearly global, all-weather DORIS network allowed the complex parameterization required to reduce the unmodeled forces acting on the low-earth satellite. Site velocities with accuracies better than 10 mm/yr should certainly be possible using the multiyear span of the SPOT series and Topex/Poseidon missions.

  1. Measurement of absolute hadronic branching fractions of D mesons

    NASA Astrophysics Data System (ADS)

    Shi, Xin

    Using 818 pb-1 of e+e- collisions recorded at the psi(3770) resonance with the CLEO-c detector at CESR, we determine absolute hadronic branching fractions of charged and neutral D mesons using a double tag technique. Among measurements for three D0 and six D+ modes, we obtain reference branching fractions B(D0 → K-pi+) = (3.906 +/- 0.021 +/- 0.062)% and B(D+ → K-pi+pi+) = (9.157 +/- 0.059 +/- 0.125)%, where the first uncertainty is statistical and the second systematic. Using an independent determination of the integrated luminosity, we also extract the cross sections sigma(e+e- → D0D¯0) = (3.650 +/- 0.017 +/- 0.083) nb and sigma(e+e- → D+D-) = (2.920 +/- 0.018 +/- 0.062) nb at a center-of-mass energy Ecm = 3774 +/- 1 MeV.
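
    The double tag technique can be sketched for a single mode reconstructed on both sides: with N_DDbar produced pairs, the single-tag and double-tag yields scale as N_ST = 2·N_DDbar·B·eff and N_DT = N_DDbar·B²·eff_DT, so the branching fraction follows from the yield ratio with N_DDbar cancelling. The yields and perfect efficiencies below are illustrative, not CLEO-c values:

```python
def double_tag_branching(n_st, n_dt, eff_st=1.0, eff_dt=1.0):
    """Double-tag estimate for one mode reconstructed on both sides:
      N_ST = 2 * N_DDbar * B * eff_st
      N_DT = N_DDbar * B**2 * eff_dt
    => B = 2 * N_DT * eff_st / (N_ST * eff_dt), independent of N_DDbar."""
    branching = 2.0 * n_dt * eff_st / (n_st * eff_dt)
    n_ddbar = n_st**2 * eff_dt / (4.0 * n_dt * eff_st**2)
    return branching, n_ddbar

# Hypothetical yields: 100000 single tags, 2000 double tags.
b, n_pairs = double_tag_branching(100000, 2000)
print(b, n_pairs)   # ~0.04 and ~1.25e6 produced pairs
```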

  2. Absolute blood velocity measured with a modified fundus camera

    NASA Astrophysics Data System (ADS)

    Duncan, Donald D.; Lemaillet, Paul; Ibrahim, Mohamed; Nguyen, Quan Dong; Hiller, Matthias; Ramella-Roman, Jessica

    2010-09-01

    We present a new method for the quantitative estimation of blood flow velocity, based on the use of the Radon transform. The specific application is the measurement of blood flow velocity in the retina. Our modified fundus camera uses illumination from a green LED and captures imagery with a high-speed CCD camera. The basic theory is presented, and typical results are shown for an in vitro flow model using blood in a capillary tube. Subsequently, representative results are shown for fundus imagery. This approach provides absolute velocity and flow direction along the vessel centerline or any lateral displacement therefrom. We also provide an error analysis allowing estimation of confidence intervals for the estimated velocity.

  3. Application of wavelet neural network model based on genetic algorithm in the prediction of high-speed railway settlement

    NASA Astrophysics Data System (ADS)

    Tang, Shihua; Li, Feida; Liu, Yintao; Lan, Lan; Zhou, Conglin; Huang, Qing

    2015-12-01

    With the advantages of high speed, large transport capacity, low energy consumption, good economic benefits and so on, high-speed railway is becoming more and more popular all over the world. Operating speeds can reach 350 kilometers per hour, which demands high safety performance. Research on the prediction of high-speed railway settlement, one of the important factors affecting the safety of high-speed railway, therefore becomes particularly important. This paper uses a genetic algorithm to search the solution space for the best result and combines it with the strong learning ability and high accuracy of the wavelet neural network, building a genetic wavelet neural network model for the prediction of high-speed railway settlement. Experiments with a back propagation neural network, a wavelet neural network and the genetic wavelet neural network show that the absolute values of the residual errors in the prediction of high-speed railway settlement based on the genetic algorithm are the smallest, which indicates that the genetic wavelet neural network is better than the other two methods. The correlation coefficient of predicted and observed values is 99.9%. Furthermore, the maximum absolute residual error, minimum absolute residual error, mean relative error and root mean squared error (RMSE) obtained by the genetic wavelet neural network are all smaller than those of the other two methods. The genetic wavelet neural network is thus both more stable and more accurate in the prediction of high-speed railway settlement.
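
    The comparison metrics quoted above can be computed directly from the residuals. A minimal sketch with hypothetical settlement values in millimeters (not the paper's data):

```python
import numpy as np

def settlement_metrics(observed, predicted):
    """Return (max |residual|, min |residual|, mean relative error, RMSE),
    the metrics used to compare the settlement predictors."""
    r = np.asarray(observed) - np.asarray(predicted)
    abs_r = np.abs(r)
    rel = abs_r / np.abs(observed)
    rmse = np.sqrt((r**2).mean())
    return abs_r.max(), abs_r.min(), rel.mean(), rmse

observed = [10.0, 12.0, 15.0, 20.0]
predicted = [9.5, 12.4, 14.8, 20.3]
print(settlement_metrics(observed, predicted))
```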

  4. The study of absolute distance measurement based on the self-mixing interference in laser diode

    NASA Astrophysics Data System (ADS)

    Wang, Ting-ting; Zhang, Chuang

    2009-07-01

    In this work, an absolute distance measurement method based on self-mixing interference is presented. The principle of the method, which uses a three-mirror-cavity equivalent model, is studied in this paper, and the mathematical model is given. Wavelength modulation of the laser beam is obtained by saw-tooth modulation of the injection current of the laser diode. The absolute distance of the external target is determined by a Fourier analysis method. The frequency of the signal from the photodiode is linearly dependent on the absolute distance, but it is also affected by temperature and fluctuations of the current source. A dual-path method that uses a reference path for absolute distance measurement has been proposed. The theoretical analysis shows that the method can eliminate errors resulting from distance-independent variations in the setup, so accuracy and stability can be improved. Simulated results show that a resolution of ±0.2 mm can be achieved for absolute distances ranging from 250 mm to 500 mm. In the same measurement range, the resolution we obtained is better than that of other absolute distance measurement systems based on self-mixing interference.
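
    The Fourier step can be sketched as follows: under a linear (saw-tooth) optical frequency sweep, the self-mixing beat frequency is f_beat = 2·L·sweep_rate/c, so the distance is read off the FFT peak. All numbers here (sweep rate, sample rate, target distance) are assumptions for the simulation, not the paper's parameters:

```python
import numpy as np

c = 3.0e8            # speed of light, m/s
sweep_rate = 3.0e13  # optical frequency sweep rate during the ramp, Hz/s
fs = 1.0e6           # sample rate of the detected signal, Hz
L_true = 0.35        # external target distance, m

# Simulate the beat signal produced by the self-mixing interference.
t = np.arange(4096) / fs
f_beat = 2.0 * L_true * sweep_rate / c   # 70 kHz for these numbers
signal = np.cos(2.0 * np.pi * f_beat * t)

# Locate the spectral peak (skipping the DC bin) and invert the relation.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_est = freqs[np.argmax(spectrum[1:]) + 1]
L_est = f_est * c / (2.0 * sweep_rate)
print(L_est)   # close to 0.35 m, limited by the FFT bin width
```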

  5. Absolute Distance Measurement with the MSTAR Sensor

    NASA Technical Reports Server (NTRS)

    Lay, Oliver P.; Dubovitsky, Serge; Peters, Robert; Burger, Johan; Ahn, Seh-Won; Steier, William H.; Fetterman, Harrold R.; Chang, Yian

    2003-01-01

    The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, making it possible to measure distance with sub-nanometer accuracy. The sensor uses a single laser in conjunction with fast phase modulators and low frequency detectors. We describe the design of the system - the principle of operation, the metrology source, beam-launching optics, and signal processing - and show results for target distances up to 1 meter. We then demonstrate how the system can be scaled to kilometer-scale distances.

  6. Position Error Covariance Matrix Validation and Correction

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
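
    Under the Gaussian assumption, a predicted covariance can be checked against realized errors via squared Mahalanobis distances, which should follow a chi-square distribution with 3 degrees of freedom (mean 3). A synthetic sketch; the covariance values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.diag([4.0, 1.0, 0.25])   # claimed position covariance (km^2), assumed
errors = rng.multivariate_normal(np.zeros(3), P, size=20000)

# If P truly represents the error distribution, the squared Mahalanobis
# distances follow a chi-square with 3 degrees of freedom (mean 3).
d2 = np.einsum('ij,jk,ik->i', errors, np.linalg.inv(P), errors)
print(d2.mean())   # ~3.0 for a well-calibrated covariance
```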

  7. Locomotor Expertise Predicts Infants' Perseverative Errors

    ERIC Educational Resources Information Center

    Berger, Sarah E.

    2010-01-01

    This research examined the development of inhibition in a locomotor context. In a within-subjects design, infants received high- and low-demand locomotor A-not-B tasks. In Experiment 1, walking 13-month-old infants followed an indirect path to a goal. In a control condition, infants took a direct route. In Experiment 2, crawling and walking…

  8. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

    We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.

  9. Comparative vs. Absolute Judgments of Trait Desirability

    ERIC Educational Resources Information Center

    Hofstee, Willem K. B.

    1970-01-01

    Reversals of trait desirability are studied. Terms indicating conservative behavior appeared to be judged relatively desirable in comparative judgement, while traits indicating dynamic and expansive behavior benefited from absolute judgement. The reversal effect was shown to be a general one, i.e. reversals were not dependent upon the specific…

  10. New Techniques for Absolute Gravity Measurements.

    DTIC Science & Technology

    1983-01-07

    Hammond, J.A. (1978) Bollettino Di Geofisica Teorica ed Applicata, Vol. XX. Hammond, J.A., and Iliff, R.L. (1979) The AFGL absolute gravity system... International Gravimetric Bureau, No. L:I-43.

  11. An Absolute Electrometer for the Physics Laboratory

    ERIC Educational Resources Information Center

    Straulino, S.; Cartacci, A.

    2009-01-01

    A low-cost, easy-to-use absolute electrometer is presented: two thin metallic plates and an electronic balance, usually available in a laboratory, are used. We report on the very good performance of the device that allows precise measurements of the force acting between two charged plates. (Contains 5 footnotes, 2 tables, and 6 figures.)

  12. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  13. Absolute Positioning Using the Global Positioning System

    DTIC Science & Technology

    1994-04-01

    The Global Positioning System (GPS) has become a useful tool in providing relative survey... Includes the development of a low cost navigator for wheeled vehicles. ABSTRACT: The technique of absolute or point positioning involves the use of a single Global Positioning System (GPS) receiver to determine the three-dimensional...

  14. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
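
    The hybrid can be sketched with scikit-learn on synthetic data (the dataset, the LASSO alpha, and the SVM settings below are assumptions for illustration, not the study's TEJ sample or tuning):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# 172 firms (the study used 48 GCD + 124 NGCD) with 20 candidate ratios.
X, y = make_classification(n_samples=172, n_features=20, n_informative=5,
                           random_state=0)

# Step 1: LASSO screens the variables (nonzero coefficients survive).
kept = np.flatnonzero(Lasso(alpha=0.01).fit(X, y).coef_)

# Step 2: an SVM is trained on the surviving variables and evaluated
# with fivefold cross-validation, as in the study.
scores = cross_val_score(SVC(), X[:, kept], y, cv=5)
print(len(kept), scores.mean())
```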

  15. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B. ); Quimby, D.C. )

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
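    The multi-seed idea above can be illustrated with a toy sketch: for each error level, rerun the same performance model under several random seeds and compare the spread across seeds with the trend across error levels. The `relative_gain` function below is a made-up stand-in, not the FELEX code.

    ```python
    # Toy error-budget study: performance vs. error level, several seeds each.
    import numpy as np

    def relative_gain(error_level, rng):
        # Stand-in model: gain degrades with the mean-square field error of
        # the particular random realization drawn for this seed.
        errors = rng.normal(scale=error_level, size=100)
        return max(0.0, 1.0 - 2.0 * np.mean(errors**2))

    for level in (0.1, 0.2, 0.4):
        gains = [relative_gain(level, np.random.default_rng(seed))
                 for seed in range(8)]
        print(f"error level {level:.1f}: "
              f"gain {np.mean(gains):.3f} +/- {np.std(gains):.3f}")
    ```

    The seed-to-seed standard deviation at a fixed error level is the stochastic variation the abstract refers to; tolerances should be set against the trend, not against any single realization.
    
    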

  16. Predictability of Intraocular Lens Power Calculation Formulae in Infantile Eyes With Unilateral Congenital Cataract: Results from the Infant Aphakia Treatment Study

    PubMed Central

    VANDERVEEN, DEBORAH K.; TRIVEDI, RUPAL H.; NIZAM, AZHAR; LYNN, MICHAEL J.; LAMBERT, SCOTT R.

    2014-01-01

    PURPOSE To compare accuracy of intraocular lens (IOL) power calculation formulae in infantile eyes with primary IOL implantation. DESIGN Comparative case series. METHODS The Hoffer Q, Holladay 1, Holladay 2, Sanders-Retzlaff-Kraff (SRK) II, and Sanders-Retzlaff-Kraff theoretic (SRK/T) formulae were used to calculate predicted postoperative refraction for eyes that received primary IOL implantation in the Infant Aphakia Treatment Study. The protocol targeted postoperative hyperopia of +6.0 or +8.0 diopters (D). Eyes were excluded for invalid biometry, lack of refractive data at the specified postoperative visit, diagnosis of glaucoma or suspected glaucoma, or sulcus IOL placement. Actual refraction 1 month after surgery was converted to spherical equivalent and prediction error (predicted refraction – actual refraction) was calculated. Baseline characteristics were analyzed for effect on prediction error for each formula. The main outcome measure was absolute prediction error. RESULTS Forty-three eyes were studied; mean axial length was 18.1 ± 1.1 mm (in 23 eyes, it was <18.0 mm). Average age at surgery was 2.5 ± 1.5 months. Holladay 1 showed the lowest median absolute prediction error (1.2 D); a paired comparison of medians showed clinically similar results using the Holladay 1 and SRK/T formulae (median difference, 0.3 D). Comparison of the mean absolute prediction error showed the lowest values using the SRK/T formula (1.4 ± 1.1 D), followed by the Holladay 1 formula (1.7 ± 1.3 D). Calculations with an optimized constant showed the lowest values and no significant difference between the Holladay 1 and SRK/T formulae (median difference, 0.3 D). Eyes with globe axial length of less than 18 mm had the largest mean and median prediction error and absolute prediction error, regardless of the formula used. CONCLUSIONS The Holladay 1 and SRK/T formulae gave equally good results and had the best predictive value for infant eyes. PMID:24011524
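    The outcome measure above reduces to simple arithmetic: prediction error is predicted minus actual refraction (spherical equivalent), and formulae are ranked by the median and mean of its absolute value. A minimal sketch, using made-up refraction values for illustration:

    ```python
    # Prediction error = predicted refraction - actual refraction; the study
    # compares formulae on the absolute value of that error.
    predicted = [6.5, 8.1, 5.9, 7.2]   # D, predicted postoperative refraction
    actual    = [5.0, 8.4, 7.5, 6.8]   # D, spherical equivalent at 1 month

    errors = [p - a for p, a in zip(predicted, actual)]
    abs_errors = sorted(abs(e) for e in errors)
    n = len(abs_errors)
    median_ape = (abs_errors[n // 2] if n % 2 else
                  (abs_errors[n // 2 - 1] + abs_errors[n // 2]) / 2)
    mean_ape = sum(abs_errors) / n
    print(f"median absolute prediction error: {median_ape:.2f} D")  # 0.95 D
    print(f"mean absolute prediction error:   {mean_ape:.2f} D")    # 0.95 D
    ```

    Note that the signed error preserves the direction of the miss (over- vs. under-correction), while the absolute error is what the accuracy comparison uses.
    
    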

  17. Absolute Radiation Thermometry in the NIR

    NASA Astrophysics Data System (ADS)

    Bünger, L.; Taubert, R. D.; Gutschwager, B.; Anhalt, K.; Briaudeau, S.; Sadli, M.

    2017-04-01

    A near infrared (NIR) radiation thermometer (RT) for temperature measurements in the range from 773 K up to 1235 K was characterized and calibrated in terms of the "Mise en Pratique for the definition of the Kelvin" (MeP-K) by measuring its absolute spectral radiance responsivity. Using Planck's law of thermal radiation allows the direct measurement of the thermodynamic temperature independently of any ITS-90 fixed point. To determine the absolute spectral radiance responsivity of the radiation thermometer in the NIR spectral region, an existing PTB monochromator-based calibration setup was upgraded with a supercontinuum laser system (0.45 μm to 2.4 μm), resulting in a significantly improved signal-to-noise ratio. The RT was characterized with respect to its nonlinearity, size-of-source effect, distance effect, and the consistency of its individual temperature measuring ranges. To further improve the calibration setup, a new tool for the aperture alignment and distance measurement was developed. Furthermore, the diffraction correction and the impedance correction of the current-to-voltage converter are considered. The calibration scheme and the corresponding uncertainty budget of the absolute spectral responsivity are presented. A relative standard uncertainty of 0.1 % (k=1) for the absolute spectral radiance responsivity was achieved. The absolute radiometric calibration was validated at four temperature values with respect to the ITS-90 via a variable temperature heatpipe blackbody (773 K to 1235 K) and at a gold fixed-point blackbody radiator (1337.33 K).

  18. Robot learning and error correction

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to