Lateral habenula neurons signal errors in the prediction of reward information
Bromberg-Martin, Ethan S.; Hikosaka, Okihide
2011-01-01
Humans and animals have a remarkable ability to predict future events, which they achieve by persistently searching their environment for sources of predictive information. Yet little is known about the neural systems that motivate this behavior. We hypothesized that information-seeking is assigned value by the same circuits that support reward-seeking, so that neural signals encoding conventional “reward prediction errors” include analogous “information prediction errors”. To test this we recorded from neurons in the lateral habenula, a nucleus which encodes reward prediction errors, while monkeys chose between cues that provided different amounts of information about upcoming rewards. We found that a subpopulation of lateral habenula neurons transmitted signals resembling information prediction errors, responding when reward information was unexpectedly cued, delivered, or denied. Their signals evaluated information sources reliably even when the animal’s decisions did not. These neurons could provide a common instructive signal for reward-seeking and information-seeking behavior. PMID:21857659
Dopamine neurons share common response function for reward prediction error
Eshel, Neir; Tian, Ju; Bukwich, Michael; Uchida, Naoshige
2016-01-01
Dopamine neurons are thought to signal reward prediction error, or the difference between actual and predicted reward. How dopamine neurons jointly encode this information, however, remains unclear. One possibility is that different neurons specialize in different aspects of prediction error; another is that each neuron calculates prediction error in the same way. We recorded from optogenetically-identified dopamine neurons in the lateral ventral tegmental area (VTA) while mice performed classical conditioning tasks. Our tasks allowed us to determine the full prediction error functions of dopamine neurons and compare them to each other. We found striking homogeneity among individual dopamine neurons: their responses to both unexpected and expected rewards followed the same function, just scaled up or down. As a result, we could describe both individual and population responses using just two parameters. Such uniformity ensures robust information coding, allowing each dopamine neuron to contribute fully to the prediction error signal. PMID:26854803
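The abstract's central claim, that each neuron's responses follow one common function merely scaled up or down, can be illustrated with a rank-1 factorization. The sketch below uses simulated responses; the reward sizes, gains, and noise level are assumptions, not the authors' data.

```python
import numpy as np

# Simulated stand-in for per-neuron responses to a common set of reward sizes;
# the claim under test is that responses factor as r_ij ~ gain_i * f(reward_j).
rng = np.random.default_rng(0)
rewards = np.array([0.1, 0.3, 1.2, 2.5, 5.0, 10.0, 20.0])
common_shape = np.log1p(rewards)                    # assumed shape, illustration only
gains = rng.uniform(0.5, 2.0, size=12)              # per-neuron scaling
responses = np.outer(gains, common_shape) + rng.normal(0.0, 0.05, (12, rewards.size))

# A rank-1 factorization (SVD) recovers one shared response function and a gain
# per neuron; the variance explained by that single component measures how well
# "same function, just scaled" describes the population.
u, s, vt = np.linalg.svd(responses, full_matrices=False)
shared_function, neuron_gains = vt[0] * s[0], u[:, 0]
if shared_function.sum() < 0:                       # resolve the SVD sign ambiguity
    shared_function, neuron_gains = -shared_function, -neuron_gains
print(f"variance explained by one common function: {s[0]**2 / np.sum(s**2):.3f}")
```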
Schiffer, Anne-Marike; Ahlheim, Christiane; Wurm, Moritz F.; Schubotz, Ricarda I.
2012-01-01
Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We analysed the BOLD responses of these regions of interest to the different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were to different degrees significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information-theoretic accounts. PMID:22570715
Knowledge acquisition is governed by striatal prediction errors.
Pine, Alex; Sadeh, Noa; Ben-Yakov, Aya; Dudai, Yadin; Mendelsohn, Avi
2018-04-26
Discrepancies between expectations and outcomes, or prediction errors, are central to trial-and-error learning based on reward and punishment, and their neurobiological basis is well characterized. It is not known, however, whether the same principles apply to declarative memory systems, such as those supporting semantic learning. Here, we demonstrate with fMRI that the brain parametrically encodes the degree to which new factual information violates expectations based on prior knowledge and beliefs-most prominently in the ventral striatum, and cortical regions supporting declarative memory encoding. These semantic prediction errors determine the extent to which information is incorporated into long-term memory, such that learning is superior when incoming information counters strong incorrect recollections, thereby eliciting large prediction errors. Paradoxically, by the same account, strong accurate recollections are more amenable to being supplanted by misinformation, engendering false memories. These findings highlight a commonality in brain mechanisms and computational rules that govern declarative and nondeclarative learning, traditionally deemed dissociable.
The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning
Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.
2017-01-01
Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359
Davis, Matthew H.
2016-01-01
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Data driven CAN node reliability assessment for manufacturing system
NASA Astrophysics Data System (ADS)
Zhang, Leiming; Yuan, Yong; Lei, Yong
2017-01-01
The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to inaccessibility of the node information and the complexity of the node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data driven node bus-off time assessment method for CAN network is proposed by directly using network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, the generalized zero inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, the stochastic model is constructed to predict the MTTB of the node. The accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer controlled error injection system. Experiment results show that the MTTB of nodes predicted by the proposed method agree well with observations in the case studies. The proposed data driven node time to bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
A Conceptual Framework for Predicting Error in Complex Human-Machine Environments
NASA Technical Reports Server (NTRS)
Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)
1998-01-01
We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.
Development of Predictive Energy Management Strategies for Hybrid Electric Vehicles
NASA Astrophysics Data System (ADS)
Baker, David
Studies have shown that obtaining and utilizing information about the future state of vehicles can improve vehicle fuel economy (FE). However, there has been a lack of research into the impact of real-world prediction error on FE improvements, and whether near-term technologies can be utilized to improve FE. This study seeks to research the effect of prediction error on FE. First, a speed prediction method is developed, and trained with real-world driving data gathered only from the subject vehicle (a local data collection method). This speed prediction method informs a predictive powertrain controller to determine the optimal engine operation for various prediction durations. The optimal engine operation is input into a high-fidelity model of the FE of a Toyota Prius. A tradeoff analysis between prediction duration and prediction fidelity was completed to determine what duration of prediction resulted in the largest FE improvement. Results demonstrate that 60-90 second predictions resulted in the highest FE improvement over the baseline, achieving up to a 4.8% FE increase. A second speed prediction method utilizing simulated vehicle-to-vehicle (V2V) communication was developed to understand if incorporating near-term technologies could be utilized to further improve prediction fidelity. This prediction method produced lower variation in speed prediction error, and was able to realize a larger FE improvement over the local prediction method for longer prediction durations, achieving up to 6% FE improvement. This study concludes that speed prediction and prediction-informed optimal vehicle energy management can produce FE improvements with real-world prediction error and drive cycle variability, as up to 85% of the FE benefit of perfect speed prediction was achieved with the proposed prediction methods.
Method and apparatus for faulty memory utilization
Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2016-04-19
A method for faulty memory utilization in a memory system includes: obtaining information regarding memory health status of at least one memory page in the memory system; determining an error tolerance of the memory page when the information regarding memory health status indicates that a failure is predicted to occur in an area of the memory system affecting the memory page; initiating a migration of data stored in the memory page when it is determined that the data stored in the memory page is non-error-tolerant; notifying at least one application regarding a predicted operating system failure and/or a predicted application failure when it is determined that data stored in the memory page is non-error-tolerant and cannot be migrated; and notifying at least one application regarding the memory failure predicted to occur when it is determined that data stored in the memory page is error-tolerant.
High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.
Wang, Fei; Xie, Zhaoxin; Chen, Zuo
2014-01-01
Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedding-location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to produce the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR and embedding capacity of the embedded audio signals, drastically reduces the location map length, and enhances capacity control.
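As a concrete illustration of the embedding primitive named in the title, here is a minimal prediction-error-expansion sketch on integer samples. It uses a naive previous-sample predictor and omits the paper's differential-evolution-optimized predictor, histogram shifting, and location map, so it is an assumption-laden toy rather than the published scheme.

```python
# Toy prediction-error-expansion (PEE) embed/extract on integer samples.

def pee_embed(samples, bits):
    stego, j = [samples[0]], 0
    for i in range(1, len(samples)):
        pred = samples[i - 1]            # predictor: original previous sample
        e = samples[i] - pred            # prediction error
        if j < len(bits):
            e = 2 * e + bits[j]          # expand the error to carry one bit
            j += 1
        stego.append(pred + e)
    return stego

def pee_extract(stego, n_bits):
    bits, recovered, j = [], [stego[0]], 0
    for i in range(1, len(stego)):
        pred = recovered[i - 1]          # same predictor value as at embedding time
        e = stego[i] - pred
        if j < n_bits:
            bits.append(e & 1)           # embedded bit is the parity of the expanded error
            e >>= 1                      # floor shift restores the original error
            j += 1
        recovered.append(pred + e)
    return bits, recovered

audio = [10, 12, 11, 13, 13, 12, 14]
payload = [1, 0, 1, 1]
watermarked = pee_embed(audio, payload)
extracted, restored = pee_extract(watermarked, len(payload))
assert extracted == payload and restored == audio    # reversible: lossless recovery
```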
Bayesian Integration of Information in Hippocampal Place Cells
Madl, Tamas; Franklin, Stan; Chen, Ke; Montaldi, Daniela; Trappl, Robert
2014-01-01
Accurate spatial localization requires a mechanism that corrects for errors, which might arise from inaccurate sensory information or neuronal noise. In this paper, we propose that Hippocampal place cells might implement such an error correction mechanism by integrating different sources of information in an approximately Bayes-optimal fashion. We compare the predictions of our model with physiological data from rats. Our results suggest that useful predictions regarding the firing fields of place cells can be made based on a single underlying principle, Bayesian cue integration, and that such predictions are possible using a remarkably small number of model parameters. PMID:24603429
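A minimal numerical illustration of the Bayesian cue-integration principle invoked here: with Gaussian cues, the optimal combined estimate is the inverse-variance-weighted average, and its variance is smaller than either cue's alone. The cue values and variances below are hypothetical.

```python
import numpy as np

# Hypothetical position cues: a noisy path-integration estimate and a more
# reliable landmark-based (visual) estimate, both in metres.
mu_path, var_path = 0.42, 0.08 ** 2
mu_vision, var_vision = 0.50, 0.03 ** 2

# Bayes-optimal combination of two Gaussian cues: inverse-variance weighting.
w_path = (1 / var_path) / (1 / var_path + 1 / var_vision)
w_vision = 1 - w_path
mu_post = w_path * mu_path + w_vision * mu_vision
var_post = 1 / (1 / var_path + 1 / var_vision)

# The combined variance is smaller than either cue alone: localization errors
# are corrected by weighting the more reliable source more heavily.
print(f"combined estimate: {mu_post:.3f} m, sd {np.sqrt(var_post):.3f} m")
```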
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users about the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. These statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful for assessing the statistical reliability of benchmarking conclusions.
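The two advocated statistics are straightforward to compute from the empirical distribution of unsigned errors; a sketch with made-up error values follows.

```python
import numpy as np

# Hypothetical sample of signed prediction errors for a benchmarked method.
errors = np.array([-1.2, 0.4, 2.1, -0.3, 0.8, 3.5, -0.9, 0.1, 1.7, -2.4])
abs_err = np.abs(errors)

# (1) Probability that a new calculation has an absolute error below a threshold.
eta = 1.0
p_below = np.mean(abs_err < eta)

# (2) Maximal error amplitude expected at a chosen confidence level
#     (an empirical quantile of the unsigned-error distribution).
q95 = np.quantile(abs_err, 0.95)

print(f"P(|error| < {eta}) ~ {p_below:.2f}")
print(f"95th-percentile absolute error ~ {q95:.2f}")
```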
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. This choice is therefore critical for the robustness of structural model updating, especially in the presence of modeling errors. To date, three ways of treating the prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction error variances. The different levels of modeling uncertainty and complexity are represented by three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
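For context, the maximum-entropy argument mentioned here leads to a Gaussian likelihood in which the prediction error variance appears explicitly; a generic form (notation assumed, not taken from the paper) is

\[ p(\mathcal{D} \mid \boldsymbol{\theta}, \sigma^2) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{\bigl(d_i - m_i(\boldsymbol{\theta})\bigr)^2}{2\sigma^2}\right), \]

where the \(d_i\) are measured responses, the \(m_i(\boldsymbol{\theta})\) are model predictions, and \(\sigma^2\) is the prediction error variance that the three strategies respectively fix, fit, or update as an uncertain parameter.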
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
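A schematic of the Monte Carlo step described here, with a stand-in model function and assumed parameter ranges; nothing below is taken from the paper.

```python
import numpy as np

# Propagate parameter uncertainty (and, for prediction intervals, random error
# in the dependent variable) through a model output and read off quantiles.
rng = np.random.default_rng(1)

def model_output(k1, k2):
    # Hypothetical model prediction (e.g., head at a location) as a function
    # of two parameters within their assumed extreme ranges.
    return 3.0 * k1 + 0.5 * k2 ** 2

n = 10_000
k1 = rng.uniform(0.5, 2.0, n)          # extreme range for parameter 1
k2 = rng.uniform(1.0, 4.0, n)          # extreme range for parameter 2
outputs = model_output(k1, k2)

conf_int = np.quantile(outputs, [0.025, 0.975])          # confidence interval
noise = rng.normal(0.0, 0.3, n)                          # assumed random-error range
pred_int = np.quantile(outputs + noise, [0.025, 0.975])  # prediction interval
print(conf_int, pred_int)
```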
Curiosity and reward: Valence predicts choice and information prediction errors enhance learning.
Marvin, Caroline B; Shohamy, Daphna
2016-03-01
Curiosity drives many of our daily pursuits and interactions; yet, we know surprisingly little about how it works. Here, we harness an idea implied in many conceptualizations of curiosity: that information has value in and of itself. Reframing curiosity as the motivation to obtain reward, where the reward is information, allows one to leverage major advances in theoretical and computational mechanisms of reward-motivated learning. We provide new evidence supporting 2 predictions that emerge from this framework. First, we find an asymmetric effect of positive versus negative information, with positive information enhancing both curiosity and long-term memory for information. Second, we find that it is not the absolute value of information that drives learning but, rather, the gap between the reward expected and reward received, an "information prediction error." These results support the idea that information functions as a reward, much like money or food, guiding choices and driving learning in systematic ways.
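A minimal delta-rule sketch of the "information prediction error" idea: the learned value of an information source is updated by the gap between the information reward expected and received. All numbers are illustrative.

```python
# Delta-rule update driven by an information prediction error (IPE).
alpha = 0.2                                      # illustrative learning rate
expected_value = 0.0
for info_reward in [1.0, 1.0, 0.0, 1.0, 0.5]:    # subjective reward of information received
    ipe = info_reward - expected_value           # information prediction error
    expected_value += alpha * ipe                # learning scales with the error, not the absolute value
    print(f"IPE = {ipe:+.2f}, updated value = {expected_value:.2f}")
```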
Predictability Experiments With the Navy Operational Global Atmospheric Prediction System
NASA Astrophysics Data System (ADS)
Reynolds, C. A.; Gelaro, R.; Rosmond, T. E.
2003-12-01
There are several areas of research in numerical weather prediction and atmospheric predictability, such as targeted observations and ensemble perturbation generation, where it is desirable to combine information about the uncertainty of the initial state with information about potential rapid perturbation growth. Singular vectors (SVs) provide a framework to accomplish this task in a mathematically rigorous and computationally feasible manner. In this study, SVs are calculated using the tangent and adjoint models of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The analysis error variance information produced by the NRL Atmospheric Variational Data Assimilation System is used as the initial-time SV norm. These VAR SVs are compared to SVs for which total energy is both the initial and final time norms (TE SVs). The incorporation of analysis error variance information has a significant impact on the structure and location of the SVs. This in turn has a significant impact on targeted observing applications. The utility and implications of such experiments in assessing the analysis error variance estimates will be explored. Computing support has been provided by the Department of Defense High Performance Computing Center at the Naval Oceanographic Office Major Shared Resource Center at Stennis, Mississippi.
García-García, Isabel; Zeighami, Yashar; Dagher, Alain
2017-06-01
Surprises are important sources of learning. Cognitive scientists often refer to surprises as "reward prediction errors," a parameter that captures discrepancies between expectations and actual outcomes. Here, we integrate neurophysiological and functional magnetic resonance imaging (fMRI) results addressing the processing of reward prediction errors and how they might be altered in drug addiction and Parkinson's disease. By increasing phasic dopamine responses, drugs might accentuate prediction error signals, causing increases in fMRI activity in mesolimbic areas in response to drugs. Chronic substance dependence, by contrast, has been linked with compromised dopaminergic function, which might be associated with blunted fMRI responses to pleasant non-drug stimuli in mesocorticolimbic areas. In Parkinson's disease, dopamine replacement therapies seem to induce impairments in learning from negative outcomes. The present review provides a holistic overview of reward prediction errors across different pathologies and might inform future clinical strategies targeting impulsive/compulsive disorders.
Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D
2018-05-18
Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error, and the intended moment of model use were extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions.
Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing
Palminteri, Stefano; Lefebvre, Germain; Kilford, Emma J.; Blakemore, Sarah-Jayne
2017-01-01
Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback information (i.e., the outcomes of both the chosen and unchosen option were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice. PMID:28800597
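The kind of computational model referred to here can be sketched as a value update with separate learning rates for positive and negative prediction errors; the rate values below are illustrative, not the fitted parameters.

```python
# Valence-dependent value update: separate learning rates for positive and
# negative prediction errors, applied to chosen (factual) and unchosen
# (counterfactual) options.
def update(value, outcome, alpha_pos, alpha_neg):
    pe = outcome - value                          # prediction error
    alpha = alpha_pos if pe > 0 else alpha_neg
    return value + alpha * pe

q_chosen, q_unchosen = 0.5, 0.5
# Factual learning: bias toward positive prediction errors (alpha+ > alpha-).
q_chosen = update(q_chosen, outcome=1.0, alpha_pos=0.35, alpha_neg=0.15)
# Counterfactual learning: the opposite asymmetry reported in the paper.
q_unchosen = update(q_unchosen, outcome=0.0, alpha_pos=0.15, alpha_neg=0.35)
print(q_chosen, q_unchosen)
```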
Efficient Reduction and Analysis of Model Predictive Error
NASA Astrophysics Data System (ADS)
Doherty, J.
2006-12-01
Most groundwater models are calibrated against historical measurements of head and other system states before being used to make predictions in a real-world context. Through the calibration process, parameter values are estimated or refined such that the model is able to reproduce historical behaviour of the system at pertinent observation points reasonably well. Predictions made by the model are deemed to have greater integrity because of this. Unfortunately, predictive integrity is not as easy to achieve as many groundwater practitioners would like to think. The level of parameterisation detail estimable through the calibration process (especially where estimation takes place on the basis of heads alone) is strictly limited, even where full use is made of modern mathematical regularisation techniques such as those encapsulated in the PEST calibration package. (Use of these mechanisms allows more information to be extracted from a calibration dataset than is possible using simpler regularisation devices such as zones of piecewise constancy.) Where a prediction depends on aspects of parameterisation detail that are simply not inferable through the calibration process (which is often the case for predictions related to contaminant movement, and/or many aspects of groundwater/surface water interaction), then that prediction may be just as much in error as it would have been if the model had not been calibrated at all. Model predictive error arises from two sources. These are (a) the presence of measurement noise within the calibration dataset through which linear combinations of parameters spanning the "calibration solution space" are inferred, and (b) the sensitivity of the prediction to members of the "calibration null space" spanned by linear combinations of parameters which are not inferable through the calibration process. The magnitude of the former contribution depends on the level of measurement noise. The magnitude of the latter contribution (which often dominates the former) depends on the "innate variability" of hydraulic properties within the model domain. Knowledge of both of these is a prerequisite for characterisation of the magnitude of possible model predictive error. Unfortunately, in most cases, such knowledge is incomplete and subjective. Nevertheless, useful analysis of model predictive error can still take place. The present paper briefly discusses the means by which mathematical regularisation can be employed in the model calibration process in order to extract as much information as possible on hydraulic property heterogeneity prevailing within the model domain, thereby reducing predictive error to the lowest that can be achieved on the basis of that dataset. It then demonstrates the means by which predictive error variance can be quantified based on information supplied by the regularised inversion process. Both linear and nonlinear predictive error variance analysis is demonstrated using a number of real-world and synthetic examples.
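Schematically, the two contributions described in this abstract add up to the total predictive error variance; under a linearity assumption this is often summarized as

\[ \sigma^2_{s-\hat{s}} \;=\; \sigma^2_{\text{null}} \;+\; \sigma^2_{\text{noise}}, \]

where the first term arises from the sensitivity of the prediction to calibration null-space parameter combinations (bounded by the innate variability of hydraulic properties) and the second from measurement noise propagated through the calibration solution space.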
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
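The two terms of the second criterion can be written schematically (notation assumed here) as

\[ \mathrm{MSEP}_{\mathrm{uncertain}}(X) \;=\; \underbrace{\mathrm{bias}^2}_{\text{estimated from hindcasts}} \;+\; \underbrace{\operatorname{Var}\bigl(\hat{Y}(X)\bigr)}_{\text{variance over model structures, inputs and parameters, estimated from a simulation experiment}} . \]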
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.
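A sketch of the estimation idea (not the authors' implementation): fit a Gaussian process to past error-rate estimates extracted from error correction data and extrapolate it forward with uncertainty. The drift model, kernel choice, and all values below are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(0, 50, 1.0)[:, None]                      # time (rounds, arbitrary units)
true_rate = 0.002 + 0.0005 * np.sin(t.ravel() / 8.0)    # hypothetical drifting error rate
observed = true_rate + rng.normal(0, 1e-4, t.shape[0])  # noisy estimates from correction data

# Smooth kernel for slow drift plus a white-noise term for estimation noise.
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-8)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

t_future = np.arange(50, 60, 1.0)[:, None]
mean, std = gp.predict(t_future, return_std=True)       # predicted rates with uncertainty
print(mean[:3], std[:3])
```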
Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.
Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J
2016-10-24
In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change.
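The dual-rate process described here is commonly written as two parallel states with different retention and learning rates; a minimal sketch with illustrative (not fitted) parameters follows.

```python
import numpy as np

# Dual-rate adaptation: a fast state (learns and forgets quickly) and a slow
# state (learns and forgets slowly), both driven by the same prediction error.
A_fast, B_fast = 0.60, 0.40        # retention / learning rate, fast process (assumed)
A_slow, B_slow = 0.992, 0.02       # retention / learning rate, slow process (assumed)

perturbation = np.concatenate([np.zeros(20), np.ones(150), np.zeros(80)])  # e.g., split-belt on/off
x_fast = x_slow = 0.0
net_adaptation = []
for p in perturbation:
    error = p - (x_fast + x_slow)              # sensory prediction error on this stride
    x_fast = A_fast * x_fast + B_fast * error
    x_slow = A_slow * x_slow + B_slow * error
    net_adaptation.append(x_fast + x_slow)

# In the paper's framework, a separate visual-feedback correction process would
# cancel the residual error online without changing x_fast or x_slow.
print(f"adaptation at end of perturbation: {net_adaptation[169]:.2f}")
```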
Fisher, Moria E; Huang, Felix C; Wright, Zachary A; Patton, James L
2014-01-01
Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaption and possibly reveal clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state-space, used to quantify reaching errors. We hypothesized that the domain with the lowest amount of variability would lead to a predictive model of reaching error with the highest accuracy. Here we showed that errors represented in a time domain demonstrate the least variance and allow for the highest predictive model of reaching errors. These predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation.
Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.
Limongi, Roberto; Silva, Angélica M
2016-11-01
The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.
The selective power of causality on memory errors.
Marsh, Jessecae K; Kulkofsky, Sarah
2015-01-01
We tested the influence of causal links on the production of memory errors in a misinformation paradigm. Participants studied a set of statements about a person, which were presented as either individual statements or pairs of causally linked statements. Participants were then provided with causally plausible and causally implausible misinformation. We hypothesised that studying information connected with causal links would promote representing information in a more abstract manner. As such, we predicted that causal information would not provide an overall protection against memory errors, but rather would preferentially help in the rejection of misinformation that was causally implausible, given the learned causal links. In two experiments, we measured whether the causal linkage of information would be generally protective against all memory errors or only selectively protective against certain types of memory errors. Causal links helped participants reject implausible memory lures, but did not protect against plausible lures. Our results suggest that causal information may promote an abstract storage of information that helps prevent only specific types of memory errors.
Masking of errors in transmission of VAPC-coded speech
NASA Technical Reports Server (NTRS)
Cox, Neil B.; Froese, Edwin L.
1990-01-01
A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross frame hold over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross frame hold over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.
NASA Astrophysics Data System (ADS)
Wan, S.; He, W.
2016-12-01
The inverse problem of using the information in historical data to estimate model errors is a frontier research topic. In this study, we investigate such a problem using the classic Lorenz (1963) equations as the prediction model and the Lorenz equations with a periodic evolutionary function as an accurate representation of reality to generate "observational data." On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In effect, it combines statistical and dynamical information to a certain extent.
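A sketch of the twin-experiment setup described here: integrate the standard Lorenz-63 equations as the prediction model and a perturbed version with a periodic term as "reality" to generate observations. The forcing form, amplitude, and step size are assumptions for illustration, and the evolutionary-modeling step itself is not shown.

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # standard Lorenz-63 parameters

def lorenz63(state, t, forcing_amp=0.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y + forcing_amp * np.sin(0.1 * t)  # assumed periodic model error
    dz = x * y - beta * z
    return np.array([dx, dy, dz])

def integrate(state, n_steps, dt=0.01, forcing_amp=0.0):
    traj = [state]
    for k in range(n_steps):                # 4th-order Runge-Kutta integration
        s, t = traj[-1], k * dt
        k1 = lorenz63(s, t, forcing_amp)
        k2 = lorenz63(s + 0.5 * dt * k1, t + 0.5 * dt, forcing_amp)
        k3 = lorenz63(s + 0.5 * dt * k2, t + 0.5 * dt, forcing_amp)
        k4 = lorenz63(s + dt * k3, t + dt, forcing_amp)
        traj.append(s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
    return np.array(traj)

x0 = np.array([1.0, 1.0, 1.0])
observations = integrate(x0, 2000, forcing_amp=2.0)   # "truth" with model error
forecast = integrate(x0, 2000, forcing_amp=0.0)       # imperfect prediction model
model_error_proxy = observations - forecast           # data a correction procedure would fit
print(np.abs(model_error_proxy).mean(axis=0))
```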
Consequences of land-cover misclassification in models of impervious surface
McMahon, G.
2007-01-01
Model estimates of impervious area as a function of land-cover area may be biased and imprecise because of errors in the land-cover classification. This investigation of the effects of land-cover misclassification on impervious surface models that use National Land Cover Data (NLCD) evaluates the consequences of adjusting land-cover within a watershed to reflect uncertainty assessment information. Model validation results indicate that using error-matrix information to adjust land-cover values used in impervious surface models does not substantially improve impervious surface predictions. Validation results indicate that the resolution of the land-cover data (Level I and Level II) is more important for predicting impervious surface accurately than whether the land-cover data have been adjusted using information in the error matrix. Level I NLCD, adjusted for land-cover misclassification, is preferable to the other land-cover options for use in models of impervious surface. This result is tied to the lower classification error rates for the Level I NLCD.
Hierarchical models for informing general biomass equations with felled tree data
Brian J. Clough; Matthew B. Russell; Christopher W. Woodall; Grant M. Domke; Philip J. Radtke
2015-01-01
We present a hierarchical framework that uses a large multispecies felled tree database to inform a set of general models for predicting tree foliage biomass, with accompanying uncertainty, within the FIA database. Results suggest significant prediction uncertainty for individual trees and reveal higher errors when predicting foliage biomass for larger trees and for...
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an auto-regressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits and that the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
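A sketch of this ARIMA workflow on synthetic data; the series, the (1,1,1) order, and the metrics shown are illustrative stand-ins, assuming the pandas and statsmodels libraries.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series standing in for a water-quality parameter (e.g., pH).
rng = np.random.default_rng(0)
months = pd.date_range("2005-01-01", periods=96, freq="MS")
series = pd.Series(7.5 + np.cumsum(rng.normal(0, 0.05, 96)), index=months)

fit = ARIMA(series, order=(1, 1, 1)).fit()       # illustrative ARIMA order
forecast = fit.get_forecast(steps=12)
mean_forecast = forecast.predicted_mean
ci_95 = forecast.conf_int(alpha=0.05)            # 95% confidence limits

# In-sample validation metrics of the kind reported in the paper.
resid = fit.resid
rmse = np.sqrt(np.mean(resid ** 2))
mape = np.mean(np.abs(resid / series)) * 100
print(fit.bic, rmse, mape)
```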
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability.
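A minimal sketch of the LHS propagation idea: sample logistic-regression coefficients (model error) and an explanatory variable (data error) with a Latin hypercube, push the samples through the logistic model, and read off prediction quantiles. All distributions and values are hypothetical.

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin hypercube sample in [0,1]^3, then transformed to assumed marginals.
sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=5000)

b0 = norm(loc=-2.0, scale=0.3).ppf(u[:, 0])             # intercept uncertainty (model error)
b1 = norm(loc=0.8, scale=0.1).ppf(u[:, 1])              # coefficient uncertainty (model error)
x1 = norm(loc=1.5, scale=0.4).ppf(u[:, 2])              # explanatory-variable (data) error

prob = 1.0 / (1.0 + np.exp(-(b0 + b1 * x1)))            # logistic prediction of vulnerability
print(np.quantile(prob, [0.05, 0.5, 0.95]))             # prediction interval
```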
Adaptive control of theophylline therapy: importance of blood sampling times.
D'Argenio, D Z; Khakmahd, K
1983-10-01
A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
Adaptive plasticity in speech perception: Effects of external information and internal predictions.
Guediche, Sara; Fiez, Julie A; Holt, Lori L
2016-07-01
When listeners encounter speech under adverse listening conditions, adaptive adjustments in perception can improve comprehension over time. In some cases, these adaptive changes require the presence of external information that disambiguates the distorted speech signals, whereas in other cases mere exposure is sufficient. Both external (e.g., written feedback) and internal (e.g., prior word knowledge) sources of information can be used to generate predictions about the correct mapping of a distorted speech signal. We hypothesize that these predictions provide a basis for determining the discrepancy between the expected and actual speech signal that can be used to guide adaptive changes in perception. This study provides the first empirical investigation that manipulates external and internal factors through (a) the availability of explicit external disambiguating information via the presence or absence of postresponse orthographic information paired with a repetition of the degraded stimulus, and (b) the accuracy of internally generated predictions; an acoustic distortion is introduced either abruptly or incrementally. The results demonstrate that the impact of external information on adaptive plasticity is contingent upon whether the intelligibility of the stimuli permits accurate internally generated predictions during exposure. External information sources enhance adaptive plasticity only when input signals are severely degraded and cannot reliably access internal predictions. This is consistent with a computational framework for adaptive plasticity in which error-driven supervised learning relies on the ability to compute sensory prediction error signals from both internal and external sources of information. PMID:26854531
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
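A minimal simulation sketch of the setup described above, not taken from the paper: a two-level comparison with additive response measurement error, analysed with a standard t-test, with and without averaging over repeat measurements. All sample sizes and error variances are invented for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, true_effect = 20, 1.0            # runs per factor level, true mean difference
    process_sd, meas_sd, repeats = 1.0, 2.0, 5

    def one_experiment(use_repeats):
        low = rng.normal(0.0, process_sd, n)            # true responses at the low level
        high = rng.normal(true_effect, process_sd, n)   # true responses at the high level
        k = repeats if use_repeats else 1
        # additive measurement error; averaging k repeat readings shrinks its variance by k
        low_obs = low + rng.normal(0.0, meas_sd, (k, n)).mean(axis=0)
        high_obs = high + rng.normal(0.0, meas_sd, (k, n)).mean(axis=0)
        return stats.ttest_ind(high_obs, low_obs).pvalue

    power_single = np.mean([one_experiment(False) < 0.05 for _ in range(500)])
    power_repeat = np.mean([one_experiment(True) < 0.05 for _ in range(500)])
    print("power with single readings:", power_single)
    print("power with 5 repeat readings:", power_repeat)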
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single-loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
Disrupted prediction errors index social deficits in autism spectrum disorder
Balsters, Joshua H; Apps, Matthew A J; Bolis, Dimitris; Lehner, Rea; Gallagher, Louise; Wenderoth, Nicole
2017-01-01
Social deficits are a core symptom of autism spectrum disorder; however, the perturbed neural mechanisms underpinning these deficits remain unclear. It has been suggested that social prediction errors—coding discrepancies between the predicted and actual outcome of another’s decisions—might play a crucial role in processing social information. While the gyral surface of the anterior cingulate cortex signalled social prediction errors in typically developing individuals, this crucial social signal was altered in individuals with autism spectrum disorder. Importantly, the degree to which social prediction error signalling was aberrant correlated with diagnostic measures of social deficits. Effective connectivity analyses further revealed that, in typically developing individuals but not in autism spectrum disorder, the magnitude of social prediction errors was driven by input from the ventromedial prefrontal cortex. These data provide a novel insight into the neural substrates underlying autism spectrum disorder social symptom severity, and further research into the gyral surface of the anterior cingulate cortex and ventromedial prefrontal cortex could provide more targeted therapies to help ameliorate social deficits in autism spectrum disorder. PMID:28031223
Mackrous, I; Simoneau, M
2011-11-10
Following body rotation, optimal updating of the position of a memorized target is attained when a retinal error is perceived and a corrective saccade is performed. Thus, it appears that these processes may enable the calibration of the vestibular system by facilitating the sharing of information between both reference frames. Here, it is assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate to predict the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target and received knowledge of result. During practice blocks, for two groups, visual cues were displayed in the same reference frame as the target, whereas a third group relied on vestibular information (vestibular-only group) to predict the location of the target. Participants, unaware of the role of the visual cues (visual cues group), learned to predict the location of the target, and spatial error decreased from 16.2 to 2.0°, reflecting a learning rate of 34.08 trials (determined from fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (i.e., 2.66 trials) but a similar final spatial error (2.9°). For the vestibular-only group, similar accuracy was achieved (final spatial error of 2.3°), but their learning rate was much slower (i.e., 43.29 trials). Transferring to the Post-test (no visual cues and no knowledge of result) increased the spatial error of the explicit visual cues group (9.5°), but it did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing the sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
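The learning rates quoted above come from fitting a falling exponential to spatial error across practice trials. A small sketch of that kind of fit on hypothetical data follows; the function form, starting values, and data are assumptions for illustration, not the authors' code.

    import numpy as np
    from scipy.optimize import curve_fit

    def falling_exp(trial, initial, final, rate):
        # error decays from `initial` toward `final` with time constant `rate` (in trials)
        return final + (initial - final) * np.exp(-trial / rate)

    trials = np.arange(1, 101)
    # hypothetical spatial errors (deg), decaying from ~16 deg toward ~2 deg
    errors = falling_exp(trials, 16.2, 2.0, 34.0) + np.random.default_rng(0).normal(0, 1.0, trials.size)

    (initial, final, rate), _ = curve_fit(falling_exp, trials, errors, p0=(15.0, 2.0, 20.0))
    print(f"initial error {initial:.1f} deg, final error {final:.1f} deg, learning rate {rate:.1f} trials")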
Effects of urban microcellular environments on ray-tracing-based coverage predictions.
Liu, Zhongyu; Guo, Lixin; Guan, Xiaowei; Sun, Jiejing
2016-09-01
The ray-tracing (RT) algorithm, which is based on geometrical optics and the uniform theory of diffraction, has become a typical deterministic approach of studying wave-propagation characteristics. Under urban microcellular environments, the RT method highly depends on detailed environmental information. The aim of this paper is to provide help in selecting the appropriate level of accuracy required in building databases to achieve good tradeoffs between database costs and prediction accuracy. After familiarization with the operating procedures of the RT-based prediction model, this study focuses on the effect of errors in environmental information on prediction results. The environmental information consists of two parts, namely, geometric and electrical parameters. The geometric information can be obtained from a digital map of a city. To study the effects of inaccuracies in geometry information (building layout) on RT-based coverage prediction, two different artificial erroneous maps are generated based on the original digital map, and systematic analysis is performed by comparing the predictions with the erroneous maps and measurements or the predictions with the original digital map. To make the conclusion more persuasive, the influence of random errors on RMS delay spread results is investigated. Furthermore, given the electrical parameters' effect on the accuracy of the predicted results of the RT model, the dielectric constant and conductivity of building materials are set with different values. The path loss and RMS delay spread under the same circumstances are simulated by the RT prediction model.
Evaluating and Predicting Patient Safety for Medical Devices With Integral Information Technology
2005-01-01
have the potential to become solid tools for manufacturers, purchasers, and consumers to evaluate patient safety issues in various health related... errors are due to inappropriate designs for user interactions, rather than mechanical failures. Evaluating and predicting patient safety in medical...
An error-tuned model for sensorimotor learning
Sadeghi, Mohsen; Wolpert, Daniel M.
2017-01-01
Current models of sensorimotor control posit that motor commands are generated by combining multiple modules which may consist of internal models, motor primitives or motor synergies. The mechanisms which select modules based on task requirements and modify their output during learning are therefore critical to our understanding of sensorimotor control. Here we develop a novel modular architecture for multi-dimensional tasks in which a set of fixed primitives are each able to compensate for errors in a single direction in the task space. The contribution of the primitives to the motor output is determined by both top-down contextual information and bottom-up error information. We implement this model for a task in which subjects learn to manipulate a dynamic object whose orientation can vary. In the model, visual information regarding the context (the orientation of the object) allows the appropriate primitives to be engaged. This top-down module selection is implemented by a Gaussian function tuned for the visual orientation of the object. Second, each module's contribution adapts across trials in proportion to its ability to decrease the current kinematic error. Specifically, adaptation is implemented by cosine tuning of primitives to the current direction of the error, which we show to be theoretically optimal for reducing error. This error-tuned model makes two novel predictions. First, interference should occur between alternating dynamics only when the kinematic errors associated with each oppose one another. In contrast, dynamics which lead to orthogonal errors should not interfere. Second, kinematic errors alone should be sufficient to engage the appropriate modules, even in the absence of contextual information normally provided by vision. We confirm both these predictions experimentally and show that the model can also account for data from previous experiments. Our results suggest that two interacting processes account for module selection during sensorimotor control and learning. PMID:29253869
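A minimal sketch of our reading of the two mechanisms described above, with invented dimensions, tuning width, and learning rate: top-down module selection by a Gaussian tuned to the visual context (object orientation), and bottom-up adaptation of each primitive in proportion to the cosine between its preferred direction and the current kinematic error.

    import numpy as np

    angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
    pref_dir = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # preferred error directions
    pref_context = np.linspace(-90.0, 90.0, 8)                      # preferred object orientations (deg)
    w = np.zeros(8)                                                  # contribution of each primitive
    sigma, lr = 30.0, 0.1                                            # context tuning width, adaptation rate
    context, target = 30.0, np.array([1.0, 0.0])                     # hypothetical trial condition

    for trial in range(60):
        gate = np.exp(-0.5 * ((context - pref_context) / sigma) ** 2)  # top-down Gaussian selection
        output = (gate * w) @ pref_dir                                  # summed motor output
        error = output - target                                         # kinematic error on this trial
        w += lr * gate * (pref_dir @ (-error))                          # cosine tuning to the error direction

    gate = np.exp(-0.5 * ((context - pref_context) / sigma) ** 2)
    print("residual error:", np.linalg.norm((gate * w) @ pref_dir - target))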
Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model
NASA Astrophysics Data System (ADS)
Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.
2015-12-01
Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.
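A toy sketch of the first proposed post-processing ensemble, inverse-weighting intensity models by their PRIME-predicted absolute errors; the forecast and error values below are invented for illustration.

    import numpy as np

    model_forecasts = np.array([95.0, 102.0, 88.0, 110.0])    # kt, four intensity models
    predicted_abs_err = np.array([12.0, 6.0, 9.0, 20.0])       # kt, PRIME-style error predictions

    weights = 1.0 / predicted_abs_err                           # higher predicted error, lower weight
    weights /= weights.sum()
    blended = weights @ model_forecasts
    equal_weight = model_forecasts.mean()                       # equal-weight consensus for comparison
    print(f"inverse-error ensemble: {blended:.1f} kt, equal-weight: {equal_weight:.1f} kt")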
Metrics to quantify the importance of mixing state for CCN activity
Ching, Joseph; Fast, Jerome; West, Matthew; ...
2017-06-21
It is commonly assumed that models are more prone to errors in predicted cloud condensation nuclei (CCN) concentrations when the aerosol populations are externally mixed. In this work we investigate this assumption by using the mixing state index (χ) proposed by Riemer and West (2013) to quantify the degree of external and internal mixing of aerosol populations. We combine this metric with particle-resolved model simulations to quantify error in CCN predictions when mixing state information is neglected, exploring a range of scenarios that cover different conditions of aerosol aging. We show that mixing state information does indeed become unimportant for more internally mixed populations, more precisely for populations with χ larger than 75%. For more externally mixed populations (χ below 20%) the relationship of χ and the error in CCN predictions is not unique and ranges from lower than -40% to about 150%, depending on the underlying aerosol population and the environmental supersaturation. We explain the reasons for this behavior with detailed process analyses.
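A sketch of the mixing state index as we understand Riemer and West (2013) to define it, namely chi = (D_alpha - 1)/(D_gamma - 1), where D_alpha is the mass-weighted average per-particle species diversity and D_gamma is the diversity of the bulk composition. The particle population below is invented; this is an illustration, not the study's code.

    import numpy as np

    # rows = particles, columns = species mass (arbitrary units)
    mass = np.array([[1.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [1.0, 1.0, 1.0],
                     [0.5, 0.5, 0.0]])

    def diversity(p):
        p = p[p > 0]
        return np.exp(-(p * np.log(p)).sum())            # exp(Shannon entropy)

    particle_mass = mass.sum(axis=1)
    pop_frac = particle_mass / particle_mass.sum()        # each particle's share of total mass
    D_i = np.array([diversity(row / row.sum()) for row in mass])
    D_alpha = np.exp((pop_frac * np.log(D_i)).sum())      # mass-weighted average particle diversity
    D_gamma = diversity(mass.sum(axis=0) / mass.sum())    # diversity of the bulk composition
    chi = (D_alpha - 1.0) / (D_gamma - 1.0)
    print(f"chi = {chi:.2f}  (0 = fully external, 1 = fully internal mixture)")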
Moments of inclination error distribution computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
Lindahl, Jonas; Danell, Rickard
The aim of this study was to provide a framework to evaluate bibliometric indicators as decision support tools from a decision making perspective and to examine the information value of early career publication rate as a predictor of future productivity. We used ROC analysis to evaluate a bibliometric indicator as a tool for binary decision making. The dataset consisted of 451 early career researchers in the mathematical sub-field of number theory. We investigated the effect of three different definitions of top performance groups (top 10, top 25, and top 50%); the consequences of using different thresholds in the prediction models; and the added prediction value of information on early career research collaboration and publications in prestige journals. We conclude that early career publication rate has an information value in all tested decision scenarios, but future performance is more predictable if the definition of a high performance group is more exclusive. Estimated optimal decision thresholds using the Youden index indicated that the top 10% decision scenario should use 7 articles, the top 25% scenario should use 7 articles, and the top 50% scenario should use 5 articles to minimize prediction errors. A comparative analysis between the decision thresholds provided by the Youden index, which takes consequences into consideration, and a method commonly used in evaluative bibliometrics, which does not, indicated that differences are trivial for the top 25% and top 50% groups. However, a statistically significant difference between the methods was found for the top 10% group. Information on early career collaboration and publication strategies did not add any prediction value to the bibliometric indicator publication rate in any of the models. The key contributions of this research are the focus on consequences in terms of prediction errors and the notion of transforming uncertainty into risk when choosing decision thresholds in bibliometrically informed decision making. The significance of our results is discussed from the point of view of science policy and management.
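A small sketch of ROC analysis with a Youden-index threshold in the spirit of the study above, run on synthetic publication counts; the group sizes and Poisson rates are invented.

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(42)
    # early-career publication counts for future top performers (1) vs the rest (0)
    y_true = np.r_[np.ones(100), np.zeros(300)]
    pubs = np.r_[rng.poisson(8, 100), rng.poisson(4, 300)]

    fpr, tpr, thresholds = roc_curve(y_true, pubs)
    youden_j = tpr - fpr                       # Youden index J = sensitivity + specificity - 1
    best = np.argmax(youden_j)
    print(f"optimal decision threshold: {thresholds[best]:.0f} articles (J = {youden_j[best]:.2f})")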
Differing Air Traffic Controller Responses to Similar Trajectory Prediction Errors
NASA Technical Reports Server (NTRS)
Mercer, Joey; Hunt-Espinosa, Sarah; Bienert, Nancy; Laraway, Sean
2016-01-01
A Human-In-The-Loop simulation was conducted in January of 2013 in the Airspace Operations Laboratory at NASA's Ames Research Center. The simulation airspace included two en route sectors feeding the northwest corner of Atlanta's Terminal Radar Approach Control. The focus of this paper is on how uncertainties in the study's trajectory predictions impacted the controllers' ability to perform their duties. Of particular interest is how the controllers interacted with the delay information displayed in the meter list and data block while managing the arrival flows. Due to wind forecasts with 30-knot over-predictions and 30-knot under-predictions, delay value computations included errors of similar magnitude, albeit in opposite directions. However, when performing their duties in the presence of these errors, did the controllers issue clearances of similar magnitude, albeit in opposite directions?
Systematics errors in strong lens modeling
NASA Astrophysics Data System (ADS)
Johnson, Traci L.; Sharon, Keren; Bayliss, Matthew B.
We investigate how varying the number of multiple image constraints and the available redshift information can influence the systematic errors of strong lens models, specifically, the image predictability, mass distribution, and magnifications of background sources. This work will not only inform upon Frontier Field science, but also for work on the growing collection of strong lensing galaxy clusters, most of which are less massive and are capable of lensing a handful of galaxies.
Vassena, Eliana; Deraeve, James; Alexander, William H
2017-10-01
Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2010-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
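The paper's identification runs in the frequency domain; as a simplified illustration of the "data forgetting" idea mentioned above, here is a time-domain recursive least-squares sketch with an exponential forgetting factor. All signals, parameter values, and the abrupt change are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    true_theta = np.array([0.8, -1.5])             # e.g., two stability and control derivatives
    theta = np.zeros(2)                             # running estimate
    P = np.eye(2) * 1e3                             # parameter covariance
    lam = 0.98                                      # forgetting factor (< 1 discounts old data)

    for t in range(500):
        x = rng.normal(size=2)                      # regressors (e.g., angle of attack, control input)
        y = true_theta @ x + rng.normal(0, 0.1)     # measured non-dimensional coefficient
        if t == 250:
            true_theta = np.array([0.5, -1.0])      # abrupt change; forgetting lets the estimate track it
        k = P @ x / (lam + x @ P @ x)               # gain
        theta = theta + k * (y - theta @ x)         # update with the current prediction error
        P = (P - np.outer(k, x) @ P) / lam

    print("final estimate:", np.round(theta, 2))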
Crosslinking EEG time-frequency decomposition and fMRI in error monitoring.
Hoffmann, Sven; Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian
2014-03-01
Recent studies implicate a common response monitoring system, being active during erroneous and correct responses. Converging evidence from time-frequency decompositions of the response-related ERP revealed that evoked theta activity at fronto-central electrode positions differentiates correct from erroneous responses in simple tasks, but also in more complex tasks. However, up to now it is unclear how different electrophysiological parameters of error processing, especially at the level of neural oscillations are related, or predictive for BOLD signal changes reflecting error processing at a functional-neuroanatomical level. The present study aims to provide crosslinks between time domain information, time-frequency information, MRI BOLD signal and behavioral parameters in a task examining error monitoring due to mistakes in a mental rotation task. The results show that BOLD signal changes reflecting error processing on a functional-neuroanatomical level are best predicted by evoked oscillations in the theta frequency band. Although the fMRI results in this study account for an involvement of the anterior cingulate cortex, middle frontal gyrus, and the Insula in error processing, the correlation of evoked oscillations and BOLD signal was restricted to a coupling of evoked theta and anterior cingulate cortex BOLD activity. The current results indicate that although there is a distributed functional-neuroanatomical network mediating error processing, only distinct parts of this network seem to modulate electrophysiological properties of error monitoring.
Parallel photonic information processing at gigabyte per second data rates using transient states
NASA Astrophysics Data System (ADS)
Brunner, Daniel; Soriano, Miguel C.; Mirasso, Claudio R.; Fischer, Ingo
2013-01-01
The increasing demands on information processing require novel computational concepts and true parallelism. Nevertheless, hardware realizations of unconventional computing approaches never exceeded a marginal existence. While the application of optics in super-computing receives reawakened interest, new concepts, partly neuro-inspired, are being considered and developed. Here we experimentally demonstrate the potential of a simple photonic architecture to process information at unprecedented data rates, implementing a learning-based approach. A semiconductor laser subject to delayed self-feedback and optical data injection is employed to solve computationally hard tasks. We demonstrate simultaneous spoken digit and speaker recognition and chaotic time-series prediction at data rates beyond 1Gbyte/s. We identify all digits with very low classification errors and perform chaotic time-series prediction with 10% error. Our approach bridges the areas of photonic information processing, cognitive and information science.
Gu, Xiaosi; Kirk, Ulrich; Lohrenz, Terry M; Montague, P Read
2014-08-01
Computational models of reward processing suggest that foregone or fictive outcomes serve as important information sources for learning and augment those generated by experienced rewards (e.g. reward prediction errors). An outstanding question is how these learning signals interact with top-down cognitive influences, such as cognitive reappraisal strategies. Using a sequential investment task and functional magnetic resonance imaging, we show that the reappraisal strategy selectively attenuates the influence of fictive, but not reward prediction error signals on investment behavior; such behavioral effect is accompanied by changes in neural activity and connectivity in the anterior insular cortex, a brain region thought to integrate subjective feelings with high-order cognition. Furthermore, individuals differ in the extent to which their behaviors are driven by fictive errors versus reward prediction errors, and the reappraisal strategy interacts with such individual differences; a finding also accompanied by distinct underlying neural mechanisms. These findings suggest that the variable interaction of cognitive strategies with two important classes of computational learning signals (fictive, reward prediction error) represent one contributing substrate for the variable capacity of individuals to control their behavior based on foregone rewards. These findings also expose important possibilities for understanding the lack of control in addiction based on possibly foregone rewarding outcomes. Copyright © 2013 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.
Homeostatic Regulation of Memory Systems and Adaptive Decisions
Mizumori, Sheri JY; Jo, Yong Sang
2013-01-01
While it is clear that many brain areas process mnemonic information, understanding how their interactions result in continuously adaptive behaviors has been a challenge. A homeostatic-regulated prediction model of memory is presented that considers the existence of a single memory system that is based on a multilevel coordinated and integrated network (from cells to neural systems) that determines the extent to which events and outcomes occur as predicted. The “multiple memory systems of the brain” have in common output that signals errors in the prediction of events and/or their outcomes, although these signals differ in terms of what the error signal represents (e.g., hippocampus: context prediction errors vs. midbrain/striatum: reward prediction errors). The prefrontal cortex likely plays a pivotal role in the coordination of prediction analysis within and across prediction brain areas, by virtue of its widespread control and influence and its intrinsic working memory mechanisms. Thus, the prefrontal cortex supports the flexible processing needed to generate adaptive behaviors and predict future outcomes. It is proposed that prefrontal cortex continually and automatically produces adaptive responses according to homeostatic regulatory principles: prefrontal cortex may serve as a controller that is intrinsically driven to maintain in prediction areas an experience-dependent firing rate set point that ensures adaptive temporally and spatially resolved neural responses to future prediction errors. This same drive by prefrontal cortex may also restore set point firing rates after deviations (i.e., prediction errors) are detected. In this way, prefrontal cortex contributes to reducing uncertainty in prediction systems. An emergent outcome of this homeostatic view may be the flexible and adaptive control that prefrontal cortex is known to implement (i.e., working memory) in the most challenging of situations. Compromise to any of the prediction circuits should result in rigid and suboptimal decision making and memory as seen in addiction and neurological disease. © 2013 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:23929788
Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.
2017-07-14
A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (eg, random forests, and LASSO) to map a large set of inexpensively computed “error indicators” (ie, features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (eg, time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
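A hedged sketch of the central regression step, mapping inexpensive error indicators to the surrogate's QoI error with a random forest. The features, the synthetic error function, and the surrogate prediction are all invented, and the locality (clustering/classification) step is omitted.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_train = 2000
    # "error indicators" (features), e.g., residual norms or reduced-coordinate magnitudes
    features = rng.normal(size=(n_train, 10))
    # surrogate-model error in the QoI (here a synthetic nonlinear function plus noise)
    qoi_error = np.sin(features[:, 0]) + 0.5 * features[:, 1] ** 2 + 0.1 * rng.normal(size=n_train)

    error_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(features, qoi_error)

    # use 1: correct the surrogate's QoI prediction at a new time instance
    new_features = rng.normal(size=(1, 10))
    surrogate_qoi = 3.2                                    # hypothetical surrogate prediction
    corrected_qoi = surrogate_qoi + error_model.predict(new_features)[0]
    print("corrected QoI prediction:", round(corrected_qoi, 3))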
NASA Astrophysics Data System (ADS)
Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji
A prediction mechanism is necessary in human visual motion control to compensate for delays in the sensory-motor system. In a previous study, “proactive control” was discussed as one example of the predictive function of human beings, in which hand motion preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently visual tracking experiment in which a circular orbit is segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain, obtained from environmental stimuli, is shortened by more than 10%. This shortening accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
Optimal full motion video registration with rigorous error propagation
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn
2014-06-01
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
Taboo: Working memory and mental control in an interactive task
Hansen, Whitney A.; Goldinger, Stephen D.
2014-01-01
Individual differences in working memory (WM) predict principled variation in tasks of reasoning, response time, memory, and other abilities. Theoretically, a central function of WM is keeping task-relevant information easily accessible while suppressing irrelevant information. The present experiment was a novel study of mental control, using performance in the game Taboo as a measure. We tested effects of WM capacity on several indices, including perseveration errors (repeating previous guesses or clues) and taboo errors (saying at least part of a taboo or target word). By most measures, high-span participants were superior to low-span participants: High-spans were better at guessing answers, better at encouraging correct guesses from teammates, and less likely to either repeat themselves or produce taboo clues. Differences in taboo errors occurred only in an easy control condition. The results suggest that WM capacity predicts behavior in tasks requiring mental control, extending this finding to an interactive group setting. PMID:19827699
Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airlines for a passenger transport aircraft type to the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed error prediction by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to the most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced number of variables cases, the standard deviation of the neural network models errors represent over 5% reduction compared to the RSE model errors, and at least 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing-speed more accurately and precisely than the current state-of-the-art.
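An illustrative comparison, on synthetic data with invented predictors, of the two modeling approaches discussed above: a response-surface-style polynomial regression and a small neural network regressor.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    n = 3000
    X = rng.normal(size=(n, 3))     # e.g., landing weight, headwind, gust (standardized, invented)
    speed = 135 + 4*X[:, 0] - 3*X[:, 1] + 2*X[:, 0]*X[:, 2] + rng.normal(0, 2.5, n)   # knots
    train, test = slice(0, 2500), slice(2500, n)

    # response-surface-style model: linear regression on second-order polynomial terms
    rse = LinearRegression().fit(PolynomialFeatures(2).fit_transform(X[train]), speed[train])
    rse_err = speed[test] - rse.predict(PolynomialFeatures(2).fit_transform(X[test]))

    # small neural network as the nonlinear regression alternative
    nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X[train], speed[train])
    nn_err = speed[test] - nn.predict(X[test])

    print(f"RSE error std: {rse_err.std():.2f} kt, NN error std: {nn_err.std():.2f} kt")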
Joch, Michael; Hegele, Mathias; Maurer, Heiko; Müller, Hermann; Maurer, Lisa Katharina
2017-07-01
The error (related) negativity (Ne/ERN) is an event-related potential in the electroencephalogram (EEG) correlating with error processing. Its conditions of appearance before terminal external error information suggest that the Ne/ERN is indicative of predictive processes in the evaluation of errors. The aim of the present study was to specifically examine the Ne/ERN in a complex motor task and to particularly rule out other explaining sources of the Ne/ERN aside from error prediction processes. To this end, we focused on the dependency of the Ne/ERN on visual monitoring about the action outcome after movement termination but before result feedback (action effect monitoring). Participants performed a semi-virtual throwing task by using a manipulandum to throw a virtual ball displayed on a computer screen to hit a target object. Visual feedback about the ball flying to the target was masked to prevent action effect monitoring. Participants received a static feedback about the action outcome (850 ms) after each trial. We found a significant negative deflection in the average EEG curves of the error trials peaking at ~250 ms after ball release, i.e., before error feedback. Furthermore, this Ne/ERN signal did not depend on visual ball-flight monitoring after release. We conclude that the Ne/ERN has the potential to indicate error prediction in motor tasks and that it exists even in the absence of action effect monitoring. NEW & NOTEWORTHY In this study, we are separating different kinds of possible contributors to an electroencephalogram (EEG) error correlate (Ne/ERN) in a throwing task. We tested the influence of action effect monitoring on the Ne/ERN amplitude in the EEG. We used a task that allows us to restrict movement correction and action effect monitoring and to control the onset of result feedback. We ascribe the Ne/ERN to predictive error processing where a conscious feeling of failure is not a prerequisite. Copyright © 2017 the American Physiological Society.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
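For context, a sketch of how information-criterion model averaging weights are typically computed, with each weight proportional to exp(-delta_k/2); the AIC values below are invented. The study's point is that when the criteria are evaluated with the measurement-error covariance alone, one model tends to take essentially all of this weight.

    import numpy as np

    aic = np.array([210.3, 212.1, 224.8, 230.5])    # one value per alternative conceptual model
    delta = aic - aic.min()
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()
    for k, w in enumerate(weights, start=1):
        print(f"model {k}: averaging weight = {w:.3f}")
    # with strongly correlated total errors ignored, one model typically dominates (weight near 1),
    # which is the pathology the iterative two-stage method is designed to avoid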
NASA Astrophysics Data System (ADS)
Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.
2018-03-01
The article proposes a forecasting method that, given values of entropy and of the error levels of the first and second kind, determines the allowable time for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method is that changes in the characteristic parameters of the information system's development are expressed as increments in its entropy ratios. When a predetermined value of the prediction error ratio, that is, of the system's entropy, is reached, the characteristic parameters of the system and the prediction depth in time are estimated. The resulting values of the characteristics will then be optimal, since at that moment the system possessed the best entropy ratio as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the prediction depth, it is expedient to use the principle of maximum entropy.
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
USDA-ARS's Scientific Manuscript database
The correct interpretation of ensemble information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an instrumental variabl...
The Representation of Prediction Error in Auditory Cortex
Rubin, Jonathan; Ulanovsky, Nachum; Tishby, Naftali
2016-01-01
To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only few details of that sequence. PMID:27490251
The modulation of savouring by prediction error and its effects on choice
Iigaya, Kiyohito; Story, Giles W; Kurth-Nelson, Zeb; Dolan, Raymond J; Dayan, Peter
2016-01-01
When people anticipate uncertain future outcomes, they often prefer to know their fate in advance. Inspired by an idea in behavioral economics that the anticipation of rewards is itself attractive, we hypothesized that this preference for advance information arises because reward prediction errors carried by such information can boost the level of anticipation. We designed new empirical behavioral studies to test this proposal, and confirmed that subjects preferred advance reward information more strongly when they had to wait for rewards for a longer time. We formulated our proposal in a reinforcement-learning model, and we showed that our model could account for a wide range of existing neuronal and behavioral data, without appealing to ambiguous notions such as an explicit value for information. We suggest that such boosted anticipation significantly drives risk-seeking behaviors, most pertinently in gambling. DOI: http://dx.doi.org/10.7554/eLife.13747.001 PMID:27101365
Design of a final approach spacing tool for TRACON air traffic control
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Erzberger, Heinz; Bergeron, Hugh
1989-01-01
This paper describes an automation tool that assists air traffic controllers in the Terminal Radar Approach Control (TRACON) Facilities in providing safe and efficient sequencing and spacing of arrival traffic. The automation tool, referred to as the Final Approach Spacing Tool (FAST), allows the controller to interactively choose various levels of automation and advisory information ranging from predicted time errors to speed and heading advisories for controlling time error. FAST also uses a timeline to display current scheduling and sequencing information for all aircraft in the TRACON airspace. FAST combines accurate predictive algorithms and state-of-the-art mouse and graphical interface technology to present advisory information to the controller. Furthermore, FAST exchanges various types of traffic information and communicates with automation tools being developed for the Air Route Traffic Control Center. Thus it is part of an integrated traffic management system for arrival traffic at major terminal areas.
Suppression of Striatal Prediction Errors by the Prefrontal Cortex in Placebo Hypoalgesia.
Schenk, Lieven A; Sprenger, Christian; Onat, Selim; Colloca, Luana; Büchel, Christian
2017-10-04
Classical learning theories predict extinction after the discontinuation of reinforcement through prediction errors. However, placebo hypoalgesia, although mediated by associative learning, has been shown to be resistant to extinction. We tested the hypothesis that this is mediated by the suppression of prediction error processing through the prefrontal cortex (PFC). We compared pain modulation through treatment cues (placebo hypoalgesia, treatment context) with pain modulation through stimulus intensity cues (stimulus context) during functional magnetic resonance imaging in 48 male and female healthy volunteers. During acquisition, our data show that expectations are correctly learned and that this is associated with prediction error signals in the ventral striatum (VS) in both contexts. However, in the nonreinforced test phase, pain modulation and expectations of pain relief persisted to a larger degree in the treatment context, indicating that the expectations were not correctly updated in the treatment context. Consistently, we observed significantly stronger neural prediction error signals in the VS in the stimulus context compared with the treatment context. A connectivity analysis revealed negative coupling between the anterior PFC and the VS in the treatment context, suggesting that the PFC can suppress the expression of prediction errors in the VS. Consistent with this, a participant's conceptual views and beliefs about treatments influenced the pain modulation only in the treatment context. Our results indicate that in placebo hypoalgesia contextual treatment information engages prefrontal conceptual processes, which can suppress prediction error processing in the VS and lead to reduced updating of treatment expectancies, resulting in less extinction of placebo hypoalgesia. SIGNIFICANCE STATEMENT In aversive and appetitive reinforcement learning, learned effects show extinction when reinforcement is discontinued. This is thought to be mediated by prediction errors (i.e., the difference between expectations and outcome). Although reinforcement learning has been central in explaining placebo hypoalgesia, placebo hypoalgesic effects show little extinction and persist after the discontinuation of reinforcement. Our results support the idea that conceptual treatment beliefs bias the neural processing of expectations in a treatment context compared with a more stimulus-driven processing of expectations with stimulus intensity cues. We provide evidence that this is associated with the suppression of prediction error processing in the ventral striatum by the prefrontal cortex. This provides a neural basis for persisting effects in reinforcement learning and placebo hypoalgesia. Copyright © 2017 the authors 0270-6474/17/379715-09$15.00/0.
Bissonette, Gregory B; Roesch, Matthew R
2016-01-01
Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum.
Roesch, Matthew R.
2017-01-01
Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum. PMID:26276036
Olson, Andrew; Halloran, Elizabeth; Romani, Cristina
2015-12-01
We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output. Copyright © 2015 Elsevier Ltd. All rights reserved.
Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Jeff D.; Wong, Raimond; Swaminath, Anand
Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately before beam delivery. For translational correlation errors, we quantified the difference between correlation model–estimated target position and actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors. Conclusions: Study results demonstrate that model correlation errors are the primary random source of uncertainty in Cyberknife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.
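A minimal sketch of the error aggregation described above, assuming per-fraction absolute correlation and prediction errors (in mm) parsed from tracking logs; the simulated error samples, the fixed end-to-end term, and all names are illustrative placeholders, not the study's data:

```python
import numpy as np

# Hypothetical per-sample absolute errors (mm), e.g. parsed from tracking logs.
rng = np.random.default_rng(0)
correlation_err = np.abs(rng.normal(0.0, 1.5, size=5000))   # model-estimated vs imaged position
prediction_err = np.abs(rng.normal(0.0, 0.25, size=5000))   # 115 ms look-ahead error
end_to_end_err = 0.5                                         # assumed system-level value (mm)

# 95th-percentile absolute errors, summarized per component.
p95_corr = np.percentile(correlation_err, 95)
p95_pred = np.percentile(prediction_err, 95)

# Overall random uncertainty: quadrature (root-sum-square) of the three components.
overall = np.sqrt(p95_corr**2 + p95_pred**2 + end_to_end_err**2)
print(f"95th pct correlation error: {p95_corr:.1f} mm")
print(f"95th pct prediction error:  {p95_pred:.1f} mm")
print(f"overall random uncertainty: {overall:.1f} mm")
```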
Stochastic estimation of plant-available soil water under fluctuating water table depths
NASA Astrophysics Data System (ADS)
Or, Dani; Groeneveld, David P.
1994-12-01
Preservation of native valley-floor phreatophytes while pumping groundwater for export from Owens Valley, California, requires reliable predictions of plant water use. These predictions are compared with stored soil water within well field regions and serve as a basis for managing groundwater resources. Soil water measurement errors, variable recharge, unpredictable climatic conditions affecting plant water use, and modeling errors make soil water predictions uncertain and error-prone. We developed and tested a scheme based on soil water balance coupled with implementation of Kalman filtering (KF) for (1) providing physically based soil water storage predictions with prediction errors projected from the statistics of the various inputs, and (2) reducing the overall uncertainty in both estimates and predictions. The proposed KF-based scheme was tested using experimental data collected at a location on the Owens Valley floor where the water table was artificially lowered by groundwater pumping and later allowed to recover. Vegetation composition and per cent cover, climatic data, and soil water information were collected and used for developing a soil water balance. Predictions and updates of soil water storage under different types of vegetation were obtained for a period of 5 years. The main results show that: (1) the proposed predictive model provides reliable and resilient soil water estimates under a wide range of external conditions; (2) the predicted soil water storage and the error bounds provided by the model offer a realistic and rational basis for decisions such as when to curtail well field operation to ensure plant survival. The predictive model offers a practical means for accommodating simple aspects of spatial variability by considering the additional source of uncertainty as part of modeling or measurement uncertainty.
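The Kalman-filter update described above can be sketched for a scalar soil water balance. This is a minimal illustration, not the authors' calibrated model: the forcing series, noise variances, and measurement schedule are assumed values.

```python
import numpy as np

def kf_soil_water(s0, forcing, measurements, q=4.0, r=25.0, p0=50.0):
    """Scalar Kalman filter for soil water storage S (mm).

    forcing: per-step net input (precip + recharge - plant water use), mm
    measurements: observed storage, or None when no measurement is available
    q, r: process and measurement error variances (mm^2); p0: initial variance
    """
    s, p = s0, p0
    estimates, variances = [], []
    for u, z in zip(forcing, measurements):
        # Predict: water balance S_t = S_{t-1} + net input; uncertainty grows by q
        s, p = s + u, p + q
        # Update when a (noisy) soil water measurement is available
        if z is not None:
            k = p / (p + r)          # Kalman gain
            s = s + k * (z - s)
            p = (1.0 - k) * p
        estimates.append(s)
        variances.append(p)
    return np.array(estimates), np.array(variances)

# Toy example: 10 steps, with a measurement every third step.
est, var = kf_soil_water(
    200.0,
    forcing=[-2, -3, 1, -2, -2, 5, -3, -2, -1, -2],
    measurements=[None, None, 190, None, None, 195, None, None, 188, None])
print(est.round(1), np.sqrt(var).round(1))
```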
Search, Memory, and Choice Error: An Experiment
Sanjurjo, Adam
2015-01-01
Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (and thus choice error), a limited-memory searcher ought to deviate from the search path of an unlimited-memory searcher in predictable ways, a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in people's behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order in which information is observed is an essential determinant of choice failure. PMID:26121356
Burillo, Almudena; Rodríguez-Sánchez, Belén; Ramiro, Ana; Cercenado, Emilia; Rodríguez-Créixems, Marta; Bouza, Emilio
2014-01-01
Microbiological confirmation of a urinary tract infection (UTI) takes 24-48 h. In the meantime, patients are usually given empirical antibiotics, sometimes inappropriately. We assessed the feasibility of sequentially performing a Gram stain and MALDI-TOF mass spectrometry (MS) on urine samples to anticipate clinically useful information. In May-June 2012, we randomly selected 1000 urine samples from patients with suspected UTI. All were Gram stained, and those yielding bacteria of a single morphotype were processed for MALDI-TOF MS. Our sequential algorithm was correlated with the standard semiquantitative urine culture result as follows: Match, the information provided was anticipative of the culture result; Minor error, the information provided was partially anticipative of the culture result; Major error, the information provided was incorrect, potentially leading to inappropriate changes in antimicrobial therapy. A positive culture was obtained in 242/1000 samples. The Gram stain revealed a single morphotype in 207 samples, which were subjected to MALDI-TOF MS. The diagnostic performance of the Gram stain was: sensitivity (Se) 81.3%, specificity (Sp) 93.2%, positive predictive value (PPV) 81.3%, negative predictive value (NPV) 93.2%, positive likelihood ratio (+LR) 11.91, negative likelihood ratio (-LR) 0.20 and accuracy 90.0%, while that of MALDI-TOF MS was: Se 79.2%, Sp 73.5%, +LR 2.99, -LR 0.28 and accuracy 78.3%. The use of both techniques provided information anticipative of the culture result in 82.7% of cases, information with minor errors in 13.4% and information with major errors in 3.9%. Results were available within 1 h. Our serial algorithm provided information that was consistent or showed minor errors for 96.1% of urine samples from patients with suspected UTI. The clinical impacts of this rapid UTI diagnosis strategy need to be assessed through indicators of adequacy of treatment such as a reduced time to appropriate empirical treatment or earlier withdrawal of unnecessary antibiotics.
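For reference, the diagnostic performance measures quoted above follow directly from a 2x2 confusion matrix; the sketch below uses illustrative counts, not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic performance measures from a 2x2 confusion matrix."""
    se = tp / (tp + fn)                    # sensitivity
    sp = tn / (tn + fp)                    # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    lr_pos = se / (1 - sp)                 # positive likelihood ratio
    lr_neg = (1 - se) / sp                 # negative likelihood ratio
    acc = (tp + tn) / (tp + fp + fn + tn)  # accuracy
    return dict(Se=se, Sp=sp, PPV=ppv, NPV=npv, LRpos=lr_pos, LRneg=lr_neg, Acc=acc)

# Illustrative counts only (not reconstructed from the study).
print(diagnostic_metrics(tp=195, fp=45, fn=45, tn=715))
```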
Mental workload prediction based on attentional resource allocation and information processing.
Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin
2015-01-01
Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures creates the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload was highly correlated with the experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.
Optimal averaging of soil moisture predictions from ensemble land surface model simulations
USDA-ARS?s Scientific Manuscript database
The correct interpretation of ensemble soil moisture information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an inst...
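The study's own estimation technique is truncated in this record, but the notion of a mutual error covariance can be illustrated with a toy calculation that assumes a reference series is available (which, in practice, is exactly what such techniques try to avoid needing); all series and dimensions below are simulated:

```python
import numpy as np

rng = np.random.default_rng(7)
n_time, n_models = 365, 4
truth = rng.normal(0.25, 0.05, n_time)                            # reference soil moisture (m3/m3)
lsm = truth[:, None] + rng.normal(0, 0.03, (n_time, n_models))    # simulated LSM ensemble

errors = lsm - truth[:, None]                  # per-model error time series
mutual_cov = np.cov(errors, rowvar=False)      # (n_models, n_models) mutual error covariance

# Inverse-covariance weights for an "optimal averaging" of the ensemble members.
w = np.linalg.solve(mutual_cov, np.ones(n_models))
w /= w.sum()
print(mutual_cov.round(4), w.round(3))
```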
Simulating Memory Impairment for Child Sexual Abuse.
Newton, Jeremy W; Hobbs, Sue D
2015-08-01
The current study investigated effects of simulated memory impairment on recall of child sexual abuse (CSA) information. A total of 144 adults were tested for memory of a written CSA scenario in which they role-played as the victim. There were four experimental groups and two testing sessions. During Session 1, participants read a CSA story and recalled it truthfully (Genuine group), omitted CSA information (Omission group), exaggerated CSA information (Commission group), or did not recall the story at all (No Rehearsal group). One week later, at Session 2, all participants were told to recount the scenario truthfully, and their memory was then tested using free recall and cued recall questions. The Session 1 manipulation affected memory accuracy during Session 2. Specifically, compared with the Genuine group's performance, the Omission, Commission, or No Rehearsal groups' performance was characterized by increased omission and commission errors and decreased reporting of correct details. Victim blame ratings (i.e., victim responsibility and provocativeness) and participant gender predicted increased error and decreased accuracy, whereas perpetrator blame ratings predicted decreased error and increased accuracy. Findings are discussed in relation to factors that may affect memory for CSA information. Copyright © 2015 John Wiley & Sons, Ltd.
Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.
Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart
2016-01-01
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNP selected based on SNP-trait association in U.S. Holstein animals. With this MOLO algorithm, both imputation error rate and genomic prediction error rate were minimal.
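A toy sketch of the entropy-plus-spacing idea behind such designs, assuming biallelic SNPs summarized by minor allele frequency; this greedy window-based selection is an illustration only, not the MOLO algorithm:

```python
import numpy as np

def locus_entropy(p):
    """Shannon entropy (bits) of a biallelic locus with minor allele frequency p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def pick_ld_panel(positions, maf, n_select):
    """Toy panel design: split the chromosome into equal windows and keep the
    highest-entropy SNP in each window (evenly spaced and informative)."""
    edges = np.linspace(positions.min(), positions.max(), n_select + 1)
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((positions >= lo) & (positions <= hi))[0]
        if idx.size:
            chosen.append(idx[np.argmax(locus_entropy(maf[idx]))])
    return np.array(chosen)

rng = np.random.default_rng(1)
pos = np.sort(rng.uniform(0, 1.5e8, 20000))   # simulated bp positions on one chromosome
maf = rng.uniform(0.01, 0.5, 20000)
panel = pick_ld_panel(pos, maf, n_select=200)
# Mean locus entropy of the panel, i.e. a locus-averaged Shannon entropy (LASE)-style score.
print(len(panel), locus_entropy(maf[panel]).mean())
```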
Qu, Conghui; Schuetz, Johanna M.; Min, Jeong Eun; Leach, Stephen; Daley, Denise; Spinelli, John J.; Brooks-Wilson, Angela; Graham, Jinko
2011-01-01
We describe a statistical approach to predict gender-labeling errors in candidate-gene association studies, when Y-chromosome markers have not been included in the genotyping set. The approach adds value to methods that consider only the heterozygosity of X-chromosome SNPs, by incorporating available information about the intensity of X-chromosome SNPs in candidate genes relative to autosomal SNPs from the same individual. To our knowledge, no published methods formalize a framework in which heterozygosity and relative intensity are simultaneously taken into account. Our method offers the advantage that, in the genotyping set, no additional space is required beyond that already assigned to X-chromosome SNPs in the candidate genes. We also show how the predictions can be used in a two-phase sampling design to estimate the gender-labeling error rates for an entire study, at a fraction of the cost of a conventional design. PMID:22303327
Heil, Lieke; Kwisthout, Johan; van Pelt, Stan; van Rooij, Iris; Bekkering, Harold
2018-01-01
Evidence is accumulating that our brains process incoming information using top-down predictions. If lower level representations are correctly predicted by higher level representations, this enhances processing. However, if they are incorrectly predicted, additional processing is required at higher levels to "explain away" prediction errors. Here, we explored the potential nature of the models generating such predictions. More specifically, we investigated whether a predictive processing model with a hierarchical structure and causal relations between its levels is able to account for the processing of agent-caused events. In Experiment 1, participants watched animated movies of "experienced" and "novice" bowlers. The results are in line with the idea that prediction errors at a lower level of the hierarchy (i.e., the outcome of how many pins fell down) slow down reporting of information at a higher level (i.e., which agent was throwing the ball). Experiments 2 and 3 suggest that this effect is specific to situations in which the predictor is causally related to the outcome. Overall, the study supports the idea that a hierarchical predictive processing model can account for the processing of observed action outcomes and that the predictions involved are specific to cases where action outcomes can be predicted based on causal knowledge.
Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N; Iijima, Toshio; Tsutsui, Ken-Ichiro
2015-11-01
To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. Copyright © 2015 the American Physiological Society.
Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N.; Iijima, Toshio
2015-01-01
To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. PMID:26378201
Dallmann, André; Ince, Ibrahim; Coboeken, Katrin; Eissing, Thomas; Hempel, Georg
2017-09-18
Physiologically based pharmacokinetic modeling is considered a valuable tool for predicting pharmacokinetic changes in pregnancy to subsequently guide in-vivo pharmacokinetic trials in pregnant women. The objective of this study was to extend and verify a previously developed physiologically based pharmacokinetic model for pregnant women for the prediction of pharmacokinetics of drugs metabolized via several cytochrome P450 enzymes. Quantitative information on gestation-specific changes in enzyme activity available in the literature was incorporated in a pregnancy physiologically based pharmacokinetic model and the pharmacokinetics of eight drugs metabolized via one or multiple cytochrome P450 enzymes was predicted. The tested drugs were caffeine, midazolam, nifedipine, metoprolol, ondansetron, granisetron, diazepam, and metronidazole. Pharmacokinetic predictions were evaluated by comparison with in-vivo pharmacokinetic data obtained from the literature. The pregnancy physiologically based pharmacokinetic model successfully predicted the pharmacokinetics of all tested drugs. The observed pregnancy-induced pharmacokinetic changes were qualitatively and quantitatively reasonably well predicted for all drugs. Ninety-seven percent of the mean plasma concentrations predicted in pregnant women fell within a twofold error range and 63% within a 1.25-fold error range. For all drugs, the predicted area under the concentration-time curve was within a 1.25-fold error range. The presented pregnancy physiologically based pharmacokinetic model can quantitatively predict the pharmacokinetics of drugs that are metabolized via one or multiple cytochrome P450 enzymes by integrating prior knowledge of the pregnancy-related effect on these enzymes. This pregnancy physiologically based pharmacokinetic model may thus be used to identify potential exposure changes in pregnant women a priori and to eventually support informed decision making when clinical trials are designed in this special population.
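The fold-error summaries quoted above can be computed as below; the concentration pairs are illustrative, not the study's observations:

```python
import numpy as np

def fold_error_summary(predicted, observed):
    """Fraction of predictions within a given fold-error of the observations."""
    ratio = np.asarray(predicted, float) / np.asarray(observed, float)
    fold = np.maximum(ratio, 1.0 / ratio)          # symmetric fold error (>= 1)
    return {"within 2-fold": np.mean(fold <= 2.0),
            "within 1.25-fold": np.mean(fold <= 1.25)}

# Illustrative plasma concentration pairs (mg/L), not the study's data.
pred = [1.2, 0.8, 3.4, 0.05, 2.1]
obs = [1.0, 1.1, 3.0, 0.04, 2.0]
print(fold_error_summary(pred, obs))
```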
NASA Astrophysics Data System (ADS)
Xu, Y.; Jones, A. D.; Rhoades, A.
2017-12-01
Precipitation is a key component in hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite the continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. But a numerical model cannot exactly replicate the true world. Therefore, the other type of model uncertainty is related to errors in the model physics, such as the parameterization of sub-grid-scale processes, i.e., given precise input conditions, how much error could be generated by the imprecise model. Here, we build two statistical models based on a neural network algorithm to predict long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as the predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical models derived from true world information and modeled world information, we assess the errors lying in the physics of the numerical models. This work provides unique insight into the performance of numerical climate models and can be used to guide the improvement of precipitation prediction.
Emotion blocks the path to learning under stereotype threat
Good, Catherine; Whiteman, Ronald C.; Maniscalco, Brian; Dweck, Carol S.
2012-01-01
Gender-based stereotypes undermine females’ performance on challenging math tests, but how do they influence their ability to learn from the errors they make? Females under stereotype threat or non-threat were presented with accuracy feedback after each problem on a GRE-like math test, followed by an optional interactive tutorial that provided step-wise problem-solving instruction. Event-related potentials tracked the initial detection of the negative feedback following errors [feedback related negativity (FRN), P3a], as well as any subsequent sustained attention/arousal to that information [late positive potential (LPP)]. Learning was defined as success in applying tutorial information to correction of initial test errors on a surprise retest 24-h later. Under non-threat conditions, emotional responses to negative feedback did not curtail exploration of the tutor, and the amount of tutor exploration predicted learning success. In the stereotype threat condition, however, greater initial salience of the failure (FRN) predicted less exploration of the tutor, and sustained attention to the negative feedback (LPP) predicted poor learning from what was explored. Thus, under stereotype threat, emotional responses to negative feedback predicted both disengagement from learning and interference with learning attempts. We discuss the importance of emotion regulation in successful rebound from failure for stigmatized groups in stereotype-salient environments. PMID:21252312
Emotion blocks the path to learning under stereotype threat.
Mangels, Jennifer A; Good, Catherine; Whiteman, Ronald C; Maniscalco, Brian; Dweck, Carol S
2012-02-01
Gender-based stereotypes undermine females' performance on challenging math tests, but how do they influence their ability to learn from the errors they make? Females under stereotype threat or non-threat were presented with accuracy feedback after each problem on a GRE-like math test, followed by an optional interactive tutorial that provided step-wise problem-solving instruction. Event-related potentials tracked the initial detection of the negative feedback following errors [feedback related negativity (FRN), P3a], as well as any subsequent sustained attention/arousal to that information [late positive potential (LPP)]. Learning was defined as success in applying tutorial information to correction of initial test errors on a surprise retest 24-h later. Under non-threat conditions, emotional responses to negative feedback did not curtail exploration of the tutor, and the amount of tutor exploration predicted learning success. In the stereotype threat condition, however, greater initial salience of the failure (FRN) predicted less exploration of the tutor, and sustained attention to the negative feedback (LPP) predicted poor learning from what was explored. Thus, under stereotype threat, emotional responses to negative feedback predicted both disengagement from learning and interference with learning attempts. We discuss the importance of emotion regulation in successful rebound from failure for stigmatized groups in stereotype-salient environments.
Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.
NASA Astrophysics Data System (ADS)
Moura, Antonio Divino; Hastenrath, Stefan
2004-07-01
Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.
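A minimal sketch of the verification measures used above (correlation, root-mean-square error, absolute error, and bias), with illustrative rainfall values rather than the study's series:

```python
import numpy as np

def verify(pred, obs):
    """Basic forecast verification scores for paired predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    err = pred - obs
    return {"r": np.corrcoef(pred, obs)[0, 1],
            "rmse": np.sqrt(np.mean(err ** 2)),
            "mae": np.mean(np.abs(err)),
            "bias": np.mean(err)}

# Illustrative seasonal rainfall anomalies (not the study's data).
print(verify(pred=[-1.2, 0.4, 1.5, -0.3, 0.9], obs=[-0.8, 0.2, 1.9, -0.5, 1.1]))
```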
Predictive error detection in pianists: a combined ERP and motion capture study
Maidhof, Clemens; Pitkäniemi, Anni; Tervaniemi, Mari
2013-01-01
Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported pre-error negativity already occurring approximately 70–100 ms before the error had been committed and audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error-monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an upcoming error. PMID:24133428
Gray, Rob; Orn, Anders; Woodman, Tim
2017-02-01
Are pressure-induced performance errors in experts associated with novice-like skill execution (as predicted by reinvestment/conscious processing theories) or expert execution toward a result that the performer typically intends to avoid (as predicted by ironic processes theory)? The present study directly compared these predictions using a baseball pitching task with two groups of experienced pitchers. One group was shown only their target, while the other group was shown the target and an ironic (avoid) zone. Both groups demonstrated significantly fewer target hits under pressure. For the target-only group, this was accompanied by significant changes in expertise-related kinematic variables. In the ironic group, the number of pitches thrown in the ironic zone was significantly higher under pressure, and there were no significant changes in kinematics. These results suggest that information about an opponent can influence the mechanisms underlying pressure-induced performance errors.
Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin
2016-07-26
Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. Tuberculosis, as a chronic and highly infectious disease, is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) model and a combined SARIMA-neural network auto-regression (SARIMA-NNAR) model for TB incidence, and to analyse its seasonality in South Africa. TB incidence data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA and neural network auto-regression (SARIMA-NNAR) model were used to analyse and predict the TB data from 2010 to 2015. Performance measures of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were used to compare the predictive performance of the models. Although both models could predict TB incidence in practice, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) were 288.56, 308.31 and 299.09, respectively, lower than the corresponding SARIMA values of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly stronger increasing seasonal trend in TB incidence than the single model. The combined model thus provided better TB incidence forecasting, with a lower AICc. The model also indicates the need for resolute intervention to reduce infectious disease transmission, particularly where there is co-infection with HIV and other concomitant diseases, and at festival peak periods.
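A hedged sketch of the SARIMA part of such a comparison using statsmodels, with a simulated monthly incidence series and placeholder model orders; the neural network auto-regression component is not reproduced here:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Illustrative monthly incidence series for 2010-2015 (not the registry data).
rng = np.random.default_rng(2)
months = pd.date_range("2010-01", periods=72, freq="MS")
season = 10 * np.sin(2 * np.pi * months.month / 12)
y = pd.Series(200 + season + rng.normal(0, 8, 72), index=months)

# Hold out the final year, fit a seasonal ARIMA with assumed orders, and score it.
train, test = y[:-12], y[-12:]
model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
forecast = model.forecast(steps=12)

rmse = np.sqrt(np.mean((forecast - test) ** 2))
mape = np.mean(np.abs((forecast - test) / test)) * 100
print(f"AIC={model.aic:.1f}  RMSE={rmse:.2f}  MAPE={mape:.1f}%")
```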
The cerebellum for jocks and nerds alike.
Popa, Laurentiu S; Hewitt, Angela L; Ebner, Timothy J
2014-01-01
Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains.
Assessing uncertainty in high-resolution spatial climate data across the US Northeast.
Bishop, Daniel A; Beier, Colin M
2013-01-01
Local and regional-scale knowledge of climate change is needed to model ecosystem responses, assess vulnerabilities and devise effective adaptation strategies. High-resolution gridded historical climate (GHC) products address this need, but come with multiple sources of uncertainty that are typically not well understood by data users. To better understand this uncertainty in a region with a complex climatology, we conducted a ground-truthing analysis of two 4 km GHC temperature products (PRISM and NRCC) for the US Northeast using 51 Cooperative Network (COOP) weather stations utilized by both GHC products. We estimated GHC prediction error for monthly temperature means and trends (1980-2009) across the US Northeast and evaluated any landscape effects (e.g., elevation, distance from coast) on those prediction errors. Results indicated that station-based prediction errors for the two GHC products were similar in magnitude, but on average, the NRCC product predicted cooler than observed temperature means and trends, while PRISM was cooler for means and warmer for trends. We found no evidence for systematic sources of uncertainty across the US Northeast, although errors were largest at high elevations. Errors in the coarse-scale (4 km) digital elevation models used by each product were correlated with temperature prediction errors, more so for NRCC than PRISM. In summary, uncertainty in spatial climate data has many sources and we recommend that data users develop an understanding of uncertainty at the appropriate scales for their purposes. To this end, we demonstrate a simple method for utilizing weather stations to assess local GHC uncertainty and inform decisions among alternative GHC products.
NASA Astrophysics Data System (ADS)
Cisneros, Felipe; Veintimilla, Jaime
2013-04-01
The main aim of this research is to create an Artificial Neural Network (ANN) model that predicts the flow of the Tomebamba River, both in real time and for a given day of the year. As inputs we use rainfall and flow information from the stations along the river. This information is organized into scenarios, and each scenario is prepared for a specific area. The information is acquired in real time from the hydrological stations placed in the watershed using a purpose-built electronic system that supports any kind or brand of such sensors. The prediction works well up to three days in advance. This research includes two ANN models: back propagation and a hybrid of back propagation and OWO-HWO, both of which were tested in a preliminary study. To validate the results we use error indicators such as MSE, RMSE, EF, CD and BIAS. The predictions reached high levels of reliability and the errors are minimal. These predictions are useful for flood and water quality control and management in the city of Cuenca, Ecuador.
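A minimal sketch of a back-propagation network for multi-day-ahead flow prediction from lagged rainfall and flow, using scikit-learn on simulated data; the lags, network size, and series are assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Simulated daily rainfall and a flow series that responds to it with a short lag.
rng = np.random.default_rng(3)
n = 1000
rain = rng.gamma(2.0, 2.0, n)                                        # upstream rainfall (mm/day)
flow = np.convolve(rain, [0.5, 0.3, 0.2], mode="same") + rng.normal(0, 0.3, n)

# Predictors: rainfall and flow over three past days; target: flow three days ahead.
X = np.column_stack([rain[i:n - 6 + i] for i in range(3)] +
                    [flow[i:n - 6 + i] for i in range(3)])
y = flow[6:]
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, net.predict(X_test)))
print(f"3-day-ahead RMSE: {rmse:.3f}")
```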
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V
2015-01-01
Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.
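A toy version of the "naive combination" that PIAF is contrasted with: a scalar Kalman filter that uses the current forward-model estimate, coupled with a Recursive Least Squares update of that estimate from the filtered states. The dynamics, noise levels, and initialization are illustrative assumptions, and PIAF's Bayes-optimal coupling is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)
a_true, sigma_proc, sigma_obs = 0.9, 0.1, 0.5     # true forward model: x' = a*x + noise
a_hat, P_rls = 0.0, 10.0                          # RLS estimate of a and its variance
x_hat, P_kf = 0.0, 1.0                            # Kalman state estimate and variance

x = 0.0
for t in range(2000):
    # World: latent state and noisy sensory observation.
    x = a_true * x + rng.normal(0, sigma_proc)
    y = x + rng.normal(0, sigma_obs)

    # Kalman filter using the *learned* forward model a_hat.
    x_prev = x_hat
    x_pred = a_hat * x_hat
    P_pred = a_hat**2 * P_kf + sigma_proc**2
    K = P_pred / (P_pred + sigma_obs**2)
    x_hat = x_pred + K * (y - x_pred)
    P_kf = (1 - K) * P_pred

    # RLS update of a_hat from the filtered states (the naive coupling); because the
    # filtered state already depends on a_hat, learning can stall or bias -- the
    # circularity that the Bayes-optimal treatment is meant to resolve.
    denom = P_rls * x_prev**2 + 1.0
    gain = P_rls * x_prev / denom
    a_hat = a_hat + gain * (x_hat - a_hat * x_prev)
    P_rls = P_rls - gain * x_prev * P_rls

print(f"learned a = {a_hat:.3f} (true {a_true})")
```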
Digital Troposcatter Performance Model. Users Manual.
1983-11-01
Keywords: Diffraction Multipath Prediction; MD-918 Modem Error Rate Prediction; AN/TRC-170 Link Analysis. Abstract (fragment): ...configurations used in the Defense Communications System (DCS), and prediction of the performance of both the MD-918 and AN/TRC-170 digital troposcatter modems.
Popa, Laurentiu S.; Streng, Martha L.
2017-01-01
Most hypotheses of cerebellar function emphasize a role in real-time control of movements. However, the cerebellum’s use of current information to adjust future movements and its involvement in sequencing, working memory, and attention argues for predicting and maintaining information over extended time windows. The present study examines the time course of Purkinje cell discharge modulation in the monkey (Macaca mulatta) during manual, pseudo-random tracking. Analysis of the simple spike firing from 183 Purkinje cells during tracking reveals modulation up to 2 s before and after kinematics and position error. Modulation significance was assessed against trial shuffled firing, which decoupled simple spike activity from behavior and abolished long-range encoding while preserving data statistics. Position, velocity, and position errors have the most frequent and strongest long-range feedforward and feedback modulations, with less common, weaker long-term correlations for speed and radial error. Position, velocity, and position errors can be decoded from the population simple spike firing with considerable accuracy for even the longest predictive (-2000 to -1500 ms) and feedback (1500 to 2000 ms) epochs. Separate analysis of the simple spike firing in the initial hold period preceding tracking shows similar long-range feedforward encoding of the upcoming movement and in the final hold period feedback encoding of the just completed movement, respectively. Complex spike analysis reveals little long-term modulation with behavior. We conclude that Purkinje cell simple spike discharge includes short- and long-range representations of both upcoming and preceding behavior that could underlie cerebellar involvement in error correction, working memory, and sequencing. PMID:28413823
Small Area Variance Estimation for the Siuslaw NF in Oregon and Some Results
S. Lin; D. Boes; H.T. Schreuder
2006-01-01
The results of a small area prediction study for the Siuslaw National Forest in Oregon are presented. Predictions were made for total basal area, number of trees and mortality per ha on a 0.85 mile grid using data on a 1.7 mile grid and additional ancillary information from TM. A reliable method of estimating prediction errors for individual plot predictions called the...
NASA Astrophysics Data System (ADS)
Mashuri, Chamdan; Suryono; Suseno, Jatmiko Endro
2018-02-01
This research predicts safety stock using Fuzzy Time Series (FTS) and Radio Frequency Identification (RFID) technology for stock control in Vendor Managed Inventory (VMI). Well-controlled stock influences company revenue and minimizes cost. The paper describes an information system for safety stock prediction developed in the PHP programming language. Input data consist of demand obtained through automatic, online and real-time acquisition using RFID, then sent to a server and stored in an online database. The acquired data are then predicted with the FTS algorithm, which defines the universe of discourse and determines the fuzzy sets; the fuzzy sets are then carried through partitioning of the universe of discourse to the final step. Prediction results are displayed on the information system dashboard that was developed. Using 60 demand data points, the predicted demand was 450.331 and the safety stock was 135.535. The prediction was validated by error deviation using a Mean Square Percent Error of 15%. This shows that FTS is good enough at predicting demand and safety stock for stock control. For deeper analysis, the researchers varied the demand data and the universe of discourse U in the FTS to obtain different results based on the test data used.
Sánchez-López, E; Sánchez-Rodríguez, M I; Marinas, A; Marinas, J M; Urbano, F J; Caridad, J M; Moalem, M
2016-08-15
Authentication of extra virgin olive oil (EVOO) is an important topic for the olive oil industry. Fraudulent practices in this sector are a major problem affecting both producers and consumers. This study analyzes the capability of FT-Raman spectroscopy combined with chemometric treatments to predict fatty acid contents (quantitative information), using gas chromatography as the reference technique, and to classify diverse EVOOs as a function of harvest year, olive variety, geographical origin and Andalusian PDO (qualitative information). The optimal number of PLS components summarizing the spectral information was introduced progressively. For the estimation of the fatty acid composition, the lowest error (both in fitting and prediction) corresponded to MUFA, followed by SAFA and PUFA, though such errors were close to zero in all cases. As regards the qualitative variables, discriminant analysis allowed a correct classification of 94.3%, 84.0%, 89.0% and 86.6% of samples for harvest year, olive variety, geographical origin and PDO, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
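A hedged sketch of PLS regression of fatty-acid contents on spectra with cross-validated error, using scikit-learn on simulated data; the spectra, component count, and fatty-acid groupings are placeholders, not the study's calibration:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

# Simulated stand-ins for Raman spectra and reference fatty-acid compositions.
rng = np.random.default_rng(5)
n_samples, n_wavenumbers = 120, 600
spectra = rng.normal(0, 1, (n_samples, n_wavenumbers))
true_loadings = rng.normal(0, 1, (n_wavenumbers, 3))
fatty_acids = spectra @ true_loadings + rng.normal(0, 0.5, (n_samples, 3))  # MUFA, SAFA, PUFA

# PLS regression with an assumed number of components, scored by cross-validation.
pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, spectra, fatty_acids, cv=5)
for i, name in enumerate(["MUFA", "SAFA", "PUFA"]):
    rmse = np.sqrt(mean_squared_error(fatty_acids[:, i], pred[:, i]))
    print(f"{name}: cross-validated RMSE = {rmse:.2f}")
```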
F. Mauro; Vicente J. Monleon; H. Temesgen; L.A. Ruiz
2017-01-01
Accounting for spatial correlation of LiDAR model errors can improve the precision of model-based estimators. To estimate spatial correlation, sample designs that provide close observations are needed, but their implementation might be prohibitively expensive. To quantify the gains obtained by accounting for the spatial correlation of model errors, we examined (
Prefrontal neural correlates of memory for sequences.
Averbeck, Bruno B; Lee, Daeyeol
2007-02-28
The sequence of actions appropriate to solve a problem often needs to be discovered by trial and error and recalled in the future when faced with the same problem. Here, we show that when monkeys had to discover and then remember a sequence of decisions across trials, ensembles of prefrontal cortex neurons reflected the sequence of decisions the animal would make throughout the interval between trials. This signal could reflect either an explicit memory process or a sequence-planning process that begins far in advance of the actual sequence execution. This finding extended to error trials such that, when the neural activity during the intertrial interval specified the wrong sequence, the animal also attempted to execute an incorrect sequence. More specifically, we used a decoding analysis to predict the sequence the monkey was planning to execute at the end of the fore-period, just before sequence execution. When this analysis was applied to error trials, we were able to predict where in the sequence the error would occur, up to three movements into the future. This suggests that prefrontal neural activity can retain information about sequences between trials, and that regardless of whether information is remembered correctly or incorrectly, the prefrontal activity veridically reflects the animal's action plan.
Experiential effects on mirror systems and social learning: implications for social intelligence.
Reader, Simon M
2014-04-01
Investigations of biases and experiential effects on social learning, social information use, and mirror systems can usefully inform one another. Unconstrained learning is predicted to shape mirror systems when the optimal response to an observed act varies, but constraints may emerge when immediate error-free responses are required and evolutionary or developmental history reliably predicts the optimal response. Given the power of associative learning, such constraints may be rare.
NASA Astrophysics Data System (ADS)
Huang, H. C.; Pan, L.; McQueen, J.; Lee, P.; ONeill, S. M.; Ruminski, M.; Shafran, P.; DiMego, G.; Huang, J.; Stajner, I.; Upadhayay, S.; Larkin, N. K.
2016-12-01
Wildfires contribute to air quality problems not only through primary emissions of particulate matter (PM) but also through emitted ozone precursor gases that can lead to elevated ozone concentrations. Wildfires are unpredictable and can be ignited by natural causes such as lightning or accidentally by negligent human behavior such as a discarded lit cigarette. Although wildfire impacts on air quality can be studied by collecting fire information after events, it is extremely difficult to predict the future occurrence and behavior of wildfires for real-time air quality forecasts. Because of the time constraints of operational air quality forecasting, assumptions about the next day's fire behavior often have to be made based on fire information observed in the past. The United States (U.S.) NOAA/NWS built the National Air Quality Forecast Capability (NAQFC) based on the U.S. EPA CMAQ model to provide air quality forecast guidance (prediction) publicly. State and local forecasters use the forecast guidance to issue air quality alerts in their areas. The NAQFC fine particulate (PM2.5) prediction includes emissions from anthropogenic and biogenic sources, as well as natural sources such as dust storms and fires. The fire emission input to the NAQFC is derived from the NOAA NESDIS HMS fire and smoke detection product and the emission module of the US Forest Service BlueSky Smoke Modeling Framework. This study focuses on the error estimation of NAQFC PM2.5 predictions resulting from fire emissions. Comparisons between the NAQFC modeled PM2.5 and the EPA AirNow surface observations show that the present operational NAQFC fire emissions assumption can lead to large errors in PM2.5 prediction, as fire emissions are sometimes placed at the wrong location and time. This PM2.5 prediction error can propagate from the fire source in the Northwest U.S. to downstream areas as far as the Southeast U.S. From this study, a new procedure has been identified to minimize the aforementioned error. An additional 24-hour reanalysis run of NAQFC using same-day observed fire emissions is being tested. Preliminary results show that this procedure greatly improves the PM2.5 predictions in areas both near and downstream from fire sources. The 24-hour reanalysis run is critical and necessary, especially during extreme fire events, to provide better PM2.5 predictions.
NASA Astrophysics Data System (ADS)
Declair, Stefan; Saint-Drenan, Yves-Marie; Potthast, Roland
2017-04-01
Determining the amount of weather-dependent renewable energy is a demanding task for transmission system operators (TSOs), and wind and photovoltaic (PV) prediction errors require the use of reserve power, which generates costs and can - in extreme cases - endanger the security of supply. In the project EWeLiNE, funded by the German government, the German Weather Service and the Fraunhofer Institute for Wind Energy and Energy System Technology develop innovative weather and power forecasting models and tools for grid integration of weather-dependent renewable energy. The key part in energy prediction process chains is the numerical weather prediction (NWP) system. Irradiation forecasts from NWP systems are, however, subject to several sources of error. For PV power prediction, weaknesses of the NWP model in correctly forecasting, e.g., low stratus, absorption by condensed water or aerosol optical depths are the main sources of error. Inaccurate radiation schemes (e.g., the two-stream parametrization) are also a known deficit of NWP systems with regard to irradiation forecasts. To mitigate errors like these, the latest observations can be used in a pre-processing technique called data assimilation (DA). In DA, not only are the initial fields provided, but the model is also synchronized with reality - the observations - and hence forecast errors are reduced. Besides conventional observation networks such as radiosondes, synoptic observations or air reports of wind, pressure and humidity, the number of observations measuring meteorological information indirectly by means of remote sensing, such as satellite radiances, radar reflectivities or GPS slant delays, is increasing strongly. The numerous PV plants installed in Germany potentially represent a dense meteorological network assessing irradiation through their power measurements. Forecast accuracy may thus be enhanced by extending the observations in the assimilation with this new source of information. PV power plants can provide information on clouds, aerosol optical depth or low stratus in the manner of remote sensing: the power output is strongly dependent on perturbations along the slant between the sun position and the PV panel. Since these data are not limited to the vertical column above or below the detector, they may complement satellite data and compensate for weaknesses in the radiation scheme. In this contribution, the DA technique used (Local Ensemble Transform Kalman Filter, LETKF) is briefly sketched. Furthermore, the computation of the model power equivalents is described, and first results are presented and discussed.
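As an illustration of how an ensemble data assimilation update pulls a forecast toward PV-derived observations, the following is a toy stochastic ensemble Kalman filter analysis step. It is a simplification for exposition only, not the operational LETKF, and all values are hypothetical.

```python
# Toy stochastic ensemble Kalman filter analysis step: an ensemble of irradiance
# forecasts is pulled toward an observation (e.g. one inferred from PV power).
# This simplifies the LETKF mentioned above and uses invented numbers throughout.
import numpy as np

def enkf_update(ensemble, H, y, obs_err_std, rng):
    """ensemble: (n_members, n_state); H: (n_obs, n_state); y: (n_obs,)."""
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = X @ H.T                                     # observation-space anomalies
    R = np.eye(len(y)) * obs_err_std**2
    P_yy = Y.T @ Y / (n_members - 1) + R
    P_xy = X.T @ Y / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    # Perturbed observations, one set per member.
    y_pert = y + rng.normal(scale=obs_err_std, size=(n_members, len(y)))
    innov = y_pert - ensemble @ H.T
    return ensemble + innov @ K.T

rng = np.random.default_rng(1)
ens = rng.normal(loc=300.0, scale=40.0, size=(20, 1))   # 20-member prior (W/m^2)
analysis = enkf_update(ens, H=np.eye(1), y=np.array([350.0]), obs_err_std=20.0, rng=rng)
print("prior mean:", ens.mean(), "analysis mean:", analysis.mean())
```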
Using beta binomials to estimate classification uncertainty for ensemble models.
Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin
2014-01-01
Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprising logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distributions of predictions and errors for large external validation sets, even when the numbers of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
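A minimal sketch of the vote-tally modelling idea, assuming synthetic tallies and a maximum-likelihood fit with scipy's beta-binomial distribution; the paper fits separate distributions for predictions and for errors on the training pool, which is not reproduced here.

```python
# Fit a beta-binomial to the positive vote tallies of an ensemble classifier and
# read off the probability of a particular tally. Vote counts are synthetic.
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

n_nets = 33                                       # ensemble size (hypothetical)
votes = np.random.default_rng(2).binomial(n_nets, 0.8, size=500)   # positive-class tallies

def neg_loglik(params):
    a, b = np.exp(params)                         # keep alpha, beta positive
    return -betabinom.logpmf(votes, n_nets, a, b).sum()

res = minimize(neg_loglik, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
alpha, beta = np.exp(res.x)
print("fitted alpha, beta:", alpha, beta)
print("P(tally == 17):", betabinom.pmf(17, n_nets, alpha, beta))
```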
Methods to Improve the Maintenance of the Earth Catalog of Satellites During Severe Solar Storms
NASA Technical Reports Server (NTRS)
Wilkin, Paul G.; Tolson, Robert H.
1998-01-01
The objective of this thesis is to investigate methods to improve the ability to maintain the inventory of orbital elements of Earth satellites during periods of atmospheric disturbance brought on by severe solar activity. Existing techniques do not account for such atmospheric dynamics, resulting in tracking errors of several seconds in predicted crossing time. Two techniques are examined to reduce these tracking errors. First, density predicted from various atmospheric models is fit to the orbital decay rate for a number of satellites. An orbital decay model is then developed that could be used to reduce tracking errors by accounting for atmospheric changes. The second approach utilizes a Kalman filter to estimate the orbital decay rate of a satellite after every observation. The new information is used to predict the next observation. Results from the first approach demonstrated the feasibility of building an orbital decay model based on predicted atmospheric density. Correlation of atmospheric density to orbital decay was as high as 0.88. However, it is clear that contemporary atmospheric models need further improvement in modeling density perturbations in the polar region brought on by solar activity. The second approach resulted in a dramatic reduction in tracking errors for certain satellites during severe solar storms. For example, in the limited cases studied, the reduction in tracking errors ranged from 25 to 79 percent.
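The second approach can be illustrated with a toy Kalman filter that tracks a satellite's mean motion and decay rate from noisy observations; the dynamics, noise levels and data below are purely illustrative and are not those of the thesis.

```python
# Toy Kalman filter tracking mean motion n and decay rate n_dot from noisy
# mean-motion observations, updated after every observation as described above.
# All values (time step, noise, initial state) are illustrative assumptions.
import numpy as np

dt = 0.5                                   # days between observations (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: n grows by n_dot * dt
H = np.array([[1.0, 0.0]])                 # only mean motion is observed
Q = np.diag([1e-10, 1e-10])                # process noise (storm-driven drag changes)
R = np.array([[1e-8]])                     # observation noise variance

x = np.array([15.5, 1e-5])                 # [rev/day, rev/day^2] initial state
P = np.diag([1e-6, 1e-8])

rng = np.random.default_rng(3)
for k in range(20):
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    z = np.array([15.5 + 2e-5 * k * dt + rng.normal(scale=1e-4)])   # simulated obs
    S = H @ P @ H.T + R                    # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
print("estimated mean motion and decay rate:", x)
```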
NASA Astrophysics Data System (ADS)
Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles
2017-04-01
An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to the end users. This information can match the end users' needs (i.e. prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality of operational flood forecasts. In 2015, the French flood forecasting national and regional services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of past forecasting errors of deterministic models was selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach based on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, an attempt is made at combining the operational strength of the empirical statistical analysis with a simple error model. Since the heteroscedasticity of forecast errors can considerably weaken the predictive reliability for large floods, this error modelling is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during the recent floods experienced in France (major spring floods in June 2016 on the Loire river tributaries and flash floods in fall 2016) will be shown and discussed. References: Bourgin, F. (2014). How to assess the predictive uncertainty in hydrological modelling? An exploratory work on a large sample of watersheds, AgroParisTech. Wang, Q. J., Shrestha, D. L., Robertson, D. E. and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, W05514, doi:10.1029/2011WR010973.
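The log-sinh transformation cited above (Wang et al., 2012) is simple to state in code; the parameter values below are illustrative only.

```python
# Log-sinh transformation used to stabilise the variance of flow forecast errors:
# z = (1/b) * log(sinh(a + b*y)). Parameters a and b are illustrative.
import numpy as np

def log_sinh(y, a, b):
    return np.log(np.sinh(a + b * y)) / b

def inv_log_sinh(z, a, b):
    return (np.arcsinh(np.exp(b * z)) - a) / b

q = np.array([10.0, 100.0, 1000.0])          # discharges in m^3/s (hypothetical)
z = log_sinh(q, a=0.01, b=0.001)
print(z, inv_log_sinh(z, a=0.01, b=0.001))   # round-trips back to q
```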
Routine cognitive errors: a trait-like predictor of individual differences in anxiety and distress.
Fetterman, Adam K; Robinson, Michael D
2011-02-01
Five studies (N=361) sought to model a class of errors--namely, those in routine tasks--that several literatures have suggested may predispose individuals to higher levels of emotional distress. Individual differences in error frequency were assessed in choice reaction-time tasks of a routine cognitive type. In Study 1, it was found that tendencies toward error in such tasks exhibit trait-like stability over time. In Study 3, it was found that tendencies toward error exhibit trait-like consistency across different tasks. Higher error frequency, in turn, predicted higher levels of negative affect, general distress symptoms, displayed levels of negative emotion during an interview, and momentary experiences of negative emotion in daily life (Studies 2-5). In all cases, such predictive relations remained significant with individual differences in neuroticism controlled. The results thus converge on the idea that error frequency in simple cognitive tasks is a significant and consequential predictor of emotional distress in everyday life. The results are novel, but discussed within the context of the wider literatures that informed them. © 2010 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business
Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B
2018-08-01
Spatial modelling of environmental data commonly considers only spatial variability as the single source of uncertainty. In reality, however, the measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low-cost, yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate than laboratory-analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia, where a combination of laboratory-measured, and vis-NIR and MIR inferred, topsoil and subsoil soil carbon data is available. We investigated the applicability of residual maximum likelihood (REML) and Markov chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of the measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, the method is amenable to filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.
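The covariance construction implied above, adding the measurement-error variance to the diagonal of a Matérn spatial covariance, can be sketched as follows; locations, parameters and error variances are hypothetical, and the REML/MCMC estimation itself is not shown.

```python
# Matern spatial covariance plus a measurement-error variance on the diagonal:
# the mechanism that lets the model "filter out" spectrally-inferred error.
# All parameter values and coordinates are invented for illustration.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import kv, gamma

def matern_cov(coords, sill, rang, nu):
    d = cdist(coords, coords)
    scaled = np.sqrt(2 * nu) * d / rang
    scaled[scaled == 0.0] = 1e-12           # avoid 0 * inf at zero distance
    return sill * (2 ** (1 - nu) / gamma(nu)) * scaled ** nu * kv(nu, scaled)

coords = np.random.default_rng(4).uniform(0, 1000, size=(50, 2))   # site locations (m)
meas_err_var = np.full(50, 0.04)            # assumed variance of vis-NIR-inferred carbon

C = matern_cov(coords, sill=0.25, rang=300.0, nu=0.5)
C_obs = C + np.diag(meas_err_var)           # observed-data covariance includes measurement error
print(C_obs.shape)
```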
Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K
2011-12-01
Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pK(a) prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pK(a) predictions but also inform about the physical origins of pK(a) shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pK(a) prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pK(a) values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pK(a)'s with errors greater than 3.5 pK units. Analysis of the conformational fluctuation of titrating side-chains in the context of the errors of calculated pK(a) values indicate that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of errors suggests that more accurate pK(a) predictions can be obtained for the most deeply buried residues by improving the accuracy in calculating desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment such that the inclusion of an explicit-solvent representation may offer improvement of accuracy. Copyright © 2011 Wiley-Liss, Inc.
Social learning through prediction error in the brain
NASA Astrophysics Data System (ADS)
Joiner, Jessica; Piva, Matthew; Turrin, Courtney; Chang, Steve W. C.
2017-06-01
Learning about the world is critical to survival and success. In social animals, learning about others is a necessary component of navigating the social world, ultimately contributing to increasing evolutionary fitness. How humans and nonhuman animals represent the internal states and experiences of others has long been a subject of intense interest in the developmental psychology tradition, and, more recently, in studies of learning and decision making involving self and other. In this review, we explore how psychology conceptualizes the process of representing others, and how neuroscience has uncovered correlates of reinforcement learning signals to explore the neural mechanisms underlying social learning from the perspective of representing reward-related information about self and other. In particular, we discuss self-referenced and other-referenced types of reward prediction errors across multiple brain structures that effectively allow reinforcement learning algorithms to mediate social learning. Prediction-based computational principles in the brain may be strikingly conserved between self-referenced and other-referenced information.
Lahat, Ayelet; Lamm, Connie; Chronis-Tuscano, Andrea; Pine, Daniel S.; Henderson, Heather A.; Fox, Nathan A.
2014-01-01
Objective: Behavioral inhibition (BI) is an early childhood temperament characterized by fearful responses to novelty and avoidance of social interactions. During adolescence, a subset of children with stable childhood BI develop social anxiety disorder and concurrently exhibit increased error monitoring. The current study examines whether increased error monitoring in seven-year-old behaviorally inhibited children prospectively predicts risk for symptoms of social phobia at age 9. Method: Two hundred and ninety-one children were characterized on BI at 24 and 36 months of age. Children were seen again at 7 years of age, when they performed a Flanker task, and event-related potential (ERP) indices of response monitoring were generated. At age 9, self- and maternal report of social phobia symptoms were obtained. Results: Children high in BI, compared to those low in BI, displayed increased error monitoring at age 7, as indexed by larger (i.e., more negative) error-related negativity (ERN) amplitudes. Additionally, early BI was related to later childhood social phobia symptoms at age 9 among children with a large difference in amplitude between the ERN and the correct-response negativity (CRN) at age 7. Conclusions: Heightened error monitoring predicts risk for later social phobia symptoms in children with high BI. Research assessing response monitoring in children with BI may refine our understanding of the mechanisms underlying risk for later anxiety disorders and inform prevention efforts. PMID:24655654
Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.
2015-01-01
Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature, and methods based just on K. It is possible to combine two independent methods by computing a weighted mean, but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899·tmax^(−0.916), prediction error = 0.32) when possible and a growth-based method (M = 4.118·K^(0.73)·L∞^(−0.33), prediction error = 0.6, length in cm) otherwise.
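The two recommended estimators translate directly into code (units: M per year, tmax in years, K per year, L∞ in cm); the example inputs are hypothetical.

```python
# The two estimators recommended above, exactly as given in the abstract.
def m_from_tmax(tmax):
    """M = 4.899 * tmax^-0.916 (tmax in years)."""
    return 4.899 * tmax ** -0.916

def m_from_growth(K, L_inf):
    """M = 4.118 * K^0.73 * L_inf^-0.33 (K per year, L_inf in cm)."""
    return 4.118 * K ** 0.73 * L_inf ** -0.33

print(m_from_tmax(20), m_from_growth(0.2, 60))   # e.g. a hypothetical 20-year-max-age stock
```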
Learning and Prediction of Slip from Visual Information
NASA Technical Reports Server (NTRS)
Angelova, Anelia; Matthies, Larry; Helmick, Daniel; Perona, Pietro
2007-01-01
This paper presents an approach for slip prediction from a distance for wheeled ground robots using visual information as input. Large amounts of slippage which can occur on certain surfaces, such as sandy slopes, will negatively affect rover mobility. Therefore, obtaining information about slip before entering such terrain can be very useful for better planning and avoiding these areas. To address this problem, terrain appearance and geometry information about map cells are correlated to the slip measured by the rover while traversing each cell. This relationship is learned from previous experience, so slip can be predicted remotely from visual information only. The proposed method consists of terrain type recognition and nonlinear regression modeling. The method has been implemented and tested offline on several off-road terrains including: soil, sand, gravel, and woodchips. The final slip prediction error is about 20%. The system is intended for improved navigation on steep slopes and rough terrain for Mars rovers.
MO-G-18C-05: Real-Time Prediction in Free-Breathing Perfusion MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, H; Liu, W; Ruan, D
Purpose: The aim is to minimize frame-wise difference errors caused by respiratory motion and eliminate the need for breath-holds in magnetic resonance imaging (MRI) sequences with long acquisitions and repeat times (TRs). The technique is being applied to perfusion MRI using arterial spin labeling (ASL). Methods: Respiratory motion prediction (RMP) using navigator echoes was implemented in ASL. A least-squares method was used to extract the respiratory motion information from the 1D navigator. A generalized artificial neural network (ANN) with three layers was developed to simultaneously predict 10 time points forward in time and correct for respiratory motion during MRI acquisition. During the training phase, the parameters of the ANN were optimized to minimize the aggregated prediction error based on acquired navigator data. During real-time prediction, the trained ANN was applied to the most recent estimated displacement trajectory to determine in real time the amount of spatial correction needed. Results: The respiratory motion information extracted from the least-squares method can accurately represent the navigator profiles, with a normalized chi-square value of 0.037±0.015 across the training phase. During the 60-second training phase, the ANN successfully learned the respiratory motion pattern from the navigator training data. During real-time prediction, the ANN received displacement estimates and predicted the motion in the continuum of a 1.0 s prediction window. The ANN prediction was able to provide corrections for different respiratory states (i.e., inhalation/exhalation) during real-time scanning with a mean absolute error of < 1.8 mm. Conclusion: A new technique enabling free-breathing acquisition during MRI is being developed. A generalized ANN has demonstrated its efficacy in predicting a continuum of motion profiles for volumetric imaging based on navigator inputs. Future work will enhance the robustness of the ANN and verify its effectiveness with human subjects. Research supported by National Institutes of Health National Cancer Institute Grant R01 CA159471-01.
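A generic sketch of the prediction idea, assuming a synthetic breathing trace and a small scikit-learn network trained on a sliding window of past displacements to predict the next ten samples; the actual navigator-based ANN, its architecture and its training protocol are not reproduced.

```python
# Train a small multi-output network on a sliding window of recent displacement
# estimates to predict the next 10 samples. The breathing trace, window length
# and network size are illustrative assumptions, not the study's values.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
t = np.arange(0, 120, 0.1)                                   # 120 s at 10 Hz
trace = 8.0 * np.sin(2 * np.pi * t / 4.0) + rng.normal(scale=0.3, size=t.size)   # mm

hist, horizon = 30, 10                                       # 3 s history -> 1 s ahead
X = np.array([trace[i:i + hist] for i in range(len(trace) - hist - horizon)])
Y = np.array([trace[i + hist:i + hist + horizon] for i in range(len(trace) - hist - horizon)])

split = int(0.5 * len(X))                                    # first half as "training phase"
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], Y[:split])
pred = model.predict(X[split:])
print("mean absolute error (mm):", np.abs(pred - Y[split:]).mean())
```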
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
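The estimate-weighting idea can be illustrated with a simplified variance-weighted mean; the report's weights also account for cross-correlation of residuals between methods, which this sketch omits, and the numbers are hypothetical.

```python
# Combine T-year flood estimates from different regression methods with weights
# inversely proportional to their prediction error variances. This omits the
# residual cross-correlation used in the report; all numbers are invented.
import numpy as np

estimates = np.array([420.0, 510.0, 465.0])     # cfs: basin-characteristic, active-channel, bankfull
se_pct = np.array([37.4, 57.2, 63.1]) / 100.0   # average standard errors of prediction

weights = 1.0 / se_pct**2
weights /= weights.sum()
combined = np.sum(weights * estimates)
print("weights:", weights.round(3), "weighted estimate:", round(combined, 1))
```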
Dopamine reward prediction error coding.
Schultz, Wolfram
2016-03-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
ERIC Educational Resources Information Center
Dale, P. S.; Mills, P. E.; Cole, K. N.; Jenkins, J. R.
2004-01-01
Long-term follow-up information on children who have participated in early childhood special education (ECSE) has seldom been available. In the present study, the cognitive and academic performance of 171 thirteen-year-old graduates of 2 ECSE curricula is examined. Although preschool cognitive measures continued to predict later performance…
The NIEHS Predictive-Toxicology Evaluation Project.
Bristol, D W; Wachsman, J T; Greenwell, A
1996-01-01
The Predictive-Toxicology Evaluation (PTE) project conducts collaborative experiments that subject the performance of predictive-toxicology (PT) methods to rigorous, objective evaluation in a uniquely informative manner. Sponsored by the National Institute of Environmental Health Sciences, it takes advantage of the ongoing testing conducted by the U.S. National Toxicology Program (NTP) to estimate the true error of models that have been applied to make prospective predictions on previously untested, noncongeneric chemical substances. The PTE project first identifies a group of standardized NTP chemical bioassays that are either scheduled to be conducted or ongoing but not yet complete. The project then announces and advertises the evaluation experiment, disseminates information about the chemical bioassays, and encourages researchers from a wide variety of disciplines to publish their predictions in peer-reviewed journals, using whatever approaches and methods they feel are best. A collection of such papers is published in this Environmental Health Perspectives Supplement, providing readers the opportunity to compare and contrast PT approaches and models, within the context of their prospective application to an actual-use situation. This introduction to the collection summarizes the predictions made and the final results obtained for the 44 chemical carcinogenesis bioassays of the first PTE experiment (PTE-1) and presents information that identifies the 30 chemical carcinogenesis bioassays of PTE-2, along with a table of prediction sets that have been published to date. It also provides background about the origin and goals of the PTE project, outlines the special challenge associated with estimating the true error of models that aspire to predict open-system behavior, and summarizes what has been learned to date. PMID:8933048
Geometric error analysis for shuttle imaging spectrometer experiment
NASA Technical Reports Server (NTRS)
Wang, S. J.; Ih, C. H.
1984-01-01
The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can now be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.
Stereotype threat can reduce older adults' memory errors.
Barber, Sarah J; Mather, Mara
2013-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research, we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment. Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 and 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well.
Chana, Narinder; Porat, Talya; Whittlesea, Cate; Delaney, Brendan
2017-03-01
Electronic prescribing has benefited from computerised clinical decision support systems (CDSSs); however, no published studies have evaluated the potential for a CDSS to support GPs in prescribing specialist drugs. This study aimed to identify potential weaknesses and errors in the existing process of prescribing specialist drugs that could be addressed in the development of a CDSS. Semi-structured interviews with key informants were followed by an observational study involving GPs in the UK. Twelve key informants were interviewed to investigate the use of CDSSs in the UK. Nine GPs were observed while performing case scenarios depicting requests from hospitals or patients to prescribe a specialist drug. Activity diagrams, hierarchical task analysis, and systematic human error reduction and prediction approach analyses were performed. The current process of prescribing specialist drugs by GPs is prone to error. Errors of omission due to lack of information were the most common, and could potentially result in a GP prescribing a specialist drug that should only be prescribed in hospitals, or prescribing a specialist drug without reference to a shared care protocol. Half of all possible errors in the prescribing process had a high probability of occurrence. A CDSS supporting GPs during the process of prescribing specialist drugs is needed. This could, first, support the decision making of whether or not to undertake prescribing, and, second, provide drug-specific parameters linked to shared care protocols, which could reduce the errors identified and increase patient safety. © British Journal of General Practice 2017.
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, however, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
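A sketch of the underlying idea, using plain PSO (without the natural-selection and simulated-annealing modifications of NAPSO) to tune SVR hyperparameters on synthetic data.

```python
# Plain particle swarm optimisation choosing SVM hyperparameters (C, gamma for an
# SVR) by minimising cross-validated RMSE. The NAPSO additions are omitted and
# the data are synthetic; this only illustrates the swarm-tuned-SVM idea.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

def fitness(log_params):
    C, gamma = np.exp(log_params)
    score = cross_val_score(SVR(C=C, gamma=gamma), X, y,
                            scoring="neg_root_mean_squared_error", cv=3).mean()
    return -score                                    # RMSE to minimise

n_particles, n_iter = 10, 20
pos = rng.uniform(-3, 3, size=(n_particles, 2))      # particles in (log C, log gamma) space
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best log(C), log(gamma):", gbest, "CV RMSE:", pbest_val.min())
```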
Auerswald, Karl; Schäufele, Rudi; Bellof, Gerhard
2015-12-09
Dairy production systems vary widely in their feeding and livestock-keeping regimens. Both are well-known to affect milk quality and consumer perceptions. Stable isotope analysis has been suggested as an easy-to-apply tool to validate a claimed feeding regimen. Although it is unambiguous that feeding influences the carbon isotope composition (δ(13)C) in milk, it is not clear whether a reported feeding regimen can be verified by measuring δ(13)C in milk without sampling and analyzing the feed. We obtained 671 milk samples from 40 farms distributed over Central Europe to measure δ(13)C and fatty acid composition. Feeding protocols by the farmers in combination with a model based on δ(13)C feed values from the literature were used to predict δ(13)C in feed and subsequently in milk. The model considered dietary contributions of C3 and C4 plants, contribution of concentrates, altitude, seasonal variation in (12/13)CO2, Suess's effect, and diet-milk discrimination. Predicted and measured δ(13)C in milk correlated closely (r(2) = 0.93). Analyzing milk for δ(13)C allowed validation of a reported C4 component with an error of <8% in 95% of all cases. This included the error of the method (measurement and prediction) and the error of the feeding information. However, the error was not random but varied seasonally and correlated with the seasonal variation in long-chain fatty acids. This indicated a bypass of long-chain fatty acids from fresh grass to milk.
Lahat, Ayelet; Lamm, Connie; Chronis-Tuscano, Andrea; Pine, Daniel S; Henderson, Heather A; Fox, Nathan A
2014-04-01
Behavioral inhibition (BI) is an early childhood temperament characterized by fearful responses to novelty and avoidance of social interactions. During adolescence, a subset of children with stable childhood BI develop social anxiety disorder and concurrently exhibit increased error monitoring. The current study examines whether increased error monitoring in 7-year-old, behaviorally inhibited children prospectively predicts risk for symptoms of social phobia at age 9 years. A total of 291 children were characterized on BI at 24 and 36 months of age. Children were seen again at 7 years of age, when they performed a Flanker task, and event-related potential (ERP) indices of response monitoring were generated. At age 9, self- and maternal-report of social phobia symptoms were obtained. Children high in BI, compared to those low in BI, displayed increased error monitoring at age 7, as indexed by larger (i.e., more negative) error-related negativity (ERN) amplitudes. In addition, early BI was related to later childhood social phobia symptoms at age 9 among children with a large difference in amplitude between ERN and correct-response negativity (CRN) at age 7. Heightened error monitoring predicts risk for later social phobia symptoms in children with high BI. Research assessing response monitoring in children with BI may refine our understanding of the mechanisms underlying risk for later anxiety disorders and inform prevention efforts. Copyright © 2014 American Academy of Child and Adolescent Psychiatry. All rights reserved.
Remembering a criminal conversation: beyond eyewitness testimony.
Campos, Laura; Alonso-Quecuty, María L
2006-01-01
Unlike the important body of work on eyewitness memory, little research has been done on the accuracy and completeness of "earwitness" memory for conversations. The present research examined the effects of mode of presentation (audiovisual/ auditory-only) on witnesses' free recall for utterances in a criminal conversation at different retention intervals (immediate/delayed) within a single experiment. Different forms of correct recall (verbatim/gist) of the verbal information as well as different types of errors (distortions/fabrications) were also examined. It was predicted that participants in the audiovisual modality would provide more correct information, and fewer errors than participants in the auditory-only modality. Participants' recall was predicted to be impaired over time, dropping to a greater extent after a delay in the auditory-only modality. Results confirmed these hypotheses. Interpretations of the overall findings are offered within the context of dual-coding theory, and within the theoretical frameworks of source monitoring and fuzzy-trace theory.
NASA Astrophysics Data System (ADS)
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then obtain a set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Since SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore show spuriously 'high' prediction performance and may cause large errors in practice.
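The pitfall can be made concrete with a small sketch in which a moving average stands in for SSA reconstruction or DWT decomposition; the correct usage re-runs the preprocessing on data available up to each forecast origin only.

```python
# Illustration of the leakage described above. A moving average stands in for
# SSA/DWT preprocessing; the "incorrect" feature is built from the whole series,
# the "correct" one only from data available before the forecast origin.
import numpy as np

def smooth(x, w=5):
    """Stand-in for SSA reconstruction / DWT approximation of series x."""
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")    # centred window: uses future values

rng = np.random.default_rng(7)
series = np.cumsum(rng.normal(size=300))          # synthetic streamflow-like series

# Incorrect: smooth the entire series once; the feature for predicting the last
# value already contains the last value's neighbourhood, including itself.
leaky_feature = smooth(series)[-2]

# Correct: at forecast origin t, smooth only series[:t] and use its last value.
t = len(series) - 1
causal_feature = smooth(series[:t])[-1]           # built from past data only
print("leaky vs causal feature:", leaky_feature, causal_feature)
```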
Characterization of errors in a coupled snow hydrology-microwave emission model
Andreadis, K.M.; Liang, D.; Tsang, L.; Lettenmaier, D.P.; Josberger, E.G.
2008-01-01
Traditional approaches to the direct estimation of snow properties from passive microwave remote sensing have been plagued by limitations such as the tendency of estimates to saturate for moderately deep snowpacks and the effects of mixed land cover within remotely sensed pixels. An alternative approach is to assimilate satellite microwave emission observations directly, which requires embedding an accurate microwave emission model into a hydrologic prediction scheme, as well as quantitative information on model and observation errors. In this study a coupled snow hydrology [Variable Infiltration Capacity (VIC)] and microwave emission [Dense Media Radiative Transfer (DMRT)] model is evaluated using multiscale brightness temperature (TB) measurements from the Cold Land Processes Experiment (CLPX). The ability of VIC to reproduce snowpack properties is shown with the use of snow pit measurements, while TB model predictions are evaluated through comparison with Ground-Based Microwave Radiometer (GBMR), aircraft [Polarimetric Scanning Radiometer (PSR)], and satellite [Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E)] TB measurements. Limitations of the model at the point scale were not as evident when comparing areal estimates. The coupled model was able to reproduce the TB spatial patterns observed by PSR in two of three sites. However, this was mostly due to the presence of relatively dense forest cover. An interesting result occurs when examining the spatial scaling behavior of the higher-resolution errors: the satellite-scale error is well approximated by the mode of the (spatial) histogram of errors at the smaller scale. In addition, TB prediction errors were almost invariant when aggregated to the satellite scale, while forest-cover fractions greater than 30% had a significant effect on TB predictions. © 2008 American Meteorological Society.
Andreopoulos, Bill; Winter, Christof; Labudde, Dirk; Schroeder, Michael
2009-06-27
A lot of high-throughput studies produce protein-protein interaction networks (PPINs) with many errors and missing information. Even for genome-wide approaches, there is often a low overlap between PPINs produced by different studies. Second-level neighbors separated by two protein-protein interactions (PPIs) were previously used for predicting protein function and finding complexes in high-error PPINs. We retrieve second level neighbors in PPINs, and complement these with structural domain-domain interactions (SDDIs) representing binding evidence on proteins, forming PPI-SDDI-PPI triangles. We find low overlap between PPINs, SDDIs and known complexes, all well below 10%. We evaluate the overlap of PPI-SDDI-PPI triangles with known complexes from Munich Information center for Protein Sequences (MIPS). PPI-SDDI-PPI triangles have ~20 times higher overlap with MIPS complexes than using second-level neighbors in PPINs without SDDIs. The biological interpretation for triangles is that a SDDI causes two proteins to be observed with common interaction partners in high-throughput experiments. The relatively few SDDIs overlapping with PPINs are part of highly connected SDDI components, and are more likely to be detected in experimental studies. We demonstrate the utility of PPI-SDDI-PPI triangles by reconstructing myosin-actin processes in the nucleus, cytoplasm, and cytoskeleton, which were not obvious in the original PPIN. Using other complementary datatypes in place of SDDIs to form triangles, such as PubMed co-occurrences or threading information, results in a similar ability to find protein complexes. Given high-error PPINs with missing information, triangles of mixed datatypes are a promising direction for finding protein complexes. Integrating PPINs with SDDIs improves finding complexes. Structural SDDIs partially explain the high functional similarity of second-level neighbors in PPINs. We estimate that relatively little structural information would be sufficient for finding complexes involving most of the proteins and interactions in a typical PPIN.
Neural mechanisms of reinforcement learning in unmedicated patients with major depressive disorder.
Rothkirch, Marcus; Tonn, Jonas; Köhler, Stephan; Sterzer, Philipp
2017-04-01
According to current concepts, major depressive disorder is strongly related to dysfunctional neural processing of motivational information, entailing impairments in reinforcement learning. While computational modelling can reveal the precise nature of neural learning signals, it has not been used to study learning-related neural dysfunctions in unmedicated patients with major depressive disorder so far. We thus aimed at comparing the neural coding of reward and punishment prediction errors, representing indicators of neural learning-related processes, between unmedicated patients with major depressive disorder and healthy participants. To this end, a group of unmedicated patients with major depressive disorder (n = 28) and a group of age- and sex-matched healthy control participants (n = 30) completed an instrumental learning task involving monetary gains and losses during functional magnetic resonance imaging. The two groups did not differ in their learning performance. Patients and control participants showed the same level of prediction error-related activity in the ventral striatum and the anterior insula. In contrast, neural coding of reward prediction errors in the medial orbitofrontal cortex was reduced in patients. Moreover, neural reward prediction error signals in the medial orbitofrontal cortex and ventral striatum showed negative correlations with anhedonia severity. Using a standard instrumental learning paradigm we found no evidence for an overall impairment of reinforcement learning in medication-free patients with major depressive disorder. Importantly, however, the attenuated neural coding of reward in the medial orbitofrontal cortex and the relation between anhedonia and reduced reward prediction error-signalling in the medial orbitofrontal cortex and ventral striatum likely reflect an impairment in experiencing pleasure from rewarding events as a key mechanism of anhedonia in major depressive disorder. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
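The prediction error regressor referred to above is typically generated with a Rescorla-Wagner style update; a minimal sketch, with an assumed learning rate and simulated outcomes, is given below.

```python
# Standard reward prediction error update used in model-based analyses of
# instrumental learning: delta = r - V, then V += alpha * delta. The trial-wise
# delta values are what get correlated with BOLD activity. The learning rate
# and outcome probabilities here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(8)
alpha = 0.2                                    # learning rate (assumed)
V = 0.0                                        # expected value of the chosen cue
prediction_errors = []
for trial in range(60):
    reward = rng.choice([1.0, -1.0], p=[0.7, 0.3])   # monetary gain or loss
    delta = reward - V                         # reward prediction error
    V += alpha * delta
    prediction_errors.append(delta)
print("first and last prediction errors:", prediction_errors[0], prediction_errors[-1])
```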
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system-level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system-level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Nurses' behaviors and visual scanning patterns may reduce patient identification errors.
Marquard, Jenna L; Henneman, Philip L; He, Ze; Jo, Junghee; Fisher, Donald L; Henneman, Elizabeth A
2011-09-01
Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20) administered medications to 3 patients in a simulated clinical setting, with 1 patient having an embedded ID error. Error-identifying nurses tended to complete more process steps than non-error-identifying nurses in a similar amount of time, and tended to scan information across artifacts (e.g., ID band, patient chart, medication label) rather than fixating on several pieces of information on a single artifact before fixating on another artifact. Non-error-identifying nurses tended to increase their durations of off-topic conversations-a type of process interruption-over the course of the trials; the difference between groups was significant in the trial with the embedded ID error. Error-identifying nurses tended to have their most fixations in a row on the patient's chart, whereas non-error-identifying nurses did not tend to have a single artifact on which they consistently fixated. Finally, error-identifying nurses tended to have predictable eye fixation sequences across artifacts, whereas non-error-identifying nurses tended to have seemingly random eye fixation sequences. This finding has implications for nurse training and the design of tools and technologies that support nurses as they complete the medication administration process. (c) 2011 APA, all rights reserved.
Impact of cell size on inventory and mapping errors in a cellular geographic information system
NASA Technical Reports Server (NTRS)
Wehde, M. E. (Principal Investigator)
1979-01-01
The author has identified the following significant results. The effect of grid position was found insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure was also analyzed with an interboundary distance distribution approach. Map data structure and the impact of cell size on that structure were observed. The existence of a model allowing prediction of mapping error based on map structure was hypothesized and two generations of models were tested under simplifying assumptions.
NASA Astrophysics Data System (ADS)
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm, at a nadir scan orientation, to 8 cm at scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
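A minimal first-order sketch of the kind of per-pulse error modeling described above, showing how vertical uncertainty grows with scan angle, altitude, and terrain slope. The error sources, their magnitudes, and the simple geometric propagation are illustrative assumptions, not the authors' full sensor sub-system model.

```python
import numpy as np

def vertical_error_sigma(altitude_m, scan_angle_deg, slope_deg=0.0,
                         range_sigma_m=0.02, angle_sigma_deg=0.005):
    """First-order vertical error (1-sigma) for a single pulse.

    Contributions (all magnitudes are illustrative assumptions):
    - ranging error projected onto the vertical,
    - angular (IMU/scanner) error moving the return along the scan arc,
    - the same angular error displacing the footprint horizontally, which
      becomes a height error on sloped terrain.
    """
    theta = np.radians(scan_angle_deg)
    sig_a = np.radians(angle_sigma_deg)
    slant = altitude_m / np.cos(theta)
    dz_range = range_sigma_m * np.cos(theta)
    dz_angle = slant * np.sin(theta) * sig_a
    dz_slope = slant * sig_a * np.tan(np.radians(slope_deg))
    return np.sqrt(dz_range**2 + dz_angle**2 + dz_slope**2)

for ang in (0.0, 7.5, 15.0):                              # flat-terrain case
    print(f"scan angle {ang:4.1f} deg: {vertical_error_sigma(1200.0, ang):.3f} m")
print("sloped terrain:", vertical_error_sigma(1200.0, 15.0, slope_deg=30.0))
```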
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
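A small sketch of the propagation-of-error idea applied to inverse prediction from a standard curve: fit the curve, invert it for an observed signal, and propagate parameter and residual uncertainty with the delta method. The standard-curve data, the log-log linear form, and the error terms are illustrative assumptions rather than the paper's specific model.

```python
import numpy as np

# Hypothetical standard-curve data (concentrations and fluorescence intensities)
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])        # pg/mL (assumed)
signal = np.array([120.0, 480.0, 900.0, 4100.0, 7800.0, 30500.0])

# Linear standard curve in log-log space: log(signal) = b0 + b1*log(conc)
x, y = np.log(conc), np.log(signal)
coef, cov = np.polyfit(x, y, 1, cov=True)       # coef = [b1, b0], cov in the same order
b1, b0 = coef
resid_var = np.var(y - np.polyval(coef, x), ddof=2)

def predict_conc(obs_signal):
    """Invert the curve and propagate parameter + residual error (delta method)."""
    ylog = np.log(obs_signal)
    xhat = (ylog - b0) / b1
    grad = np.array([-(ylog - b0) / b1**2, -1.0 / b1])   # d(xhat)/d[b1, b0]
    var_x = grad @ cov @ grad + resid_var / b1**2        # plus measurement/residual term
    c = np.exp(xhat)
    return c, c * np.sqrt(var_x)                          # concentration and ~1-sigma error

print(predict_conc(2500.0))
```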
PREVAIL: Predicting Recovery through Estimation and Visualization of Active and Incident Lesions.
Dworkin, Jordan D; Sweeney, Elizabeth M; Schindler, Matthew K; Chahin, Salim; Reich, Daniel S; Shinohara, Russell T
2016-01-01
The goal of this study was to develop a model that integrates imaging and clinical information observed at lesion incidence for predicting the recovery of white matter lesions in multiple sclerosis (MS) patients. Demographic, clinical, and magnetic resonance imaging (MRI) data were obtained from 60 subjects with MS as part of a natural history study at the National Institute of Neurological Disorders and Stroke. A total of 401 lesions met the inclusion criteria and were used in the study. Imaging features were extracted from the intensity-normalized T1-weighted (T1w) and T2-weighted sequences as well as magnetization transfer ratio (MTR) sequence acquired at lesion incidence. T1w and MTR signatures were also extracted from images acquired one-year post-incidence. Imaging features were integrated with clinical and demographic data observed at lesion incidence to create statistical prediction models for long-term damage within the lesion. The performance of the T1w and MTR predictions was assessed in two ways: first, the predictive accuracy was measured quantitatively using leave-one-lesion-out cross-validated (CV) mean-squared predictive error. Then, to assess the prediction performance from the perspective of expert clinicians, three board-certified MS clinicians were asked to individually score how similar the CV model-predicted one-year appearance was to the true one-year appearance for a random sample of 100 lesions. The cross-validated root-mean-square predictive error was 0.95 for normalized T1w and 0.064 for MTR, compared to the estimated measurement errors of 0.48 and 0.078 respectively. The three expert raters agreed that T1w and MTR predictions closely resembled the true one-year follow-up appearance of the lesions in both degree and pattern of recovery within lesions. This study demonstrates that by using only information from a single visit at incidence, we can predict how a new lesion will recover using relatively simple statistical techniques. The potential to visualize the likely course of recovery has implications for clinical decision-making, as well as trial enrichment.
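The quantitative evaluation above relies on leave-one-lesion-out cross-validated root-mean-square predictive error; a minimal sketch of that computation is below. The features, outcome, and linear model are stand-ins (the study's lesion-level imaging and clinical predictors are not reproduced), but the cross-validation mechanics are the same.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Stand-ins for lesion-level predictors at incidence (X) and the one-year
# normalized T1w intensity within each lesion (y); 401 lesions as in the text.
rng = np.random.default_rng(0)
X = rng.normal(size=(401, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=401)

# Each lesion is held out once; its value is predicted from all other lesions
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
cv_rmse = np.sqrt(np.mean((y - pred) ** 2))
print(f"leave-one-out CV RMSE: {cv_rmse:.3f}")
```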
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
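A compact sketch of the core idea in REPTool, Latin Hypercube Sampling of input errors propagated through a raster model to a per-cell output uncertainty. The toy model, raster sizes, error ranges, and coefficients are assumptions for illustration; REPTool itself runs inside ArcGIS and is not reproduced here.

```python
import numpy as np
from scipy.stats import qmc

# Stand-in input rasters for a toy model z = 0.8*slope + 0.02*rain
rng = np.random.default_rng(1)
slope = rng.uniform(0, 30, size=(50, 50))
rain = rng.uniform(200, 800, size=(50, 50))

n = 500
sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(n)                                   # LHS samples in [0, 1)
slope_err = qmc.scale(u[:, [0]], -2.0, 2.0)             # spatially invariant error (assumed range)
rain_err = qmc.scale(u[:, [1]], -50.0, 50.0)

# Propagate each sampled error realization through the model
realizations = 0.8 * (slope + slope_err[:, :, None]) + 0.02 * (rain + rain_err[:, :, None])
cell_std = realizations.std(axis=0)                     # per-cell output uncertainty
print("mean per-cell output std:", cell_std.mean())
```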
Model-based learning and the contribution of the orbitofrontal cortex to the model-free world
McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey
2012-01-01
Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030
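For readers unfamiliar with the "cached value" terminology, a minimal sketch of the model-free computation attributed to the VS/dopamine circuit above: a stored value per cue is updated by the discrepancy between reward receipt and reward prediction. The learning rate and cue names are arbitrary.

```python
# Minimal model-free (cached-value) prediction-error update
alpha = 0.1                                # learning rate (assumed)
values = {"cue_A": 0.0, "cue_B": 0.0}      # cached reward predictions

def update(cue, reward):
    delta = reward - values[cue]           # reward prediction error
    values[cue] += alpha * delta           # cached-value update
    return delta

for _ in range(20):
    update("cue_A", 1.0)                   # cue_A reliably rewarded
print(values["cue_A"])                     # approaches 1.0 as the error shrinks
```

Note that such a cached value carries no information about the reward's sensory identity, which is exactly the limitation the model-based account addresses.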
A novel auto-tuning PID control mechanism for nonlinear systems.
Cetin, Meric; Iplikci, Serdar
2015-09-01
In this paper, a novel Runge-Kutta (RK) discretization-based model-predictive auto-tuning proportional-integral-derivative controller (RK-PID) is introduced for the control of continuous-time nonlinear systems. The parameters of the PID controller are tuned using RK model of the system through prediction error-square minimization where the predicted information of tracking error provides an enhanced tuning of the parameters. Based on the model-predictive control (MPC) approach, the proposed mechanism provides necessary PID parameter adaptations while generating additive correction terms to assist the initially inadequate PID controller. Efficiency of the proposed mechanism has been tested on two experimental real-time systems: an unstable single-input single-output (SISO) nonlinear magnetic-levitation system and a nonlinear multi-input multi-output (MIMO) liquid-level system. RK-PID has been compared to standard PID, standard nonlinear MPC (NMPC), RK-MPC and conventional sliding-mode control (SMC) methods in terms of control performance, robustness, computational complexity and design issue. The proposed mechanism exhibits acceptable tuning and control performance with very small steady-state tracking errors, and provides very short settling time for parameter convergence. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
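A hedged sketch of the error-square-minimization criterion mentioned above, applied offline to PID gains on a simulated first-order plant. The paper tunes the gains online through a Runge-Kutta model of the plant within an MPC framework; only the squared-tracking-error objective is illustrated here, and the plant, time step, and initial gains are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

dt, n_steps, setpoint = 0.01, 500, 1.0

def sum_squared_error(gains):
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(n_steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-2.0 * y + u)            # assumed plant: dy/dt = -2y + u
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e12                      # penalize unstable gain combinations
        cost += err ** 2
    return cost

res = minimize(sum_squared_error, x0=[1.0, 0.1, 0.01], method="Nelder-Mead")
print("tuned (Kp, Ki, Kd):", res.x, "SSE:", res.fun)
```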
Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D
2013-01-01
Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
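A small sketch of the multicollinearity problem described above and one common remedy: reward prediction errors and reward outcomes computed close in time are strongly correlated, and one regressor can be residualized (orthogonalized) against the other before the GLM. The trial structure is simulated; whether and how to orthogonalize is a design choice with interpretive consequences, as the review discusses.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 100
predicted = rng.uniform(0, 1, n_trials)          # predicted value per trial (assumed)
outcome = rng.binomial(1, predicted)             # reward outcome (0/1)
rpe = outcome - predicted                        # reward prediction error

print(f"outcome-RPE correlation: {np.corrcoef(outcome, rpe)[0, 1]:.2f}")

# Residualize the RPE regressor against the outcome regressor; shared variance
# is then attributed to the outcome regressor.
X = np.column_stack([np.ones(n_trials), outcome])
beta, *_ = np.linalg.lstsq(X, rpe, rcond=None)
rpe_orth = rpe - X @ beta
print(f"post-orthogonalization correlation: {np.corrcoef(outcome, rpe_orth)[0, 1]:.2f}")
```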
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X.; Wilcox, G.L.
1993-12-31
We have implemented large scale back-propagation neural networks on a 544 node Connection Machine, CM-5, using the C language in MIMD mode. The program running on 512 processors performs backpropagation learning at 0.53 Gflops, which provides 76 million connection updates per second. We have applied the network to the prediction of protein tertiary structure from sequence information alone. A neural network with one hidden layer and 40 million connections is trained to learn the relationship between sequence and tertiary structure. The trained network yields predicted structures of some proteins on which it has not been trained given only their sequences. Presentation of the Fourier transform of the sequences accentuates periodicity in the sequence and yields good generalization with greatly increased training efficiency. Training simulations with a large, heterologous set of protein structures (111 proteins from CM-5 time) to solutions with under 2% RMS residual error within the training set (random responses give an RMS error of about 20%). Presentation of 15 sequences of related proteins in a testing set of 24 proteins yields predicted structures with less than 8% RMS residual error, indicating good apparent generalization.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei
2018-01-01
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
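A compact sketch of the underlying idea, a swarm search over SVM hyperparameters scored by cross-validated RMSE, using plain PSO on a synthetic stand-in error series. The paper's NAPSO additionally applies natural selection and simulated annealing, which are omitted here; the series, bounds, and swarm settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
series = np.sin(0.1 * np.arange(200)) + 0.05 * rng.normal(size=200)   # stand-in error series
X = np.array([series[i:i + 5] for i in range(len(series) - 5)])       # lag embedding
y = series[5:]

def rmse(params):
    C, gamma = 10.0 ** params[0], 10.0 ** params[1]
    scores = cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

lo, hi = np.array([-1.0, -3.0]), np.array([3.0, 1.0])   # search bounds on log10(C), log10(gamma)
n_particles, n_iter = 10, 15
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([rmse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([rmse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (log10 C, log10 gamma):", gbest, "CV RMSE:", pbest_f.min())
```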
Rieger, Martina; Bart, Victoria K. E.
2016-01-01
We investigated to what extent different sources of information are used in typing on a computer keyboard. Using self-reports 10 finger typists and idiosyncratic typists estimated how much attention they pay to different sources of information during copy typing and free typing and how much they use them for error detection. 10 finger typists reported less attention to the keyboard and the fingers and more attention to the template and the screen than idiosyncratic typists. The groups did not differ in attention to touch/kinaesthesis in copy typing and free typing, but 10 finger typists reported more use of touch/kinaesthesis in error detection. This indicates that processing of tactile/kinaesthetic information may occur largely outside conscious control, as long as no errors occur. 10 finger typists reported more use of internal prediction of movement consequences for error detection than idiosyncratic typists, reflecting more precise internal models. Further in copy typing compared to free typing attention to the template is required, thus leaving less attentional capacity for other sources of information. Correlations showed that higher skilled typists, regardless of typing style, rely more on sources of information which are usually associated with 10 finger typing. One limitation of the study is that only self-reports were used. We conclude that typing task, typing proficiency, and typing style influence how attention is distributed during typing. PMID:28018256
Quantifying Information Gain from Dynamic Downscaling Experiments
NASA Astrophysics Data System (ADS)
Tian, Y.; Peters-Lidard, C. D.
2015-12-01
Dynamic climate downscaling experiments are designed to produce information at higher spatial and temporal resolutions. Such additional information is generated from the low-resolution initial and boundary conditions via the predictive power of the physical laws. However, errors and uncertainties in the initial and boundary conditions can be propagated and even amplified in the downscaled simulations. Additionally, the limit of predictability in nonlinear dynamical systems will also dampen the information gain, even if the initial and boundary conditions were error-free. Thus it is critical to quantitatively define and measure the amount of information increase from dynamic downscaling experiments, to better understand and appreciate their potentials and limitations. We present a scheme to objectively measure the information gain from such experiments. The scheme is based on information theory, and we argue that if a downscaling experiment is to exhibit value, it has to produce more information than what can be simply inferred from information sources already available. These information sources include the initial and boundary conditions, the coarse resolution model in which the higher-resolution models are embedded, and the same set of physical laws. These existing information sources define an "information threshold" as a function of the spatial and temporal resolution, and this threshold serves as a benchmark to quantify the information gain from the downscaling experiments, or any other approaches. For a downscaling experiment to show any value, the information has to be above this threshold. A recent NASA-supported downscaling experiment is used as an example to illustrate the application of this scheme.
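A hedged sketch of the information-threshold idea, not the authors' exact scheme: compare how much information a downscaled field and a simply interpolated coarse field each share with high-resolution observations, using binned mutual information. The synthetic fields, the binning, and the choice of mutual information as the measure are all assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 2.0, size=5000)                      # stand-in high-res observations
interp = obs.mean() + 0.3 * (obs - obs.mean()) + rng.normal(scale=2.0, size=5000)
downscaled = obs + rng.normal(scale=1.0, size=5000)       # closer to obs by construction

def mi(a, b, bins=20):
    """Mutual information (nats) between two fields after quantile binning."""
    edges_a = np.quantile(a, np.linspace(0, 1, bins)[1:-1])
    edges_b = np.quantile(b, np.linspace(0, 1, bins)[1:-1])
    return mutual_info_score(np.digitize(a, edges_a), np.digitize(b, edges_b))

baseline = mi(obs, interp)            # information already available without downscaling
gain = mi(obs, downscaled) - baseline
print(f"baseline MI: {baseline:.3f} nats, information gain: {gain:.3f} nats")
```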
Frequency, probability, and prediction: easy solutions to cognitive illusions?
Griffin, D; Buehler, R
1999-02-01
Many errors in probabilistic judgment have been attributed to people's inability to think in statistical terms when faced with information about a single case. Prior theoretical analyses and empirical results imply that the errors associated with case-specific reasoning may be reduced when people make frequentistic predictions about a set of cases. In studies of three previously identified cognitive biases, we find that frequency-based predictions are different from, but no better than, case-specific judgments of probability. First, in studies of the "planning fallacy," we compare the accuracy of aggregate frequency and case-specific probability judgments in predictions of students' real-life projects. When aggregate and single-case predictions are collected from different respondents, there is little difference between the two: Both are overly optimistic and show little predictive validity. However, in within-subject comparisons, the aggregate judgments are significantly more conservative than the single-case predictions, though still optimistically biased. Results from studies of overconfidence in general knowledge and base rate neglect in categorical prediction underline a general conclusion. Frequentistic predictions made for sets of events are no more statistically sophisticated, nor more accurate, than predictions made for individual events using subjective probability. Copyright 1999 Academic Press.
Stereotype threat can reduce older adults' memory errors
Barber, Sarah J.; Mather, Mara
2014-01-01
Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment (Seibt & Förster, 2004). Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 & 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well. PMID:24131297
NASA Astrophysics Data System (ADS)
Johnson, Traci L.; Sharon, Keren
2016-11-01
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. In numerical weather prediction there are background errors at different scales, and interactions among them. However, the influence of these errors and their interactions cannot be represented in the background error covariance statistics when estimated by the leading methods. It is therefore necessary to construct background error covariances influenced by multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Information about errors whose scales are larger and smaller than the given ones is then introduced, respectively, using different nesting techniques, to estimate the corresponding covariances. Comparisons of the three background error covariance statistics reveal that the background error variances increase, particularly at large scales and higher levels, when the information of larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances decrease at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when the information of smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information of larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when the information of larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above are each used in a data assimilation and model forecast system, and analysis-forecast cycles are then conducted for a period of 1 month. Comparison of both analyses and forecasts from this system shows that the trends in the variation of analysis increments, as information of different scale errors is introduced, are consistent with the trends in the variances and correlations of background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels in this case. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity increase significantly at large scales at lower levels, moistening the southern analyses. This humidification helps correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, the inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales due to the amplification (diminution) of intensity and area in precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
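For context, a minimal sketch of the NMC method mentioned above: the background error covariance is approximated from differences between forecasts of different lead times valid at the same time. The state dimension, the synthetic forecast pairs, and the rescaling factor are assumptions; operational systems apply this to full model-grid state vectors.

```python
import numpy as np

rng = np.random.default_rng(4)
n_state, n_pairs = 100, 400
truth_like = rng.normal(size=(n_pairs, n_state))
f24 = truth_like + 0.5 * rng.normal(size=(n_pairs, n_state))   # 24-h forecasts (synthetic)
f48 = truth_like + 0.8 * rng.normal(size=(n_pairs, n_state))   # 48-h forecasts valid at the same times

diffs = f48 - f24                        # forecast differences as a proxy for background error
B = np.cov(diffs, rowvar=False) * 0.5    # empirical rescaling factor (assumed here)
print(B.shape, np.trace(B))
```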
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-03-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR as indicated by a high coefficient of determination (R2; 0.96), low bias (0.02 μg m-3, the nominal IMPROVE sample volume is 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
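A small sketch of the calibration/test workflow described above, partial least-squares regression from spectra to a target concentration with bias, error, and R2 computed on held-out samples. The synthetic spectra, the split, the number of latent components, and the error summaries are illustrative assumptions, not the study's actual data or pretreatment choices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Stand-in spectra: 794 samples x 1000 wavenumbers, with a synthetic "OC" target
rng = np.random.default_rng(5)
spectra = rng.normal(size=(794, 1000))
oc = np.abs(spectra[:, :50].mean(axis=1)) + 0.5          # ug/m3 (assumed construction)

X_cal, X_test, y_cal, y_test = train_test_split(spectra, oc, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
pred = pls.predict(X_test).ravel()

bias = np.mean(pred - y_test)
error = np.median(np.abs(pred - y_test))                 # one possible "error" summary
r2 = np.corrcoef(pred, y_test)[0, 1] ** 2
print(f"R2={r2:.2f}  bias={bias:.3f}  error={error:.3f}")
```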
NASA Astrophysics Data System (ADS)
Duan, Wansuo; Zhao, Peng
2017-04-01
Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.
Water Level Prediction of Lake Cascade Mahakam Using Adaptive Neural Network Backpropagation (ANNBP)
NASA Astrophysics Data System (ADS)
Mislan; Gaffar, A. F. O.; Haviluddin; Puspitasari, N.
2018-04-01
Information about natural hazards and flood events is indispensable for prevention and mitigation. One of the causes of such events is flooding in the areas around the lake. Therefore, forecasting the lake's water level to anticipate flooding is required. The purpose of this paper is to implement a computational intelligence method, namely Adaptive Neural Network Backpropagation (ANNBP), to forecast the water level of Lake Cascade Mahakam. Based on the experiments, the performance of ANNBP indicated that the lake water level predictions were accurate, as evaluated by mean square error (MSE) and mean absolute percentage error (MAPE). In other words, the computational intelligence method can produce good accuracy. A hybrid and optimized computational intelligence approach is the focus of future work.
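Minimal definitions of the two accuracy metrics cited above, applied to hypothetical lake-level values; the numbers are stand-ins, not results from the paper.

```python
import numpy as np

def mse(actual, pred):
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return np.mean((actual - pred) ** 2)

def mape(actual, pred):
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs((actual - pred) / actual))

actual = [10.2, 10.5, 10.9, 11.3]   # hypothetical lake levels (m)
pred = [10.1, 10.6, 10.8, 11.5]
print(f"MSE={mse(actual, pred):.4f}  MAPE={mape(actual, pred):.2f}%")
```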
A Role for the Lateral Dorsal Tegmentum in Memory and Decision Neural Circuitry
Redila, Van; Kinzel, Chantelle; Jo, Yong Sang; Puryear, Corey B.; Mizumori, Sheri J.Y.
2017-01-01
A role for the hippocampus in memory is clear, although the mechanism for its contribution remains a matter of debate. Converging evidence suggests that hippocampus evaluates the extent to which context-defining features of events occur as expected. The consequence of mismatches, or prediction error, signals from hippocampus is discussed in terms of its impact on neural circuitry that evaluates the significance of prediction errors: Ventral tegmental area (VTA) dopamine cells burst fire to rewards or cues that predict rewards (Schultz et al., 1997). Although the lateral dorsal tegmentum (LDTg) importantly controls dopamine cell burst firing (Lodge & Grace, 2006) the behavioral significance of the LDTg control is not known. Therefore, we evaluated LDTg functional activity as rats performed a spatial memory task that generates task-dependent reward codes in VTA (Jo et al., 2013; Puryear et al., 2010) and another VTA afferent, the pedunculopontine nucleus (PPTg, Norton et al., 2011). Reversible inactivation of the LDTg significantly impaired choice accuracy. LDTg neurons coded primarily egocentric information in the form of movement velocity, turning behaviors, and behaviors leading up to expected reward locations. A subset of the velocity-tuned LDTg cells also showed high frequency bursts shortly before or after reward encounters, after which they showed tonic elevated firing during consumption of small, but not large, rewards. Cells that fired before reward encounters showed stronger correlations with velocity as rats moved toward, rather than away from, rewarded sites. LDTg neural activity was more strongly regulated by egocentric behaviors than that observed for PPTg or VTA cells that were recorded by Puryear et al. and Norton et al. While PPTg activity was uniquely sensitive to ongoing sensory input, all three regions encoded reward magnitude (although in different ways), reward expectation, and reward encounters. Only VTA encoded reward prediction errors. LDTg may inform VTA about learned goal-directed movement that reflects the current motivational state, and this in turn may guide VTA determination of expected subjective goal values. When combined it is clear the LDTg and PPTg provide only a portion of the information that dopamine cells need to assess the value of prediction errors, a process that is essential to future adaptive decisions and switches of cognitive (i.e. memorial) strategies and behavioral responses. PMID:24910282
Marini, Francesco; Scott, Jerry; Aron, Adam R; Ester, Edward F
2017-07-01
Visual short-term memory (VSTM) enables the representation of information in a readily accessible state. VSTM is typically conceptualized as a form of "active" storage that is resistant to interference or disruption, yet several recent studies have shown that under some circumstances task-irrelevant distractors may indeed disrupt performance. Here, we investigated how task-irrelevant visual distractors affected VSTM by asking whether distractors induce a general loss of remembered information or selectively interfere with memory representations. In a VSTM task, participants recalled the spatial location of a target visual stimulus after a delay in which distractors were presented on 75% of trials. Notably, the distractor's eccentricity always matched the eccentricity of the target, while in the critical conditions the distractor's angular position was shifted either clockwise or counterclockwise relative to the target. We then computed estimates of recall error for both eccentricity and polar angle. A general interference model would predict an effect of distractors on both polar angle and eccentricity errors, while a selective interference model would predict effects of distractors on angle but not on eccentricity errors. Results showed that for stimulus angle there was an increase in the magnitude and variability of recall errors. However, distractors had no effect on estimates of stimulus eccentricity. Our results suggest that distractors selectively interfere with VSTM for spatial locations.
Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices
Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang
2016-01-01
Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information. PMID:27907188
Smith, B; Hassen, A; Hinds, M; Rice, D; Jones, D; Sauber, T; Iiams, C; Sevenich, D; Allen, R; Owens, F; McNaughton, J; Parsons, C
2015-03-01
The DE values of corn grain for pigs will differ among corn sources. More accurate prediction of DE may improve diet formulation and reduce diet cost. Corn grain sources (n = 83) were assayed with growing swine (20 kg) in DE experiments with total collection of feces, with 3-wk-old broiler chicks in nitrogen-corrected apparent ME (AME) trials and with cecectomized adult roosters in nitrogen-corrected true ME (TME) studies. Additional AME data for the corn grain source set was generated based on an existing near-infrared transmittance prediction model (near-infrared transmittance-predicted AME [NIT-AME]). Corn source nutrient composition was determined by wet chemistry methods. These data were then used to 1) test the accuracy of predicting swine DE of individual corn sources based on available literature equations and nutrient composition and 2) develop models for predicting DE of sources from nutrient composition and the cross-species information gathered above (AME, NIT-AME, and TME). The overall measured DE, AME, NIT-AME, and TME values were 4,105 ± 11, 4,006 ± 10, 4,004 ± 10, and 4,086 ± 12 kcal/kg DM, respectively. Prediction models were developed using 80% of the corn grain sources; the remaining 20% was reserved for validation of the developed prediction equation. Literature equations based on nutrient composition proved imprecise for predicting corn DE; the root mean square error of prediction ranged from 105 to 331 kcal/kg, an equivalent of 2.6 to 8.8% error. Yet among the corn composition traits, 4-variable models developed in the current study provided adequate prediction of DE (model R2 ranging from 0.76 to 0.79 and root mean square error [RMSE] of 50 kcal/kg). When prediction equations were tested using the validation set, these models had a 1 to 1.2% error of prediction. Simple linear equations from AME, NIT-AME, or TME provided an accurate prediction of DE for individual sources (R2 ranged from 0.65 to 0.73 and RMSE ranged from 50 to 61 kcal/kg). Percentage error of prediction based on the validation data set was greater (1.4%) for the TME model than for the NIT-AME or AME models (1 and 1.2%, respectively), indicating that swine DE values could be accurately predicted by using AME or NIT-AME. In conclusion, regression equations developed from broiler measurements or from analyzed nutrient composition proved adequate to reliably predict the DE of commercially available corn hybrids for growing pigs.
NASA Astrophysics Data System (ADS)
Declair, Stefan; Saint-Drenan, Yves-Marie; Potthast, Roland
2016-04-01
Determining the amount of weather-dependent renewable energy is a demanding task for transmission system operators (TSOs), and wind and photovoltaic (PV) prediction errors require the use of reserve power, which generates costs and can, in extreme cases, endanger the security of supply. In the project EWeLiNE, funded by the German government, the German Weather Service and the Fraunhofer Institute on Wind Energy and Energy System Technology develop innovative weather and power forecasting models and tools for grid integration of weather-dependent renewable energy. The key part of energy prediction process chains is the numerical weather prediction (NWP) system. Wind speed and irradiation forecasts from the NWP system are, however, subject to several sources of error. The quality of the wind power prediction is mainly penalized by forecast errors of the NWP model in the planetary boundary layer (PBL), which is characterized by high spatial and temporal fluctuations of the wind speed. For PV power prediction, weaknesses of the NWP model in correctly forecasting, e.g., low stratus, the absorption of condensed water, or aerosol optical depth are the main sources of error. Inaccurate radiation schemes (e.g., the two-stream parametrization) are also a known deficit of NWP systems with regard to irradiation forecasts. To mitigate errors like these, NWP model data can be corrected by post-processing techniques such as model output statistics and calibration using historical observational data. Additionally, the latest observations can be used in a pre-processing technique called data assimilation (DA). In DA, not only are the initial fields provided, but the model is also synchronized with reality (the observations), and hence the model error in the forecast is reduced. Besides conventional observation networks like radiosondes, synoptic observations or air reports of wind, pressure and humidity, the number of observations measuring meteorological information indirectly, such as satellite radiances, radar reflectivities or GPS slant delays, is strongly increasing. The numerous wind farms and PV plants installed in Germany potentially represent a dense meteorological network assessing irradiation and wind speed through their power measurements. The accuracy of the NWP data may thus be enhanced by extending the observations used in the assimilation with this new source of information. Wind power data can serve as indirect measurements of wind speed at hub height. The impact on the NWP model is potentially interesting since the conventional observation network lacks measurements in this part of the PBL. Photovoltaic power plants can provide information on clouds, aerosol optical depth or low stratus in terms of remote sensing: the power output is strongly dependent on perturbations along the slant path between the sun position and the PV panel. Additionally, the latter kind of data is not limited to the vertical column above or below the detector; it may thus complement satellite data and compensate for weaknesses in the radiation scheme. In this contribution, the DA method (Local Ensemble Transform Kalman Filter, LETKF) is briefly sketched. Furthermore, the computation of the model power equivalents is described, and first assimilation results are presented and discussed.
Hoogeveen, Suzanne; Schjoedt, Uffe; van Elk, Michiel
2018-06-19
This study examines the effects of expected transcranial stimulation on the error(-related) negativity (Ne or ERN) and the sense of agency in participants who perform a cognitive control task. Placebo transcranial direct current stimulation was used to elicit expectations of transcranially induced cognitive improvement or impairment. The improvement/impairment manipulation affected both the Ne/ERN and the sense of agency (i.e., whether participants attributed errors to oneself or the brain stimulation device): Expected improvement increased the ERN in response to errors compared with both impairment and control conditions. Expected impairment made participants falsely attribute errors to the transcranial stimulation. This decrease in sense of agency was correlated with a reduced ERN amplitude. These results show that expectations about transcranial stimulation impact users' neural response to self-generated errors and the attribution of responsibility-especially when actions lead to negative outcomes. We discuss our findings in relation to predictive processing theory according to which the effect of prior expectations on the ERN reflects the brain's attempt to generate predictive models of incoming information. By demonstrating that induced expectations about transcranial stimulation can have effects at a neural level, that is, beyond mere demand characteristics, our findings highlight the potential for placebo brain stimulation as a promising tool for research.
NASA Technical Reports Server (NTRS)
Morey, Susan; Prevot, Thomas; Mercer, Joey; Martin, Lynne; Bienert, Nancy; Cabrall, Christopher; Hunt, Sarah; Homola, Jeffrey; Kraut, Joshua
2013-01-01
A human-in-the-loop simulation was conducted to examine the effects of varying levels of trajectory prediction uncertainty on air traffic controller workload and performance, as well as how strategies and the use of decision support tools change in response. This paper focuses on the strategies employed by two controllers from separate teams who worked in parallel but independently under identical conditions (airspace, arrival traffic, tools) with the goal of ensuring schedule conformance and safe separation for a dense arrival flow in en route airspace. Despite differences in strategy and methods, both controllers achieved high levels of schedule conformance and safe separation. Overall, results show that trajectory uncertainties introduced by wind and aircraft performance prediction errors do not affect the controllers' ability to manage traffic. Controller strategies were fairly robust to changes in error, though strategies were affected by the amount of delay to absorb (scheduled time of arrival minus estimated time of arrival). Using the results and observations, this paper proposes an ability to dynamically customize the display of information including delay time based on observed error to better accommodate different strategies and objectives.
Towards a general theory of neural computation based on prediction by single neurons.
Fiorillo, Christopher D
2008-10-01
Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.
Tonkin, M.J.; Hill, Mary C.; Doherty, John
2003-01-01
This document describes the MOD-PREDICT program, which helps evaluate user-defined sets of observations, prior information, and predictions, using the ground-water model MODFLOW-2000. MOD-PREDICT takes advantage of the existing Observation and Sensitivity Processes (Hill and others, 2000) by initiating runs of MODFLOW-2000 and using the output files produced. The names and formats of the MODFLOW-2000 input files are unchanged, such that full backward compatibility is maintained. A new name file and input files are required for MOD-PREDICT. The performance of MOD-PREDICT has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program using the email address available at the web address below. Updates might occasionally be made to this document, to the MOD-PREDICT program, and to MODFLOW-2000. Users can check for updates on the Internet at URL http://water.usgs.gov/software/ground water.html/.
Risk prediction and aversion by anterior cingulate cortex.
Brown, Joshua W; Braver, Todd S
2007-12-01
The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.
Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1998-01-01
In this paper, we reinvestigate the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat; rather, they lie in a chaotic regime, yet past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data and to predict future data under limited weight quantization constraints. Predicting future information in this way provides better estimates in time for an intelligent control system. In our earlier work, it was shown that CEP can learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and a color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as few as 4 bits of weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization become available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique suitable for hardware implementation.
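A minimal sketch of the low-bit weight quantization compared in the abstract, assuming a simple uniform quantizer (the CEP training procedure itself is not reproduced here). It illustrates why round-off errors tend to be symmetric around zero while truncation errors are biased toward zero.

import numpy as np

def quantize(weights, n_bits, w_max=1.0, mode="round"):
    # Quantize weights to n_bits signed levels spanning [-w_max, w_max].
    # mode="round" uses round-off; mode="trunc" truncates toward zero.
    step = w_max / (2 ** (n_bits - 1))
    levels = weights / step
    q = np.round(levels) if mode == "round" else np.trunc(levels)
    return np.clip(q, -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1) * step

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, size=1000)   # hypothetical trained weights

for bits in (4, 6, 8):
    err_round = quantize(w, bits, mode="round") - w
    err_trunc = quantize(w, bits, mode="trunc") - w
    print(f"{bits}-bit  round-off mean error {err_round.mean():+.4f}   "
          f"truncation mean error {err_trunc.mean():+.4f}")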
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron
2017-05-01
This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
A comprehensive evaluation of input data-induced uncertainty in nonpoint source pollution modeling
NASA Astrophysics Data System (ADS)
Chen, L.; Gong, Y.; Shen, Z.
2015-11-01
Watershed models have been used extensively for quantifying nonpoint source (NPS) pollution, but few studies have been conducted on the error-transitivity from different input data sets to NPS modeling. In this paper, the effects of four types of input data, including rainfall, digital elevation models (DEMs), land use maps, and the amount of fertilizer, on NPS simulation were quantified and compared. The systematic input-induced uncertainty was investigated using a watershed model for phosphorus load prediction. Based on the results, the rain gauge density resulted in the largest model uncertainty, followed by DEMs, whereas land use and fertilizer amount exhibited limited impacts. The mean coefficients of variation for errors induced by single rain gauge, multiple rain gauge, ASTER GDEM, NFGIS DEM, land use, and fertilizer amount information were 0.390, 0.274, 0.186, 0.073, 0.033 and 0.005, respectively. The use of specific input information, such as key gauges, is also highlighted to achieve the required model accuracy. In this sense, these results provide valuable information to other model-based studies for the control of prediction uncertainty.
Zhang, Ji-Li; Liu, Bo-Fei; Chu, Teng-Fei; Di, Xue-Ying; Jin, Sen
2012-06-01
A laboratory burning experiment was conducted to measure the fire spread speed, residual time, reaction intensity, fireline intensity, and flame length of ground surface fuels collected from a Korean pine (Pinus koraiensis) and Mongolian oak (Quercus mongolica) mixed stand in the Maoer Mountains of Northeast China, under conditions of no wind, zero slope, and different fuel moisture contents, loads, and mixture ratios. The measured results were compared with those predicted by the extended Rothermel model to test the performance of the model, especially the effects of two different weighting methods on fire behavior modeling of the mixed fuels. For the model predictions, the mean absolute errors of fire spread speed and reaction intensity were 0.04 m min(-1) and 77 kW m(-2), and their mean relative errors were 16% and 22%; the mean absolute errors of residual time, fireline intensity, and flame length were 15.5 s, 17.3 kW m(-1), and 9.7 cm, and their mean relative errors were 55.5%, 48.7%, and 24%, respectively, indicating that the predicted values of residual time, fireline intensity, and flame length were lower than the observed ones. These errors can be regarded as the lower limits for applying the extended Rothermel model to predict the fire behavior of similar fuel types, and they provide valuable information for using the model to predict fire behavior under similar field conditions. Overall, the two weighting methods did not differ significantly in predicting the fire behavior of the mixed fuels with the extended Rothermel model. When the proportion of Korean pine fuels was lower, the predicted values of spread speed and reaction intensity obtained by the surface-area weighting method and those of fireline intensity and flame length obtained by the load weighting method were higher; when the proportion of Korean pine needles was higher, the opposite results were obtained.
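For reference, the error measures quoted above can be computed as in the following sketch; the numbers used here are hypothetical placeholders, not the study's data.

import numpy as np

def mean_absolute_error(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return np.mean(np.abs(predicted - observed))

def mean_relative_error(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return np.mean(np.abs(predicted - observed) / np.abs(observed))

# Hypothetical spread-rate data (m/min): laboratory observations vs. model predictions.
obs = [0.21, 0.35, 0.18, 0.42, 0.30]
pred = [0.18, 0.31, 0.15, 0.36, 0.27]
print("MAE:", round(mean_absolute_error(obs, pred), 3), "m/min")
print("MRE:", round(100 * mean_relative_error(obs, pred), 1), "%")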
Gust prediction via artificial hair sensor array and neural network
NASA Astrophysics Data System (ADS)
Pankonien, Alexander M.; Thapa Magar, Kaman S.; Beblo, Richard V.; Reich, Gregory W.
2017-04-01
Gust Load Alleviation (GLA) is an important aspect of flight dynamics and control that reduces structural loadings and enhances ride quality. In conventional GLA systems, the structural response to aerodynamic excitation informs the control scheme. A phase lag, imposed by inertia, between the excitation and the measurement inherently limits the effectiveness of these systems. Hence, direct measurement of the aerodynamic loading can eliminate this lag, providing valuable information for effective GLA system design. Distributed arrays of Artificial Hair Sensors (AHS) are ideal for surface flow measurements that can be used to predict other necessary parameters such as aerodynamic forces, moments, and turbulence. In previous work, the spatially distributed surface flow velocities obtained from an array of artificial hair sensors using a Single-State (or feedforward) Neural Network were found to be effective in estimating the steady aerodynamic parameters such as air speed, angle of attack, lift and moment coefficient. This paper extends the investigation of the same configuration to unsteady force and moment estimation, which is important for active GLA control design. Implementing a Recurrent Neural Network that includes previous-timestep sensor information, the hair sensor array is shown to be capable of capturing gust disturbances with a wide range of periods, reducing predictive error in lift and moment by 68% and 52% respectively. The L2 norms of the first layer of the weight matrices were compared showing a 23% emphasis on prior versus current information. The Recurrent architecture also improves robustness, exhibiting only a 30% increase in predictive error when undertrained as compared to a 170% increase by the Single-State NN. This diverse, localized information can thus be directly implemented into a control scheme that alleviates the gusts without waiting for a structural response or requiring user-intensive sensor calibration.
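A minimal sketch of the kind of recurrent estimator described above, with randomly initialized weights standing in for the trained network (sensor counts, layer sizes, and outputs are assumptions). The recurrent weight matrix carries previous-timestep information, and comparing the norms of the input and recurrent weights gives the sort of prior-versus-current emphasis reported in the abstract.

import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_hidden = 8, 16

# Randomly initialized weights; a trained network would learn these from flow-velocity data.
W_in = rng.standard_normal((n_hidden, n_sensors)) * 0.1   # current sensor information
W_rec = rng.standard_normal((n_hidden, n_hidden)) * 0.1   # previous-timestep information
W_out = rng.standard_normal((2, n_hidden)) * 0.1          # outputs: [lift coeff., moment coeff.]

def predict_sequence(hair_sensor_readings):
    # hair_sensor_readings: array of shape (timesteps, n_sensors).
    h = np.zeros(n_hidden)
    outputs = []
    for x in hair_sensor_readings:
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrence mixes current and prior information
        outputs.append(W_out @ h)
    return np.array(outputs)

gust = np.sin(np.linspace(0, 4 * np.pi, 200))[:, None] * np.ones((1, n_sensors))
print(predict_sequence(gust)[:3])
ratio = np.linalg.norm(W_rec) / (np.linalg.norm(W_in) + np.linalg.norm(W_rec))
print("share of first-layer weight norm on prior information:", round(ratio, 2))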
NASA Astrophysics Data System (ADS)
Coyne, Kevin Anthony
The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, the operators often are called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules are capable of efficiently capturing a wide spectrum of crew-to-crew variabilities. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as timing of operator actions, mental models, and decision-making activities.
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.
Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T
2016-03-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion-the intraclonal diversity index-which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.
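A hedged sketch of the final modeling step described above: fitting a logistic regression from clone-level features to immunological status. The feature construction and data here are simulated placeholders, not the MAF-corrected Ig-seq data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-clone features: clonal frequency, somatic hypermutation count,
# and the intraclonal diversity index (unique transcripts per clone).
freq = rng.lognormal(mean=-6, sigma=1.5, size=n)
shm = rng.poisson(5, size=n)
idi = rng.poisson(3, size=n)
# Hypothetical labels: 1 = recently expanded clone (synthetic ground truth for the sketch).
logit = 2.0 * np.log10(freq + 1e-9) + 0.2 * shm + 0.6 * idi + 3
labels = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([np.log10(freq), shm, idi])
model = LogisticRegression().fit(X, labels)
print("coefficients:", np.round(model.coef_, 2))
print("predicted probability for a new clone:",
      model.predict_proba([[np.log10(1e-3), 8, 6]])[0, 1].round(3))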
Dental age estimation in Japanese individuals combining permanent teeth and third molars.
Ramanan, Namratha; Thevissen, Patrick; Fleuws, Steffen; Willems, G
2012-12-01
The study aim was, firstly, to verify the Willems et al. model on a Japanese reference sample; secondly, to develop a Japanese reference model based on the Willems et al. method and to verify it; and thirdly, to analyze the age prediction performance when adding tooth development information from third molars to that from permanent teeth. Retrospectively, 1877 panoramic radiographs were selected in the age range between 1 and 23 years (1248 children, 629 sub-adults). Dental development was registered applying Demirjian's stages to the mandibular left permanent teeth in children and Köhler stages to the third molars. The children's data were, firstly, used to validate the Willems et al. model (developed on a Belgian reference sample) and, secondly, split into a training and a test sample. On the training sample a Japanese reference model was developed based on the Willems method. The developed model and the Willems et al. model were verified on the test sample. Regression analysis was used to assess the age prediction performance of adding third molar scores to permanent tooth scores. The validated Willems et al. model provided a mean absolute error of 0.85 and 0.75 years in females and males, respectively. The mean absolute errors of the verified Willems et al. model and the developed Japanese reference model were 0.85, 0.77 and 0.79, 0.75 years in females and males, respectively. On average a negligible change in root mean square error values was detected when adding third molar scores to permanent teeth scores. The Belgian sample could be used as a reference model to estimate the age of Japanese individuals. Combining information from the third molars and permanent teeth did not provide a clinically significant improvement over age predictions based on permanent teeth information alone.
Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.
Compound Stimulus Presentation Does Not Deepen Extinction in Human Causal Learning
Griffiths, Oren; Holmes, Nathan; Westbrook, R. Fred
2017-01-01
Models of associative learning have proposed that cue-outcome learning critically depends on the degree of prediction error encountered during training. Two experiments examined the role of error-driven extinction learning in a human causal learning task. Target cues underwent extinction in the presence of additional cues, which differed in the degree to which they predicted the outcome, thereby manipulating outcome expectancy and, in the absence of any change in reinforcement, prediction error. These prediction error manipulations have each been shown to modulate extinction learning in aversive conditioning studies. While both manipulations resulted in increased prediction error during training, neither enhanced extinction in the present human learning task (one manipulation resulted in less extinction at test). The results are discussed with reference to the types of associations that are regulated by prediction error, the types of error terms involved in their regulation, and how these interact with parameters involved in training. PMID:28232809
Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping
2013-01-01
Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. Measurement errors in the emission characteristic parameters of these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentrations, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. The impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations is shown to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.
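A Monte Carlo sketch of how measurement errors in the emission parameters propagate into a predicted indoor concentration. The emission model used here is a deliberately simplified, hypothetical stand-in (a first-order source in a well-mixed room with a fixed air-change rate), not the full mass transfer model of the study; all parameter values and error levels are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def indoor_concentration(C0, D, K, area=10.0, thickness=0.01, volume=45.0, ach=1.0):
    # Deliberately simplified, hypothetical steady-state estimate: emission driven by
    # a crude mass-transfer conductance, diluted by room volume and air-change rate.
    h_m = D / thickness
    emission = h_m * area * C0 / K
    return emission / (volume * ach)

def perturb(value, rel_error, size=10000):
    # Apply a relative, normally distributed measurement error (clipped to stay positive).
    return value * np.clip(1 + rel_error * rng.standard_normal(size), 0.01, None)

C0, D, K = 5.0e6, 1.0e-10, 3.0e3          # nominal parameter values (hypothetical)
base = indoor_concentration(C0, D, K)
samples = indoor_concentration(perturb(C0, 0.20), perturb(D, 0.30), perturb(K, 0.10))
print("nominal prediction:", base)
print("relative spread of the prediction:", round(samples.std() / base, 2))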
Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay
2016-04-01
Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
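A sketch of D-optimal selection of validation cases for a logistic model, using a greedy log-determinant criterion on the Fisher information. This is an illustrative simplification under assumed data and provisional coefficients, not necessarily the DSCVR algorithm as published.

import numpy as np

def fisher_information(X, beta):
    # Fisher information matrix for logistic regression evaluated at coefficients beta.
    p = 1 / (1 + np.exp(-X @ beta))
    w = p * (1 - p)
    return (X * w[:, None]).T @ X

def greedy_d_optimal(X, beta, n_select):
    # Greedily pick rows of X that maximize log det of the accumulated Fisher information.
    selected, remaining = [], list(range(len(X)))
    info = 1e-6 * np.eye(X.shape[1])   # small ridge so the determinant is always defined
    for _ in range(n_select):
        best, best_gain = None, -np.inf
        for i in remaining:
            gain = np.linalg.slogdet(info + fisher_information(X[i:i + 1], beta))[1]
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
        remaining.remove(best)
        info += fisher_information(X[best:best + 1], beta)
    return selected

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 3))])  # predictor values only
beta0 = np.zeros(4)   # provisional coefficients (e.g., from a pilot fit)
print("cases to send for chart review:", greedy_d_optimal(X, beta0, 10))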
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2017-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. Benchmarking model predictions against data is necessary to assess their ability to replicate observed patterns, but also to identify and evaluate the assumptions causing inter-model differences. We have implemented a novel benchmarking workflow as part of the Predictive Ecosystem Analyzer (PEcAn) that is automated, repeatable, and generalized to incorporate different sites and ecological models. Building on the recent Free-Air CO2 Enrichment Model Data Synthesis (FACE-MDS) project, we used observational data from the FACE experiments to test this flexible, extensible benchmarking approach, aimed at providing repeatable tests of model process representation that can be performed quickly and frequently. Model performance assessments are often limited to traditional residual error analysis; however, this can result in a loss of critical information. Models that fail tests of relative measures of fit may still perform well under measures of absolute fit and mathematical similarity. This implies that models discounted as poor predictors of ecological productivity may still be capturing important patterns. Conversely, models that have been found to be good predictors of productivity may be hiding errors in their sub-processes that produce the right answers for the wrong reasons. Our suite of tests has not only highlighted process-based sources of uncertainty in model productivity calculations, it has also quantified the patterns and scale of this error. Combining these findings with PEcAn's model sensitivity analysis and variance decomposition strengthens our ability to identify which processes need further study and additional data constraints. This can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.
Stevens, Antoine; Nocita, Marco; Tóth, Gergely; Montanarella, Luca; van Wesemael, Bas
2013-01-01
Soil organic carbon is a key soil property related to soil fertility, aggregate stability and the exchange of CO2 with the atmosphere. Existing soil maps and inventories can rarely be used to monitor the state and evolution in soil organic carbon content due to their poor spatial resolution, lack of consistency and high updating costs. Visible and Near Infrared diffuse reflectance spectroscopy is an alternative method to provide cheap and high-density soil data. However, there are still some uncertainties on its capacity to produce reliable predictions for areas characterized by large soil diversity. Using a large-scale EU soil survey of about 20,000 samples and covering 23 countries, we assessed the performance of reflectance spectroscopy for the prediction of soil organic carbon content. The best calibrations achieved a root mean square error ranging from 4 to 15 g C kg(-1) for mineral soils and a root mean square error of 50 g C kg(-1) for organic soil materials. Model errors are shown to be related to the levels of soil organic carbon and variations in other soil properties such as sand and clay content. Although errors are ∼5 times larger than the reproducibility error of the laboratory method, reflectance spectroscopy provides unbiased predictions of the soil organic carbon content. Such estimates could be used for assessing the mean soil organic carbon content of large geographical entities or countries. This study is a first step towards providing uniform continental-scale spectroscopic estimations of soil organic carbon, meeting an increasing demand for information on the state of the soil that can be used in biogeochemical models and the monitoring of soil degradation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
NASA Astrophysics Data System (ADS)
Wormanns, Dag; Beyer, Florian; Hoffknecht, Petra; Dicken, Volker; Kuhnigk, Jan-Martin; Lange, Tobias; Thomas, Michael; Heindel, Walter
2005-04-01
This study aimed to evaluate a morphology-based approach for prediction of postoperative forced expiratory volume in one second (FEV1) after lung resection from preoperative CT scans. Fifteen patients with surgically treated (lobectomy or pneumonectomy) bronchogenic carcinoma were enrolled in the study. A preoperative chest CT and pulmonary function tests before and after surgery were performed. CT scans were analyzed by prototype software: automated segmentation and volumetry of lung lobes was performed with minimal user interaction. The determined volumes of the different lung lobes were used to predict postoperative FEV1 as a percentage of the preoperative values. Predicted FEV1 values were compared to the observed postoperative values as the standard of reference. Patients underwent lobectomy in twelve cases (6 upper lobes; 1 middle lobe; 5 lower lobes; 6 right side; 6 left side) and pneumonectomy in three cases. Automated calculation of predicted postoperative lung function was successful in all cases. Predicted FEV1 ranged from 54% to 95% (mean 75% +/- 11%) of the preoperative values. Two cases with obviously erroneous LFT were excluded from the analysis. The mean error of predicted FEV1 was 20 +/- 160 ml, indicating the absence of systematic error; the mean absolute error was 7.4 +/- 3.3%, or 137 +/- 77 ml/s. The 200 ml reproducibility criterion for FEV1 was met in 11 of 13 cases (85%). In conclusion, software-assisted prediction of postoperative lung function yielded clinically acceptable agreement with the observed postoperative values. This method might add useful information for evaluating the functional operability of patients with lung cancer.
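The abstract does not give the exact prediction formula; a common volume-proportional approximation is sketched below, with hypothetical lobe volumes and function names.

def predicted_postop_fev1(preop_fev1, lobe_volumes, resected_lobes):
    # Volume-proportional estimate of postoperative FEV1 (a common approximation;
    # the exact formula used by the prototype software is not stated in the abstract).
    # lobe_volumes: dict of lobe name -> segmented CT volume (ml).
    total = sum(lobe_volumes.values())
    removed = sum(lobe_volumes[lobe] for lobe in resected_lobes)
    fraction_remaining = (total - removed) / total
    return preop_fev1 * fraction_remaining

# Hypothetical example: right upper lobectomy with a preoperative FEV1 of 2.4 L.
volumes = {"RUL": 900, "RML": 350, "RLL": 1100, "LUL": 1200, "LLL": 1050}
print(round(predicted_postop_fev1(2.4, volumes, ["RUL"]), 2), "L predicted postoperative FEV1")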
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim
2015-01-01
Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with error models within a Bayesian joint probability framework to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance, and autocorrelation for each individual calendar month. Potential connections amongst model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
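A sketch of the overall idea, monotone calibration plus Monte Carlo error propagation, using isotonic regression as a stand-in for the penalized constrained monotone spline; the standard-curve data, noise levels, and observed intensity are hypothetical.

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Hypothetical standard curve: replicate standards of known concentration with noisy intensities.
conc = np.repeat(np.logspace(-2, 2, 9), 3)
intensity = 100 / (1 + (1.0 / conc)) + rng.normal(0, 2, conc.size)   # saturating response

# Monotone fit of intensity versus log-concentration (isotonic regression stands in
# for the penalized constrained monotone spline of the study).
iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
iso.fit(np.log10(conc), intensity)

def predict_concentration(obs_intensity, obs_sd, n_mc=5000):
    # Invert the monotone standard curve and propagate measurement error by Monte Carlo.
    grid = np.linspace(-2, 2, 400)
    curve = iso.predict(grid)                             # non-decreasing calibration curve
    draws = obs_intensity + rng.normal(0, obs_sd, n_mc)   # simulated measurement error
    logc = np.interp(draws, curve, grid)
    return 10 ** np.percentile(logc, [2.5, 50, 97.5])

low, mid, high = predict_concentration(obs_intensity=60.0, obs_sd=3.0)
print(f"predicted concentration {mid:.2f} (95% interval {low:.2f}-{high:.2f})")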
Computational substrates of norms and their violations during social exchange.
Xiang, Ting; Lohrenz, Terry; Montague, P Read
2013-01-16
Social norms in humans constrain individual behaviors to establish shared expectations within a social group. Previous work has probed social norm violations and the feelings that such violations engender; however, a computational rendering of the underlying neural and emotional responses has been lacking. We probed norm violations using a two-party, repeated fairness game (ultimatum game) where proposers offer a split of a monetary resource to a responder who either accepts or rejects the offer. Using a norm-training paradigm where subject groups are preadapted to either high or low offers, we demonstrate that unpredictable shifts in expected offers create a difference in rejection rates exhibited by the two responder groups for otherwise identical offers. We constructed an ideal observer model that identified neural correlates of norm prediction errors in the ventral striatum and anterior insula, regions that also showed strong responses to variance-prediction errors generated by the same model. Subjective feelings about offers correlated with these norm prediction errors, and the two signals displayed overlapping, but not identical, neural correlates in striatum, insula, and medial orbitofrontal cortex. These results provide evidence for the hypothesis that responses in anterior insula can encode information about social norm violations that correlate with changes in overt behavior (changes in rejection rates). Together, these results demonstrate that the brain regions involved in reward prediction and risk prediction are also recruited in signaling social norm violations.
Cole, Sindy; McNally, Gavan P
2007-10-01
Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB --> CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
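A minimal temporal-difference sketch of the serial compound in Stage III (CSB --> CSA --> shock), showing how the prediction error migrates to the earlier cue over trials. The learning rate, discount factor, and the omission of Stage I/II pretraining are assumptions, not the paper's model.

import numpy as np

alpha, gamma = 0.3, 0.95
V = np.zeros(2)      # V[0] = value during CSB, V[1] = value during CSA
us_value = 1.0       # shock (US) intensity on an arbitrary scale

for trial in range(50):
    # Transition CSB -> CSA: no US yet, so the TD error is driven by CSA's value.
    delta_b = 0.0 + gamma * V[1] - V[0]
    V[0] += alpha * delta_b
    # Transition CSA -> US: shock delivered, treated as a terminal state.
    delta_a = us_value - V[1]
    V[1] += alpha * delta_a
    if trial in (0, 4, 49):
        print(f"trial {trial + 1}: TD error at CSB = {delta_b:+.3f}, at CSA = {delta_a:+.3f}")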
Takahashi, Yuji K.; Langdon, Angela J.; Niv, Yael; Schoenbaum, Geoffrey
2016-01-01
Dopamine neurons signal reward prediction errors. This requires accurate reward predictions. It has been suggested that the ventral striatum provides these predictions. Here we tested this hypothesis by recording from putative dopamine neurons in the VTA of rats performing a task in which prediction errors were induced by shifting reward timing or number. In controls, the neurons exhibited error signals in response to both manipulations. However, dopamine neurons in rats with ipsilateral ventral striatal lesions exhibited errors only to changes in number and failed to respond to changes in timing of reward. These results, supported by computational modeling, indicate that predictions about the temporal specificity and the number of expected rewards are dissociable, and that dopaminergic prediction-error signals rely on the ventral striatum for the former but not the latter. PMID:27292535
NASA Technical Reports Server (NTRS)
Cane, M. A.; Cardone, V. J.; Halem, M.; Halberstam, I.
1981-01-01
The reported investigation has the objective of assessing the potential impact on numerical weather prediction (NWP) of remotely sensed surface wind data. Other investigations conducted with similar objectives have been unsatisfactory because they used procedures that produced an unrealistic distribution of initial errors. In the current study, care has been taken to duplicate the actual distribution of information in the conventional observing system, thus shifting the emphasis from the accuracy of the data to the data coverage. It is pointed out that this is an important consideration in assessing satellite observing systems, since experience with sounder data has shown that improvements in forecasts due to satellite-derived information are due less to a general error reduction than to the ability to fill data-sparse regions. The reported study concentrates on the evaluation of the observing system simulation experimental design and on the assessment of the potential of remotely sensed marine surface wind data.
Visuomotor adaptation needs a validation of prediction error by feedback error
Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle
2014-01-01
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, viewing of their hand by subjects was allowed only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and self-assigned pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated. PMID:25408644
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger
2012-09-01
Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design of experiments methodology. Near infrared spectroscopy (NIR) in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS) were used to correlate the spectral data with moisture content and aerodynamic particle size measured by a time-of-flight principle. PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. Models yielded prediction errors (RMSEP) between 0.39% and 0.48% with thermal gravimetric analysis used as reference method. The PLS models predicting the aerodynamic particle size were based on baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q2 of 0.69. Based on the results in this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
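A small sketch of the chemometric step described above: PLS regression from NIR spectra to moisture content with an RMSEP estimate. The simulated spectra (a single water band scaled by moisture) are placeholders for real measurements, and the number of latent variables is an assumption.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 200

# Hypothetical NIR spectra: a broad water band whose depth scales with moisture content.
moisture = rng.uniform(1.0, 6.0, n_samples)              # % w/w
wl = np.linspace(1100, 2500, n_wavelengths)               # nm
band = np.exp(-((wl - 1930) / 60.0) ** 2)                  # water combination band near 1930 nm
spectra = (moisture[:, None] * band[None, :] * 0.05
           + 0.02 * rng.standard_normal((n_samples, n_wavelengths)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"RMSEP: {rmsep:.2f} % moisture")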
Trotta-Moreu, Nuria; Lobo, Jorge M
2010-02-01
Predictions from individual distribution models for Mexican Geotrupinae species were overlaid to obtain a total species richness map for this group. A database (GEOMEX) that compiles available information from the literature and from several entomological collections was used. A Maximum Entropy method (MaxEnt) was applied to estimate the distribution of each species, taking into account 19 climatic variables as predictors. For each species, suitability values ranging from 0 to 100 were calculated for each grid cell on the map, and 21 different thresholds were used to convert these continuous suitability values into binary ones (presence-absence). By summing all of the individual binary maps, we generated a species richness prediction for each of the considered thresholds. The number of species and faunal composition thus predicted for each Mexican state were subsequently compared with those observed in a preselected set of well-surveyed states. Our results indicate that the sum of individual predictions tends to overestimate species richness but that the selection of an appropriate threshold can reduce this bias. Even under the most optimistic prediction threshold, the mean species richness error is 61% of the observed species richness, with commission errors being significantly more common than omission errors (71 +/- 29 versus 18 +/- 10%). The estimated distribution of Geotrupinae species richness in Mexico is discussed, although our conclusions are preliminary and contingent on the scarce and probably biased available data.
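The richness-stacking procedure can be sketched as follows: threshold each species' suitability surface and sum the resulting binary maps. The suitability values here are random placeholders rather than MaxEnt output, so the numbers only illustrate how the threshold choice drives over- or underestimation.

import numpy as np

rng = np.random.default_rng(0)
n_species, n_cells = 30, 1000

# Hypothetical per-species habitat suitability (0-100) for each grid cell.
suitability = rng.random((n_species, n_cells)) * 100

def stacked_richness(suitability, threshold):
    # Convert continuous suitability to presence/absence at `threshold`
    # and sum the binary maps to obtain predicted species richness per cell.
    return (suitability >= threshold).sum(axis=0)

for thr in (10, 50, 90):
    richness = stacked_richness(suitability, thr)
    print(f"threshold {thr:2d}: mean predicted richness {richness.mean():5.1f} species/cell")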
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
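A simplified sketch of the correction idea: characterize the assay's sensitivity and false-positive signal from reference samples of known origin, then recover a distribution of true concentrations by Monte Carlo. The distributions, scales, and the single-equation form used below are assumptions, simpler than the published model.

import numpy as np

rng = np.random.default_rng(0)

def corrected_concentration(measured, sens_draws, false_pos_draws, precision_sd, n_mc=10000):
    # Simplified Law-of-Total-Probability style correction solved by Monte Carlo:
    # measured = true * sensitivity + false_positive_signal (all on the same scale).
    # sens_draws / false_pos_draws: samples characterizing the assay from reference samples.
    sens = rng.choice(sens_draws, n_mc)
    fp = rng.choice(false_pos_draws, n_mc)
    noisy = measured * (1 + precision_sd * rng.standard_normal(n_mc))  # qPCR precision error
    true = np.clip((noisy - fp) / sens, 0, None)
    return np.percentile(true, [2.5, 50, 97.5])

# Hypothetical assay characterization and a single field measurement (gene copies per 100 ml).
sens_draws = rng.beta(20, 3, 2000)        # sensitivity mostly between 0.8 and 0.95
fp_draws = rng.gamma(2.0, 50.0, 2000)     # background signal from non-target hosts
low, mid, high = corrected_concentration(5000.0, sens_draws, fp_draws, precision_sd=0.1)
print(f"true concentration estimate: {mid:.0f} (95% interval {low:.0f}-{high:.0f})")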
Sutherland, Shelbie L; Cimpian, Andrei
2015-08-01
Several proposals in the literature on conceptual development converge on the claim that information about kinds of things in the world has a privileged status in children's cognition, insofar as it is acquired, manipulated, and stored with surprising ease. Our goal in the present studies (N = 440) was to test a prediction of this claim. Specifically, if the early cognitive system privileges kind (or generic) information in the proposed ways, then learning new facts about kinds should be so seamless that it is often accompanied by an impression that these facts were known all along. To test this prediction, we presented 4- to 7-year-old children with novel kind-wide and individual-specific facts, and we then asked children whether they had prior knowledge of these facts. As predicted, children were under the impression that they had known the kind-wide facts more often than the individual-specific facts, even though in reality they had just learned both (Experiments 1, 2, 3, and 5). Importantly, learning facts about (nongeneric) plural sets of individuals was not similarly accompanied by heightened knew-it-all-along errors (Experiment 4), highlighting the privileged status of kind information per se. Finally, we found that young children were able to correctly recognize their previous ignorance of newly learned generic facts when this ignorance was made salient before the learning event (Experiment 6), suggesting that children's frequent knew-it-all-along impressions about such facts truly stem from metacognitive difficulties rather than being a methodological artifact. In sum, these 6 studies indicate that learning information about kinds is accompanied by heightened knew-it-all-along errors. More broadly, this evidence supports the view that early cognition privileges kind representations. (c) 2015 APA, all rights reserved).
Interactions of timing and prediction error learning.
Kirkpatrick, Kimberly
2014-01-01
Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. Copyright © 2013 Elsevier B.V. All rights reserved.
Diuk, Carlos; Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew; Niv, Yael
2013-03-27
Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously.
A General Method for Predicting Amino Acid Residues Experiencing Hydrogen Exchange
Wang, Boshen; Perez-Rathke, Alan; Li, Renhao; Liang, Jie
2018-01-01
Information on protein hydrogen exchange can help delineate key regions involved in protein-protein interactions and provides important insight towards determining functional roles of genetic variants and their possible mechanisms in disease processes. Previous studies have shown that the degree of hydrogen exchange is affected by hydrogen bond formations, solvent accessibility, proximity to other residues, and experimental conditions. However, a general predictive method for identifying residues capable of hydrogen exchange transferable to a broad set of proteins is lacking. We have developed a machine learning method based on random forest that can predict whether a residue experiences hydrogen exchange. Using data from the Start2Fold database, which contains information on 13,306 residues (3,790 of which experience hydrogen exchange and 9,516 which do not exchange), our method achieves good performance. Specifically, we achieve an overall out-of-bag (OOB) error, an unbiased estimate of the test set error, of 20.3 percent. Using a randomly selected test data set consisting of 500 residues experiencing hydrogen exchange and 500 which do not, our method achieves an accuracy of 0.79, a recall of 0.74, a precision of 0.82, and an F1 score of 0.78.
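A minimal sketch of such a classifier, using scikit-learn's random forest with out-of-bag scoring, is shown below. The feature matrix is a synthetic stand-in; the study's actual structural descriptors (hydrogen bonding, solvent accessibility, residue proximity, and related features) are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic placeholder data with the study's approximate dimensions
# (13,306 residues from the Start2Fold database).
rng = np.random.default_rng(1)
X = rng.normal(size=(13306, 20))                  # placeholder feature matrix
y = rng.integers(0, 2, size=13306)                # 1 = exchanges, 0 = protected

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=1)
rf.fit(X, y)

# The OOB error is an unbiased estimate of test error, as used in the paper.
print(f"out-of-bag error: {1.0 - rf.oob_score_:.3f}")
```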
NASA Astrophysics Data System (ADS)
Luitel, Beda; Villarini, Gabriele; Vecchi, Gabriel A.
2018-01-01
The goal of this study is the evaluation of the skill of five state-of-the-art numerical weather prediction (NWP) systems [European Centre for Medium-Range Weather Forecasts (ECMWF), UK Met Office (UKMO), National Centers for Environmental Prediction (NCEP), China Meteorological Administration (CMA), and Canadian Meteorological Center (CMC)] in forecasting rainfall from North Atlantic tropical cyclones (TCs). Analyses focus on 15 North Atlantic TCs that made landfall along the U.S. coast over the 2007-2012 period. As reference data we use gridded rainfall provided by the Climate Prediction Center (CPC). We consider forecast lead-times up to five days. To benchmark the skill of these models, we consider rainfall estimates from one radar-based (Stage IV) and four satellite-based [Tropical Rainfall Measuring Mission - Multi-satellite Precipitation Analysis (TMPA, both real-time and research version); Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN); the CPC MORPHing Technique (CMORPH)] rainfall products. Daily and storm total rainfall fields from each of these remote sensing products are compared to the reference data to obtain information about the range of errors we can expect from "observational data." The skill of the NWP models is quantified: (1) by visual examination of the distribution of the errors in storm total rainfall for the different lead-times, and numerical examination of the first three moments of the error distribution; (2) relative to climatology at the daily scale. Considering these skill metrics, we conclude that the NWP models can provide skillful forecasts of TC rainfall with lead-times up to 48 h, without a consistently best or worst NWP model.
Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping
2013-01-01
Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this gap by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. The results show the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C. PMID:24312497
Plant traits determine forest flammability
NASA Astrophysics Data System (ADS)
Zylstra, Philip; Bradstock, Ross
2016-04-01
Carbon and nutrient cycles in forest ecosystems are influenced by their inherent flammability - a property determined by the traits of the component plant species that form the fuel and influence the microclimate of a fire. In the absence of a model capable of explaining the complexity of such a system, however, flammability is frequently represented by simple metrics such as surface fuel load. The implications of modelling fire-flammability feedbacks using surface fuel load were examined and compared to a biophysical, mechanistic model (Forest Flammability Model) that incorporates the influence of structural plant traits (e.g. crown shape and spacing) and leaf traits (e.g. thickness, dimensions and moisture). Fuels burn with values of combustibility modelled from leaf traits, transferring convective heat along vectors defined by flame angle and with plume temperatures that decrease with distance from the flame. Flames are re-calculated in one-second time-steps, with new leaves within the plant, neighbouring plants or higher strata ignited when the modelled time to ignition is reached, and other leaves extinguishing when their modelled flame duration is exceeded. The relative influences of surface fuels, vegetation structure and plant leaf traits were examined by comparing flame heights modelled using three treatments that successively added these components within the FFM. Validation was performed across a diverse range of eucalypt forests burnt under widely varying conditions during a forest fire in the Brindabella Ranges west of Canberra (ACT) in 2003. Flame heights ranged from 10 cm to more than 20 m, with an average of 4 m. When modelled from surface fuels alone, flame heights were on average 1.5m smaller than observed values, and were predicted within the error range 28% of the time. The addition of plant structure produced predicted flame heights that were on average 1.5m larger than observed, but were correct 53% of the time. The over-prediction in this case was the result of a small number of large errors, where higher strata such as forest canopy were modelled to ignite but did not. The addition of leaf traits largely addressed this error, so that the mean flame height over-prediction was reduced to 0.3m and the fully parameterised FFM gave correct predictions 62% of the time. When small (<1m) flames were excluded, the fully parameterised model correctly predicted flame heights 12 times more often than could be predicted using surface fuels alone, and the Mean Absolute Error was 4 times smaller. The inadequate consideration of plant traits within a mechanistic framework introduces significant error to forest fire behaviour modelling. The FFM provides a solution to this, and an avenue by which plant trait information can be used to better inform Global Vegetation Models and decision-making tools used to mitigate the impacts of fire.
The role of prediction in social neuroscience
Brown, Elliot C.; Brüne, Martin
2012-01-01
Research has shown that the brain is constantly making predictions about future events. Theories of prediction in perception, action and learning suggest that the brain serves to reduce the discrepancies between expectation and actual experience, i.e., by reducing the prediction error. Forward models of action and perception propose the generation of a predictive internal representation of the expected sensory outcome, which is matched to the actual sensory feedback. Shared neural representations have been found when experiencing one's own and observing other's actions, rewards, errors, and emotions such as fear and pain. These general principles of the “predictive brain” are well established and have already begun to be applied to social aspects of cognition. The application and relevance of these predictive principles to social cognition are discussed in this article. Evidence is presented to argue that simple non-social cognitive processes can be extended to explain complex cognitive processes required for social interaction, with common neural activity seen for both social and non-social cognitions. A number of studies are included which demonstrate that bottom-up sensory input and top-down expectancies can be modulated by social information. The concept of competing social forward models and a partially distinct category of social prediction errors are introduced. The evolutionary implications of a “social predictive brain” are also mentioned, along with the implications for psychopathology. The review presents a number of testable hypotheses and novel comparisons that aim to stimulate further discussion and integration between currently disparate fields of research, with regard to computational models, behavioral and neurophysiological data. This promotes a relatively new platform for inquiry in social neuroscience with implications in social learning, theory of mind, empathy, the evolution of the social brain, and potential strategies for treating social cognitive deficits. PMID:22654749
Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study
Hosseinyalamdary, Siavash
2018-01-01
Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
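The core idea (inserting a learning or "modelling" step between the usual predict and update steps) can be sketched with a toy one-dimensional example, shown below. The drift value, learning rate, and noise levels are arbitrary illustrative choices, and the simple bias learner stands in for the paper's deep error model rather than reproducing it.

```python
import numpy as np

# Toy 1-D sketch: an extra modelling step learns a systematic prediction
# (IMU-like drift) error from the innovations and removes it before the
# standard Kalman update. Simplified stand-in, not the paper's architecture.
rng = np.random.default_rng(0)

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # GNSS-like position measurement
Q = np.eye(2) * 0.01                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

imu_drift = 0.4                          # unknown systematic drift per step
x, P = np.zeros(2), np.eye(2)
learned_bias = 0.0                       # online estimate of the drift

true_pos = np.cumsum(np.ones(100))       # true positions 1, 2, ..., 100
for z in true_pos + rng.normal(0, 0.7, size=true_pos.size):
    # prediction step, corrupted by the unmodelled drift, then corrected by
    # the learned bias (the "modelling" step added to the filter)
    x_pred = F @ x
    x_pred[0] += imu_drift - learned_bias
    P_pred = F @ P @ F.T + Q
    # standard update step
    innov = z - (H @ x_pred)[0]
    S = (H @ P_pred @ H.T + R)[0, 0]
    K = (P_pred @ H.T / S).ravel()
    x = x_pred + K * innov
    P = (np.eye(2) - np.outer(K, H)) @ P_pred
    # learning step: absorb systematic innovation into the bias estimate
    learned_bias -= 0.05 * innov

print(f"learned bias estimate: {learned_bias:.2f} (true drift {imu_drift})")
```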
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the Wsa-Enlil+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimation of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT) which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits), and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits, and using the actual arrival times, an average absolute error of 10.0 hours (RMSE=11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by tested CME input parameters. Prediction errors can also arise from ambient model parameters such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
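The basic ensemble verification statistics quoted above (hit-within-spread fraction, mean absolute error, RMSE) can be computed as in the short sketch below; the ensemble members and observed arrival times are invented numbers, not the 2013-2014 events.

```python
import numpy as np

# Toy verification of ensemble arrival-time forecasts: hours relative to some
# epoch for each event. The study used 48-member WSA-ENLIL+cone ensembles.
ensembles = [np.array([30.0, 34.0, 38.0, 41.0]), np.array([55.0, 60.0, 62.0, 70.0])]
observed = np.array([36.0, 75.0])

# Fraction of events whose observed arrival falls within the ensemble spread.
within_range = [e.min() <= o <= e.max() for e, o in zip(ensembles, observed)]
# Errors of the ensemble-mean (average) arrival time prediction.
abs_errors = np.array([abs(e.mean() - o) for e, o in zip(ensembles, observed)])

print(f"observed arrival within ensemble spread: {sum(within_range)}/{len(observed)}")
print(f"mean absolute error of ensemble-mean prediction: {abs_errors.mean():.1f} h")
print(f"RMSE: {np.sqrt((abs_errors**2).mean()):.1f} h")
```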
NASA Technical Reports Server (NTRS)
Miller, J. M.
1980-01-01
ATMOS is a Fourier transform spectrometer that measures atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of the system performance of ATMOS includes evaluations of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (using operational thermal and vibration environments and computer models) are subsequently compared to the error budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermal/structural-induced errors, and the comparisons that show that predictions are within the error budgets.
Disrupted prediction-error signal in psychosis: evidence for an associative account of delusions
Corlett, P. R.; Murray, G. K.; Honey, G. D.; Aitken, M. R. F.; Shanks, D. R.; Robbins, T.W.; Bullmore, E.T.; Dickinson, A.; Fletcher, P. C.
2012-01-01
Delusions are maladaptive beliefs about the world. Based upon experimental evidence that prediction error—a mismatch between expectancy and outcome—drives belief formation, this study examined the possibility that delusions form because of disrupted prediction-error processing. We used fMRI to determine prediction-error-related brain responses in 12 healthy subjects and 12 individuals (7 males) with delusional beliefs. Frontal cortex responses in the patient group were suggestive of disrupted prediction-error processing. Furthermore, across subjects, the extent of disruption was significantly related to an individual’s propensity to delusion formation. Our results support a neurobiological theory of delusion formation that implicates aberrant prediction-error signalling, disrupted attentional allocation and associative learning in the formation of delusional beliefs. PMID:17690132
Aircraft noise prediction program user's manual
NASA Technical Reports Server (NTRS)
Gillian, R. E.
1982-01-01
The Aircraft Noise Prediction Program (ANOPP) predicts aircraft noise with the best methods available. This manual is designed to give the user an understanding of the capabilities of ANOPP and to show how to formulate problems and obtain solutions by using these capabilities. Sections within the manual document basic ANOPP concepts, ANOPP usage, ANOPP functional modules, ANOPP control statement procedure library, and ANOPP permanent data base. Appendixes to the manual include information on preparing job decks for the operating systems in use, error diagnostics and recovery techniques, and a glossary of ANOPP terms.
ANOPP programmer's reference manual for the executive System. [aircraft noise prediction program
NASA Technical Reports Server (NTRS)
Gillian, R. E.; Brown, C. G.; Bartlett, R. W.; Baucom, P. H.
1977-01-01
Documentation for the Aircraft Noise Prediction Program as of release level 01/00/00 is presented in a manual designed for programmers who need to understand the internal design and logical concepts of the executive system software. Emphasis is placed on providing sufficient information to modify the system for enhancements or error correction. The ANOPP executive system includes software related to operating system interface, executive control, and data base management for the Aircraft Noise Prediction Program. It is written in Fortran IV for use on the CDC Cyber series of computers.
Bouchez, A; Goffinet, B
1990-02-01
Selection indices can be used to predict one trait from information available on several traits in order to improve the prediction accuracy. Plant or animal breeders are interested in selecting only the best individuals, and need to compare the efficiency of different trait combinations in order to choose the index ensuring the best prediction quality for individual values. As the usual tools for index evaluation do not remain unbiased in all cases, we propose a robust way of evaluation by means of an estimator of the mean-square error of prediction (EMSEP). This estimator remains valid even when parameters are not known, as usually assumed, but are estimated. EMSEP is applied to the choice of an indirect multitrait selection index at the F5 generation of a classical breeding scheme for soybeans. Best predictions for precocity are obtained by means of indices using only part of the available information.
2016-10-01
Reports an error in "When Does Making Detailed Predictions Make Predictions Worse" by Theresa F. Kelly and Joseph P. Simmons ( Journal of Experimental Psychology: General , Advanced Online Publication, Aug 8, 2016, np). In the article, the symbols in Figure 2 were inadvertently altered in production. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-37952-001.) In this article, we investigate whether making detailed predictions about an event worsens other predictions of the event. Across 19 experiments, 10,896 participants, and 407,045 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes useless or redundant information more accessible and thus more likely to be incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of events will and will not be susceptible to the negative effect of making detailed predictions. PsycINFO Database Record (c) 2016 APA, all rights reserved
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost-effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
Sleep and errors in a group of Australian hospital nurses at work and during the commute.
Dorrian, Jillian; Tolley, Carolyn; Lamond, Nicole; van den Heuvel, Cameron; Pincombe, Jan; Rogers, Ann E; Drew, Dawson
2008-09-01
There is a paucity of information regarding Australian nurses' sleep and fatigue levels, and whether they result in impairment. Forty-one Australian hospital nurses completed daily logbooks for one month recording work hours, sleep, sleepiness, stress, errors, near errors and observed errors (made by others). Nurses reported exhaustion, stress and struggling to remain awake (STR) at work during one in three shifts. Sleep was significantly reduced on workdays in general, and on workdays when an error was reported, relative to days off. The primary predictor of error was STR, followed by stress. The primary predictor of extreme drowsiness during the commute was also STR, followed by exhaustion and consecutive shifts. In turn, STR was predicted by exhaustion, prior sleep and shift length. Findings highlight the need for further attention to these issues to optimise the safety of nurses and patients in our hospitals, and the community at large on our roads.
Frontal Theta Links Prediction Errors to Behavioral Adaptation in Reinforcement Learning
Cavanagh, James F.; Frank, Michael J.; Klein, Theresa J.; Allen, John J.B.
2009-01-01
Investigations into action monitoring have consistently detailed a fronto-central voltage deflection in the Event-Related Potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the Feedback Related Negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Medio-frontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations: with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice. PMID:19969093
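A minimal sketch of the Q-learning model used to derive single-trial reward prediction errors is shown below; the learning rate, reward probabilities, and softmax choice rule are illustrative choices rather than the fitted values from the study.

```python
import numpy as np

# Minimal Q-learning sketch for a two-choice probabilistic reward task, used to
# generate single-trial reward prediction errors.
rng = np.random.default_rng(7)
n_trials, alpha = 200, 0.1
reward_prob = {0: 0.8, 1: 0.2}          # option -> probability of reward
Q = np.zeros(2)
prediction_errors = []

for _ in range(n_trials):
    # softmax choice between the two options
    p = np.exp(Q) / np.exp(Q).sum()
    choice = rng.choice(2, p=p)
    reward = float(rng.random() < reward_prob[choice])
    delta = reward - Q[choice]          # reward prediction error for this trial
    Q[choice] += alpha * delta
    prediction_errors.append(delta)

print(f"final Q-values: {Q.round(2)}, mean |RPE|: {np.mean(np.abs(prediction_errors)):.2f}")
```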
Quantifying uncertainty in climate change science through empirical information theory.
Majda, Andrew J; Gershgorin, Boris
2010-08-24
Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science including the prototype behavior of tracer gases such as CO(2). Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
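For Gaussian statistics, an information metric that combines a mean-error ("signal") term and a covariance-ratio ("dispersion") term can be written as the standard Gaussian relative entropy, sketched below. This is offered as an illustration of the kind of metric described, not as the paper's exact formula.

```python
import numpy as np

def gaussian_relative_entropy(mu_truth, cov_truth, mu_model, cov_model):
    """Kullback-Leibler divergence between two Gaussians, split into a
    'signal' (mean error) part and a 'dispersion' (covariance ratio) part.
    Standard formula, used here purely for illustration."""
    n = len(mu_truth)
    cov_model_inv = np.linalg.inv(cov_model)
    diff = mu_truth - mu_model
    signal = 0.5 * diff @ cov_model_inv @ diff
    ratio = cov_model_inv @ cov_truth
    dispersion = 0.5 * (np.trace(ratio) - n - np.log(np.linalg.det(ratio)))
    return signal, dispersion

# Invented example statistics standing in for "truth" and an AOS model.
mu_t, cov_t = np.array([0.0, 0.0]), np.eye(2)
mu_m, cov_m = np.array([0.3, -0.1]), np.diag([1.5, 0.8])
s, d = gaussian_relative_entropy(mu_t, cov_t, mu_m, cov_m)
print(f"signal = {s:.3f}, dispersion = {d:.3f}, total = {s + d:.3f}")
```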
Association between split selection instability and predictive error in survival trees.
Radespiel-Tröger, M; Gefeller, O; Rabenstein, T; Hothorn, T
2006-01-01
The aim was to evaluate split selection instability in six survival tree algorithms and its relationship with predictive error by means of a bootstrap study. We study the following algorithms: logrank statistic with multivariate p-value adjustment without pruning (LR), Kaplan-Meier distance of survival curves (KM), martingale residuals (MR), Poisson regression for censored data (PR), within-node impurity (WI), and exponential log-likelihood loss (XL). With the exception of LR, initial trees are pruned by using split-complexity, and final trees are selected by means of cross-validation. We employ a real dataset from a clinical study of patients with gallbladder stones. The predictive error is evaluated using the integrated Brier score for censored data. The relationship between split selection instability and predictive error is evaluated by means of box-percentile plots, covariate and cutpoint selection entropy, and cutpoint selection coefficients of variation, respectively, in the root node. We found a positive association between covariate selection instability and predictive error in the root node. LR yields the lowest predictive error, while KM and MR yield the highest predictive error. The predictive error of survival trees is related to split selection instability. Based on the low predictive error of LR, we recommend the use of this algorithm for the construction of survival trees. Unpruned survival trees with multivariate p-value adjustment can perform equally well compared to pruned trees. The analysis of split selection instability can be used to communicate the results of tree-based analyses to clinicians and to support the application of survival trees.
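Covariate selection entropy in the root node, one of the instability measures mentioned above, can be illustrated with a few lines of code; the list of covariates selected across bootstrap resamples is invented for demonstration.

```python
import numpy as np
from collections import Counter

# Toy illustration: record which covariate is chosen for the root-node split in
# each bootstrap resample, then compute the Shannon entropy of that
# distribution as a measure of split selection instability.
selections = ["age", "age", "bilirubin", "age", "stone_size", "age", "bilirubin"]
counts = np.array(list(Counter(selections).values()), dtype=float)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()
print(f"root-node covariate selection entropy: {entropy:.2f} bits")
```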
Recovering area-to-mass ratio of resident space objects through data mining
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-01-01
The area-to-mass ratio (AMR) of a resident space object (RSO) is an important parameter for improved space situation awareness capability due to its effect on the non-conservative forces including the atmosphere drag force and the solar radiation pressure force. However, information about AMR is often not provided in most space catalogs. The present paper investigates recovering the AMR information from the consistency error, which refers to the difference between the orbit predicted from an earlier estimate and the orbit estimated at the current epoch. A data mining technique, particularly the random forest (RF) method, is used to discover the relationship between the consistency error and the AMR. Using a simulation-based space catalog environment as the testbed, this paper demonstrates that the classification RF model can determine the RSO's category AMR and the regression RF model can generate continuous AMR values, both with good accuracies. Furthermore, the paper reveals that by recording additional information besides the consistency error, the RF model can estimate the AMR with even higher accuracy.
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette
2009-01-01
Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities for an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40kts at current day air traffic density with no additional separation distance buffer and at eight times current day density with no more than a 60% increase in separation distance buffer.
Artificial neural network implementation of a near-ideal error prediction controller
NASA Technical Reports Server (NTRS)
Mcvey, Eugene S.; Taylor, Lynore Denise
1992-01-01
A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited (i.e., not random), near-ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controllers include pattern recognition developments and fast-time simulation, which are applicable to the analysis of linear sampled-data systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using Neural Networks. Neural Networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. In using this type of error prediction, it is assumed that the system error responses are known for a particular input and modeled plant. These responses are used in the error prediction controller. An analysis was done of the general dynamic behavior that results from including a digital error predictor in a control loop, and the results were compared to those obtained with the near-ideal Neural Network error predictor. This analysis was done for second- and third-order systems.
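A toy version of the neural-network error predictor (a small back-propagation network trained to reproduce known error responses for a family of inputs) is sketched below. The plant's error response and the network size are assumed placeholders, not the systems studied in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a small back-propagation network to reproduce the known error response
# of a plant for a family of step inputs, then use the network's output as the
# predicted error. The response shape below is an assumed illustrative form.
rng = np.random.default_rng(2)
t = np.linspace(0, 5, 50)

def error_response(step_size):
    # assumed decaying oscillatory error response, for illustration only
    return step_size * np.exp(-t) * np.sin(3 * t)

steps = rng.uniform(0.5, 2.0, size=200)
X = steps.reshape(-1, 1)                       # input: recognized step amplitude
Y = np.array([error_response(s) for s in steps])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=2)
net.fit(X, Y)                                  # multi-output regression

predicted = net.predict([[1.2]])[0]            # predicted error trace for a new input
print(f"peak predicted error for step 1.2: {predicted.max():.3f}")
```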
Neonatal intensive care unit: predictive models for length of stay.
Bender, G J; Koestler, D; Ombao, H; McCourt, M; Alskinis, B; Rubin, L P; Padbury, J F
2013-02-01
Hospital length of stay (LOS) is important to administrators and families of neonates admitted to the neonatal intensive care unit (NICU). A prediction model for NICU LOS was developed using the predictors birth weight, gestational age, and two severity-of-illness tools: the score for neonatal acute physiology, perinatal extension (SNAPPE) and the morbidity assessment index for newborns (MAIN). Consecutive admissions (n=293) to a New England regional level III NICU were retrospectively collected. Multiple predictive models were compared for complexity and goodness-of-fit, coefficient of determination (R2) and predictive error. The optimal model was validated prospectively with consecutive admissions (n=615). Observed and expected LOS were compared. The MAIN models had the best Akaike's information criterion, the highest R2 (0.786), and the lowest predictive error. The best SNAPPE model underestimated LOS, with substantial variability, yet was fairly well calibrated by birthweight category. LOS was longer in the prospective cohort than the retrospective cohort, without differences in birth weight, gestational age, MAIN or SNAPPE. LOS prediction is improved by accounting for severity of illness in the first week of life, beyond factors known at birth. Prospective validation of both MAIN and SNAPPE models is warranted.
Blood glucose level prediction based on support vector regression using mobile platforms.
Reymann, Maximilian P; Dorschky, Eva; Groh, Benjamin H; Martindale, Christine; Blank, Peter; Eskofier, Bjoern M
2016-08-01
The correct treatment of diabetes is vital to a patient's health: Staying within defined blood glucose levels prevents dangerous short- and long-term effects on the body. Mobile devices informing patients about their future blood glucose levels could enable them to take counter-measures to prevent hypoglycemic or hyperglycemic periods. Previous work addressed this challenge by predicting the blood glucose levels using regression models. However, these approaches required a physiological model, representing the human body's response to insulin and glucose intake, or were not directly applicable to mobile platforms (smart phones, tablets). In this paper, we propose an algorithm for mobile platforms to predict blood glucose levels without the need for a physiological model. Using an online software simulator program, we trained a Support Vector Regression (SVR) model and exported the parameter settings to our mobile platform. The prediction accuracy of our mobile platform was evaluated with pre-recorded data of a type 1 diabetes patient. The blood glucose level was predicted with an error of 19% compared to the true value. Considering the permitted error of commercially used devices of 15%, our algorithm is the basis for further development of mobile prediction algorithms.
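The training stage can be sketched as below: an SVR is fit on a simulated glucose trace to predict the level a fixed horizon ahead from a short history window. The feature window, horizon, kernel settings, and simulated data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR

# Simulated glucose trace (mg/dL) sampled every 5 minutes over one day;
# the sinusoidal pattern and noise level are arbitrary stand-ins.
rng = np.random.default_rng(3)
t = np.arange(0, 24 * 60, 5)
glucose = 120 + 30 * np.sin(2 * np.pi * t / (6 * 60)) + rng.normal(0, 5, t.size)

window, horizon = 6, 6                             # 30 min history, 30 min ahead
X = np.array([glucose[i:i + window] for i in range(len(glucose) - window - horizon)])
y = glucose[window + horizon:]

model = SVR(kernel="rbf", C=10.0, epsilon=1.0).fit(X, y)
pred = model.predict(X[-5:])
rel_err = np.abs(pred - y[-5:]) / y[-5:]
print(f"mean relative error on the last samples: {100 * rel_err.mean():.1f}%")
```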
Hui, Shisheng; Chen, Lizhang; Liu, Fuqiang; Ouyang, Yanhao
2015-12-01
To establish a multiplicative seasonal autoregressive integrated moving average (ARIMA) model for mumps incidence in Hunan province, and to use the model to predict the mumps incidence from May 2015 to April 2016 in Hunan province. The data were downloaded from the "Disease Surveillance Information Reporting Management System" in the China Information System for Disease Control and Prevention. The monthly incidence of mumps in Hunan province was collected from January 2004 to April 2015 according to the onset date, including clinically diagnosed and laboratory-confirmed cases. The predictive analysis method was the ARIMA model in SPSS 18.0 software; the ARIMA model was established on the monthly incidence of mumps from January 2004 to April 2014, and the data from May 2014 to April 2015 were used as the testing sample. The Box-Ljung Q test was used to test the residuals of the selected model. Finally, the monthly incidence of mumps from May 2015 to April 2016 was predicted by the model. The peak months of the mumps incidence were May to July every year, and the secondary peak months were November to January of the following year, during January 2004 to April 2014 in Hunan province. After the data sequence was handled by smoothing, model identification, establishment and diagnosis, the ARIMA(2,1,1)×(0,1,1)(12) model was established. The Box-Ljung Q test found Q=8.40, P=0.868, so the residual sequence was white noise, the information in the data was extracted completely, and the model was reasonable. The R(2) value of the model fitting degree was 0.871, and the value of BIC was -1.646, while the average absolute error between the predicted and actual values was 0.025/100 000 and the average relative error was 13.004%. The relative error of the model for the prediction of the mumps incidence in Hunan province was small, and the predictions were reliable. Using the ARIMA(2,1,1)×(0,1,1)(12) model to predict the mumps incidence from May 2015 to April 2016 in Hunan province, the peak months of the mumps incidence were May to July, and the secondary peak months were November to January of the following year; the incidence of the peak month was close to that of the same period. The ARIMA(2,1,1)×(0,1,1)(12) model fitted the trend of mumps incidence in Hunan province well, and it has some practical value for the prevention and control of the disease.
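Fitting a seasonal ARIMA of the reported order can be sketched with statsmodels as below; the synthetic monthly series stands in for the Hunan surveillance data, so the forecasts are purely illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly incidence series (per 100,000) with a May-July peak,
# standing in for the January 2004 - April 2015 surveillance data.
rng = np.random.default_rng(5)
months = pd.date_range("2004-01", "2015-04", freq="MS")
seasonal = 1.5 + np.sin(2 * np.pi * (months.month - 5) / 12)
incidence = np.clip(seasonal + rng.normal(0, 0.2, len(months)), 0, None)
series = pd.Series(incidence, index=months)

# Seasonal ARIMA(2,1,1)x(0,1,1) with period 12, the order reported above.
model = SARIMAX(series, order=(2, 1, 1), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)

# Forecast the next 12 months (May 2015 - April 2016 in this synthetic setup).
forecast = fit.get_forecast(steps=12)
print(forecast.predicted_mean.round(3))
```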
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by applying it to copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), the average standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and the average maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Shetty, N; Løvendahl, P; Lund, M S; Buitenhuis, A J
2017-01-01
The present study explored the effectiveness of Fourier transform mid-infrared (FT-IR) spectral profiles as a predictor for dry matter intake (DMI) and residual feed intake (RFI). The partial least squares regression method was used to develop the prediction models. The models were validated using different external test sets, one randomly leaving out 20% of the records (validation A), the second randomly leaving out 20% of cows (validation B), and a third (for DMI prediction models) randomly leaving out one cow (validation C). The data included 1,044 records from 140 cows; 97 were Danish Holstein and 43 Danish Jersey. Results showed better accuracies for validation A compared with other validation methods. Milk yield (MY) contributed largely to DMI prediction; MY explained 59% of the variation and the validated root mean square error of prediction (RMSEP) was 2.24kg. The model was improved by adding live weight (LW) as an additional predictor trait, where the accuracy (R2) increased from 0.59 to 0.72 and the error (RMSEP) decreased from 2.24 to 1.83kg. When only the milk FT-IR spectral profile was used in DMI prediction, a lower prediction ability was obtained, with R2=0.30 and RMSEP=2.91kg. However, once the spectral information was added, along with MY and LW as predictors, model accuracy improved: R2 increased to 0.81 and RMSEP decreased to 1.49kg. Prediction accuracies of RFI changed throughout lactation. The RFI prediction model for the early-lactation stage was better compared with across lactation or mid- and late-lactation stages, with R2=0.46 and RMSEP=1.70. The most important spectral wavenumbers that contributed to DMI and RFI prediction models included fat, protein, and lactose peaks. Comparable prediction results were obtained when using infrared-predicted fat, protein, and lactose instead of full spectra, indicating that FT-IR spectral data do not add significant new information to improve DMI and RFI prediction models. Therefore, in practice, if full FT-IR spectral data are not stored, it is possible to achieve similar DMI or RFI prediction results based on standard milk control data. For DMI, the milk fat region was responsible for the major variation in milk spectra; for RFI, the major variation in milk spectra was within the milk protein region. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
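A bare-bones version of such a partial least squares model is sketched below, combining a synthetic spectral matrix with milk yield and live weight to predict DMI and reporting a validation RMSEP. Dimensions, component count, and data are placeholders, not the study's records.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins with the study's approximate record count: FT-IR spectra
# plus milk yield (MY) and live weight (LW) as predictors of DMI (kg/day).
rng = np.random.default_rng(11)
n_records, n_wavenumbers = 1044, 300
spectra = rng.normal(size=(n_records, n_wavenumbers))
milk_yield = rng.normal(30, 5, size=(n_records, 1))
live_weight = rng.normal(600, 50, size=(n_records, 1))
X = np.hstack([spectra, milk_yield, live_weight])
dmi = 0.4 * milk_yield[:, 0] + 0.01 * live_weight[:, 0] + rng.normal(0, 1.5, n_records)

# Hold out 20% of records, in the spirit of validation A.
X_tr, X_te, y_tr, y_te = train_test_split(X, dmi, test_size=0.2, random_state=11)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print(f"validation RMSEP: {rmsep:.2f} kg")
```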
NASA Astrophysics Data System (ADS)
Kollat, J. B.; Reed, P. M.
2009-12-01
This study contributes the ASSIST (Adaptive Strategies for Sampling in Space and Time) framework for improving long-term groundwater monitoring decisions across space and time while accounting for the influences of systematic model errors (or predictive bias). The ASSIST framework combines contaminant flow-and-transport modeling, bias-aware ensemble Kalman filtering (EnKF) and many-objective evolutionary optimization. Our goal in this work is to provide decision makers with a fuller understanding of the information tradeoffs they must confront when performing long-term groundwater monitoring network design. Our many-objective analysis considers up to 6 design objectives simultaneously and consequently synthesizes prior monitoring network design methodologies into a single, flexible framework. This study demonstrates the ASSIST framework using a tracer study conducted within a physical aquifer transport experimental tank located at the University of Vermont. The tank tracer experiment was extensively sampled to provide high resolution estimates of tracer plume behavior. The simulation component of the ASSIST framework consists of stochastic ensemble flow-and-transport predictions using ParFlow coupled with the Lagrangian SLIM transport model. The ParFlow and SLIM ensemble predictions are conditioned with tracer observations using a bias-aware EnKF. The EnKF allows decision makers to enhance plume transport predictions in space and time in the presence of uncertain and biased model predictions by conditioning them on uncertain measurement data. In this initial demonstration, the position and frequency of sampling were optimized to: (i) minimize monitoring cost, (ii) maximize information provided to the EnKF, (iii) minimize failure to detect the tracer, (iv) maximize the detection of tracer flux, (v) minimize error in quantifying tracer mass, and (vi) minimize error in quantifying the moment of the tracer plume. The results demonstrate that the many-objective problem formulation provides a tremendous amount of information for decision makers. Specifically our many-objective analysis highlights the limitations and potentially negative design consequences of traditional single and two-objective problem formulations. These consequences become apparent through visual exploration of high-dimensional tradeoffs and the identification of regions with interesting compromise solutions. The prediction characteristics of these compromise designs are explored in detail, as well as their implications for subsequent design decisions in both space and time.
Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan
2016-11-15
Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations: a basic ECM and three models with additional terms to represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best: it had the lowest mean error, explained the most variability (R2 = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than that of any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pegion, K.; DelSole, T. M.; Becker, E.; Cicerone, T.
2016-12-01
Predictability represents the upper limit of prediction skill if we had an infinite member ensemble and a perfect model. It is an intrinsic limit of the climate system associated with the chaotic nature of the atmosphere. Producing a forecast system that can make predictions very near to this limit is the ultimate goal of forecast system development. Estimates of predictability together with calculations of current prediction skill are often used to define the gaps in our prediction capabilities on subseasonal to seasonal timescales and to inform the scientific issues that must be addressed to build the next forecast system. Quantification of the predictability is also important for providing a scientific basis for relaying to stakeholders what kind of climate information can be provided to inform decision-making and what kind of information is not possible given the intrinsic predictability of the climate system. One challenge with predictability estimates is that different prediction systems can give different estimates of the upper limit of skill. How do we know which estimate of predictability is most representative of the true predictability of the climate system? Previous studies have used the spread-error relationship and the autocorrelation to evaluate the fidelity of the signal and noise estimates. Using a multi-model ensemble prediction system, we can quantify whether these metrics accurately indicate an individual model's ability to properly estimate the signal, noise, and predictability. We use this information to identify the best estimates of predictability for 2-meter temperature, precipitation, and sea surface temperature from the North American Multi-model Ensemble and compare with current skill to indicate the regions with potential for improving skill.
Attention and prediction in human audition: a lesson from cognitive psychophysiology
Schröger, Erich; Marzecová, Anna; SanMiguel, Iria
2015-01-01
Attention is a hypothetical mechanism in the service of perception that facilitates the processing of relevant information and inhibits the processing of irrelevant information. Prediction is a hypothetical mechanism in the service of perception that considers prior information when interpreting the sensorial input. Although both (attention and prediction) aid perception, they are rarely considered together. Auditory attention typically yields enhanced brain activity, whereas auditory prediction often results in attenuated brain responses. However, when strongly predicted sounds are omitted, brain responses to silence resemble those elicited by sounds. Studies jointly investigating attention and prediction revealed that these different mechanisms may interact, e.g. attention may magnify the processing differences between predicted and unpredicted sounds. Following the predictive coding theory, we suggest that prediction relates to predictions sent down from predictive models housed in higher levels of the processing hierarchy to lower levels and attention refers to gain modulation of the prediction error signal sent up to the higher level. As predictions encode contents and confidence in the sensory data, and as gain can be modulated by the intention of the listener and by the predictability of the input, various possibilities for interactions between attention and prediction can be unfolded. From this perspective, the traditional distinction between bottom-up/exogenous and top-down/endogenous driven attention can be revisited and the classic concepts of attentional gain and attentional trace can be integrated. PMID:25728182
NASA Astrophysics Data System (ADS)
Norton, Andrew S.
An integral component of managing game species is an understanding of population dynamics and relative abundance. Harvest data are frequently used to estimate abundance of white-tailed deer. Unless harvest age-structure is representative of the population age-structure and harvest vulnerability remains constant from year to year, these data alone are of limited value. Additional model structure and auxiliary information have accommodated this shortcoming. Specifically, integrated age-at-harvest (AAH) state-space population models can formally combine multiple sources of data, and regularization via hierarchical model structure can increase flexibility of model parameters. I collected known-fate data, which I evaluated and used to inform trends in survival parameters for an integrated AAH model. I used temperature and snow depth covariates to predict survival outside of the hunting season, and opening weekend temperature and percent of corn harvest covariates to predict hunting season survival. When auxiliary empirical data were unavailable for the AAH model, moderately informative priors provided sufficient information for convergence and parameter estimates. The AAH model was most sensitive to errors in initial abundance, but this error was calibrated after 3 years. Among vital rates, the AAH model was most sensitive to reporting rates (percentage of mortality during the hunting season related to harvest). The AAH model, using only harvest data, was able to track changing abundance trends due to changes in survival rates even when prior models did not inform these changes (i.e. prior models were constant when truth varied). I also compared AAH model results with estimates from the Wisconsin Department of Natural Resources (WIDNR). Trends in abundance estimates from both models were similar, although AAH model predictions were systematically higher than WIDNR estimates in the East study area. When I incorporated auxiliary information (i.e. integrated AAH model) about survival outside the hunting season from known-fate data, predicted trends appeared more closely related to what was expected. Disagreements between the AAH model and WIDNR estimates in the East were likely related to biased predictions for reporting and survival rates from the AAH model.
Model-based learning and the contribution of the orbitofrontal cortex to the model-free world.
McDannald, Michael A; Takahashi, Yuji K; Lopatina, Nina; Pietras, Brad W; Jones, Josh L; Schoenbaum, Geoffrey
2012-04-01
Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Ye, Min; Nagar, Swati; Korzekwa, Ken
2015-01-01
Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding, and blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0–t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%), and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. PMID:26531057
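The 2-fold acceptance criterion used above can be checked with a short script. The following is a minimal sketch (Python; the predicted and observed values are hypothetical and not taken from the 22-drug dataset) of how the fraction of drugs within 2-fold error would be computed for any parameter.

    import numpy as np

    def fraction_within_2fold(predicted, observed):
        """Fraction of compounds whose prediction is within 2-fold of the observation,
        the criterion applied above to t1/2, Cmax, AUC, CLh, MRT and Vss."""
        fold_error = np.maximum(predicted / observed, observed / predicted)
        return np.mean(fold_error <= 2.0)

    # hypothetical predicted vs. observed hepatic clearances (L/h) for four drugs
    predicted = np.array([4.2, 0.8, 12.0, 30.0])
    observed = np.array([3.0, 1.1, 10.0, 70.0])
    print(fraction_within_2fold(predicted, observed))   # 0.75: three of four within 2-fold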
The Effect of Information Level on Human-Agent Interaction for Route Planning
2015-12-01
[Report excerpt, Fig. 4, Experiment 1] Regression results for time spent at the decision point (DP) predicting posttest trust group membership in the high level-of-information (LOI) condition, and decision time (DT) by pretest trust group membership; bars denote standard error (SE). DT at DP was evaluated as a predictor of posttest trust group: linear regression indicated that DT at DP was not a significant predictor of posttest trust for the Low or the Medium LOI conditions.
Model parameter-related optimal perturbations and their contributions to El Niño prediction errors
NASA Astrophysics Data System (ADS)
Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua
2018-04-01
Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling ({α _τ } ), and the other involves the thermocline effect on the SST ({α _{Te}} ). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the {α _τ } component is mainly concentrated in the central equatorial Pacific, and the {α _{Te}} component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.
Handling Trajectory Uncertainties for Airborne Conflict Management
NASA Technical Reports Server (NTRS)
Barhydt, Richard; Doble, Nathan A.; Karr, David; Palmer, Michael T.
2005-01-01
Airborne conflict management is an enabling capability for NASA's Distributed Air-Ground Traffic Management (DAG-TM) concept. DAG-TM has the goal of significantly increasing capacity within the National Airspace System, while maintaining or improving safety. Under DAG-TM, autonomous aircraft maintain separation from each other and from managed aircraft unequipped for autonomous flight. NASA Langley Research Center has developed the Autonomous Operations Planner (AOP), an onboard decision support system that provides airborne conflict management (ACM) and strategic flight planning support for autonomous aircraft pilots. The AOP performs conflict detection, prevention, and resolution with respect to nearby traffic aircraft and area hazards. Traffic trajectory information is assumed to be provided by Automatic Dependent Surveillance-Broadcast (ADS-B). Reliable trajectory prediction is a key capability for providing effective ACM functions. Trajectory uncertainties due to environmental effects, differences in aircraft systems and performance, and unknown intent information lead to prediction errors that can adversely affect AOP performance. To accommodate these uncertainties, the AOP has been enhanced to create cross-track, vertical, and along-track buffers along the predicted trajectories of both ownship and traffic aircraft. These buffers will be structured based on prediction errors noted from previous simulations, such as a recent Joint Experiment between NASA Ames and Langley Research Centers, and from other outside studies. Currently defined ADS-B parameters related to navigation capability, trajectory type, and path conformance will be used to support the algorithms that generate the buffers.
NASA Astrophysics Data System (ADS)
Simmons, B. E.
1981-08-01
This report derives equations predicting satellite ephemeris error as a function of measurement errors of space-surveillance sensors. These equations lend themselves to rapid computation with modest computer resources. They are applicable over prediction times such that measurement errors, rather than uncertainties of atmospheric drag and of Earth shape, dominate in producing ephemeris error. This report describes the specialization of these equations underlying the ANSER computer program, SEEM (Satellite Ephemeris Error Model). The intent is that this report be of utility to users of SEEM for interpretive purposes, and to computer programmers who may need a mathematical point of departure for limited generalization of SEEM.
Prediction error induced motor contagions in human behaviors.
Ikegami, Tsuyoshi; Ganesh, Gowrishankar; Takeuchi, Tatsuya; Nakamoto, Hiroki
2018-05-29
Motor contagions refer to implicit effects on one's actions induced by observed actions. Motor contagions are believed to be induced simply by action observation and cause an observer's action to become similar to the action observed. In contrast, here we report a new motor contagion that is induced only when the observation is accompanied by prediction errors - differences between actions one observes and those he/she predicts or expects. In two experiments, one on whole-body baseball pitching and another on simple arm reaching, we show that the observation of the same action induces distinct motor contagions, depending on whether prediction errors are present or not. In the absence of prediction errors, as in previous reports, participants' actions changed to become similar to the observed action, while in the presence of prediction errors, their actions changed to diverge away from it, suggesting distinct effects of action observation and action prediction on human actions. © 2018, Ikegami et al.
Schleier, Jerome J.; Peterson, Robert K.D.; Irvine, Kathryn M.; Marshall, Lucy M.; Weaver, David K.; Preftakes, Collin J.
2012-01-01
One of the more effective ways of managing high densities of adult mosquitoes that vector human and animal pathogens is ultra-low-volume (ULV) aerosol applications of insecticides. The U.S. Environmental Protection Agency uses models that are not validated for ULV insecticide applications, together with exposure assumptions, to perform their human and ecological risk assessments. Currently, there is no validated model that can accurately predict deposition of insecticides applied using ULV technology for adult mosquito management. In addition, little is known about the deposition and drift of small droplets like those used under conditions encountered during ULV applications. The objective of this study was to perform field studies to measure environmental concentrations of insecticides and to develop a validated model to predict the deposition of ULV insecticides. The final regression model was selected by minimizing the Bayesian Information Criterion, and its prediction performance was evaluated using k-fold cross validation. The coefficients for formulation density and for the density-by-CMD interaction were the largest in the model. The results showed that as density of the formulation decreases, deposition increases. The interaction of density and CMD showed that higher density formulations and larger droplets resulted in greater deposition. These results are supported by the aerosol physics literature. A k-fold cross validation demonstrated that the mean square error of the selected regression model is not biased, and the mean square error and mean square prediction error indicated good predictive ability.
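The model-selection and validation steps described above (BIC minimization followed by k-fold cross-validation) can be sketched as follows. This is not the authors' code: the candidate predictors, the exhaustive subset search, and the synthetic data are simplified illustrations only.

    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_score

    def bic(model, X, y):
        """Gaussian BIC of an ordinary least-squares fit."""
        n = len(y)
        rss = np.sum((y - model.predict(X)) ** 2)
        k = X.shape[1] + 1                              # coefficients plus intercept
        return n * np.log(rss / n) + k * np.log(n)

    def select_by_bic(X, y):
        """Score every predictor subset and keep the one with the lowest BIC."""
        best = None
        for r in range(1, X.shape[1] + 1):
            for cols in combinations(range(X.shape[1]), r):
                cols = list(cols)
                m = LinearRegression().fit(X[:, cols], y)
                score = bic(m, X[:, cols], y)
                if best is None or score < best[0]:
                    best = (score, cols)
        return best

    # toy data: density, CMD, and their interaction as candidate predictors of deposition
    rng = np.random.default_rng(1)
    density, cmd = rng.uniform(0.8, 1.2, 200), rng.uniform(5.0, 30.0, 200)
    X = np.column_stack([density, cmd, density * cmd])
    y = 2.0 - 1.5 * density + 0.1 * density * cmd + rng.normal(scale=0.2, size=200)

    score, cols = select_by_bic(X, y)
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    cv_mse = -cross_val_score(LinearRegression(), X[:, cols], y,
                              scoring="neg_mean_squared_error", cv=cv).mean()
    print(cols, round(cv_mse, 4))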
Bardeen, Matthew
2017-01-01
Water stress, which affects yield and wine quality, is often evaluated using the midday stem water potential (Ψstem). However, this measurement is acquired on a per-plant basis and does not account for the spatial variability of vine water status. Multispectral cameras mounted on an unmanned aerial vehicle (UAV) can capture the variability of vine water stress at the whole-field scale. It has been reported that conventional multispectral indices (CMI) that use information between 500–800 nm do not accurately predict plant water status since they are not sensitive to water content. The objective of this study was to develop artificial neural network (ANN) models derived from multispectral images to predict the Ψstem spatial variability of a drip-irrigated Carménère vineyard in Talca, Maule Region, Chile. The coefficient of determination (R2) obtained between ANN outputs and ground-truth measurements of Ψstem was between 0.56 and 0.87, with the best performance observed for the model that included the bands 550, 570, 670, 700 and 800 nm. Validation analysis indicated that the ANN model could estimate Ψstem with a mean absolute error (MAE) of 0.1 MPa, root mean square error (RMSE) of 0.12 MPa, and relative error (RE) of −9.1%. For the validation of the CMI, the MAE, RMSE and RE values were between 0.26–0.27 MPa, 0.32–0.34 MPa and −24.2 to −25.6%, respectively. PMID:29084169
Poblete, Tomas; Ortega-Farías, Samuel; Moreno, Miguel Angel; Bardeen, Matthew
2017-10-30
Water stress, which affects yield and wine quality, is often evaluated using the midday stem water potential (Ψ stem ). However, this measurement is acquired on a per-plant basis and does not account for the spatial variability of vine water status. Multispectral cameras mounted on an unmanned aerial vehicle (UAV) can capture the variability of vine water stress at the whole-field scale. It has been reported that conventional multispectral indices (CMI) that use information between 500-800 nm do not accurately predict plant water status since they are not sensitive to water content. The objective of this study was to develop artificial neural network (ANN) models derived from multispectral images to predict the Ψ stem spatial variability of a drip-irrigated Carménère vineyard in Talca, Maule Region, Chile. The coefficient of determination (R²) obtained between ANN outputs and ground-truth measurements of Ψ stem was between 0.56 and 0.87, with the best performance observed for the model that included the bands 550, 570, 670, 700 and 800 nm. Validation analysis indicated that the ANN model could estimate Ψ stem with a mean absolute error (MAE) of 0.1 MPa, root mean square error (RMSE) of 0.12 MPa, and relative error (RE) of -9.1%. For the validation of the CMI, the MAE, RMSE and RE values were between 0.26-0.27 MPa, 0.32-0.34 MPa and -24.2 to -25.6%, respectively.
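The validation statistics quoted in both versions of this abstract (MAE, RMSE, RE) can be reproduced with a few lines. The sketch below is illustrative only, uses hypothetical Ψstem values, and assumes RE is the signed mean error expressed as a percentage of the mean observation, which may differ slightly from the authors' exact definition.

    import numpy as np

    def validation_metrics(predicted, observed):
        """MAE, RMSE and relative error (%) as used to validate the ANN and CMI models."""
        err = predicted - observed
        mae = np.mean(np.abs(err))
        rmse = np.sqrt(np.mean(err ** 2))
        re = 100.0 * np.mean(err) / np.mean(observed)   # signed relative error in percent
        return mae, rmse, re

    # hypothetical midday stem water potential values (MPa); measured values are negative
    measured = np.array([-0.85, -1.10, -0.95, -1.30])
    predicted = np.array([-0.90, -1.05, -1.05, -1.20])
    print(validation_metrics(predicted, measured))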
Schematic knowledge changes what judgments of learning predict in a source memory task.
Konopka, Agnieszka E; Benjamin, Aaron S
2009-01-01
Source monitoring can be influenced by information that is external to the study context, such as beliefs and general knowledge (Johnson, Hashtroudi, & Lindsay, 1993). We investigated the extent to which metamnemonic judgments predict memory for items and sources when schematic information about the sources is or is not provided at encoding. Participants made judgments of learning (JOLs) to statements presented by two speakers and were informed of the occupation of each speaker either before or after the encoding session. Replicating earlier work, prior knowledge decreased participants' tendency to erroneously attribute statements to schematically consistent but episodically incorrect speakers. The origin of this effect can be understood by examining the relationship between JOLs and performance: JOLs were equally predictive of item and source memory in the absence of prior knowledge, but were exclusively predictive of source memory when participants knew of the relationship between speakers and statements during study. Background knowledge determines the information that people solicit in service of metamnemonic judgments, suggesting that these judgments reflect control processes during encoding that reduce schematic errors.
NASA Astrophysics Data System (ADS)
Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan
2017-06-01
Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.
Predictability of CFSv2 in the tropical Indo-Pacific region, at daily and subseasonal time scales
NASA Astrophysics Data System (ADS)
Krishnamurthy, V.
2018-06-01
The predictability of a coupled climate model is evaluated at daily and intraseasonal time scales in the tropical Indo-Pacific region during boreal summer and winter. This study has assessed the daily retrospective forecasts of the Climate Forecast System version 2 from the National Centers for Environmental Prediction for the period 1982-2010. The growth of errors in the forecasts of daily precipitation, monsoon intraseasonal oscillation (MISO) and the Madden-Julian oscillation (MJO) is studied. The seasonal cycle of the daily climatology of precipitation is reasonably well predicted except for the underestimation during the peak of summer. The anomalies follow the typical pattern of error growth in nonlinear systems and show no difference between summer and winter. The initial errors in all the cases are found to be in the nonlinear phase of the error growth. The doubling time of small errors is estimated by applying the Lorenz error growth formula. For summer and winter, the doubling time of the forecast errors is in the range of 4-7 and 5-14 days while the doubling time of the predictability errors is 6-8 and 8-14 days, respectively. The doubling time in MISO during the summer and MJO during the winter is in the range of 12-14 days, indicating higher predictability and providing optimism for long-range prediction. There is no significant difference in the growth of forecast errors originating from different phases of MISO and MJO, although the prediction of the active phase seems to be slightly better.
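The error-doubling-time estimate referred to above can be illustrated with a small sketch that fits an exponential to the small-error portion of an error growth curve (doubling time equals ln 2 divided by the fitted growth rate). The curve, the cutoff, and the numbers below are hypothetical, and the simple exponential fit is a stand-in for the Lorenz formula applied in the study.

    import numpy as np

    def doubling_time(lead_days, error, small_error_cut=0.5):
        """Estimate the error-doubling time from the small-error (quasi-exponential)
        part of an error growth curve, assuming E(t) ~ E0 * exp(lambda * t)."""
        mask = error < small_error_cut * error.max()      # keep the early, small-error regime
        lam, _ = np.polyfit(lead_days[mask], np.log(error[mask]), 1)
        return np.log(2.0) / lam

    # hypothetical RMS error of daily precipitation forecasts (arbitrary units)
    lead = np.arange(1, 16)
    err = 0.2 * np.exp(0.12 * lead) / (1 + 0.02 * np.exp(0.12 * lead))   # saturating growth
    print(round(doubling_time(lead, err), 1))   # roughly ln(2)/0.12, i.e. about 6 days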
A Physical Validation Program for the GPM Mission
NASA Technical Reports Server (NTRS)
Smith, Eric A.
2003-01-01
The GPM mission is currently planned for start in the late 2007 - early 2008 time frame. Its main scientific goal is to help answer pressing scientific problems arising within the context of global and regional water cycling. These problems cut across a hierarchy of scales and include climate-water cycle interactions, techniques for improving weather and climate predictions, and better methods for combining observed precipitation with hydrometeorological prediction models for applications to hazardous flood-producing storms, seasonal flood and drought conditions, and fresh water resource assessments. The GPM mission will expand the scope of precipitation measurement through the use of a constellation of some 9 satellites, one of which will be an advanced TRMM-like core satellite carrying a dual-frequency Ku-Ka band precipitation radar and an advanced, multifrequency passive microwave radiometer with vertical-horizontal polarization discrimination. The other constellation members will include new dedicated satellites and co-existing operational/research satellites carrying similar (but not identical) passive microwave radiometers. The goal of the constellation is to achieve approximately 3-hour sampling at any spot on the globe -- continuously. The constellation's orbit architecture will consist of a mix of sun-synchronous and non-sun-synchronous satellites with the core satellite providing measurements of cloud-precipitation microphysical processes plus calibration-quality rainrate retrievals to be used with the other retrieval information to ensure bias-free constellation coverage. A major requirement before the retrieved rainfall information generated by the GPM mission can be used effectively by prognostic models to improve weather forecasts, hydrometeorological forecasts, and climate model reanalysis simulations is a capability to quantify the error characteristics of the retrievals. A solution to this problem has eluded past precipitation missions because of the lack of suitable error modeling systems incorporated into the validation programs and data distribution systems. An overview of how NASA intends to overcome this problem for the GPM mission using a physically-based error modeling approach within a multi-faceted validation program is described. The solution is to first identify specific user requirements and then determine the most stringent of these requirements that embodies all essential error characterization information needed by the entire user community. In the context of NASA's scientific agenda for the GPM mission, the most stringent user requirement is found within the data assimilation community. The fundamental theory of data assimilation vis-a-vis ingesting satellite precipitation information into the pre-forecast initializations is based on quantifying the conditional bias and precision errors of individual rain retrievals, and the space-time structure of the precision error (i.e., the spatial-temporal error covariance). By generating the hardware and software capability to produce this information in a near real-time fashion, and to couple the derived quantitative error properties to the actual retrieved rainrates, all key validation users can be satisfied. The talk will describe the essential components of the hardware and software systems needed to generate such near real-time error properties, as well as the various paradigm shifts needed within the validation community to produce a validation program relevant to the precipitation user community.
NASA Astrophysics Data System (ADS)
Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.
2015-01-01
Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.
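A heavily simplified sketch of the surrogate-to-anatomy idea follows: one linear correspondence model per axis is fitted to 4DCT training phases and then applied to a new surrogate trace, with the mean tumor centroid error as the score. The model in the paper is volumetric rather than per-axis linear, and every value below is a hypothetical assumption for illustration.

    import numpy as np

    rng = np.random.default_rng(2)

    # Training data from ten 4DCT phases (all values hypothetical):
    # external surrogate amplitude (mm) and internal tumor centroid position (mm).
    surrogate_4dct = np.array([0.0, 1.5, 3.2, 4.8, 6.0, 5.1, 3.6, 2.0, 0.8, 0.2])
    gain = np.array([8.0, 2.0, 0.5]) / 6.0          # SI, AP, LR motion per mm of surrogate
    centroid_4dct = surrogate_4dct[:, None] * gain + rng.normal(scale=0.2, size=(10, 3))

    # Patient-specific correspondence model: one linear fit per anatomical axis
    # (a first-order stand-in for the full volumetric motion model).
    coeffs = [np.polyfit(surrogate_4dct, centroid_4dct[:, k], 1) for k in range(3)]

    def predict_centroid(s):
        return np.array([np.polyval(c, s) for c in coeffs])

    # Apply to a new surrogate trace observed during treatment and score the error
    surrogate_tx = np.array([0.5, 2.8, 5.5, 4.1, 1.2])
    true_centroid = surrogate_tx[:, None] * gain     # ground truth from a digital phantom
    predicted = np.array([predict_centroid(s) for s in surrogate_tx])
    print(np.mean(np.linalg.norm(predicted - true_centroid, axis=1)))   # mean centroid error (mm)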
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2014-11-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR as indicated by high coefficient of determination (R2; 0.96), low bias (0.02 μg m-3, all μg m-3 values based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
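A compact sketch of the calibration step (partial least squares regression of absorbance spectra against artifact-corrected TOR OC, evaluated on a held-out test set) is shown below. It is not the study's code: the spectra are synthetic, the split is random rather than by site and date, and the bias and error statistics are simple stand-ins for the metrics reported above.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)

    # Synthetic stand-ins: 794 absorbance spectra (500 wavenumber channels here)
    # and their artifact-corrected TOR OC concentrations (ug m-3).
    n_samples, n_channels = 794, 500
    spectra = rng.normal(size=(n_samples, n_channels))
    true_loadings = rng.normal(size=n_channels) * (rng.random(n_channels) < 0.05)
    tor_oc = spectra @ true_loadings + rng.normal(scale=0.1, size=n_samples) + 1.0

    # Split into calibration and test sets (random here only to keep the sketch short)
    X_cal, X_test, y_cal, y_test = train_test_split(spectra, tor_oc, test_size=0.3, random_state=0)

    pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
    y_hat = pls.predict(X_test).ravel()

    bias = np.mean(y_hat - y_test)                    # mean signed error
    error = np.median(np.abs(y_hat - y_test))         # a simple error summary
    r2 = np.corrcoef(y_hat, y_test)[0, 1] ** 2
    print(f"R2={r2:.2f}  bias={bias:.3f}  error={error:.3f}")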
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high-quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite-predicted area - reference area) and relative error [(satellite-predicted area - reference area) / reference area] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
When is an error not a prediction error? An electrophysiological investigation.
Holroyd, Clay B; Krigolson, Olave E; Baker, Robert; Lee, Seung; Gibson, Jessica
2009-03-01
A recent theory holds that the anterior cingulate cortex (ACC) uses reinforcement learning signals conveyed by the midbrain dopamine system to facilitate flexible action selection. According to this position, the impact of reward prediction error signals on ACC modulates the amplitude of a component of the event-related brain potential called the error-related negativity (ERN). The theory predicts that ERN amplitude is monotonically related to the expectedness of the event: It is larger for unexpected outcomes than for expected outcomes. However, a recent failure to confirm this prediction has called the theory into question. In the present article, we investigated this discrepancy in three trial-and-error learning experiments. All three experiments provided support for the theory, but the effect sizes were largest when an optimal response strategy could actually be learned. This observation suggests that ACC utilizes dopamine reward prediction error signals for adaptive decision making when the optimal behavior is, in fact, learnable.
Dissociable effects of surprising rewards on learning and memory.
Rouhani, Nina; Norman, Kenneth A; Niv, Yael
2018-03-19
Reward-prediction errors track the extent to which rewards deviate from expectations, and aid in learning. How do such errors in prediction interact with memory for the rewarding episode? Existing findings point to both cooperative and competitive interactions between learning and memory mechanisms. Here, we investigated whether learning about rewards in a high-risk context, with frequent, large prediction errors, would give rise to higher fidelity memory traces for rewarding events than learning in a low-risk context. Experiment 1 showed that recognition was better for items associated with larger absolute prediction errors during reward learning. Larger prediction errors also led to higher rates of learning about rewards. Interestingly we did not find a relationship between learning rate for reward and recognition-memory accuracy for items, suggesting that these two effects of prediction errors were caused by separate underlying mechanisms. In Experiment 2, we replicated these results with a longer task that posed stronger memory demands and allowed for more learning. We also showed improved source and sequence memory for items within the high-risk context. In Experiment 3, we controlled for the difficulty of reward learning in the risk environments, again replicating the previous results. Moreover, this control revealed that the high-risk context enhanced item-recognition memory beyond the effect of prediction errors. In summary, our results show that prediction errors boost both episodic item memory and incremental reward learning, but the two effects are likely mediated by distinct underlying systems. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
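The estimation idea described above (choose the parameter realization that makes the smallest margin of requirement compliance as large as possible) can be sketched with a toy first-order model in place of the F-16 short-period dynamics. The requirements, tolerance, and data below are invented for illustration and are not the framework's actual specification.

    import numpy as np
    from scipy.optimize import minimize

    # Toy empirical model: step response y(t) = K * (1 - exp(-t / tau)),
    # with parameters p = (K, tau) identified from noisy measurements.
    t = np.linspace(0.0, 10.0, 50)
    rng = np.random.default_rng(4)
    y_meas = 2.0 * (1.0 - np.exp(-t / 1.5)) + rng.normal(scale=0.05, size=t.size)

    def margins(p, tol=0.15):
        """Margins of requirement compliance: tolerance minus the largest prediction
        error in each of two time windows (both must remain positive)."""
        K, tau = p
        err = np.abs(K * (1.0 - np.exp(-t / tau)) - y_meas)
        return np.array([tol - err[t <= 5.0].max(), tol - err[t > 5.0].max()])

    # Estimate parameters by making the smallest margin as large as possible
    res = minimize(lambda p: -margins(p).min(), x0=[1.0, 1.0], method="Nelder-Mead")
    print(res.x, margins(res.x))   # optimal (K, tau) and the resulting margins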
Protein and oil composition predictions of single soybeans by transmission Raman spectroscopy.
Schulmerich, Matthew V; Walsh, Michael J; Gelber, Matthew K; Kong, Rong; Kole, Matthew R; Harrison, Sandra K; McKinney, John; Thompson, Dennis; Kull, Linda S; Bhargava, Rohit
2012-08-22
The soybean industry requires rapid, accurate, and precise technologies for the analyses of seed/grain constituents. While the current gold standard for nondestructive quantification of economically and nutritionally important soybean components is near-infrared spectroscopy (NIRS), emerging technology may provide viable alternatives and lead to next generation instrumentation for grain compositional analysis. In principle, Raman spectroscopy provides the necessary chemical information to generate models for predicting the concentration of soybean constituents. In this communication, we explore the use of transmission Raman spectroscopy (TRS) for nondestructive soybean measurements. We show that TRS uses the light scattering properties of soybeans to effectively homogenize the heterogeneous bulk of a soybean for representative sampling. Working with over 1000 individual intact soybean seeds, we developed a simple partial least-squares model for predicting oil and protein content nondestructively. We find TRS to have a root-mean-square error of prediction (RMSEP) of 0.89% for oil measurements and 0.92% for protein measurements. In both calibration and validation sets, the predictive capabilities of the model were similar to the error in the reference methods.
Learning to Predict Consequences as a Method of Knowledge Transfer in Reinforcement Learning.
Chalmers, Eric; Contreras, Edgar Bermudez; Robertson, Brandon; Luczak, Artur; Gruber, Aaron
2017-04-17
The reinforcement learning (RL) paradigm allows agents to solve tasks through trial-and-error learning. To be capable of efficient, long-term learning, RL agents should be able to apply knowledge gained in the past to new tasks they may encounter in the future. The ability to predict actions' consequences may facilitate such knowledge transfer. We consider here domains where an RL agent has access to two kinds of information: agent-centric information with constant semantics across tasks, and environment-centric information, which is necessary to solve the task, but with semantics that differ between tasks. For example, in robot navigation, environment-centric information may include the robot's geographic location, while agent-centric information may include sensor readings of various nearby obstacles. We propose that these situations provide an opportunity for a very natural style of knowledge transfer, in which the agent learns to predict actions' environmental consequences using agent-centric information. These predictions contain important information about the affordances and dangers present in a novel environment, and can effectively transfer knowledge from agent-centric to environment-centric learning systems. Using several example problems including spatial navigation and network routing, we show that our knowledge transfer approach can allow faster and lower cost learning than existing alternatives.
Kahmann, A; Anzanello, M J; Fogliatto, F S; Marcelo, M C A; Ferrão, M F; Ortiz, R S; Mariotti, K C
2018-04-15
Street cocaine is typically altered with several compounds that increase its harmful health-related side effects, most notably depression, convulsions, and severe damages to the cardiovascular system, lungs, and brain. Thus, determining the concentration of cocaine and adulterants in seized drug samples is important from both health and forensic perspectives. Although FTIR has been widely used to identify the fingerprint and concentration of chemical compounds, spectroscopy datasets are usually comprised of thousands of highly correlated wavenumbers which, when used as predictors in regression models, tend to undermine the predictive performance of multivariate techniques. In this paper, we propose an FTIR wavenumber selection method aimed at identifying FTIR spectra intervals that best predict the concentration of cocaine and adulterants (e.g. caffeine, phenacetin, levamisole, and lidocaine) in cocaine samples. For that matter, the Mutual Information measure is integrated into a Quadratic Programming problem with the objective of minimizing the probability of retaining redundant wavenumbers, while maximizing the relationship between retained wavenumbers and compounds' concentrations. Optimization outputs guide the order of inclusion of wavenumbers in a predictive model, using a forward-based wavenumber selection method. After the inclusion of each wavenumber, parameters of three alternative regression models are estimated, and each model's prediction error is assessed through the Mean Average Error (MAE) measure; the recommended subset of retained wavenumbers is the one that minimizes the prediction error with maximum parsimony. Using our propositions in a dataset of 115 cocaine samples we obtained a best prediction model with average MAE of 0.0502 while retaining only 2.29% of the original wavenumbers, increasing the predictive precision by 0.0359 when compared to a model using the complete set of wavenumbers as predictors. Copyright © 2018 Elsevier B.V. All rights reserved.
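A reduced sketch of the forward wavenumber-selection loop is given below. It ranks channels by mutual information with the concentration and adds them one at a time while tracking cross-validated MAE; the quadratic-programming redundancy penalty of the proposed method is omitted and the data are synthetic, so this illustrates the general scheme rather than the authors' algorithm.

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(5)

    # Synthetic stand-in for the FTIR data: 115 samples x 800 wavenumbers,
    # with the target concentration driven by a handful of informative channels.
    X = rng.normal(size=(115, 800))
    informative = rng.choice(800, size=6, replace=False)
    y = X[:, informative] @ rng.uniform(0.5, 1.5, size=6) + rng.normal(scale=0.3, size=115)

    # Rank wavenumbers by mutual information with the concentration
    order = np.argsort(mutual_info_regression(X, y, random_state=0))[::-1]

    best_mae, best_k = np.inf, 0
    for k in range(1, 31):                              # forward inclusion, one channel at a time
        cols = order[:k]
        pred = cross_val_predict(LinearRegression(), X[:, cols], y, cv=5)
        mae = np.mean(np.abs(pred - y))
        if mae < best_mae - 1e-6:                       # update only on a clear improvement
            best_mae, best_k = mae, k
    print(best_k, round(best_mae, 4))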
Mainsah, B O; Reeves, G; Collins, L M; Throckmorton, C S
2017-08-01
The role of a brain-computer interface (BCI) is to discern a user's intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional row-column paradigm. By accounting for refractory effects, an information-theoretic approach can be exploited to significantly improve BCI performance across a wide range of performance levels.
NASA Astrophysics Data System (ADS)
Mainsah, B. O.; Reeves, G.; Collins, L. M.; Throckmorton, C. S.
2017-08-01
Objective. The role of a brain-computer interface (BCI) is to discern a user’s intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. Approach. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. Main results. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional row-column paradigm. Significance. By accounting for refractory effects, an information-theoretic approach can be exploited to significantly improve BCI performance across a wide range of performance levels.
NASA Astrophysics Data System (ADS)
Slater, L. J.; Villarini, G.; Bradley, A.
2015-12-01
Model predictions of precipitation and temperature are crucial to mitigate the impacts of major flood and drought events through informed planning and response. However, the potential value and applicability of these predictions are inescapably linked to their forecast quality. The North-American Multi-Model Ensemble (NMME) is a multi-agency supported forecasting system for intraseasonal to interannual (ISI) climate predictions. Retrospective forecasts and real-time information are provided by each agency free of charge to facilitate collaborative research efforts for predicting future climate conditions as well as extreme weather events such as floods and droughts. Using the PRISM climate mapping system as the reference data, we examine the skill of five General Circulation Models (GCMs) from the NMME project to forecast monthly and seasonal precipitation and temperature over seven sub-regions of the continental United States. For each model, we quantify the seasonal accuracy of the forecast relative to observed precipitation using the mean square error skill score. This score is decomposed to assess the accuracy of the forecast in the absence of biases (potential skill), and in the presence of conditional (slope reliability) and unconditional (standardized mean error) biases. The quantification of these biases allows us to diagnose each model's skill over a full range of temporal and spatial scales. Finally, we test each model's forecasting skill by evaluating its ability to predict extended periods of extreme temperature and precipitation that were conducive to 'billion-dollar' historical flood and drought events in different regions of the continental USA. The forecasting skill of the individual climate models is summarized and presented along with a discussion of different multi-model averaging techniques for predicting such events.
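The decomposition described above follows the standard Murphy form of the mean-square-error skill score, MSESS = r^2 - [r - s_f/s_o]^2 - [(f_bar - o_bar)/s_o]^2, whose three terms are the potential skill, the conditional bias (slope reliability), and the unconditional bias (standardized mean error). The sketch below computes it for one hypothetical forecast series and is illustrative only, not tied to the NMME data.

    import numpy as np

    def msess_decomposition(forecast, observed):
        """Murphy-style decomposition of the mean-square-error skill score."""
        f_bar, o_bar = forecast.mean(), observed.mean()
        s_f, s_o = forecast.std(ddof=0), observed.std(ddof=0)
        r = np.corrcoef(forecast, observed)[0, 1]
        potential = r ** 2                              # skill in the absence of biases
        cond_bias = (r - s_f / s_o) ** 2                # slope reliability term
        uncond_bias = ((f_bar - o_bar) / s_o) ** 2      # standardized mean error term
        return potential - cond_bias - uncond_bias, potential, cond_bias, uncond_bias

    # hypothetical monthly precipitation anomalies (mm) for one sub-region
    rng = np.random.default_rng(6)
    obs = rng.gamma(2.0, 20.0, size=120) - 40.0
    fcst = 0.6 * obs + rng.normal(scale=15.0, size=120) + 5.0   # skilful but biased forecast
    print(msess_decomposition(fcst, obs))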
NASA Astrophysics Data System (ADS)
Pichardo, Samuel; Moreno-Hernández, Carlos; Drainville, Robert Andrew; Sin, Vivian; Curiel, Laura; Hynynen, Kullervo
2017-09-01
A better understanding of ultrasound transmission through the human skull is fundamental to developing optimal imaging and therapeutic applications. In this study, we present global attenuation values and functions that correlate apparent density calculated from computed tomography scans to shear speed of sound. For this purpose, we used a model for sound propagation based on the viscoelastic wave equation (VWE) assuming isotropic conditions. The model was validated using a series of measurements with plates of different plastic materials and angles of incidence of 0°, 15° and 50°. The optimal functions for transcranial ultrasound propagation were established using the VWE, scan measurements of transcranial propagation with an angle of incidence of 40° and a genetic optimization algorithm. Ten (10) locations over three (3) skulls were used for ultrasound frequencies of 270 kHz and 836 kHz. Results with plastic materials demonstrated that the viscoelastic modeling predicted both longitudinal and shear propagation with an average (±s.d.) error of 9(±7)% of the wavelength in the predicted delay and an error of 6.7(±5)% in the estimation of transmitted power. Using the new optimal functions of speed of sound and global attenuation for the human skull, the proposed model predicted the transcranial ultrasound transmission for a frequency of 270 kHz with an expected error in the predicted delay of 5(±2.7)% of the wavelength. The model predicted the sound propagation accurately regardless of whether shear or longitudinal sound transmission dominated. For 836 kHz, the model predicted accurately on average, with an error in the predicted delay of 17(±16)% of the wavelength. Results indicated the importance of the specificity of the information at a voxel level to better understand ultrasound transmission through the skull. These results and the new model will be valuable tools for the future development of transcranial applications of ultrasound therapy and imaging.
Ye, Min; Nagar, Swati; Korzekwa, Ken
2016-04-01
Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding and the blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate the model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for the terminal elimination half-life (t1/2 , 100% of drugs), peak plasma concentration (Cmax , 100%), area under the plasma concentration-time curve (AUC0-t , 95.4%), clearance (CLh , 95.4%), mean residence time (MRT, 95.4%) and steady state volume (Vss , 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. Copyright © 2016 John Wiley & Sons, Ltd.
Optimal dental age estimation practice in United Arab Emirates' children.
Altalie, Salem; Thevissen, Patrick; Fieuws, Steffen; Willems, Guy
2014-03-01
The aim of the study was to detect whether the Willems model, developed on a Belgian reference sample, can be used for age estimations in United Arab Emirates (UAE) children. Furthermore, it was verified whether adding third molar development information provided more accurate age predictions in children. On 1900 panoramic radiographs, the development of left mandibular permanent teeth (PT) and third molars (TM) was registered according to the Demirjian and the Kohler technique, respectively. The PT data were used to verify the Willems model, and to develop and verify a UAE model. Multiple regression models with PT, TM, and PT + TM scores as independent variables and age as the dependent variable were developed. Comparing the verified Willems model and the UAE model revealed differences in mean error of -0.01 year, mean absolute error of 0.01 year and root mean squared error of 0.90 year. A negligible overall decrease in RMSE was detected when combining PT and TM developmental information. © 2013 American Academy of Forensic Sciences.
Lee, Wonseok; Bae, Hyoung Won; Lee, Si Hyung; Kim, Chan Yun; Seong, Gong Je
2017-03-01
To assess the accuracy of intraocular lens (IOL) power prediction for cataract surgery with open angle glaucoma (OAG) and to identify preoperative angle parameters correlated with postoperative unpredicted refractive errors. This study comprised 45 eyes from 45 OAG subjects and 63 eyes from 63 non-glaucomatous cataract subjects (controls). We investigated differences in preoperative predicted refractive errors and postoperative refractive errors for each group. Preoperative predicted refractive errors were obtained by biometry (IOL-master) and compared to postoperative refractive errors measured by auto-refractometer 2 months postoperatively. Anterior angle parameters were determined using swept source optical coherence tomography. We investigated correlations between preoperative angle parameters [angle open distance (AOD); trabecular iris surface area (TISA); angle recess area (ARA); trabecular iris angle (TIA)] and postoperative unpredicted refractive errors. In patients with OAG, significant differences were noted between preoperative predicted and postoperative real refractive errors, with more myopia than predicted. No significant differences were recorded in controls. Angle parameters (AOD, ARA, TISA, and TIA) at the superior and inferior quadrant were significantly correlated with differences between predicted and postoperative refractive errors in OAG patients (-0.321 to -0.408, p<0.05). Superior quadrant AOD 500 was significantly correlated with postoperative refractive differences in multivariate linear regression analysis (β=-2.925, R²=0.404). Clinically unpredicted refractive errors after cataract surgery were more common in OAG than in controls. Certain preoperative angle parameters, especially AOD 500 at the superior quadrant, were significantly correlated with these unpredicted errors.
Lee, Wonseok; Bae, Hyoung Won; Lee, Si Hyung; Kim, Chan Yun
2017-01-01
Purpose To assess the accuracy of intraocular lens (IOL) power prediction for cataract surgery with open angle glaucoma (OAG) and to identify preoperative angle parameters correlated with postoperative unpredicted refractive errors. Materials and Methods This study comprised 45 eyes from 45 OAG subjects and 63 eyes from 63 non-glaucomatous cataract subjects (controls). We investigated differences in preoperative predicted refractive errors and postoperative refractive errors for each group. Preoperative predicted refractive errors were obtained by biometry (IOL-master) and compared to postoperative refractive errors measured by auto-refractometer 2 months postoperatively. Anterior angle parameters were determined using swept source optical coherence tomography. We investigated correlations between preoperative angle parameters [angle open distance (AOD); trabecular iris surface area (TISA); angle recess area (ARA); trabecular iris angle (TIA)] and postoperative unpredicted refractive errors. Results In patients with OAG, significant differences were noted between preoperative predicted and postoperative real refractive errors, with more myopia than predicted. No significant differences were recorded in controls. Angle parameters (AOD, ARA, TISA, and TIA) at the superior and inferior quadrant were significantly correlated with differences between predicted and postoperative refractive errors in OAG patients (-0.321 to -0.408, p<0.05). Superior quadrant AOD 500 was significantly correlated with postoperative refractive differences in multivariate linear regression analysis (β=-2.925, R2=0.404). Conclusion Clinically unpredicted refractive errors after cataract surgery were more common in OAG than in controls. Certain preoperative angle parameters, especially AOD 500 at the superior quadrant, were significantly correlated with these unpredicted errors. PMID:28120576
Linear reduction method for predictive and informative tag SNP selection.
He, Jingwu; Westbrooks, Kelly; Zelikovsky, Alexander
2005-01-01
Constructing a complete human haplotype map is helpful when associating complex diseases with their related SNPs. Unfortunately, the number of SNPs is very large and it is costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNPs that should be sequenced to a small number of informative representatives called tag SNPs. In this paper, we propose a new linear algebra-based method for selecting and using tag SNPs. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs predicted from selected linearly independent tag SNPs. Our experiments show that for sufficiently long haplotypes, knowing only 0.4% of all SNPs, the proposed linear reduction method predicts an unknown haplotype with an error rate below 2%, based on 10% of the population.
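As a rough illustration of the linear reduction idea (not the authors' exact algorithm), the Python sketch below greedily selects a set of linearly independent SNP columns as tag SNPs and reconstructs the remaining SNPs by least squares followed by rounding; the matrix, thresholds, and data are hypothetical.

```python
import numpy as np

def select_tag_snps(haplotypes, tol=1e-8):
    """Pick a set of linearly independent SNP columns (tag SNPs) by greedy rank testing.
    `haplotypes` is an (individuals x SNPs) 0/1 matrix."""
    tags = []
    for j in range(haplotypes.shape[1]):
        candidate = haplotypes[:, tags + [j]]
        if np.linalg.matrix_rank(candidate, tol=tol) > len(tags):
            tags.append(j)
    return tags

def predict_from_tags(haplotypes_train, tags, tag_values):
    """Express every SNP as a linear combination of the tag SNPs (least squares on the
    training haplotypes), then predict from new tag values and round to 0/1."""
    T = haplotypes_train[:, tags]
    coef, *_ = np.linalg.lstsq(T, haplotypes_train, rcond=None)
    predicted = tag_values @ coef
    return np.clip(np.rint(predicted), 0, 1)

# Toy usage with a random haplotype matrix (illustrative only).
rng = np.random.default_rng(0)
H = rng.integers(0, 2, size=(30, 12))
tags = select_tag_snps(H)
reconstructed = predict_from_tags(H, tags, H[:, tags])
error_rate = np.mean(reconstructed != H)
print(f"{len(tags)} tag SNPs, reconstruction error on training data: {error_rate:.3f}")
```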
Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model
NASA Astrophysics Data System (ADS)
Tang, Jingshi; Liu, Lin; Miao, Manqian
Tiangong-1 is China's test module for a future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted over a long period of time. As Tiangong-1 operates in a low-Earth orbit at an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted 10-20 days ahead, the error in the a priori atmosphere model, if not properly corrected, can induce semi-major axis errors of up to a few kilometers and overall position errors of up to several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series captures the recent variation of the atmospheric density and can be analyzed for various periodicities. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that densities predicted with this approach can increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
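A minimal sketch of the density-forecast step, assuming (as an illustration only) that the diurnal mean density series is fitted with a linear trend plus one harmonic near the 27-day solar rotation period and then extrapolated; the data, period, and horizon are synthetic and hypothetical.

```python
import numpy as np

def fit_and_predict_density(days, rho, period=27.0, horizon=20):
    """Fit a linear trend plus one harmonic of the given period (days) to a series of
    diurnal mean densities, then extrapolate `horizon` days ahead."""
    A = np.column_stack([np.ones_like(days), days,
                         np.cos(2 * np.pi * days / period),
                         np.sin(2 * np.pi * days / period)])
    coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
    future = days[-1] + np.arange(1, horizon + 1)
    Af = np.column_stack([np.ones_like(future), future,
                          np.cos(2 * np.pi * future / period),
                          np.sin(2 * np.pi * future / period)])
    return future, Af @ coef

# Synthetic example: 60 days of mean densities with a trend, a 27-day cycle, and noise.
days = np.arange(60.0)
rho = (3.0e-12 + 2.0e-15 * days + 1.5e-13 * np.sin(2 * np.pi * days / 27.0)
       + 2.0e-14 * np.random.default_rng(1).standard_normal(60))
future, rho_pred = fit_and_predict_density(days, rho)
print(rho_pred[:5])
```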
Development of an accident duration prediction model on the Korean Freeway Systems.
Chung, Younshik
2010-01-01
Since duration prediction is one of the most important steps in an accident management process, several approaches have been developed for modeling accident duration. This paper presents a model for accident duration prediction based on an accurately recorded and large accident dataset from the Korean Freeway Systems. To develop the duration prediction model, this study utilizes the log-logistic accelerated failure time (AFT) metric model and a 2-year accident duration dataset from 2006 to 2007. Specifically, the 2006 dataset was utilized to develop the prediction model, and the 2007 dataset was then employed to test the temporal transferability of the 2006 model. Although the duration prediction model has limitations, such as large prediction errors due to differences among accident treatment teams in clearing similar accidents, the 2006 model yielded reasonable predictions on the mean absolute percentage error (MAPE) scale. Additionally, the statistical test for temporal transferability indicated that the estimated parameters of the duration prediction model are stable over time. This temporal stability suggests that the model has the potential to be used as a basis for making rational diversion and dispatching decisions in the event of an accident. Ultimately, such information will help mitigate traffic congestion due to accidents.
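A hedged sketch of the two ingredients mentioned, a log-logistic duration distribution and MAPE scoring: scipy's Fisk (log-logistic) distribution is fitted to synthetic 2006-style durations and its median is used as a covariate-free point prediction for a 2007-style set. This is not the paper's full AFT model with covariates; all data and parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
durations_2006 = stats.fisk.rvs(c=2.2, scale=45.0, size=500, random_state=rng)  # minutes
durations_2007 = stats.fisk.rvs(c=2.2, scale=45.0, size=300, random_state=rng)

# Fit a log-logistic (Fisk) distribution to the 2006 durations, location fixed at zero.
c_hat, loc_hat, scale_hat = stats.fisk.fit(durations_2006, floc=0)

# Use the fitted median as a simple point prediction for the 2007 accidents and
# evaluate it with the mean absolute percentage error (MAPE).
predicted = stats.fisk.median(c_hat, loc=loc_hat, scale=scale_hat)
mape = np.mean(np.abs(durations_2007 - predicted) / durations_2007) * 100.0
print(f"fitted shape={c_hat:.2f}, scale={scale_hat:.1f} min, MAPE={mape:.1f}%")
```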
Prediction of discretization error using the error transport equation
NASA Astrophysics Data System (ADS)
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
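A minimal one-dimensional illustration of the error-transport idea only (not the authors' weighted-spline blending or their Navier-Stokes implementation): the numerical solution is reconstructed with a cubic spline, an error source term is formed from the continuous residual, and the discretization error is estimated by solving the same discrete operator; the model problem and names are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Model problem: -u'' = f on (0,1), u(0)=u(1)=0, with exact solution sin(pi x).
n = 21
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)

# Standard 3-point discrete Laplacian acting on the interior nodes.
A = (np.diag(2.0 * np.ones(n - 2)) - np.diag(np.ones(n - 3), 1)
     - np.diag(np.ones(n - 3), -1)) / h ** 2
u_h = np.zeros(n)
u_h[1:-1] = np.linalg.solve(A, f[1:-1])          # primal numerical solution

# Error source term from a smooth reconstruction of the numerical solution:
# for e = u_exact - u_h, the continuous equation gives -e'' ~ f + u_h''.
spline = CubicSpline(x, u_h)
source = f + spline(x, 2)

# Solve the error transport equation with the same discrete operator (rough estimate).
e_h = np.zeros(n)
e_h[1:-1] = np.linalg.solve(A, source[1:-1])

true_error = np.sin(np.pi * x) - u_h
print("max predicted error:", np.abs(e_h).max(), " max true error:", np.abs(true_error).max())
```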
Comprehensive database of diameter-based biomass regressions for North American tree species
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
2004-01-01
A database consisting of 2,640 equations compiled from the literature is presented for predicting the biomass of trees and tree components from diameter measurements of species found in North America. Bibliographic information, geographic locations, diameter limits, diameter and biomass units, equation forms, statistical errors, and coefficients are provided for each equation,...
Technical note: Bayesian calibration of dynamic ruminant nutrition models.
Reed, K F; Arhonditsis, G B; France, J; Kebreab, E
2016-08-01
Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
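A minimal sketch of Bayesian calibration with a random-walk Metropolis sampler: one parameter of a toy response model is calibrated against noisy observations, and a posterior predictive distribution combines parameter uncertainty with residual variability. This is only a schematic stand-in for the nutrition models discussed; every model, prior, and value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "mechanistic" model: predicted response as a saturating function of intake.
def model(intake, vmax):
    return vmax * intake / (1.0 + intake)

intake = np.linspace(0.5, 5.0, 25)
obs = model(intake, vmax=8.0) + rng.normal(0.0, 0.4, size=intake.size)  # synthetic data
sigma = 0.4                                                             # residual SD, treated as known

def log_posterior(vmax):
    if vmax <= 0.0 or vmax > 50.0:            # flat prior on (0, 50]
        return -np.inf
    resid = obs - model(intake, vmax)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior for vmax.
samples, current, logp = [], 5.0, log_posterior(5.0)
for _ in range(20000):
    proposal = current + rng.normal(0.0, 0.3)
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:
        current, logp = proposal, logp_prop
    samples.append(current)
samples = np.array(samples[5000:])            # discard burn-in

# Posterior predictive at a new intake level: parameter uncertainty plus residual noise.
new_intake = 3.0
predictive = model(new_intake, samples) + rng.normal(0.0, sigma, size=samples.size)
print("posterior mean vmax:", samples.mean())
print("95% predictive interval:", np.percentile(predictive, [2.5, 97.5]))
```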
The regionalization of national-scale SPARROW models for stream nutrients
Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.
Model-free and model-based reward prediction errors in EEG.
Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy
2018-05-24
Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
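A schematic illustration of the two prediction errors in a simple learning agent: a model-free value cached per action and updated by its own reward prediction error, and a model-based value computed from a learned transition and reward model, which generates a separate prediction error at outcome. This is purely illustrative and not the task or analysis used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_states, alpha = 2, 2, 0.1
q_mf = np.zeros(n_actions)                          # model-free cached action values
transition = np.full((n_actions, n_states), 0.5)    # learned P(state | action)
reward_of_state = np.zeros(n_states)                # learned state rewards

true_transition = np.array([[0.7, 0.3], [0.3, 0.7]])
true_reward = np.array([1.0, 0.0])

for trial in range(1000):
    action = rng.integers(n_actions)
    state = rng.choice(n_states, p=true_transition[action])
    reward = float(rng.random() < true_reward[state])

    # Model-free RPE: outcome versus the cached action value.
    rpe_mf = reward - q_mf[action]
    q_mf[action] += alpha * rpe_mf

    # Model-based RPE: outcome versus the value computed from the internal model.
    q_mb = transition[action] @ reward_of_state
    rpe_mb = reward - q_mb

    # Update the internal model (transition probabilities and state rewards).
    target = np.eye(n_states)[state]
    transition[action] += alpha * (target - transition[action])
    reward_of_state[state] += alpha * (reward - reward_of_state[state])

print("last model-free RPE:", rpe_mf, " last model-based RPE:", rpe_mb)
print("model-free values:", q_mf, " model-based values:", transition @ reward_of_state)
```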
QSAR modeling for predicting mutagenic toxicity of diverse chemicals for regulatory purposes.
Basant, Nikita; Gupta, Shikha
2017-06-01
The safety assessment process of chemicals requires information on their mutagenic potential. The experimental determination of mutagenicity for a large number of chemicals is tedious and time- and cost-intensive, thus calling for alternative methods. We established local and global QSAR models for discriminating between low- and high-mutagenicity compounds and for quantitatively predicting their mutagenic activity in Salmonella typhimurium (TA) bacterial strains (TA98 and TA100). The decision treeboost (DTB)-based classification QSAR models discriminated between the two categories with accuracies of >96%, and the regression QSAR models precisely predicted the mutagenic activity of diverse chemicals, yielding high correlations (R²) between experimental and model-predicted values in the respective training (>0.96) and test (>0.94) sets. The test-set root mean squared error (RMSE) and mean absolute error (MAE) values emphasized the usefulness of the developed models for predicting new compounds. Relevant structural features of diverse chemicals that influence the mutagenic activity were identified. The applicability domains of the developed models were defined. The developed models can be used as tools for screening new chemicals for mutagenicity assessment for regulatory purposes.
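A hedged sketch of how the reported external-set statistics (R², RMSE, MAE) can be computed for a boosted-tree regressor; scikit-learn's GradientBoostingRegressor stands in for the DTB models, and the descriptor matrix and activity values are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

# Synthetic descriptor matrix X and activity vector y stand in for the curated data sets.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 20))
y = X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.normal(size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

print("test R2  :", r2_score(y_test, pred))
print("test RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
print("test MAE :", mean_absolute_error(y_test, pred))
```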
The Role of Multimodel Combination in Improving Streamflow Prediction
NASA Astrophysics Data System (ADS)
Arumugam, S.; Li, W.
2008-12-01
Model errors are an inevitable part of any prediction exercise. One approach currently gaining attention for reducing model errors is to optimally combine multiple models to develop improved predictions. The rationale behind this approach lies primarily in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions have improved predictability. In this study, we present a new approach to combining multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated using two different objective functions, to develop multimodel streamflow predictions. The performance of the multimodel predictions is compared with individual model predictions using correlation, root mean square error, and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions the multimodel predictions result in improved predictions, we evaluate the proposed algorithm by testing it against streamflow generated from a known model (the 'abcd' model or the VIC model) with errors being homoscedastic or heteroscedastic. Results from the study show that streamflow simulated from individual models performed better than the multimodel combination when there was almost no model error. Under increased model error, the multimodel consistently performed better than the single-model prediction in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins from NC as well as in two arid basins from Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach better predicts the observed streamflow in comparison to the single-model predictions.
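A minimal sketch of model combination, simpler than the predictor-state-contingent scheme described above: weights for two simulated streamflow series are chosen by least squares to minimize squared error against observations over a training period. All series here are synthetic.

```python
import numpy as np

def combine_models(sim_a, sim_b, observed):
    """Find weights (w_a, w_b) minimizing the squared error of w_a*sim_a + w_b*sim_b
    against observations, and return the weights and the combined series."""
    X = np.column_stack([sim_a, sim_b])
    weights, *_ = np.linalg.lstsq(X, observed, rcond=None)
    return weights, X @ weights

# Synthetic example: two imperfect model simulations of the same observed flow.
rng = np.random.default_rng(5)
obs = 50 + 10 * np.sin(np.linspace(0, 6 * np.pi, 200)) + rng.normal(0, 2, 200)
sim_a = obs + rng.normal(3, 4, 200)        # biased, noisy model A
sim_b = 0.8 * obs + rng.normal(0, 3, 200)  # underestimating model B

weights, combined = combine_models(sim_a, sim_b, obs)
rmse = lambda s: np.sqrt(np.mean((s - obs) ** 2))
print("weights:", weights)
print("RMSE  A:", rmse(sim_a), " B:", rmse(sim_b), " multimodel:", rmse(combined))
```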
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
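A minimal regression-calibration sketch of the kind listed above: the error-prone modeled exposure is replaced by its expected value given a validation subset (a simple linear calibration), and the health model is refitted on the calibrated exposure. The simulation below is entirely synthetic and only illustrates why the naive slope is attenuated and how calibration recovers it.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 2000
true_exposure = rng.normal(10.0, 2.0, n)                 # unobserved true exposure
modeled = true_exposure + rng.normal(0.0, 1.5, n)        # error-prone modeled exposure
outcome = 0.3 * true_exposure + rng.normal(0.0, 1.0, n)  # health outcome

# Validation subset where the true exposure is also measured.
val = rng.choice(n, size=200, replace=False)

# Step 1 (calibration model): regress true exposure on modeled exposure in the validation data.
a, b = np.polyfit(modeled[val], true_exposure[val], 1)
calibrated = a * modeled + b                             # E[true exposure | modeled exposure]

# Step 2: fit the health model with naive and calibrated exposures and compare slopes.
slope_naive = np.polyfit(modeled, outcome, 1)[0]
slope_cal = np.polyfit(calibrated, outcome, 1)[0]
print(f"true effect 0.30, naive estimate {slope_naive:.3f}, regression-calibrated {slope_cal:.3f}")
```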
Uncertainty information in climate data records from Earth observation
NASA Astrophysics Data System (ADS)
Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang
2017-07-01
The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.
Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops
NASA Technical Reports Server (NTRS)
Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram
2017-01-01
The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
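The complex-step method itself is easy to illustrate outside any CFD code: perturbing the input along the imaginary axis gives a derivative that is exact to roundoff, with no subtractive cancellation. The snippet below is a generic illustration, not the workshop implementation.

```python
import numpy as np

def complex_step_derivative(func, x, h=1e-30):
    """Derivative of a real analytic function via a complex step:
    f'(x) ~ Im(f(x + i*h)) / h, accurate to machine precision."""
    return np.imag(func(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x) / (1.0 + x ** 2)
x0 = 0.7
analytic = ((np.exp(x0) * (np.sin(x0) + np.cos(x0)) * (1 + x0 ** 2)
             - np.exp(x0) * np.sin(x0) * 2 * x0) / (1 + x0 ** 2) ** 2)

print("complex step:", complex_step_derivative(f, x0))
print("finite diff :", (f(x0 + 1e-8) - f(x0)) / 1e-8)
print("analytic    :", analytic)
```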
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
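A sketch of the error-simulation idea: randomize the activities of a chosen fraction of the modeling-set compounds, obtain five-fold cross-validated predictions, and check whether the compounds with the largest cross-validation residuals are the corrupted ones. A RandomForestRegressor stands in for the QSAR models, and all data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 15))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)   # "experimental" activities

# Simulate experimental errors: shuffle the activities of 15% of the compounds.
error_ratio = 0.15
corrupted = rng.choice(len(y), size=int(error_ratio * len(y)), replace=False)
y_noisy = y.copy()
y_noisy[corrupted] = rng.permutation(y[corrupted])

# Five-fold cross-validated predictions on the corrupted modeling set.
model = RandomForestRegressor(n_estimators=200, random_state=0)
cv_pred = cross_val_predict(model, X, y_noisy, cv=5)
residuals = np.abs(cv_pred - y_noisy)

# Are the compounds with the largest CV residuals the ones with simulated errors?
flagged = np.argsort(residuals)[-len(corrupted):]
overlap = len(set(flagged) & set(corrupted)) / len(corrupted)
print(f"fraction of corrupted compounds among the largest residuals: {overlap:.2f}")
```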
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do
2017-01-01
Numerous chemical data sets have become available for quantitative structure–activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting. PMID:28691113
Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error
NASA Astrophysics Data System (ADS)
Jung, Insung; Koo, Lockjo; Wang, Gi-Nam
2008-11-01
The objective of this paper was to design a human bio-signal prediction system that decreases prediction error using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models are widely applied in industry by training them in a supervised manner with the error back-propagation algorithm for time series prediction, but a residual error remains between the real value and the prediction result. Therefore, we designed a two-state neural network model that compensates for this residual error, which could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. In most of the simulation cases, the two-states-mapping-based time series prediction model performed satisfactorily. In particular, for small sample sizes of time series it was more accurate than the standard MLP model.
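A minimal two-stage sketch of the residual-compensation idea: one network predicts the next sample of a time series from lagged values, and a second network is trained on the residual errors of the first, so the final prediction is their sum. MLPRegressor and the synthetic signal are stand-ins for the paper's models and bio-signal data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
t = np.arange(600)
signal = np.sin(0.05 * t) + 0.3 * np.sin(0.21 * t) + 0.05 * rng.standard_normal(t.size)

# Build lagged inputs: predict x[t] from the previous `lags` samples.
lags = 8
X = np.column_stack([signal[i:i - lags] for i in range(lags)])
y = signal[lags:]
split = 450
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Stage 1: standard time-series neural network.
net1 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0).fit(X_tr, y_tr)
pred1_tr, pred1_te = net1.predict(X_tr), net1.predict(X_te)

# Stage 2: second network trained on the residual errors of stage 1.
net2 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=1).fit(X_tr, y_tr - pred1_tr)
final_te = pred1_te + net2.predict(X_te)

rmse = lambda p: np.sqrt(np.mean((p - y_te) ** 2))
print("single-stage RMSE:", rmse(pred1_te), " two-stage RMSE:", rmse(final_te))
```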
Unraveling the unknown areas of the human metabolome: the role of infrared ion spectroscopy.
Martens, Jonathan; Berden, Giel; Bentlage, Herman; Coene, Karlien L M; Engelke, Udo F; Wishart, David; van Scherpenzeel, Monique; Kluijtmans, Leo A J; Wevers, Ron A; Oomens, Jos
2018-05-01
The identification of molecular biomarkers is critical for diagnosing and treating patients and for establishing a fundamental understanding of the pathophysiology and underlying biochemistry of inborn errors of metabolism. Currently, liquid chromatography/high-resolution mass spectrometry and nuclear magnetic resonance spectroscopy are the principle methods used for biomarker research and for structural elucidation of small molecules in patient body fluids. While both are powerful techniques, several limitations exist that often make the identification of unknown compounds challenging. Here, we describe how infrared ion spectroscopy has the potential to be a valuable orthogonal technique that provides highly-specific molecular structure information while maintaining ultra-high sensitivity. Here, we characterize and distinguish two well-known biomarkers of inborn errors of metabolism, glutaric acid for glutaric aciduria and ethylmalonic acid for short-chain acyl-CoA dehydrogenase deficiency, using infrared ion spectroscopy. In contrast to tandem mass spectra, in which ion fragments can hardly be predicted, we show that the prediction of an IR spectrum allows reference-free identification in the case that standard compounds are either commercially or synthetically unavailable. Finally, we illustrate how functional group information can be obtained from an IR spectrum for an unknown and how this is valuable information to, for example, narrow down a list of candidate structures resulting from a database query. Early diagnosis in inborn errors of metabolism is crucial for enabling treatment and depends on the identification of biomarkers specific for the disorder. Infrared ion spectroscopy has the potential to play a pivotal role in the identification of challenging biomarkers.
The Effect of Information Level on Human-Agent Interaction for Route Planning
2015-12-01
χ2 (4, 60) = 11.41, p = 0.022, and Cramer’s V = 0.308, indicating there was no effect of experiment on posttest trust. Pretest trust was not a...decision time by pretest trust group membership. Bars denote standard error (SE). DT at DP was evaluated to see if it predicted posttest trust...0.007, Cramer’s V = 0.344, indicating there was no effect of experiment on posttest trust. Pretest trust was not a significant prediction of total DT
NASA Astrophysics Data System (ADS)
Judt, Falko
2017-04-01
A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day long "nature run" and a simulation that was perturbed with small-amplitude noise, but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days when contaminating the baroclinic zones. After 16 days, the globally averaged error saturates—suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, which is in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.
Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise
2013-05-01
To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method for VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time series of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. In addition, the proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The predictions are tested comparatively against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed by error analysis using the probability density function (PDF), mean absolute percent error (MAPE), and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate, compared with the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
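A hedged sketch of the two sub-processes: the wind-power series is converted to a power ratio (taken here as power divided by rated capacity, which is an assumption about the paper's definition), a multiple linear regression on lagged ratios is fitted by least squares, and the model is used for single-step prediction, with multi-step prediction feeding predicted ratios back in. All data and settings are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
capacity = 100.0                                   # rated capacity (MW), an assumed value
power = np.clip(50 + 20 * np.sin(0.1 * np.arange(400)) + 8 * rng.standard_normal(400), 1.0, capacity)

# Sub-process 1: transform the actual wind power into a power ratio (assumed: power / capacity).
ratio = power / capacity

# Multiple linear regression of the ratio on its `lags` previous values, fitted by least squares.
lags, n_train = 6, 300
lagmat = np.column_stack([ratio[i:i - lags] for i in range(lags)])   # oldest lag first
X = np.column_stack([np.ones(len(lagmat)), lagmat])
y = ratio[lags:]
coef, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)

# Sub-process 2, single-step prediction (SSP): one step ahead from observed lags.
ssp_power = (X[n_train:] @ coef) * capacity
actual = power[lags + n_train:]
mape = np.mean(np.abs(ssp_power - actual) / actual) * 100

# Multi-step prediction (MSP): feed predicted ratios back in over a short horizon.
window = list(ratio[n_train:n_train + lags])       # last observed ratios before the horizon
msp_power = []
for _ in range(10):
    nxt = coef[0] + np.dot(coef[1:], window[-lags:])
    msp_power.append(nxt * capacity)
    window.append(nxt)

print(f"SSP MAPE: {mape:.1f}%   first MSP steps (MW): {np.round(msp_power[:3], 1)}")
```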
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vedam, S.; Docef, A.; Fix, M.
2005-06-15
The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV energies to provide planned dose distributions. Respiratory motion data was obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between predicted and actual positions at each diaphragm position. Distributions of geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with distributions for the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose difference and distance-to-agreement analysis was employed to quantify results. Based on our data, the dosimetric impact of prediction (a) increased with response time, (b) was larger for 3D radiation therapy as compared with 4D radiation therapy, (c) was relatively insensitive to change in beam energy and beam direction, (d) was greater for IMRT distributions as compared with conformal distributions, (e) was smaller than the dosimetric impact of latency, and (f) was greatest for respiration motion with audio instructions, followed by visual feedback and free breathing. Geometric errors of prediction that occur during 4D radiation delivery introduce dosimetric errors that are dependent on several factors, such as response time, treatment-delivery type, and beam energy. Even for relatively small response times of 0.6 s into the future, dosimetric errors due to prediction could approach delivery errors when respiratory motion is not accounted for at all. To reduce the dosimetric impact, better predictive models and/or shorter response times are required.
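A minimal normalized least-mean-squares (NLMS) adaptive filter predicting a respiratory displacement trace a fixed response time ahead, illustrating only the "linear adaptive filter" idea rather than the study's clinical implementation; the sampling rate, response time, filter settings, and trace are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 30.0                                     # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
motion = 10 * np.sin(2 * np.pi * 0.25 * t) + 0.5 * rng.standard_normal(t.size)   # displacement, mm

response_time = 0.4                           # system latency in seconds (assumed)
lead = int(round(response_time * fs))         # number of samples to predict ahead
taps, mu = 12, 0.5                            # filter length and NLMS step size (assumed)

w = np.zeros(taps)
pred = np.full(t.size, np.nan)
for n in range(taps + lead, t.size - lead):
    # Score the prediction made `lead` samples ago, now that the true sample has arrived,
    # and update the weights with a normalized LMS step.
    x_past = motion[n - lead - taps:n - lead][::-1]
    err = motion[n] - w @ x_past
    w += mu * err * x_past / (x_past @ x_past + 1e-8)
    # Predict the position `lead` samples ahead from the most recent samples.
    x_now = motion[n - taps:n][::-1]
    pred[n + lead] = w @ x_now

valid = ~np.isnan(pred)
rmse = np.sqrt(np.mean((pred[valid] - motion[valid]) ** 2))
print(f"geometric prediction error (RMSE): {rmse:.2f} mm at {1000 * response_time:.0f} ms response time")
```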
NASA Astrophysics Data System (ADS)
Baker, N. L.; Langland, R.
2016-12-01
Variations in Earth rotation are measured by comparing a time based on Earth's variable rotation rate about its axis to a time standard based on an internationally coordinated ensemble of atomic clocks that provide a uniform time scale. The variability of Earth's rotation is partly due to the changes in angular momentum that occur in the atmosphere and ocean as weather patterns and ocean features develop, propagate, and dissipate. The NAVGEM Effective Atmospheric Angular Momentum Functions (EAAMF) and their predictions are computed following Barnes et al. (1983), and provided to the U.S. Naval Observatory daily. These along with similar data from the NOAA GFS model are used to calculate and predict the Earth orientation parameters (Stamatakos et al., 2016). The Navy's high-resolution global weather prediction system consists of the Navy Global Environmental Model (NAVGEM; Hogan et al., 2014) and a hybrid four-dimensional variational data assimilation system (4DVar) (Kuhl et al., 2013). An important component of NAVGEM is the Forecast Sensitivity Observation Impact (FSOI). FSOI is a mathematical method to quantify the contribution of individual observations or sets of observations to the reduction in the 24-hr forecast error (Langland and Baker, 2004). The FSOI allows for dynamic monitoring of the relative quality and value of the observations assimilated by NAVGEM, and the relative ability of the data assimilation system to effectively use the observation information to generate an improved forecast. For this study, along with the FSOI based on the global moist energy error norm, we computed the FSOI using an error norm based on the Effective Angular Momentum Functions. This modification allowed us to assess which observations were most beneficial in reducing the 24-hr forecast error for the atmospheric angular momentum.
Nonspinning numerical relativity waveform surrogates: assessing the model
NASA Astrophysics Data System (ADS)
Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel
2015-04-01
Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.
Prediction-error in the context of real social relationships modulates reward system activity.
Poore, Joshua C; Pfeifer, Jennifer H; Berkman, Elliot T; Inagaki, Tristen K; Welborn, Benjamin L; Lieberman, Matthew D
2012-01-01
The human reward system is sensitive to both social (e.g., validation) and non-social rewards (e.g., money) and is likely integral for relationship development and reputation building. However, data is sparse on the question of whether implicit social reward processing meaningfully contributes to explicit social representations such as trust and attachment security in pre-existing relationships. This event-related fMRI experiment examined reward system prediction-error activity in response to a potent social reward (social validation) and this activity's relation to both attachment security and trust in the context of real romantic relationships. During the experiment, participants' expectations for their romantic partners' positive regard of them were confirmed (validated) or violated, in either positive or negative directions. Primary analyses were conducted using predefined regions of interest, the locations of which were taken from previously published research. Results indicate that activity for mid-brain and striatal reward system regions of interest was modulated by social reward expectation violation in ways consistent with prior research on reward prediction-error. Additionally, activity in the striatum during viewing of disconfirmatory information was associated with both increases in post-scan reports of attachment anxiety and decreases in post-scan trust, a finding that follows directly from representational models of attachment and trust.
Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons. PMID:24788812
D’Astolfo, Lisa; Rief, Winfried
2017-01-01
Modifying patients’ expectations by exposing them to expectation violation situations (thus maximizing the difference between the expected and the actual situational outcome) is proposed to be a crucial mechanism for therapeutic success for a variety of different mental disorders. However, clinical observations suggest that patients often maintain their expectations regardless of experiences contradicting their expectations. It remains unclear which information processing mechanisms lead to modification or persistence of patients’ expectations. Insight in the processing could be provided by Neuroimaging studies investigating prediction error (PE, i.e., neuronal reactions to non-expected stimuli). Two methods are often used to investigate the PE: (1) paradigms, in which participants passively observe PEs (”passive” paradigms) and (2) paradigms, which encourage a behavioral adaptation following a PE (“active” paradigms). These paradigms are similar to the methods used to induce expectation violations in clinical settings: (1) the confrontation with an expectation violation situation and (2) an enhanced confrontation in which the patient actively challenges his expectation. We used this similarity to gain insight in the different neuronal processing of the two PE paradigms. We performed a meta-analysis contrasting neuronal activity of PE paradigms encouraging a behavioral adaptation following a PE and paradigms enforcing passiveness following a PE. We found more neuronal activity in the striatum, the insula and the fusiform gyrus in studies encouraging behavioral adaptation following a PE. Due to the involvement of reward assessment and avoidance learning associated with the striatum and the insula we propose that the deliberate execution of action alternatives following a PE is associated with the integration of new information into previously existing expectations, therefore leading to an expectation change. While further research is needed to directly assess expectations of participants, this study provides new insights into the information processing mechanisms following an expectation violation. PMID:28804467
NASA Astrophysics Data System (ADS)
Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick
2017-09-01
Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.
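As a hedged, schematic version of the kind of truncation-error model referred to here (the notation is mine and not necessarily the conventions of the cited potentials or analysis), the observable is expanded in the EFT parameter Q and the omitted terms are bounded by a common coefficient scale learned from the observed orders:

```latex
y_k \;=\; y_{\mathrm{ref}} \sum_{n=0}^{k} c_n\, Q^{\,n},
\qquad
\delta y_k \;=\; y_{\mathrm{ref}} \sum_{n=k+1}^{\infty} c_n\, Q^{\,n}
\;\approx\; y_{\mathrm{ref}}\,\bar{c}\,\frac{Q^{\,k+1}}{1-Q},
```

where the scale \bar{c} is given a posterior distribution inferred from the observed coefficients c_0, ..., c_k under a naturalness prior, which in turn yields a posterior for the truncation error \delta y_k.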
NASA Astrophysics Data System (ADS)
Ma, Yuanxu; Huang, He Qing
2016-07-01
Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained difficult to achieve to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures have been used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess the model predictability: Akaike information criterion (AIC), Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are involved, the new model (7) achieves a better performance than the previous ones. The MRE of model (7) is generally < 20%, which is apparently better than that reported by previous studies. This model is validated using the data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than the calibrating model. This probably resulted from the temporal shift of dominant controls caused by channel change under a varying flow regime. With the advancements of earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate in estimation of flow resistance in large sand-bed rivers like the lower Yellow River.
Sohn, Jae Ho; Duran, Rafael; Zhao, Yan; Fleckenstein, Florian; Chapiro, Julius; Sahu, Sonia P.; Schernthaner, Rüdiger E.; Qian, Tianchen; Lee, Howard; Zhao, Li; Hamilton, James; Frangakis, Constantine; Lin, MingDe; Salem, Riad; Geschwind, Jean-Francois
2018-01-01
Background & Aims There is debate over the best way to stage hepatocellular carcinoma (HCC). We attempted to validate the prognostic and clinical utility of the recently developed Hong Kong Liver Cancer (HKLC) staging system, a hepatitis B-based model, and compared data with that from the Barcelona Clinic Liver Cancer (BCLC) staging system in a North American population who underwent intra-arterial therapy (IAT). Methods We performed a retrospective analysis of data from 1009 patients with HCC who underwent intra-arterial therapy from 2000 through 2014. Most patients had hepatitis C or unresectable tumors; all patients underwent IAT, with or without resection, transplantation, and/or systemic chemotherapy. We calculated HCC stage for each patient using 5-stage HKLC (HKLC-5) and 9-stage HKLC (HKLC-9) system classifications, as well as the BCLC system. Survival information was collected up until end of 2014 at which point living or unconfirmed patients were censored. We compared performance of the BCLC, HKLC-5, and HKLC-9 systems in predicting patient outcomes using Kaplan-Meier estimates, calibration plots, c-statistic, Akaike information criterion, and the likelihood ratio test. Results Median overall survival time, calculated from first IAT until date of death or censorship, for the entire cohort (all stages) was 9.8 months. The BCLC and HKLC staging systems predicted patient survival times with significance (P<.001). HKLC-5 and HKLC-9 each demonstrated good calibration. The HKLC-5 system outperformed the BCLC system in predicting patient survival times (HKLC c=0.71, Akaike information criterion=6242; BCLC c=0.64, Akaike information criterion=6320), reducing error in predicting survival time (HKLC reduced error by 14%, BCLC reduced error by 12%), and homogeneity (HKLC χ2=201; P<.001; BCLC χ2=119; P<.001) and monotonicity (HKLC linear trend χ2=193; P<.001; BCLC linear trend χ2=111; P<.001). Small proportions of patients with HCC of stages IV or V, according to the HKLC system, survived for 6 months and 4 months, respectively. Conclusion In a retrospective analysis of patients who underwent IAT for unresectable HCC, we found the HKLC-5 staging system to have the best combination of performances in survival separation, calibration, and discrimination; it consistently outperformed the BCLC system in predicting survival times of patients. The HKLC system identified patients with HCC of stages IV and V who are unlikely to benefit from IAT. PMID:27847278
Sohn, Jae Ho; Duran, Rafael; Zhao, Yan; Fleckenstein, Florian; Chapiro, Julius; Sahu, Sonia; Schernthaner, Rüdiger E; Qian, Tianchen; Lee, Howard; Zhao, Li; Hamilton, James; Frangakis, Constantine; Lin, MingDe; Salem, Riad; Geschwind, Jean-Francois
2017-05-01
There is debate over the best way to stage hepatocellular carcinoma (HCC). We attempted to validate the prognostic and clinical utility of the recently developed Hong Kong Liver Cancer (HKLC) staging system, a hepatitis B-based model, and compared data with that from the Barcelona Clinic Liver Cancer (BCLC) staging system in a North American population that underwent intra-arterial therapy (IAT). We performed a retrospective analysis of data from 1009 patients with HCC who underwent IAT from 2000 through 2014. Most patients had hepatitis C or unresectable tumors; all patients underwent IAT, with or without resection, transplantation, and/or systemic chemotherapy. We calculated HCC stage for each patient using 5-stage HKLC (HKLC-5) and 9-stage HKLC (HKLC-9) system classifications, and the BCLC system. Survival information was collected up until the end of 2014 at which point living or unconfirmed patients were censored. We compared performance of the BCLC, HKLC-5, and HKLC-9 systems in predicting patient outcomes using Kaplan-Meier estimates, calibration plots, C statistic, Akaike information criterion, and the likelihood ratio test. Median overall survival time, calculated from first IAT until date of death or censorship, for the entire cohort (all stages) was 9.8 months. The BCLC and HKLC staging systems predicted patient survival times with significance (P < .001). HKLC-5 and HKLC-9 each demonstrated good calibration. The HKLC-5 system outperformed the BCLC system in predicting patient survival times (HKLC C = 0.71, Akaike information criterion = 6242; BCLC C = 0.64, Akaike information criterion = 6320), reducing error in predicting survival time (HKLC reduced error by 14%, BCLC reduced error by 12%), and homogeneity (HKLC chi-square = 201, P < .001; BCLC chi-square = 119, P < .001) and monotonicity (HKLC linear trend chi-square = 193, P < .001; BCLC linear trend chi-square = 111, P < .001). Small proportions of patients with HCC of stages IV or V, according to the HKLC system, survived for 6 months and 4 months, respectively. In a retrospective analysis of patients who underwent IAT for unresectable HCC, we found the HKLC-5 staging system to have the best combination of performances in survival separation, calibration, and discrimination; it consistently outperformed the BCLC system in predicting survival times of patients. The HKLC system identified patients with HCC of stages IV and V who are unlikely to benefit from IAT. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
Kalman filtered MR temperature imaging for laser induced thermal therapies.
Fuentes, D; Yung, J; Hazle, J D; Weinberg, J S; Stafford, R J
2012-04-01
The feasibility of using a stochastic form of the Pennes bioheat model within a 3-D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L2 (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, ∆t < 10 s, until the data corruption subsides. In its present form, the KF-MRTI method currently fails to compensate for consecutive time periods of data loss ∆t > 10 s.
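The abstract does not include the authors' 3-D finite-element implementation, so the following is only a minimal, illustrative Python sketch of the underlying idea: a scalar Kalman filter that keeps propagating a simple first-order cooling model when thermometry samples drop out. The model constants, noise levels, and data are assumptions, not values from the study.

# Minimal sketch (not the authors' implementation): a scalar Kalman filter that keeps
# estimating a voxel temperature from a simple first-order cooling model when MRTI
# samples are missing. The constants (tau, T_base), noise levels, and synthetic data
# are illustrative assumptions only.
import numpy as np

dt, tau, T_base = 1.0, 30.0, 37.0        # time step [s], cooling constant [s], baseline [deg C]
a = np.exp(-dt / tau)                     # discrete-time decay factor
Q, R = 0.05, 0.25                         # process and measurement noise variances

def kalman_track(measurements):
    """measurements: list of floats or None (None = MRTI data loss)."""
    x, P = T_base, 1.0                    # initial state estimate and variance
    estimates = []
    for z in measurements:
        # predict: relax toward baseline according to the cooling model
        x = T_base + a * (x - T_base)
        P = a * a * P + Q
        if z is not None:                 # update only when a measurement exists
            K = P / (P + R)               # Kalman gain
            x = x + K * (z - x)
            P = (1.0 - K) * P
        estimates.append(x)
    return estimates

# usage: a heating curve with a 3-sample dropout in the middle
true = [37, 39, 42, 45, 47, 48, 49, 49, 48, 47]
meas = [t + np.random.normal(0, 0.5) for t in true]
meas[4:7] = [None, None, None]
print(kalman_track(meas))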
Tamburini, Elena; Tagliati, Chiara; Bonato, Tiziano; Costa, Stefania; Scapoli, Chiara; Pedrini, Paola
2016-01-01
Near-infrared spectroscopy (NIRS) has been widely used for quantitative and/or qualitative determination of a wide range of matrices. The objective of this study was to develop a NIRS method for the quantitative determination of fluorine content in polylactide (PLA)-talc blends. A blending profile was obtained by mixing different amounts of PLA granules and talc powder. The calibration model was built correlating wet chemical data (alkali digestion method) and NIR spectra. Using the FT (Fourier Transform)-NIR technique, a Partial Least Squares (PLS) regression model was set up, in a concentration interval from 0 ppm (pure PLA) to 800 ppm (pure talc). Fluorine content prediction (R2cal = 0.9498; standard error of calibration, SEC = 34.77; standard error of cross-validation, SECV = 46.94) was then externally validated by means of a further 15 independent samples (R2EX.V = 0.8955; root mean square error of prediction, RMSEP = 61.08). A positive relationship between an inorganic component such as fluorine and the NIR signal was demonstrated and used to obtain quantitative analytical information from the spectra. PMID:27490548
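As an illustration of the calibration/validation workflow described above, here is a hedged Python sketch using scikit-learn's PLS regression; the spectra, reference values, and number of latent variables are placeholders rather than the study's data or settings.

# Illustrative PLS calibration/validation workflow; data and settings are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X_cal = rng.normal(size=(60, 200))        # calibration NIR spectra (60 samples x 200 wavelengths)
y_cal = rng.uniform(0, 800, size=60)      # wet-chemistry fluorine reference values [ppm]
X_val = rng.normal(size=(15, 200))        # 15 independent external-validation spectra
y_val = rng.uniform(0, 800, size=15)

pls = PLSRegression(n_components=8).fit(X_cal, y_cal)

sec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))      # standard error of calibration
y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
secv = np.sqrt(np.mean((y_cv - y_cal) ** 2))                           # cross-validation error
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))    # external prediction error
print(f"SEC={sec:.1f}  SECV={secv:.1f}  RMSEP={rmsep:.1f}")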
NASA Technical Reports Server (NTRS)
Doggett, Leroy E.; Schaefer, Bradley E.
1994-01-01
We report the results of five Moonwatches, in which more than 2000 observers throughout North America attempted to sight the thin lunar crescent. For each Moonwatch we were able to determine the position of the Lunar Date Line (LDL), the line along which a normal observer has a 50% probability of spotting the Moon. The observational LDLs were then compared with predicted LDLs derived from crescent visibility prediction algorithms. We find that ancient and medieval rules are highly unreliable. More recent empirical criteria, based on the relative altitude and azimuth of the Moon at the time of sunset, have a reasonable accuracy, with the best specific formulation being due to Yallop. The modern theoretical model by Schaefer (based on the physiology of the human eye and the local observing conditions) is found to have the least systematic error, the least average error, and the least maximum error of all models tested. Analysis of the observations also provided information about atmospheric, optical and human factors that affect the observations. We show that observational lunar calendars have a natural bias to begin early.
Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De
2016-01-01
The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
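A hedged Python sketch of a LASSO-then-classifier pipeline with fivefold cross-validation, in the spirit of the LASSO-SVM model above; the feature matrix, labels, and hyperparameters are synthetic stand-ins, not the TEJ data or the authors' exact configuration.

# Sketch of a LASSO variable-selection step followed by an SVM classifier,
# evaluated with fivefold cross-validation; data and hyperparameters are invented.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(1)
X = rng.normal(size=(172, 40))                     # 172 firms x 40 candidate financial ratios
y = np.r_[np.ones(48), np.zeros(124)]              # 48 GCD vs 124 non-GCD firms

model = Pipeline([
    ("scale", StandardScaler()),
    ("lasso_select", SelectFromModel(LassoCV(cv=5))),   # keep variables with nonzero LASSO weights
    ("svm", SVC(kernel="rbf", C=1.0)),
])
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"fivefold CV accuracy: {acc.mean():.3f}")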
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew
Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST for a 1.5 MW turbine. The impact of lidar turbulence error on the predicted power from these different models is examined to determine the degree of turbulence measurement accuracy needed for accurate power prediction.
Liu, Wei; Du, Peijun; Wang, Dongchen
2015-01-01
One important method to obtain the continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend value for soil potassium contents at the Qinghai Lake region in China based on measured values. Then, based on soil types, geology types, land use types, and slope data, the remaining residual was simulated with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium contents at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the data produced by the other models tested in this paper. Notably, the EL-SP maps can describe more locally detailed information and more accurate spatial patterns for soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundary information for soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors, but it also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.
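The trend-plus-residual idea behind EL-SP can be sketched as follows; the data, covariates, and the choice of a random forest as the ensemble learner are illustrative assumptions, not the paper's implementation.

# Rough sketch of a trend-plus-residual interpolation: fit a spatial trend for soil
# potassium, model the residual with an ensemble learner on categorical environmental
# covariates, and add the two parts back together at prediction time. All values invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 300
coords = rng.uniform(0, 100, size=(n, 2))                 # sample locations (x, y)
env = rng.integers(0, 4, size=(n, 4))                     # soil / geology / land-use / slope classes
k_obs = 20 + 0.1 * coords[:, 0] + rng.normal(0, 2, n)     # observed potassium content

trend_model = LinearRegression().fit(coords, k_obs)       # 1) large-scale spatial trend
residual = k_obs - trend_model.predict(coords)
resid_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(env, residual)  # 2) residual vs environment

def predict_k(new_coords, new_env):
    return trend_model.predict(new_coords) + resid_model.predict(new_env)

print(predict_k(coords[:5], env[:5]))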
Error disclosure: a new domain for safety culture assessment.
Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J
2012-07-01
To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors to measure error disclosure culture: one factor is focused on the general culture of error disclosure and the second factor is focused on trust. Both error disclosure culture factors were unique from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively) while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
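In generic form (the notation below is assumed for illustration, not taken from the letter), the two error models can be written as:

\begin{align}
  \text{additive:}       \quad & Y = X + a + \varepsilon, & \varepsilon &\sim \mathcal{N}(0,\sigma^2) \\
  \text{multiplicative:} \quad & Y = a\,X^{\,b}\,\varepsilon
    \;\;\Longleftrightarrow\;\; \log Y = \log a + b\,\log X + \log\varepsilon
\end{align}

Here Y is the satellite estimate, X the reference measurement, a and b capture the systematic error, and ε the random error; the multiplicative model becomes additive in log space, which is what allows it to accommodate the large dynamic range of daily precipitation.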
Are phonological influences on lexical (mis)selection the result of a monitoring bias?
Ratinckx, Elie; Ferreira, Victor S.; Hartsuiker, Robert J.
2009-01-01
A monitoring bias account is often used to explain speech error patterns that seem to be the result of an interactive language production system, like phonological influences on lexical selection errors. A biased monitor is suggested to detect and covertly correct certain errors more often than others. For instance, this account predicts that errors which are phonologically similar to intended words are harder to detect than ones that are phonologically dissimilar. To test this, we tried to elicit phonological errors under the same conditions that show other kinds of lexical selection errors. In five experiments, we presented participants with high cloze probability sentence fragments followed by a picture that was either semantically related, a homophone of a semantically related word, or phonologically related to the (implicit) last word of the sentence. All experiments elicited semantic completions or homophones of semantic completions, but none elicited phonological completions. This finding is hard to reconcile with a monitoring bias account and is better explained with an interactive production system. Additionally, this finding constrains the amount of bottom-up information flow in interactive models. PMID:18942035
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Multi-layer Retrievals of Greenhouse Gases from a Combined Use of GOSAT TANSO-FTS SWIR and TIR
NASA Astrophysics Data System (ADS)
Kikuchi, N.; Kuze, A.; Kataoka, F.; Shiomi, K.; Hashimoto, M.; Suto, H.; Knuteson, R. O.; Iraci, L. T.; Yates, E. L.; Gore, W.; Tanaka, T.; Yokota, T.
2016-12-01
The TANSO-FTS sensor onboard GOSAT has three frequency bands in the shortwave infrared (SWIR) and the fourth band in the thermal infrared (TIR). Observations of high-resolution spectra of reflected sunlight in the SWIR are extensively utilized to retrieve column-averaged concentrations of the major greenhouse gases such as carbon dioxide (XCO2) and methane (XCH4). Although global XCO2 and XCH4 distribution retrieved from SWIR data can reduce the uncertainty in the current knowledge about sources and sinks of these gases, information on the vertical profiles would be more useful to constrain the surface flux and also to identify the local emission sources. Based on the degrees of freedom for signal, Kulawik et al. (2016, IWGGMS-12 presentation) show that 2-layer information on the concentration of CO2 can be extracted from TANSO-FTS SWIR measurements, and the retrieval error is predicted to be about 5 ppm in the lower troposphere. In this study, we present multi-layer retrievals of CO2 and CH4 from a combined use of measurements of TANSO-FTS SWIR and TIR. We selected GOSAT observations at Railroad Valley Playa in Nevada, USA, which is a vicarious calibration site for TANSO-FTS, as we have various ancillary data including atmospheric temperature and humidity taken by a radiosonde, surface temperature, and surface emissivity with a ground-based FTS. All of these data are useful especially for retrievals using TIR spectra. Currently, we use the 700-800 cm-1 and 1200-1300 cm-1 TIR windows for CO2 and CH4 retrievals, respectively, in addition to the SWIR bands. We found that by adding TIR windows, 3-layer information can be extracted, and the predicted retrieval error in the CO2 concentration was reduced by about 1 ppm in the lower troposphere. We expect that the retrieval error could be further reduced by optimizing TIR windows and by reducing systematic forward model errors.
NASA Astrophysics Data System (ADS)
Chen, K.; Y Zhang, T.; Zhang, F.; Zhang, Z. R.
2017-12-01
Grey system theory takes as its research object uncertain systems in which information is partly known and partly unknown, extracts useful information from the known part, and thereby reveals the potential variation rules of the system. To investigate the applicability of this data-driven modelling method to fitting and predicting the melting peak temperature (Tm) of polypropylene (PP) during ultraviolet radiation aging, the Tm of homo-polypropylene after different ultraviolet radiation exposure times, measured by differential scanning calorimetry, was fitted and predicted with a grey GM(1, 1) model based on grey system theory. The results show that the Tm of PP declines as aging time increases, and the fitting and prediction equation obtained by the grey GM(1, 1) model is Tm = 166.567472exp(-0.00012t). The fit of this equation is excellent, and the maximum relative error between predicted and actual Tm values is 0.32%. Grey system theory needs less original data, has high prediction accuracy, and can be used to predict the aging behaviour of PP.
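A bare-bones GM(1, 1) implementation in Python, shown only to illustrate the accumulated-generating-operation and least-squares steps behind equations of the form above; the sample series is invented, not the paper's DSC data.

# Minimal GM(1,1) fit: accumulate the series, estimate the develop coefficient a and
# grey input b by least squares, then predict via the exponential response and
# inverse accumulation. Sample data are placeholders.
import numpy as np

def gm11(x0):
    """Fit GM(1,1) to a 1-D series x0 and return a function predicting x0 at step k (0-based)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]      # develop coefficient a, grey input b

    def predict(k):                                  # k = 0 returns the first observation
        if k == 0:
            return x0[0]
        x1_k  = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_k1 = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return x1_k - x1_k1                          # inverse AGO
    return predict

# usage: fit to a slowly declining series and reproduce/extrapolate it
series = [166.5, 166.3, 166.1, 165.9, 165.7]
f = gm11(series)
print([round(f(k), 2) for k in range(6)])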
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2012-01-01
Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors were used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school-age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137
An introduction to high speed aircraft noise prediction
NASA Technical Reports Server (NTRS)
Wilson, Mark R.
1992-01-01
The Aircraft Noise Prediction Program's High Speed Research prediction system (ANOPP-HSR) is introduced. This mini-manual is an introduction which gives a brief overview of the ANOPP system and the components of the HSR prediction method. ANOPP information resources are given. Twelve of the most common ANOPP-HSR control statements are described. Each control statement's purpose and format are stated and relevant examples are provided. More detailed examples of the use of the control statements are presented in the manual along with ten ANOPP-HSR templates. The purpose of the templates is to provide the user with working ANOPP-HSR programs which can be modified to serve particular prediction requirements. Also included in this manual is a brief discussion of common errors and how to solve these problems. The appendices include the following useful information: a summary of all ANOPP-HSR functional research modules, a data unit directory, a discussion of one of the more complex control statements, and input data unit and table examples.
NASA Astrophysics Data System (ADS)
Bukhari, W.; Hong, S.-M.
2016-03-01
The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit at the cost of reduced duty cycle. The error reduction allows the clinical target volume to planning target volume (CTV-PTV) margin to be reduced, leading to decreased normal-tissue toxicity and possible dose escalation. The CTV-PTV margin is also evaluated to quantify clinical benefits of EKF-GPRN+ prediction.
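The gating rule described above reduces to a threshold test on the trace of the predictive covariance; the schematic Python sketch below uses placeholder threshold and covariance values rather than the paper's calibrated settings.

# Schematic of the gating idea: hold the treatment beam whenever the trace of the
# predictive covariance (the algorithm's uncertainty about its own prediction)
# exceeds a threshold. Threshold and covariance values are placeholders.
import numpy as np

def beam_on(pred_cov, threshold_mm2=4.0):
    """pred_cov: 3x3 predictive covariance of the predicted target position [mm^2]."""
    return np.trace(pred_cov) <= threshold_mm2

cov_confident = np.diag([0.5, 0.4, 0.6])   # low uncertainty -> treat
cov_uncertain = np.diag([2.0, 1.5, 1.8])   # high uncertainty -> hold the beam
print(beam_on(cov_confident), beam_on(cov_uncertain))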
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.
Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P
2016-04-15
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluated prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction was used to evaluate predictive performance of subspace and global models. The root mean square error of prediction was computed using a one-third-holdout validation set. The effect of pretreating spectra with different methods was tested for the 1st and 2nd derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate and standard normal variate followed by detrending methods. In summary, the results show that global models outperformed the subspace models. We therefore conclude that global models are more accurate than the local models except in a few cases. For instance, sand and clay root mean square error values from local models based on the archetypal analysis method were 50% poorer than those of the global models, except for subspace models obtained using multiplicative scatter corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
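Two of the pretreatments mentioned above (standard normal variate and a Savitzky-Golay first derivative) and the one-third-holdout RMSEP check can be sketched as follows; the spectra, reference values, and the use of a PLS model as the calibration stand-in are assumptions for illustration.

# Sketch of SNV and Savitzky-Golay derivative pretreatment plus a one-third-holdout
# RMSEP check; spectra and reference values are synthetic placeholders, and the PLS
# model stands in for whatever calibration model is used.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

rng = np.random.default_rng(3)
X = rng.normal(size=(1907, 300))                     # mid-infrared spectra (placeholder)
y = rng.uniform(4, 9, size=1907)                     # e.g. soil pH reference values (placeholder)

X_pre = savgol_filter(snv(X), window_length=11, polyorder=2, deriv=1, axis=1)
X_cal, X_hold, y_cal, y_hold = train_test_split(X_pre, y, test_size=1/3, random_state=0)

model = PLSRegression(n_components=10).fit(X_cal, y_cal)
rmsep = np.sqrt(np.mean((model.predict(X_hold).ravel() - y_hold) ** 2))
print(f"RMSEP on one-third holdout: {rmsep:.3f}")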
Predicting forest insect flight activity: A Bayesian network approach
Pawson, Stephen M.; Marcot, Bruce G.; Woodberry, Owen G.
2017-01-01
Daily flight activity patterns of forest insects are influenced by temporal and meteorological conditions. Temperature and time of day are frequently cited as key drivers of activity; however, complex interactions between multiple contributing factors have also been proposed. Here, we report individual Bayesian network models to assess the probability of flight activity of three exotic insects, Hylurgus ligniperda, Hylastes ater, and Arhopalus ferus, in a managed plantation forest context. Models were built from 7,144 individual hours of insect sampling, temperature, wind speed, relative humidity, photon flux density, and temporal data. Discretized meteorological and temporal variables were used to build naïve Bayes tree-augmented networks. Calibration results suggested that the H. ater and A. ferus Bayesian network models had the best fit for low Type I and overall errors, and H. ligniperda had the best fit for low Type II errors. Maximum hourly temperature and time since sunrise had the largest influence on H. ligniperda flight activity predictions, whereas time of day and year had the greatest influence on H. ater and A. ferus activity. Type II model errors for the prediction of no flight activity are reduced by increasing the model’s predictive threshold. Improvements in model performance can be made by further sampling, increasing the sensitivity of the flight intercept traps, and replicating sampling in other regions. Predicting insect flight informs an assessment of the potential phytosanitary risks of wood exports. Quantifying this risk allows mitigation treatments to be targeted to prevent the spread of invasive species via international trade pathways. PMID:28953904
Estimating anesthesia and surgical procedure times from medicare anesthesia claims.
Silber, Jeffrey H; Rosenbaum, Paul R; Zhang, Xuemei; Even-Shoshan, Orit
2007-02-01
Procedure times are important variables that often are included in studies of quality and efficiency. However, due to the need for costly chart review, most studies are limited to single-institution analyses. In this article, the authors describe how well the anesthesia claim from Medicare can estimate chart times. The authors abstracted information on time of induction and entrance to the recovery room ("anesthesia chart time") from the charts of 1,931 patients who underwent general and orthopedic surgical procedures in Pennsylvania. The authors then merged the associated bills from claims data supplied from Medicare (Part B data) that included a variable denoting the time in minutes for the anesthesia service. The authors also investigated the time from incision to closure ("surgical chart time") on a subset of 1,888 patients. Anesthesia claim time from Medicare was highly predictive of anesthesia chart time (Kendall's rank correlation tau = 0.85, P < 0.0001, median absolute error = 5.1 min) but somewhat less predictive of surgical chart time (Kendall's tau = 0.73, P < 0.0001, median absolute error = 13.8 min). When predicting chart time from Medicare bills, variables reflecting procedure type, comorbidities, and hospital type did not significantly improve the prediction, suggesting that errors in predicting the chart time from the anesthesia bill time are not related to these factors; however, the individual hospital did have some influence on these estimates. Anesthesia chart time can be well estimated using Medicare claims, thereby facilitating studies with vastly larger sample sizes and much lower costs of data collection.
TOPEX/POSEIDON orbit maintenance maneuver design
NASA Technical Reports Server (NTRS)
Bhat, R. S.; Frauenholz, R. B.; Cannell, Patrick E.
1990-01-01
The Ocean Topography Experiment (TOPEX/POSEIDON) mission orbit requirements are outlined, as well as its control and maneuver spacing requirements including longitude and time targeting. A ground-track prediction model dealing with geopotential, luni-solar gravity, and atmospheric-drag perturbations is considered. Targeting with all modeled perturbations is discussed, and such ground-track prediction errors as initial semimajor axis, orbit-determination, maneuver-execution, and atmospheric-density modeling errors are assessed. A longitude targeting strategy for two extreme situations is investigated employing all modeled perturbations and prediction errors. It is concluded that atmospheric-drag modeling errors are the prevailing ground-track prediction error source early in the mission during high solar flux, and that low solar-flux levels expected late in the experiment stipulate smaller maneuver magnitudes.
NASA Technical Reports Server (NTRS)
Tuttle, M. E.; Brinson, H. F.
1986-01-01
The impact of errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
Linear and non-linear regression methods for selecting the optimum isotherm were compared using the experimental equilibrium data of basic red 9 sorption by activated carbon. The r2 was used to select the best fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r2), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), the average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two and three parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three parameter isotherms, r2 was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K2, was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
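For reference, common textbook forms of the named error functions are sketched below; the exact normalizations used in the study may differ, so treat these as illustrative definitions rather than the authors' code.

# Common forms of the error functions named above (ERRSQ, EABS, ARE, HYBRID, MPSD).
# q_e = experimental uptake, q_c = model-calculated uptake, p = number of isotherm parameters.
import numpy as np

def errsq(q_e, q_c):            return np.sum((q_e - q_c) ** 2)
def eabs(q_e, q_c):             return np.sum(np.abs(q_e - q_c))
def are(q_e, q_c):              return 100.0 / len(q_e) * np.sum(np.abs((q_e - q_c) / q_e))
def hybrid(q_e, q_c, p):        return 100.0 / (len(q_e) - p) * np.sum((q_e - q_c) ** 2 / q_e)
def mpsd(q_e, q_c, p):          return 100.0 * np.sqrt(np.sum(((q_e - q_c) / q_e) ** 2) / (len(q_e) - p))

# usage with made-up equilibrium data and a two-parameter (p = 2) isotherm fit
q_e = np.array([12.0, 25.0, 41.0, 55.0, 63.0])
q_c = np.array([11.4, 26.1, 40.2, 56.3, 61.8])
print(hybrid(q_e, q_c, p=2), mpsd(q_e, q_c, p=2))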
Dopamine prediction error responses integrate subjective value from different reward dimensions
Lak, Armin; Stauffer, William R.; Schultz, Wolfram
2014-01-01
Prediction error signals enable us to learn through experience. These experiences include economic choices between different rewards that vary along multiple dimensions. Therefore, an ideal way to reinforce economic choice is to encode a prediction error that reflects the subjective value integrated across these reward dimensions. Previous studies demonstrated that dopamine prediction error responses reflect the value of singular reward attributes that include magnitude, probability, and delay. Obviously, preferences between rewards that vary along one dimension are completely determined by the manipulated variable. However, it is unknown whether dopamine prediction error responses reflect the subjective value integrated from different reward dimensions. Here, we measured the preferences between rewards that varied along multiple dimensions, and as such could not be ranked according to objective metrics. Monkeys chose between rewards that differed in amount, risk, and type. Because their choices were complete and transitive, the monkeys chose “as if” they integrated different rewards and attributes into a common scale of value. The prediction error responses of single dopamine neurons reflected the integrated subjective value inferred from the choices, rather than the singular reward attributes. Specifically, amount, risk, and reward type modulated dopamine responses exactly to the extent that they influenced economic choices, even when rewards were vastly different, such as liquid and food. This prediction error response could provide a direct updating signal for economic values. PMID:24453218
Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what?
2017-01-01
Assessing the accuracy of predictive models is critical because predictive models have been increasingly used across various disciplines and predictive accuracy determines the quality of resultant predictions. Pearson product-moment correlation coefficient (r) and the coefficient of determination (r2) are among the most widely used measures for assessing predictive models for numerical data, although they are argued to be biased, insufficient and misleading. In this study, geometrical graphs were used to illustrate what is used in the calculation of r and r2 and simulations were used to demonstrate the behaviour of r and r2 and to compare three accuracy measures under various scenarios. Relevant confusions about r and r2 have been clarified. The calculation of r and r2 is not based on the differences between the predicted and observed values. The existing error measures suffer various limitations and are unable to indicate the accuracy. Variance explained by predictive models based on cross-validation (VEcv) is free of these limitations and is a reliable accuracy measure. Legates and McCabe’s efficiency (E1) is also an alternative accuracy measure. The r and r2 do not measure the accuracy and are incorrect accuracy measures. The existing error measures suffer limitations. VEcv and E1 are recommended for assessing the accuracy. The applications of these accuracy measures would encourage accuracy-improved predictive models to be developed to generate predictions for evidence-informed decision-making. PMID:28837692
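A short Python sketch of the two recommended measures, computed from cross-validated predictions; the formulas follow their commonly cited definitions (VEcv as a percentage of variance explained, E1 as the Legates-McCabe efficiency) and the model and data are placeholders.

# VEcv and E1 computed from cross-validated predictions; model and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def vecv(obs, pred):
    """Variance explained by cross-validated predictions, in percent."""
    return (1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)) * 100.0

def e1(obs, pred):
    """Legates and McCabe's efficiency based on absolute errors."""
    return 1.0 - np.sum(np.abs(obs - pred)) / np.sum(np.abs(obs - obs.mean()))

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 0.0]) + rng.normal(0, 0.5, 200)

pred_cv = cross_val_predict(RandomForestRegressor(random_state=0), X, y, cv=10)
print(f"VEcv = {vecv(y, pred_cv):.1f}%  E1 = {e1(y, pred_cv):.2f}")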
[Research on Kalman interpolation prediction model based on micro-region PM2.5 concentration].
Wang, Wei; Zheng, Bin; Chen, Binlin; An, Yaoming; Jiang, Xiaoming; Li, Zhangyong
2018-02-01
In recent years, particulate matter pollution, especially PM2.5, has become increasingly serious and has attracted attention worldwide. In this paper, a Kalman prediction model combined with cubic spline interpolation is proposed and applied to predict the concentration of PM2.5 in the micro-regional environment of a campus, to produce interpolated maps of PM2.5 concentration, and to simulate its spatial distribution. The experimental data come from the environmental information monitoring system set up by our laboratory. The predicted and actual PM2.5 concentrations were compared using the Wilcoxon signed-rank test; the two-tailed asymptotic significance probability was 0.527, which is much greater than the significance level α = 0.05. The mean absolute error (MAE) of the Kalman prediction model was 1.8 μg/m3, the average relative error (MER) was 6%, and the correlation coefficient R was 0.87. Thus, the Kalman prediction model predicted PM2.5 concentration better than the back propagation (BP) and support vector machine (SVM) predictions. In addition, by combining the Kalman prediction model with the spline interpolation method, the spatial distribution and local pollution characteristics of PM2.5 can be simulated.
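A minimal sketch of the two components described above: cubic-spline interpolation across monitoring points and the simple error statistics used for checking predictions. Sensor positions, concentrations, and held-out values are invented for illustration.

# Cubic-spline interpolation of PM2.5 readings plus MAE and mean relative error checks;
# all values are placeholders, not the campus monitoring data.
import numpy as np
from scipy.interpolate import CubicSpline

x_sensors = np.array([0.0, 50.0, 120.0, 200.0, 260.0])        # positions along a transect [m]
pm25      = np.array([38.0, 42.0, 55.0, 47.0, 40.0])          # measured PM2.5 [ug/m^3]

spline = CubicSpline(x_sensors, pm25)
x_grid = np.linspace(0, 260, 27)
print(np.round(spline(x_grid), 1))                             # simulated spatial distribution

def mae(actual, predicted):
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))

def mean_relative_error(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs(actual - predicted) / actual)

actual    = [41.0, 44.5, 52.0]                                 # held-out measurements (placeholder)
predicted = [39.5, 45.2, 54.0]                                 # model predictions (placeholder)
print(mae(actual, predicted), mean_relative_error(actual, predicted))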
Remotely Sensed Rice Yield Prediction Using Multi-Temporal NDVI Data Derived from NOAA's-AVHRR
Huang, Jingfeng; Wang, Xiuzhen; Li, Xinxing; Tian, Hanqin; Pan, Zhuokun
2013-01-01
Grain-yield prediction using remotely sensed data have been intensively studied in wheat and maize, but such information is limited in rice, barley, oats and soybeans. The present study proposes a new framework for rice-yield prediction, which eliminates the influence of the technology development, fertilizer application, and management improvement and can be used for the development and implementation of provincial rice-yield predictions. The technique requires the collection of remotely sensed data over an adequate time frame and a corresponding record of the region's crop yields. Longer normalized-difference-vegetation-index (NDVI) time series are preferable to shorter ones for the purposes of rice-yield prediction because the well-contrasted seasons in a longer time series provide the opportunity to build regression models with a wide application range. A regression analysis of the yield versus the year indicated an annual gain in the rice yield of 50 to 128 kg ha−1. Stepwise regression models for the remotely sensed rice-yield predictions have been developed for five typical rice-growing provinces in China. The prediction models for the remotely sensed rice yield indicated that the influences of the NDVIs on the rice yield were always positive. The association between the predicted and observed rice yields was highly significant without obvious outliers from 1982 to 2004. Independent validation found that the overall relative error is approximately 5.82%, and a majority of the relative errors were less than 5% in 2005 and 2006, depending on the study area. The proposed models can be used in an operational context to predict rice yields at the provincial level in China. The methodologies described in the present paper can be applied to any crop for which a sufficient time series of NDVI data and the corresponding historical yield information are available, as long as the historical yield increases significantly. PMID:23967112
Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C
2014-03-01
Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further support that the computations that underlie human learning and decision-making follow reinforcement learning principles.
Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco
2014-11-15
The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices and provide tools for planners for decision making support. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for simulation of nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) for lysimeter data of the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008 in which only soil hydraulic characteristics, meteorological data and information about the agricultural management were accessible; 2) a calibration for the same period in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011 with the corresponding type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well for all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulties in accurately predicting leaching in low-input farming systems that are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee a good predictive power of the model. Nevertheless, all models were able to identify years and crops with high- and low-leaching rates. Copyright © 2014 Elsevier B.V. All rights reserved.
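Standard forms of three of the less common metrics listed above (index of agreement, model efficiency, root relative squared error) are sketched below; the study may define them slightly differently, so this is only an illustrative reference implementation.

# Reference implementations of three evaluation metrics; observed and simulated values are placeholders.
import numpy as np

def index_of_agreement(obs, sim):
    """Willmott's index of agreement d."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)

def model_efficiency(obs, sim):
    """Nash-Sutcliffe model efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rrse(obs, sim):
    """Root relative squared error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = [12.0, 30.0, 8.0, 45.0, 20.0]               # e.g. measured nitrate leaching (placeholder)
sim = [15.0, 26.0, 10.0, 40.0, 24.0]              # simulated values (placeholder)
print(index_of_agreement(obs, sim), model_efficiency(obs, sim), rrse(obs, sim))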
McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.
2016-01-01
The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
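A toy Python illustration of the dissociation exploited in the study: for a skewed weight distribution, the prediction that minimizes expected squared error is the distribution mean, whereas the maximum a posteriori prediction is the most frequent weight. The weights and probabilities below are invented for illustration.

# For a skewed distribution over possible object weights, the squared-error-minimizing
# prediction (the mean) and the maximum a posteriori prediction (the mode) diverge,
# which is what lets the two strategies be dissociated behaviourally. Values invented.
import numpy as np

weights = np.array([2.0, 3.0, 4.0, 7.0])          # possible object weights [N]
probs   = np.array([0.55, 0.25, 0.15, 0.05])      # skewed probability of each weight

mse_optimal = np.sum(probs * weights)             # minimizes E[(weight - prediction)^2]
map_optimal = weights[np.argmax(probs)]           # most likely single weight

print(f"minimal-squared-error prediction: {mse_optimal:.2f} N")
print(f"maximum a posteriori prediction:  {map_optimal:.2f} N")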
Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ulbrich, N.; L'Esperance, A.
2017-01-01
A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
Error and Error Mitigation in Low-Coverage Genome Assemblies
Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam
2011-01-01
The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.
Reinhart, Robert M G; Zhu, Julia; Park, Sohee; Woodman, Geoffrey F
2015-09-02
Posterror learning, associated with medial-frontal cortical recruitment in healthy subjects, is compromised in neuropsychiatric disorders. Here we report novel evidence for the mechanisms underlying learning dysfunctions in schizophrenia. We show that, by noninvasively passing direct current through human medial-frontal cortex, we could enhance the event-related potential related to learning from mistakes (i.e., the error-related negativity), a putative index of prediction error signaling in the brain. Following this causal manipulation of brain activity, the patients learned a new task at a rate that was indistinguishable from healthy individuals. Moreover, the severity of delusions interacted with the efficacy of the stimulation to improve learning. Our results demonstrate a causal link between disrupted prediction error signaling and inefficient learning in schizophrenia. These findings also demonstrate the feasibility of nonpharmacological interventions to address cognitive deficits in neuropsychiatric disorders. When there is a difference between what we expect to happen and what we actually experience, our brains generate a prediction error signal, so that we can map stimuli to responses and predict outcomes accurately. Theories of schizophrenia implicate abnormal prediction error signaling in the cognitive deficits of the disorder. Here, we combine noninvasive brain stimulation with large-scale electrophysiological recordings to establish a causal link between faulty prediction error signaling and learning deficits in schizophrenia. We show that it is possible to improve learning rate, as well as the neural signature of prediction error signaling, in patients to a level quantitatively indistinguishable from that of healthy subjects. The results provide mechanistic insight into schizophrenia pathophysiology and suggest a future therapy for this condition. Copyright © 2015 the authors 0270-6474/15/3512232-09$15.00/0.
DeGuzman, Marisa; Shott, Megan E; Yang, Tony T; Riederer, Justin; Frank, Guido K W
2017-06-01
Anorexia nervosa is a psychiatric disorder of unknown etiology. Understanding associations between behavior and neurobiology is important in treatment development. Using a novel monetary reward task during functional magnetic resonance brain imaging, the authors tested how brain reward learning in adolescent anorexia nervosa changes with weight restoration. Female adolescents with anorexia nervosa (N=21; mean age, 16.4 years [SD=1.9]) underwent functional MRI (fMRI) before and after treatment; similarly, healthy female control adolescents (N=21; mean age, 15.2 years [SD=2.4]) underwent fMRI on two occasions. Brain function was tested using the reward prediction error construct, a computational model for reward receipt and omission related to motivation and neural dopamine responsiveness. Compared with the control group, the anorexia nervosa group exhibited greater brain response 1) for prediction error regression within the caudate, ventral caudate/nucleus accumbens, and anterior and posterior insula, 2) to unexpected reward receipt in the anterior and posterior insula, and 3) to unexpected reward omission in the caudate body. Prediction error and unexpected reward omission response tended to normalize with treatment, while unexpected reward receipt response remained significantly elevated. Greater caudate prediction error response when underweight was associated with lower weight gain during treatment. Punishment sensitivity correlated positively with ventral caudate prediction error response. Reward system responsiveness is elevated in adolescent anorexia nervosa when underweight and after weight restoration. Heightened prediction error activity in brain reward regions may represent a phenotype of adolescent anorexia nervosa that does not respond well to treatment. Prediction error response could be a neurobiological marker of illness severity that can indicate individual treatment needs.
DeGuzman, Marisa; Shott, Megan E.; Yang, Tony T.; Riederer, Justin; Frank, Guido K.W.
2017-01-01
Objective Anorexia nervosa is a psychiatric disorder of unknown etiology. Understanding associations between behavior and neurobiology is important in treatment development. Using a novel monetary reward task during functional magnetic resonance brain imaging, the authors tested how brain reward learning in adolescent anorexia nervosa changes with weight restoration. Method Female adolescents with anorexia nervosa (N=21; mean age, 15.2 years [SD=2.4]) underwent functional MRI (fMRI) before and after treatment; similarly, healthy female control adolescents (N=21; mean age, 16.4 years [SD=1.9]) underwent fMRI on two occasions. Brain function was tested using the reward prediction error construct, a computational model for reward receipt and omission related to motivation and neural dopamine responsiveness. Results Compared with the control group, the anorexia nervosa group exhibited greater brain response 1) for prediction error regression within the caudate, ventral caudate/nucleus accumbens, and anterior and posterior insula, 2) to unexpected reward receipt in the anterior and posterior insula, and 3) to unexpected reward omission in the caudate body. Prediction error and unexpected reward omission response tended to normalize with treatment, while unexpected reward receipt response remained significantly elevated. Greater caudate prediction error response when underweight was associated with lower weight gain during treatment. Punishment sensitivity correlated positively with ventral caudate prediction error response. Conclusions Reward system responsiveness is elevated in adolescent anorexia nervosa when underweight and after weight restoration. Heightened prediction error activity in brain reward regions may represent a phenotype of adolescent anorexia nervosa that does not respond well to treatment. Prediction error response could be a neurobiological marker of illness severity that can indicate individual treatment needs. PMID:28231717
Schmitter-Edgecombe, Maureen; Parsey, Carolyn M.
2014-01-01
Objective There is currently limited understanding of the course of change in everyday functioning that occurs with normal aging and dementia. To better characterize the nature of this change, we evaluated the types of errors made by participants as they performed everyday tasks in a naturalistic environment. Method Participants included cognitively healthy younger adults (YA; N = 55) and older adults (OA; N =88), and individuals with mild cognitive impairment (MCI: N =55) and dementia (N = 18). Participants performed eight scripted everyday activities (e.g., filling a medication dispenser) while under direct observation in a campus apartment. Task performances were coded for the following errors: inefficient actions, omissions, substitutions, and irrelevant actions. Results Performance accuracy decreased with age and level of cognitive impairment. Relative to the YAs, the OA group exhibited more inefficient actions which were linked to performance on neuropsychological measures of executive functioning. Relative to the OAs, the MCI group committed significantly more omission errors which were strongly linked to performance on memory measures. All error types were significantly more prominent in individuals with dementia. Omission errors uniquely predicted everyday functional status as measured by both informant-report and a performance-based measure. Conclusions These findings suggest that in the progression from healthy aging to MCI, everyday task difficulties may evolve from task inefficiencies to task omission errors, leading to inaccuracies in task completion that are recognized by knowledgeable informants. Continued decline in cognitive functioning then leads to more substantial everyday errors, which compromise ability to live independently. PMID:24933485
Checa, Purificación; Castellanos, M. C.; Abundis-Gutiérrez, Alicia; Rosario Rueda, M.
2014-01-01
Regulation of thoughts and behavior requires attention, particularly when there is conflict between alternative responses or when errors are to be prevented or corrected. Conflict monitoring and error processing are functions of the executive attention network, a neurocognitive system that greatly matures during childhood. In this study, we examined the development of brain mechanisms underlying conflict and error processing with event-related potentials (ERPs), and explored the relationship between brain function and individual differences in the ability to self-regulate behavior. Three groups of children aged 4–6, 7–9, and 10–13 years, and a group of adults performed a child-friendly version of the flanker task while ERPs were registered. Marked developmental changes were observed in both conflict processing and brain reactions to errors. After controlling by age, higher self-regulation skills are associated with smaller amplitude of the conflict effect but greater amplitude of the error-related negativity. Additionally, we found that electrophysiological measures of conflict and error monitoring predict individual differences in impulsivity and the capacity to delay gratification. These findings inform of brain mechanisms underlying the development of cognitive control and self-regulation. PMID:24795676
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
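The core of the approach, selecting model terms automatically with a prediction error metric rather than by fit error alone, can be illustrated with a small least-squares sketch. This is a minimal illustration rather than the NASA implementation: it uses ordinary (non-orthogonalized) polynomial terms, synthetic two-factor data, and a leave-one-out PRESS statistic as the prediction error metric; the variable names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wind-tunnel-style data: two factors and a noisy response.
x1 = rng.uniform(-1, 1, 60)          # e.g. normalized angle of attack (assumption)
x2 = rng.uniform(-1, 1, 60)          # e.g. normalized control deflection (assumption)
y = 0.2 + 1.5 * x1 - 0.8 * x2 + 0.6 * x1 * x2 + 0.05 * rng.standard_normal(60)

# Candidate multivariate terms up to second order.
candidates = {
    "1": np.ones_like(x1), "x1": x1, "x2": x2,
    "x1*x2": x1 * x2, "x1^2": x1 ** 2, "x2^2": x2 ** 2,
}

def press(X, y):
    """Leave-one-out prediction error sum of squares for a linear LS model."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    H = X @ np.linalg.pinv(X.T @ X) @ X.T          # hat matrix
    return np.sum((resid / (1.0 - np.diag(H))) ** 2)

# Greedy forward selection: keep adding the candidate term that most reduces
# the prediction error metric, and stop when no term improves it.
selected, remaining, best = [], dict(candidates), np.inf
while remaining:
    scores = {name: press(np.column_stack([candidates[n] for n in selected] + [col]), y)
              for name, col in remaining.items()}
    name, score = min(scores.items(), key=lambda kv: kv[1])
    if score >= best:
        break
    best = score
    selected.append(name)
    del remaining[name]

print("selected terms:", selected, "PRESS:", round(best, 4))
```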
How many drinks did you have on September 11, 2001?
Perrine, M W Bud; Schroder, Kerstin E E
2005-07-01
This study tested the predictability of error in retrospective self-reports of alcohol consumption on September 11, 2001, among 80 Vermont light, medium and heavy drinkers. Subjects were 52 men and 28 women participating in daily self-reports of alcohol consumption for a total of 2 years, collected via interactive voice response technology (IVR). In addition, retrospective self-reports of alcohol consumption on September 11, 2001, were collected by telephone interview 4-5 days following the terrorist attacks. Retrospective error was calculated as the difference between the IVR self-report of drinking behavior on September 11 and the retrospective self-report collected by telephone interview. Retrospective error was analyzed as a function of gender and baseline drinking behavior during the 365 days preceding September 11, 2001 (termed "the baseline"). The intraclass correlation (ICC) between daily IVR and retrospective self-reports of alcohol consumption on September 11 was .80. Women provided, on average, more accurate self-reports (ICC = .96) than men (ICC = .72) but displayed more underreporting bias in retrospective responses. Amount and individual variability of alcohol consumption during the 1-year baseline explained, on average, 11% of the variance in overreporting (r = .33), 9% of the variance in underreporting (r = .30) and 25% of the variance in the overall magnitude of error (r = .50), with correlations up to .62 (r2 = .38). The size and direction of error were clearly predictable from the amount and variation in drinking behavior during the 1-year baseline period. The results demonstrate the utility and detail of information that can be derived from daily IVR self-reports in the analysis of retrospective error.
Effect of correlated observation error on parameters, predictions, and uncertainty
Tiedeman, Claire; Green, Christopher T.
2013-01-01
Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
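A quick numerical illustration of the one-parameter, two-observation case shows why omitting the error correlation from the weights can push the estimated parameter variance in either direction. This is a sketch of the general linear-algebra argument, not the paper's analytical expression; the sensitivity ratios and correlation values below are arbitrary assumed numbers.

```python
import numpy as np

def param_variance(rdss, rho, s1=1.0, s2=1.0):
    """One-parameter, two-observation model y_i = x_i * theta + e_i.
    Returns the parameter variance with the error correlation included (GLS)
    and with it omitted from both the weights and the variance calculation."""
    X = np.array([[1.0], [rdss]])                       # sensitivities (x2/x1 = rdss)
    Sigma = np.array([[s1**2, rho * s1 * s2],
                      [rho * s1 * s2, s2**2]])          # correlated error covariance
    var_with = np.linalg.inv(X.T @ np.linalg.inv(Sigma) @ X)[0, 0]
    W = np.diag([1.0 / s1**2, 1.0 / s2**2])             # correlation omitted
    var_without = np.linalg.inv(X.T @ W @ X)[0, 0]
    return var_with, var_without

# Depending on rho and the sensitivity ratio, omitting the correlation can
# either inflate or deflate the apparent parameter variance.
for rho in (0.5, 0.9):
    for rdss in (0.5, 1.0, 2.0):
        w, wo = param_variance(rdss, rho)
        print(f"rho={rho:.1f}  rdss={rdss:.1f}  with corr={w:.3f}  without={wo:.3f}")
```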
Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread or uncertainty in CME arrival time predictions due to uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits), and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real-time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real-time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits; comparison with the actual arrival times gave an average absolute error of 8.20 hours across the twelve ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. Even when the observed arrival is not within the predicted range, the ensemble still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling setup was used to complete a parametric case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME.
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high speed satellite collision probability, P (sub c), have been investigated. Previous methods assume an individual position error covariance matrix is available for each object. The two matrices are combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information for only one of the two objects was available, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations) or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? When error covariance information for one of the objects is not available, the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known without any corresponding state error covariance information. The various usual methods of finding a maximum P (sub c) do no good because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think, given an assumption of no covariance information, an analyst might still attempt to determine the error covariance matrix that results in an upper bound on the P (sub c). Without some guidance on limits to the shape, size and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume that the miss distance is equal to the current ISS alert volume along-track (+ or -) distance of 25 kilometers and that the at-risk area has a 70 meter radius; the maximum (degenerate ellipse) P (sub c) is then about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst.
Some improvement may be made with respect to this problem by realizing that, while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
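The worked figures in this abstract (a maximum P (sub c) of about 0.00136 at a 25 km miss distance and 0.00085 at 40 km, for a 70 m at-risk radius) are consistent with maximizing a one-dimensional Gaussian probability over the unknown standard deviation, which is what the degenerate-ellipse limiting case amounts to. The short sketch below reproduces those numbers under that interpretation; it illustrates the limiting case only, not the paper's method of exploiting the one known covariance.

```python
import numpy as np

def degenerate_ellipse_pc_max(miss_distance_m, hard_body_radius_m):
    """Upper-bound collision probability for a degenerate (1-D) error ellipse.

    The combined position error is modeled as a 1-D Gaussian along the miss
    vector; the probability of falling within +/- r of the miss distance d is
    approximately 2*r*pdf(d; sigma), which is maximized at sigma = d, giving
    Pc_max = 2*r / (d * sqrt(2*pi*e)).
    """
    d, r = miss_distance_m, hard_body_radius_m
    return 2.0 * r / (d * np.sqrt(2.0 * np.pi * np.e))

# Reproduces the values quoted in the abstract for a 70 m at-risk radius.
for d_km in (25.0, 40.0, 340.0):
    pc = degenerate_ellipse_pc_max(d_km * 1000.0, 70.0)
    print(f"miss = {d_km:5.1f} km  ->  max Pc ~ {pc:.5f}")
```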
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis that is robust against component information loss. Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set from an analyte concentration determination experiment using Raman spectroscopy. A 2-fold cross-validation with a Venetian-blinds strategy was used to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robustness of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
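As a rough illustration of the idea behind CLS and its augmentation, the sketch below builds a toy calibration with one known analyte and one unmodeled interferent, fits the pure-component spectrum by classical least squares, and then augments the concentration matrix with a few spectral signals chosen for their low correlation with the known analyte concentrations. Everything here, the synthetic spectra, the number of augmented signals and the channel-selection rule, is an assumed simplification; the paper selects signals with a leave-one-out RMSECV curve and validates against PLS and PCR.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data (assumption): 30 samples, 200 spectral channels,
# one known analyte plus one interferent unknown to the model.
n, p = 30, 200
wl = np.linspace(0, 1, p)
s_analyte = np.exp(-((wl - 0.3) / 0.05) ** 2)           # pure-component spectrum
s_interf = np.exp(-((wl - 0.7) / 0.08) ** 2)            # unmodeled component
c_known = rng.uniform(0, 1, (n, 1))
c_hidden = rng.uniform(0, 1, (n, 1))
R = (c_known @ s_analyte[None, :] + c_hidden @ s_interf[None, :]
     + 0.01 * rng.standard_normal((n, p)))

def cls_fit_predict(C, R, r_new):
    """Classical least squares: estimate pure spectra from (C, R), then
    regress a new spectrum onto them to recover concentrations."""
    S_hat, *_ = np.linalg.lstsq(C, R, rcond=None)        # rows: estimated spectra
    c_hat, *_ = np.linalg.lstsq(S_hat.T, r_new, rcond=None)
    return c_hat

# Augmentation (schematic): add, as extra "concentration" columns, spectral
# signals that correlate weakly with the known analyte concentrations, to stand
# in for the missing component information.
corr = np.abs([np.corrcoef(c_known[:, 0], R[:, j])[0, 1] for j in range(p)])
aug_cols = R[:, np.argsort(corr)[:3]]                    # 3 least-correlated channels
C_aug = np.hstack([c_known, aug_cols])

r_test = 0.6 * s_analyte + 0.4 * s_interf + 0.01 * rng.standard_normal(p)
print("plain CLS analyte estimate:    ", cls_fit_predict(c_known, R, r_test)[0])
print("augmented CLS analyte estimate:", cls_fit_predict(C_aug, R, r_test)[0])
```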
Model of human dynamic orientation. Ph.D. Thesis; [associated with vestibular stimuli]
NASA Technical Reports Server (NTRS)
Ormsby, C. C.
1974-01-01
The dynamics associated with the perception of orientation were modelled for near-threshold and suprathreshold vestibular stimuli. A model of the information available at the peripheral sensors which was consistent with available neurophysiologic data was developed and served as the basis for the models of the perceptual responses. The central processor was assumed to utilize the information from the peripheral sensors in an optimal (minimum mean square error) manner to produce the perceptual estimates of dynamic orientation. This assumption, coupled with the models of sensory information, determined the form of the model for the central processor. The problem of integrating information from the semi-circular canals and the otoliths to predict the perceptual response to motions which stimulated both organs was studied. A model was developed which was shown to be useful in predicting the perceptual response to multi-sensory stimuli.
Statistical modelling of thermal annealing of fission tracks in apatite
NASA Astrophysics Data System (ADS)
Laslett, G. M.; Galbraith, R. F.
1996-12-01
We develop an improved methodology for modelling the relationship between mean track length, temperature, and time in fission track annealing experiments. We consider "fanning Arrhenius" models, in which contours of constant mean length on an Arrhenius plot are straight lines meeting at a common point. Features of our approach are explicit use of subject matter knowledge, treating mean length as the response variable, modelling of the mean-variance relationship with two components of variance, improved modelling of the control sample, and using information from experiments in which no tracks are seen. This approach overcomes several weaknesses in previous models and provides a robust six parameter model that is widely applicable. Estimation is via direct maximum likelihood which can be implemented using a standard numerical optimisation package. Because the model is highly nonlinear, some reparameterisations are needed to achieve stable estimation and calculation of precisions. Experience suggests that precisions are more convincingly estimated from profile log-likelihood functions than from the information matrix. We apply our method to the B-5 and Sr fluorapatite data of Crowley et al. (1991) and obtain well-fitting models in both cases. For the B-5 fluorapatite, our model exhibits less fanning than that of Crowley et al. (1991), although fitted mean values above 12 μm are fairly similar. However, predictions can be different, particularly for heavy annealing at geological time scales, where our model is less retentive. In addition, the refined error structure of our model results in tighter prediction errors, and has components of error that are easier to verify or modify. For the Sr fluorapatite, our fitted model for mean lengths does not differ greatly from that of Crowley et al. (1991), but our error structure is quite different.
Memory Errors Reveal a Bias to Spontaneously Generalize to Categories
Sutherland, Shelbie L.; Cimpian, Andrei; Leslie, Sarah-Jane; Gelman, Susan A.
2014-01-01
Much evidence suggests that, from a young age, humans are able to generalize information learned about a subset of a category to the category itself. Here, we propose that—beyond simply being able to perform such generalizations—people are biased to generalize to categories, such that they routinely make spontaneous, implicit category generalizations from information that licenses such generalizations. To demonstrate the existence of this bias, we asked participants to perform a task in which category generalizations would distract from the main goal of the task, leading to a characteristic pattern of errors. Specifically, participants were asked to memorize two types of novel facts: quantified facts about sets of kind members (e.g., facts about all or many stups) and generic facts about entire kinds (e.g., facts about zorbs as a kind). Moreover, half of the facts concerned properties that are typically generalizable to an animal kind (e.g., eating fruits and vegetables), and half concerned properties that are typically more idiosyncratic (e.g., getting mud in their hair). We predicted that—because of the hypothesized bias—participants would spontaneously generalize the quantified facts to the corresponding kinds, and would do so more frequently for the facts about generalizable (rather than idiosyncratic) properties. In turn, these generalizations would lead to a higher rate of quantified-to-generic memory errors for the generalizable properties. The results of four experiments (N = 449) supported this prediction. Moreover, the same generalizable-versus-idiosyncratic difference in memory errors occurred even under cognitive load, which suggests that the hypothesized bias operates unnoticed in the background, requiring few cognitive resources. In sum, this evidence suggests the presence of a powerful bias to draw generalizations about kinds. PMID:25327964
Grane, Venke Arntsberg; Endestad, Tor; Pinto, Arnfrid Farbu; Solbakk, Anne-Kristin
2014-01-01
We investigated performance-derived measures of executive control, and their relationship with self- and informant reported executive functions in everyday life, in treatment-naive adults with newly diagnosed Attention Deficit Hyperactivity Disorder (ADHD; n = 36) and in healthy controls (n = 35). Sustained attentional control and response inhibition were examined with the Test of Variables of Attention (T.O.V.A.). Delayed responses, increased reaction time variability, and higher omission error rate to Go signals in ADHD patients relative to controls indicated fluctuating levels of attention in the patients. Furthermore, an increment in NoGo commission errors when Go stimuli increased relative to NoGo stimuli suggests reduced inhibition of task-irrelevant stimuli in conditions demanding frequent responding. The ADHD group reported significantly more cognitive and behavioral executive problems than the control group on the Behavior Rating Inventory of Executive Function-Adult Version (BRIEF-A). There were overall not strong associations between task performance and ratings of everyday executive function. However, for the ADHD group, T.O.V.A. omission errors predicted self-reported difficulties on the Organization of Materials scale, and commission errors predicted informant reported difficulties on the same scale. Although ADHD patients endorsed more symptoms of depression and anxiety on the Achenbach System of Empirically Based Assessment (ASEBA) than controls, ASEBA scores were not significantly associated with T.O.V.A. performance scores. Altogether, the results indicate multifaceted alteration of attentional control in adult ADHD, and accompanying subjective difficulties with several aspects of executive function in everyday living. The relationships between the two sets of data were modest, indicating that the measures represent non-redundant features of adult ADHD. PMID:25545156
Hester, Robert; Murphy, Kevin; Brown, Felicity L; Skilleter, Ashley J
2010-11-17
Punishing an error to shape subsequent performance is a major tenet of individual and societal level behavioral interventions. Recent work examining error-related neural activity has identified that the magnitude of activity in the posterior medial frontal cortex (pMFC) is predictive of learning from an error, whereby greater activity in this region predicts adaptive changes in future cognitive performance. It remains unclear how punishment influences error-related neural mechanisms to effect behavior change, particularly in key regions such as pMFC, which previous work has demonstrated to be insensitive to punishment. Using an associative learning task that provided monetary reward and punishment for recall performance, we observed that when recall errors were categorized by subsequent performance--whether the failure to accurately recall a number-location association was corrected at the next presentation of the same trial--the magnitude of error-related pMFC activity predicted future correction. However, the pMFC region was insensitive to the magnitude of punishment an error received and it was the left insula cortex that predicted learning from the most aversive outcomes. These findings add further evidence to the hypothesis that error-related pMFC activity may reflect more than a prediction error in representing the value of an outcome. The novel role identified here for the insular cortex in learning from punishment appears particularly compelling for our understanding of psychiatric and neurologic conditions that feature both insular cortex dysfunction and a diminished capacity for learning from negative feedback or punishment.
Post-processing of a low-flow forecasting system in the Thur basin (Switzerland)
NASA Astrophysics Data System (ADS)
Bogner, Konrad; Joerg-Hess, Stefanie; Bernhard, Luzi; Zappa, Massimiliano
2015-04-01
Low-flows and droughts are natural hazards with potentially severe impacts and economic loss or damage in a number of environmental and socio-economic sectors. As droughts develop slowly there is time to prepare and pre-empt some of these impacts. Real-time information and forecasting of a drought situation can therefore be an effective component of drought management. Although Switzerland has traditionally been more concerned with problems related to floods, in recent years some unprecedented low-flow situations have been experienced. Driven by the climate change debate a drought information platform has been developed to guide water resources management during situations where water resources drop below critical low-flow levels characterised by the indices duration (time between onset and offset), severity (cumulative water deficit) and magnitude (severity/duration). However to gain maximum benefit from such an information system it is essential to remove the bias from the meteorological forecast, to derive optimal estimates of the initial conditions, and to post-process the stream-flow forecasts. Quantile mapping methods for pre-processing the meteorological forecasts and improved data assimilation methods of snow measurements, which accounts for much of the seasonal stream-flow predictability for the majority of the basins in Switzerland, have been tested previously. The objective of this study is the testing of post-processing methods in order to remove bias and dispersion errors and to derive the predictive uncertainty of a calibrated low-flow forecast system. Therefore various stream-flow error correction methods with different degrees of complexity have been applied and combined with the Hydrological Uncertainty Processor (HUP) in order to minimise the differences between the observations and model predictions and to derive posterior probabilities. The complexity of the analysed error correction methods ranges from simple AR(1) models to methods including wavelet transformations and support vector machines. These methods have been combined with forecasts driven by Numerical Weather Prediction (NWP) systems with different temporal and spatial resolutions, lead-times and different numbers of ensembles covering short to medium to extended range forecasts (COSMO-LEPS, 10-15 days, monthly and seasonal ENS) as well as climatological forecasts. Additionally the suitability of various skill scores and efficiency measures regarding low-flow predictions will be tested. Amongst others the novel 2afc (2 alternatives forced choices) score and the quantile skill score and its decompositions will be applied to evaluate the probabilistic forecasts and the effects of post-processing. First results of the performance of the low-flow predictions of the hydrological model PREVAH initialised with different NWP's will be shown.
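Of the pre- and post-processing steps mentioned here, quantile mapping is the simplest to sketch: each raw forecast value is replaced with the observed value that has the same non-exceedance probability in a training period. The snippet below is a generic empirical quantile-mapping sketch on synthetic data, not the operational Thur-basin implementation; the distributions, sample sizes and number of quantiles are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training period (assumption): biased, over-dispersive raw forecasts.
obs_train = rng.gamma(shape=2.0, scale=5.0, size=1000)       # e.g. daily low flows
fc_train = 1.3 * obs_train + rng.normal(0, 3.0, size=1000)   # biased raw forecasts

def quantile_map(fc_new, fc_train, obs_train, n_q=99):
    """Empirical quantile mapping: map each forecast value to the observed
    value with the same (training-period) non-exceedance probability."""
    q = np.linspace(1, 99, n_q)
    fc_q = np.percentile(fc_train, q)
    obs_q = np.percentile(obs_train, q)
    return np.interp(fc_new, fc_q, obs_q)    # linear interpolation between quantiles

fc_new = 1.3 * rng.gamma(2.0, 5.0, size=5) + rng.normal(0, 3.0, size=5)
print("raw forecasts   :", np.round(fc_new, 2))
print("quantile-mapped :", np.round(quantile_map(fc_new, fc_train, obs_train), 2))
```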
Evaluation of automated global mapping of Reference Soil Groups of WRB2015
NASA Astrophysics Data System (ADS)
Mantel, Stephan; Caspari, Thomas; Kempen, Bas; Schad, Peter; Eberhardt, Einar; Ruiperez Gonzalez, Maria
2017-04-01
SoilGrids is an automated system that provides global predictions for standard numeric soil properties at seven standard depths down to 200 cm, currently at spatial resolutions of 1km and 250m. In addition, the system provides predictions of depth to bedrock and distribution of soil classes based on WRB and USDA Soil Taxonomy (ST). In SoilGrids250m(1), soil classes (WRB, version 2006) consist of the RSG and the first prefix qualifier, whereas in SoilGrids1km(2), the soil class was assessed at RSG level. Automated mapping of World Reference Base (WRB) Reference Soil Groups (RSGs) at a global level has great advantages. Maps can be updated in a short time span with relatively little effort when new data become available. To translate soil names of older versions of FAO/WRB and national classification systems of the source data into names according to WRB 2006, correlation tables are used in SoilGrids. Soil properties and classes are predicted independently from each other. This means that the combinations of soil properties for the same cells or soil property-soil class combinations do not necessarily yield logical combinations when the map layers are studied jointly. The model prediction procedure is robust and probably has a low source of error in the prediction of RSGs. It seems that the quality of the original soil classification in the data and the use of correlation tables are the largest sources of error in mapping the RSG distribution patterns. Predicted patterns of dominant RSGs were evaluated in selected areas and sources of error were identified. Suggestions are made for improvement of WRB2015 RSG distribution predictions in SoilGrids. Keywords: Automated global mapping; World Reference Base for Soil Resources; Data evaluation; Data quality assurance References 1 Hengl T, de Jesus JM, Heuvelink GBM, Ruiperez Gonzalez M, Kilibarda M, et al. (2016) SoilGrids250m: global gridded soil information based on Machine Learning. Earth System Science Data (ESSD), in review. 2 Hengl T, de Jesus JM, MacMillan RA, Batjes NH, Heuvelink GBM, et al. (2014) SoilGrids1km — Global Soil Information Based on Automated Mapping. PLoS ONE 9(8): e105992. doi:10.1371/journal.pone.0105992
Time-to-contact estimation of accelerated stimuli is based on first-order information.
Benguigui, Nicolas; Ripoll, Hubert; Broderick, Michael P
2003-12-01
The goal of this study was to test whether 1st-order information, which does not account for acceleration, is used (a) to estimate the time to contact (TTC) of an accelerated stimulus after the occlusion of a final part of its trajectory and (b) to indirectly intercept an accelerated stimulus with a thrown projectile. Both tasks require the production of an action on the basis of predictive information acquired before the arrival of the stimulus at the target and allow the experimenter to make quantitative predictions about the participants' use (or nonuse) of 1st-order information. The results show that participants do not use information about acceleration and that they commit errors that rely quantitatively on 1st-order information even when acceleration is psychophysically detectable. In the indirect interceptive task, action is planned about 200 ms before the initiation of the movement, at which time the 1st-order TTC attains a critical value.
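The quantitative predictions about 1st-order errors follow from a simple kinematic comparison: a 1st-order estimate divides remaining distance by current speed, while the true time to contact of an accelerating stimulus solves the constant-acceleration equation of motion. A small worked sketch, with assumed illustrative numbers, shows the systematic overestimate that results when acceleration is ignored.

```python
import numpy as np

def true_ttc(distance, speed, accel):
    """Time to contact under constant acceleration: solve d = v*t + a*t^2/2."""
    if accel == 0:
        return distance / speed
    disc = speed ** 2 + 2.0 * accel * distance
    return (-speed + np.sqrt(disc)) / accel

def first_order_ttc(distance, speed):
    """First-order estimate: current distance over current speed (ignores accel)."""
    return distance / speed

# Illustrative occlusion scenario (assumed values): stimulus occluded at 10 m,
# moving at 5 m/s and accelerating at 2 m/s^2.
d, v, a = 10.0, 5.0, 2.0
t1, t_true = first_order_ttc(d, v), true_ttc(d, v, a)
print(f"first-order TTC = {t1:.2f} s, true TTC = {t_true:.2f} s, "
      f"error = {t1 - t_true:+.2f} s (overestimate when acceleration is ignored)")
```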
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
To address the low machining accuracy and poorly controlled thermal errors of NC machine tools, spindle thermal error measurement, modeling, and compensation are investigated for a two-turntable five-axis machine tool. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced to select the temperature variables used for thermal error modeling. To analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the network weights; the resulting ABC-NN (artificial bee colony-based neural network) modeling method is used to predict spindle thermal errors. To test the prediction performance of the ABC-NN model, an experimental system is developed, and the predictions of LSR (least squares regression), ANN, and ABC-NN are compared with measured spindle thermal errors. The experiments show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, demonstrating that the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
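Of the techniques named in this abstract, grey relational analysis (GRA) for picking temperature variables is compact enough to sketch. The snippet below ranks candidate temperature series by their grey relational grade with the thermal-error series; the sensor data are synthetic and the distinguishing coefficient of 0.5 is the usual default, so treat it as an illustration of GRA itself rather than of the paper's measurement setup (the ABC-trained neural network is not reproduced here).

```python
import numpy as np

def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational analysis: rank candidate series by their relational
    grade with a reference series (here, the spindle thermal error)."""
    def norm(x):                                  # min-max normalization
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())
    x0 = norm(reference)
    deltas = np.array([np.abs(x0 - norm(xi)) for xi in candidates])
    d_min, d_max = deltas.min(), deltas.max()     # global extrema over all series
    coeffs = (d_min + rho * d_max) / (deltas + rho * d_max)
    return coeffs.mean(axis=1)                    # grade = mean coefficient

# Toy data (assumption): thermal drift plus three temperature sensors, one of
# which tracks the drift closely.
t = np.linspace(0, 4, 50)
error = 20 * (1 - np.exp(-t))                             # spindle thermal drift, um
temps = [20 + 15 * (1 - np.exp(-t)),                      # sensor near the spindle
         20 + 2 * t,                                      # slowly warming ambient sensor
         20 + np.random.default_rng(3).normal(0, 1, 50)]  # unrelated sensor
print("grey relational grades:", np.round(grey_relational_grades(error, temps), 3))
```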
A predictability study of Lorenz's 28-variable model as a dynamical system
NASA Technical Reports Server (NTRS)
Krishnamurthy, V.
1993-01-01
The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
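The kind of error-growth measurement described here can be illustrated on a much smaller chaotic system. The sketch below integrates twin trajectories of the three-variable Lorenz-63 system (an assumption made purely for brevity, not the 28-variable quasi-geostrophic model of the study), perturbs one by a tiny amount, and fits the exponential phase of the error growth to estimate the largest Lyapunov exponent before nonlinear saturation sets in.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt, n_steps):
    """Fixed-step RK4 integration, returning the full trajectory."""
    traj = np.empty((n_steps + 1, 3))
    traj[0] = state
    for i in range(n_steps):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i + 1] = state
    return traj

dt, n = 0.01, 2000
x0 = integrate(np.array([1.0, 1.0, 1.0]), dt, 1000)[-1]   # spin-up onto the attractor
ref = integrate(x0, dt, n)
pert = integrate(x0 + np.array([1e-8, 0.0, 0.0]), dt, n)  # twin run with a tiny error
err = np.linalg.norm(ref - pert, axis=1)

# Average exponential growth rate over the early (linear) phase approximates the
# largest Lyapunov exponent; later the growth saturates at the attractor size.
t = dt * np.arange(n + 1)
early = slice(200, 800)
rate = np.polyfit(t[early], np.log(err[early]), 1)[0]
print(f"estimated error growth rate ~ {rate:.2f} per time unit "
      f"(largest Lyapunov exponent of Lorenz-63 is ~0.9)")
```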
NASA Astrophysics Data System (ADS)
Qian, Xiaoshan
2018-01-01
Traditional models of evaporation process parameters suffer from large prediction errors because the parameters have continuity and cumulative characteristics. Building on the process, an adaptive particle swarm neural network forecasting method is proposed in which an autoregressive moving average (ARMA) error correction procedure compensates the neural network predictions to improve prediction accuracy. Validation on production data from an alumina plant evaporation process shows that, compared with the traditional model, the prediction accuracy of the new model is greatly improved, and it can be used to predict the dynamic composition of sodium aluminate solution during evaporation.
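The compensation step, fitting a time-series model to the base predictor's historical residuals and adding the forecast residual back onto future predictions, can be sketched generically. The code below uses a synthetic autocorrelated model error, a simple stand-in for the neural-network output, and an ARMA(1,1) fit from statsmodels; the particle-swarm-trained network itself is not reproduced, and all numbers are assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)

# Synthetic process data (assumption): the base model's predictions carry a
# slowly varying, autocorrelated error, as described for the evaporation process.
n = 300
truth = 50 + 5 * np.sin(np.arange(n) / 20.0)
ar_err = np.zeros(n)
for i in range(1, n):                         # AR(1) model error
    ar_err[i] = 0.8 * ar_err[i - 1] + rng.normal(0, 0.5)
base_pred = truth + ar_err                    # stand-in for the neural-network output

# Fit an ARMA(1,1) model to the historical residuals of the base predictor,
# then compensate the next predictions with the forecast residual.
resid = base_pred[:250] - truth[:250]
arma = ARIMA(resid, order=(1, 0, 1)).fit()
correction = arma.forecast(steps=50)
corrected = base_pred[250:] - correction

rmse = lambda x: np.sqrt(np.mean(x ** 2))
print("RMSE without correction:", round(rmse(base_pred[250:] - truth[250:]), 3))
print("RMSE with correction   :", round(rmse(corrected - truth[250:]), 3))
```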
Seasonal to interannual Arctic sea ice predictability in current global climate models
NASA Astrophysics Data System (ADS)
Tietsche, S.; Day, J. J.; Guemas, V.; Hurlin, W. J.; Keeley, S. P. E.; Matei, D.; Msadek, R.; Collins, M.; Hawkins, E.
2014-02-01
We establish the first intermodel comparison of seasonal to interannual predictability of present-day Arctic climate by performing coordinated sets of idealized ensemble predictions with four state-of-the-art global climate models. For Arctic sea ice extent and volume, there is potential predictive skill for lead times of up to 3 years, and potential prediction errors have similar growth rates and magnitudes across the models. Spatial patterns of potential prediction errors differ substantially between the models, but some features are robust. Sea ice concentration errors are largest in the marginal ice zone, and in winter they are almost zero away from the ice edge. Sea ice thickness errors are amplified along the coasts of the Arctic Ocean, an effect that is dominated by sea ice advection. These results give an upper bound on the ability of current global climate models to predict important aspects of Arctic climate.
Attention in the predictive mind.
Ransom, Madeleine; Fazelpour, Sina; Mole, Christopher
2017-01-01
It has recently become popular to suggest that cognition can be explained as a process of Bayesian prediction error minimization. Some advocates of this view propose that attention should be understood as the optimization of expected precisions in the prediction-error signal (Clark, 2013, 2016; Feldman & Friston, 2010; Hohwy, 2012, 2013). This proposal successfully accounts for several attention-related phenomena. We claim that it cannot account for all of them, since there are certain forms of voluntary attention that it cannot accommodate. We therefore suggest that, although the theory of Bayesian prediction error minimization introduces some powerful tools for the explanation of mental phenomena, its advocates have been wrong to claim that Bayesian prediction error minimization is 'all the brain ever does'.
Dopamine reward prediction errors reflect hidden state inference across time
Starkweather, Clara Kwon; Babayan, Benedicte M.; Uchida, Naoshige; Gershman, Samuel J.
2017-01-01
Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a ‘belief state’). In this work, we asked whether dopaminergic signaling supports a TD learning framework that operates over hidden states. We found that dopamine signaling exhibited a striking difference between two tasks that differed only with respect to whether reward was delivered deterministically. Our results favor an associative learning rule that combines cached values with hidden state inference. PMID:28263301
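For readers unfamiliar with the learning rule being tested, the sketch below shows a heavily simplified TD(0) update in which the value function is defined over a belief (a probability distribution across hidden states) rather than over a single observable state, so that the reward prediction error depends on hidden-state inference. The two-state task, the hand-set belief numbers, and the learning parameters are all illustrative assumptions, not the experimental design of this study.

```python
import numpy as np

rng = np.random.default_rng(5)
gamma, alpha = 0.95, 0.1

# Two hidden states (assumption): "reward pending" (index 0) and "no reward this
# trial" (index 1). The cue is ambiguous, so the agent holds a belief over them.
w = np.zeros(2)                          # value weights over hidden states

def value(belief):
    return belief @ w

for trial in range(500):
    rewarded = rng.random() < 0.7        # 70% of cues are actually rewarded
    belief = np.array([0.7, 0.3])        # belief right after the ambiguous cue
    # As the delay elapses without reward, the belief shifts toward the omission
    # state; in rewarded trials it stays high on "pending" (numbers illustrative).
    belief_late = np.array([0.9, 0.1]) if rewarded else np.array([0.4, 0.6])
    r = 1.0 if rewarded else 0.0
    # TD(0) over belief states: the RPE uses the value of the belief, not the
    # value of any single observable state.
    delta1 = 0.0 + gamma * value(belief_late) - value(belief)
    w += alpha * delta1 * belief
    delta2 = r + gamma * 0.0 - value(belief_late)   # trial ends after the outcome
    w += alpha * delta2 * belief_late

print("learned value weights for [reward-pending, omission] states:", np.round(w, 2))
```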
Prediction and standard error estimation for a finite universe total when a stratum is not sampled
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, T.
1994-01-01
In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adare, A.; Aidala, C.; Ajitanand, N. N.
2015-02-02
We present midrapidity charged-pion invariant cross sections, the ratio of the π⁻ to π⁺ cross sections and the charge-separated double-spin asymmetries in polarized p+p collisions at √s = 200 GeV. While the cross section measurements are consistent within the errors of next-to-leading-order (NLO) perturbative quantum chromodynamics (pQCD) predictions, the same calculations overestimate the ratio of the charged-pion cross sections. This discrepancy arises from the cancellation of the substantial systematic errors associated with the NLO-pQCD predictions in the ratio and highlights the constraints these data will place on flavor-dependent pion fragmentation functions. Thus, the charge-separated pion asymmetries presented here sample an x range of ~0.03–0.16 and provide unique information on the sign of the gluon-helicity distribution.
Effects of a cochlear implant simulation on immediate memory in normal-hearing adults
Burkholder, Rose A.; Pisoni, David B.; Svirsky, Mario A.
2012-01-01
This study assessed the effects of stimulus misidentification and memory processing errors on immediate memory span in 25 normal-hearing adults exposed to degraded auditory input simulating signals provided by a cochlear implant. The identification accuracy of degraded digits in isolation was measured before digit span testing. Forward and backward digit spans were shorter when digits were degraded than when they were normal. Participants’ normal digit spans and their accuracy in identifying isolated digits were used to predict digit spans in the degraded speech condition. The observed digit spans in degraded conditions did not differ significantly from predicted digit spans. This suggests that the decrease in memory span is related primarily to misidentification of digits rather than memory processing errors related to cognitive load. These findings provide complementary information to earlier research on auditory memory span of listeners exposed to degraded speech either experimentally or as a consequence of a hearing-impairment. PMID:16317807
NASA Astrophysics Data System (ADS)
Adare, A.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Akimoto, R.; Al-Ta'Ani, H.; Alexander, J.; Andrews, K. R.; Angerami, A.; Aoki, K.; Apadula, N.; Appelt, E.; Aramaki, Y.; Armendariz, R.; Aschenauer, E. C.; Atomssa, E. T.; Awes, T. C.; Azmoun, B.; Babintsev, V.; Bai, M.; Bannier, B.; Barish, K. N.; Bassalleck, B.; Basye, A. T.; Bathe, S.; Baublis, V.; Baumann, C.; Bazilevsky, A.; Belmont, R.; Ben-Benjamin, J.; Bennett, R.; Blau, D. S.; Bok, J. S.; Boyle, K.; Brooks, M. L.; Broxmeyer, D.; Buesching, H.; Bumazhnov, V.; Bunce, G.; Butsyk, S.; Campbell, S.; Castera, P.; Chen, C.-H.; Chi, C. Y.; Chiu, M.; Choi, I. J.; Choi, J. B.; Choudhury, R. K.; Christiansen, P.; Chujo, T.; Chvala, O.; Cianciolo, V.; Citron, Z.; Cole, B. A.; Conesa Del Valle, Z.; Connors, M.; Csanád, M.; Csörgő, T.; Dairaku, S.; Datta, A.; David, G.; Dayananda, M. K.; Denisov, A.; Deshpande, A.; Desmond, E. J.; Dharmawardane, K. V.; Dietzsch, O.; Dion, A.; Donadelli, M.; Drapier, O.; Drees, A.; Drees, K. A.; Durham, J. M.; Durum, A.; D'Orazio, L.; Efremenko, Y. V.; Engelmore, T.; Enokizono, A.; En'yo, H.; Esumi, S.; Fadem, B.; Fields, D. E.; Finger, M.; Finger, M.; Fleuret, F.; Fokin, S. L.; Frantz, J. E.; Franz, A.; Frawley, A. D.; Fukao, Y.; Fusayasu, T.; Gal, C.; Garishvili, I.; Giordano, F.; Glenn, A.; Gong, X.; Gonin, M.; Goto, Y.; Granier de Cassagnac, R.; Grau, N.; Greene, S. V.; Grosse Perdekamp, M.; Gunji, T.; Guo, L.; Gustafsson, H.-Å.; Haggerty, J. S.; Hahn, K. I.; Hamagaki, H.; Hamblen, J.; Han, R.; Hanks, J.; Harper, C.; Hashimoto, K.; Haslum, E.; Hayano, R.; He, X.; Hemmick, T. K.; Hester, T.; Hill, J. C.; Hollis, R. S.; Holzmann, W.; Homma, K.; Hong, B.; Horaguchi, T.; Hori, Y.; Hornback, D.; Huang, S.; Ichihara, T.; Ichimiya, R.; Iinuma, H.; Ikeda, Y.; Imai, K.; Inaba, M.; Iordanova, A.; Isenhower, D.; Ishihara, M.; Issah, M.; Ivanischev, D.; Iwanaga, Y.; Jacak, B. V.; Jia, J.; Jiang, X.; John, D.; Johnson, B. M.; Jones, T.; Joo, K. S.; Jouan, D.; Kamin, J.; Kaneti, S.; Kang, B. H.; Kang, J. H.; Kang, J. S.; Kapustinsky, J.; Karatsu, K.; Kasai, M.; Kawall, D.; Kazantsev, A. V.; Kempel, T.; Khanzadeev, A.; Kijima, K. M.; Kim, B. I.; Kim, D. J.; Kim, E.-J.; Kim, Y.-J.; Kim, Y. K.; Kinney, E.; Kiss, Á.; Kistenev, E.; Kleinjan, D.; Kline, P.; Kochenda, L.; Komkov, B.; Konno, M.; Koster, J.; Kotov, D.; Král, A.; Kunde, G. J.; Kurita, K.; Kurosawa, M.; Kwon, Y.; Kyle, G. S.; Lacey, R.; Lai, Y. S.; Lajoie, J. G.; Lebedev, A.; Lee, D. M.; Lee, J.; Lee, K. B.; Lee, K. S.; Lee, S. H.; Lee, S. R.; Leitch, M. J.; Leite, M. A. L.; Li, X.; Lim, S. H.; Linden Levy, L. A.; Liu, H.; Liu, M. X.; Love, B.; Lynch, D.; Maguire, C. F.; Makdisi, Y. I.; Manion, A.; Manko, V. I.; Mannel, E.; Mao, Y.; Masui, H.; McCumber, M.; McGaughey, P. L.; McGlinchey, D.; McKinney, C.; Means, N.; Mendoza, M.; Meredith, B.; Miake, Y.; Mibe, T.; Mignerey, A. C.; Miki, K.; Milov, A.; Mitchell, J. T.; Miyachi, Y.; Mohanty, A. K.; Moon, H. J.; Morino, Y.; Morreale, A.; Morrison, D. P.; Motschwiller, S.; Moukhanova, T. V.; Murakami, T.; Murata, J.; Nagamiya, S.; Nagle, J. L.; Naglis, M.; Nagy, M. I.; Nakagawa, I.; Nakamiya, Y.; Nakamura, K. R.; Nakamura, T.; Nakano, K.; Newby, J.; Nguyen, M.; Nihashi, M.; Nouicer, R.; Nyanin, A. S.; Oakley, C.; O'Brien, E.; Ogilvie, C. A.; Oka, M.; Okada, K.; Oskarsson, A.; Ouchida, M.; Ozawa, K.; Pak, R.; Pantuev, V.; Papavassiliou, V.; Park, B. H.; Park, I. H.; Park, S. K.; Pate, S. F.; Patel, L.; Pei, H.; Peng, J.-C.; Pereira, H.; Peressounko, D. Yu.; Petti, R.; Pinkenburg, C.; Pisani, R. 
P.; Proissl, M.; Purschke, M. L.; Qu, H.; Rak, J.; Ravinovich, I.; Read, K. F.; Reygers, K.; Riabov, V.; Riabov, Y.; Richardson, E.; Roach, D.; Roche, G.; Rolnick, S. D.; Rosati, M.; Rosendahl, S. S. E.; Rubin, J. G.; Sahlmueller, B.; Saito, N.; Sakaguchi, T.; Samsonov, V.; Sano, S.; Sarsour, M.; Sato, T.; Savastio, M.; Sawada, S.; Sedgwick, K.; Seidl, R.; Seto, R.; Sharma, D.; Shein, I.; Shibata, T.-A.; Shigaki, K.; Shim, H. H.; Shimomura, M.; Shoji, K.; Shukla, P.; Sickles, A.; Silva, C. L.; Silvermyr, D.; Silvestre, C.; Sim, K. S.; Singh, B. K.; Singh, C. P.; Singh, V.; Slunečka, M.; Sodre, T.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Stankus, P. W.; Stenlund, E.; Stoll, S. P.; Sugitate, T.; Sukhanov, A.; Sun, J.; Sziklai, J.; Takagui, E. M.; Takahara, A.; Taketani, A.; Tanabe, R.; Tanaka, Y.; Taneja, S.; Tanida, K.; Tannenbaum, M. J.; Tarafdar, S.; Taranenko, A.; Tennant, E.; Themann, H.; Thomas, D.; Togawa, M.; Tomášek, L.; Tomášek, M.; Torii, H.; Towell, R. S.; Tserruya, I.; Tsuchimoto, Y.; Utsunomiya, K.; Vale, C.; van Hecke, H. W.; Vazquez-Zambrano, E.; Veicht, A.; Velkovska, J.; Vértesi, R.; Virius, M.; Vossen, A.; Vrba, V.; Vznuzdaev, E.; Wang, X. R.; Watanabe, D.; Watanabe, K.; Watanabe, Y.; Watanabe, Y. S.; Wei, F.; Wei, R.; Wessels, J.; White, S. N.; Winter, D.; Woody, C. L.; Wright, R. M.; Wysocki, M.; Yamaguchi, Y. L.; Yang, R.; Yanovich, A.; Ying, J.; Yokkaichi, S.; Yoo, J. S.; You, Z.; Young, G. R.; Younus, I.; Yushmanov, I. E.; Zajc, W. A.; Zelenski, A.; Zhou, S.; Phenix Collaboration
2015-02-01
We present midrapidity charged-pion invariant cross sections, the ratio of the π- to π+ cross sections and the charge-separated double-spin asymmetries in polarized p+p collisions at √s = 200 GeV. While the cross section measurements are consistent within the errors of next-to-leading-order (NLO) perturbative quantum chromodynamics (pQCD) predictions, the same calculations overestimate the ratio of the charged-pion cross sections. This discrepancy arises from the cancellation of the substantial systematic errors associated with the NLO-pQCD predictions in the ratio and highlights the constraints these data will place on flavor-dependent pion fragmentation functions. The charge-separated pion asymmetries presented here sample an x range of ~0.03–0.16 and provide unique information on the sign of the gluon-helicity distribution.
Hedging Your Bets by Learning Reward Correlations in the Human Brain
Wunderlich, Klaus; Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J.
2011-01-01
Summary Human subjects are proficient at tracking the mean and variance of rewards and updating these via prediction errors. Here, we addressed whether humans can also learn about higher-order relationships between distinct environmental outcomes, a defining ecological feature of contexts where multiple sources of rewards are available. By manipulating the degree to which distinct outcomes are correlated, we show that subjects implemented an explicit model-based strategy to learn the associated outcome correlations and were adept in using that information to dynamically adjust their choices in a task that required a minimization of outcome variance. Importantly, the experimentally generated outcome correlations were explicitly represented neuronally in right midinsula with a learning prediction error signal expressed in rostral anterior cingulate cortex. Thus, our data show that the human brain represents higher-order correlation structures between rewards, a core adaptive ability whose immediate benefit is optimized sampling. PMID:21943609
Dopamine reward prediction errors reflect hidden-state inference across time.
Starkweather, Clara Kwon; Babayan, Benedicte M; Uchida, Naoshige; Gershman, Samuel J
2017-04-01
Midbrain dopamine neurons signal reward prediction error (RPE), or actual minus expected reward. The temporal difference (TD) learning model has been a cornerstone in understanding how dopamine RPEs could drive associative learning. Classically, TD learning imparts value to features that serially track elapsed time relative to observable stimuli. In the real world, however, sensory stimuli provide ambiguous information about the hidden state of the environment, leading to the proposal that TD learning might instead compute a value signal based on an inferred distribution of hidden states (a 'belief state'). Here we asked whether dopaminergic signaling supports a TD learning framework that operates over hidden states. We found that dopamine signaling showed a notable difference between two tasks that differed only with respect to whether reward was delivered in a deterministic manner. Our results favor an associative learning rule that combines cached values with hidden-state inference.
Oviedo de la Fuente, Manuel; Febrero-Bande, Manuel; Muñoz, María Pilar; Domínguez, Àngela
2018-01-01
This paper proposes a novel approach that uses meteorological information to predict the incidence of influenza in Galicia (Spain). It extends the Generalized Least Squares (GLS) methods in the multivariate framework to functional regression models with dependent errors. These kinds of models are useful when the recent history of influenza incidence is not readily available (for instance, because of delays in communication with health informants) and the prediction must be constructed by correcting the temporal dependence of the residuals and using more accessible variables. A simulation study shows that the GLS estimators render better estimates of the regression model parameters than the classical models do. They obtain extremely good results from the predictive point of view and are competitive with the classical time series approach for the incidence of influenza. An iterative version of the GLS estimator (called iGLS) is also proposed that can help to model complicated dependence structures. For constructing the model, the distance correlation measure was employed to select relevant information for predicting the influenza rate, mixing multivariate and functional variables. These kinds of models are extremely useful to health managers in allocating resources in advance to manage influenza epidemics.
Local-search based prediction of medical image registration error
NASA Astrophysics Data System (ADS)
Saygili, Görkem
2018-03-01
Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start with extracting image-based and deformation-based features, then apply feature pooling and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features such as the variation of the deformation vector field may require up to 20 registrations, which is considerably time-consuming. This paper proposes to use features extracted from a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shift and stereo confidence measures, it densely predicts the registration error in millimetres using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphical Processing Unit (GPU) and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate that substantially high accuracy is achieved by using features from the local search algorithm.
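A hypothetical sketch of the general recipe: per-voxel features (here, a made-up shift magnitude and confidence value standing in for the local-search features) are regressed against known registration errors with a Random Forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_voxels = 5000
features = np.column_stack([
    rng.normal(size=n_voxels),      # assumed: local-search shift magnitude per voxel
    rng.uniform(size=n_voxels),     # assumed: stereo-style confidence measure
])
true_error_mm = np.abs(features[:, 0]) * 2.0 + rng.normal(scale=0.3, size=n_voxels)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features, true_error_mm)
predicted_error_mm = model.predict(features)   # dense per-voxel error estimate in mm
```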
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
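An illustrative sketch of the workflow described above, with synthetic inputs standing in for porosity, grain density, and Thomeer parameters: an 80/20 split, a log-transformed permeability target, a small feed-forward network, and the average absolute relative error as the score.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                                   # stand-in reservoir inputs
perm = np.exp(1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.2, size=500))

X_tr, X_te, y_tr, y_te = train_test_split(X, np.log10(perm), test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

pred = 10 ** net.predict(X_te)                                  # back-transform from log scale
aare = np.mean(np.abs(pred - 10 ** y_te) / (10 ** y_te))        # average absolute relative error
```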
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
Popa, Laurentiu S.; Hewitt, Angela L.; Ebner, Timothy J.
2012-01-01
The cerebellum has been implicated in processing motor errors required for online control of movement and motor learning. The dominant view is that Purkinje cell complex spike discharge signals motor errors. This study investigated whether errors are encoded in the simple spike discharge of Purkinje cells in monkeys trained to manually track a pseudo-randomly moving target. Four task error signals were evaluated based on cursor movement relative to target movement. Linear regression analyses based on firing residuals ensured that the modulation with a specific error parameter was independent of the other error parameters and kinematics. The results demonstrate that simple spike firing in lobules IV–VI is significantly correlated with position, distance and directional errors. Independent of the error signals, the same Purkinje cells encode kinematics. The strongest error modulation occurs at feedback timing. However, in 72% of cells at least one of the R2 temporal profiles resulting from regressing firing with individual errors exhibit two peak R2 values. For these bimodal profiles, the first peak is at a negative τ (lead) and a second peak at a positive τ (lag), implying that Purkinje cells encode both prediction and feedback about an error. For the majority of the bimodal profiles, the signs of the regression coefficients or preferred directions reverse at the times of the peaks. The sign reversal results in opposing simple spike modulation for the predictive and feedback components. Dual error representations may provide the signals needed to generate sensory prediction errors used to update a forward internal model. PMID:23115173
Software Requirements Analysis as Fault Predictor
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Waiting until the integration and system test phase to discover errors leads to more costly rework than resolving those same errors earlier in the lifecycle. Costs increase even more significantly once a software system has become operational. We can assess the quality of system requirements, but do little to correlate this information either to system assurance activities or to long-term reliability projections - both of which remain unclear and anecdotal. Extending earlier work on requirements accomplished by the ARM tool, measuring requirements quality information against code complexity and test data for the same system may be used to predict specific software modules containing high-impact or deeply embedded faults now escaping into operational systems. Such knowledge would lead to more effective and efficient test programs. It may enable insight into whether a program should be maintained or started over.
An MEG signature corresponding to an axiomatic model of reward prediction error.
Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J
2012-01-02
Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, a MEG signature of prediction error, which emerged approximately 320 ms after an outcome and expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data. Copyright © 2011 Elsevier Inc. All rights reserved.
Temporal scaling in information propagation.
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-18
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
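A schematic sketch of the reported scaling law (the functional form and numbers below are assumptions): simulate propagation counts whose probability decays as a power law of latency, then recover the exponent by a log-log fit.

```python
import numpy as np

rng = np.random.default_rng(2)
latency_days = np.arange(1, 101)
true_p = 0.5 * latency_days ** -0.8                       # assumed power-law decay with latency
propagated = rng.binomial(n=1000, p=true_p)               # simulated propagation counts per latency bin
observed_p = propagated / 1000

mask = observed_p > 0                                     # guard against log(0) in sparse bins
slope, intercept = np.polyfit(np.log(latency_days[mask]), np.log(observed_p[mask]), 1)
predicted_p = np.exp(intercept) * latency_days ** slope   # model for future propagation probabilities
```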
Temporal scaling in information propagation
NASA Astrophysics Data System (ADS)
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
Armstrong, Bonnie; Spaniol, Julia; Persaud, Nav
2018-02-13
Clinicians often overestimate the probability of a disease given a positive test result (positive predictive value; PPV) and the probability of no disease given a negative test result (negative predictive value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an 'experience format') would promote more accurate PPV and NPV estimates compared with a numerical format. Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design. The study was completed online, via Qualtrics (Provo, Utah, USA). 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael's Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency. Estimation accuracy was quantified by the mean absolute error (MAE; absolute difference between estimate and true predictive value). PPV estimation errors were larger in the numerical format (MAE=32.6%, 95% CI 26.8% to 38.4%) compared with the experience format (MAE=15.9%, 95% CI 11.8% to 20.0%, d =0.697, P<0.001). Likewise, NPV estimation errors were larger in the numerical format (MAE=24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE=11.0%, 95% CI 6.5% to 15.5%, d =0.303, P=0.015). Exposure to simulated patient cases promotes accurate estimation of predictive values in clinicians. This finding carries implications for diagnostic training and practice. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
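The predictive values the participants were asked to estimate follow directly from Bayes' rule; a minimal sketch is below, with placeholder sensitivity, specificity, and prevalence values rather than those used in the study.

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from test characteristics and prevalence."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# Example (placeholder numbers): PPV ~0.33, NPV ~0.99
ppv, npv = ppv_npv(sensitivity=0.9, specificity=0.8, prevalence=0.1)
```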
Dell, Gary S.; Martin, Nadine; Schwartz, Myrna F.
2010-01-01
Lexical access in language production, and particularly pathologies of lexical access, are often investigated by examining errors in picture naming and word repetition. In this article, we test a computational approach to lexical access, the two-step interactive model, by examining whether the model can quantitatively predict the repetition-error patterns of 65 aphasic subjects from their naming errors. The model’s characterizations of the subjects’ naming errors were taken from the companion paper to this one (Schwartz, Dell, N. Martin, Gahl & Sobel, 2006), and their repetition was predicted from the model on the assumption that naming involves two error prone steps, word and phonological retrieval, whereas repetition only creates errors in the second of these steps. A version of the model in which lexical-semantic and lexical-phonological connections could be independently lesioned was generally successful in predicting repetition for the aphasics. An analysis of the few cases in which model predictions were inaccurate revealed the role of input phonology in the repetition task. PMID:21085621
Thalamocortical Dysrhythmia: A Theoretical Update in Tinnitus
De Ridder, Dirk; Vanneste, Sven; Langguth, Berthold; Llinas, Rodolfo
2015-01-01
Tinnitus is the perception of a sound in the absence of a corresponding external sound source. Pathophysiologically it has been attributed to bottom-up deafferentation and/or top-down noise-cancelling deficit. Both mechanisms are proposed to alter auditory thalamocortical signal transmission, resulting in thalamocortical dysrhythmia (TCD). In deafferentation, TCD is characterized by a slowing down of resting state alpha to theta activity associated with an increase in surrounding gamma activity, resulting in persisting cross-frequency coupling between theta and gamma activity. Theta burst-firing increases network synchrony and recruitment, a mechanism, which might enable long-range synchrony, which in turn could represent a means for finding the missing thalamocortical information and for gaining access to consciousness. Theta oscillations could function as a carrier wave to integrate the tinnitus-related focal auditory gamma activity in a consciousness enabling network, as envisioned by the global workspace model. This model suggests that focal activity in the brain does not reach consciousness, except if the focal activity becomes functionally coupled to a consciousness enabling network, aka the global workspace. In limited deafferentation, the missing information can be retrieved from the auditory cortical neighborhood, decreasing surround inhibition, resulting in TCD. When the deafferentation is too wide in bandwidth, it is hypothesized that the missing information is retrieved from theta-mediated parahippocampal auditory memory. This suggests that based on the amount of deafferentation TCD might change to parahippocampocortical persisting and thus pathological theta–gamma rhythm. From a Bayesian point of view, in which the brain is conceived as a prediction machine that updates its memory-based predictions through sensory updating, tinnitus is the result of a prediction error between the predicted and sensed auditory input. The decrease in sensory updating is reflected by decreased alpha activity and the prediction error results in theta–gamma and beta–gamma coupling. Thus, TCD can be considered as an adaptive mechanism to retrieve missing auditory input in tinnitus. PMID:26106362
Skill assessment of Korea operational oceanographic system (KOOS)
NASA Astrophysics Data System (ADS)
Kim, J.; Park, K.
2016-02-01
For the ocean forecast system in Korea, the Korea operational oceanographic system (KOOS) has been developed and pre-operated since 2009 by the Korea institute of ocean science and technology (KIOST), funded by the Korean government. KOOS provides real-time information and forecasts of marine environmental conditions in order to support all kinds of activities in the sea. Furthermore, a more significant purpose of the KOOS information is to respond to and support the management of maritime problems and accidents such as oil spills, red tides, shipwrecks, extraordinary waves, coastal inundation, and so on. Accordingly, it is essential to evaluate prediction accuracy and to make efforts to improve it. The forecast accuracy should meet or exceed target benchmarks before its products are approved for release to the public. In this paper, we quantify forecast errors using skill assessment techniques to judge the performance of KOOS. Skill assessment statistics include measures of error and correlation such as the root-mean-square error (RMSE), mean bias (MB), correlation coefficient (R), and index of agreement (IOA), as well as the frequency with which errors lie within specified limits, termed the central frequency (CF). The KOOS provides 72-hour daily forecast data such as air pressure, wind, water elevation, currents, waves, water temperature, and salinity produced by the meteorological and hydrodynamic numerical models WRF, ROMS, MOM5, WAM, WW3, and MOHID. The skill assessment was performed by comparing model results with in-situ observation data (Figure 1) for the period from 1 July 2010 to 31 March 2015 (Table 1), and model errors were quantified with skill scores and CF determined by acceptable criteria depending on the predicted variable (Table 2). Moreover, we quantitatively evaluated the spatio-temporal pattern correlation between numerical model output and observations such as sea surface temperature (SST) and sea surface currents obtained from satellite ocean sensors and high-frequency (HF) radar, respectively. These quantified errors allow an objective assessment of KOOS performance and can reveal different aspects of model inefficiency. Based on these results, various model components are tested and developed in order to improve forecast accuracy.
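A small sketch of the skill statistics named above (RMSE, mean bias, correlation R, a Willmott-style index of agreement, and central frequency); the CF acceptance limit here is an assumed placeholder.

```python
import numpy as np

def skill_scores(model, obs, cf_limit=0.5):
    """Basic forecast-skill statistics for paired model/observation arrays."""
    err = model - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mb = np.mean(err)                                   # mean bias
    r = np.corrcoef(model, obs)[0, 1]
    ioa = 1 - np.sum(err ** 2) / np.sum(
        (np.abs(model - obs.mean()) + np.abs(obs - obs.mean())) ** 2)   # index of agreement
    cf = np.mean(np.abs(err) <= cf_limit)               # central frequency within the limit
    return rmse, mb, r, ioa, cf

rmse, mb, r, ioa, cf = skill_scores(np.array([1.0, 2.1, 2.9]), np.array([1.2, 2.0, 3.1]))
```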
Vlasceanu, Madalina; Drach, Rae; Coman, Alin
2018-05-03
The mind is a prediction machine. In most situations, it has expectations as to what might happen. But when predictions are invalidated by experience (i.e., prediction errors), the memories that generate these predictions are suppressed. Here, we explore the effect of prediction error on listeners' memories following social interaction. We find that listening to a speaker recounting experiences similar to one's own triggers prediction errors on the part of the listener that lead to the suppression of her memories. This effect, we show, is sensitive to a perspective-taking manipulation, such that individuals who are instructed to take the perspective of the speaker experience memory suppression, whereas individuals who undergo a low-perspective-taking manipulation fail to show a mnemonic suppression effect. We discuss the relevance of these findings for our understanding of the bidirectional influences between cognition and social contexts, as well as for the real-world situations that involve memory-based predictions.
The effect of bathymetric filtering on nearshore process model results
Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.
2009-01-01
Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest resolution simulation. The damage done by over smoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
An Interoceptive Predictive Coding Model of Conscious Presence
Seth, Anil K.; Suzuki, Keisuke; Critchley, Hugo D.
2011-01-01
We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness. PMID:22291673
Ruiz, María Herrojo; Strübing, Felix; Jabusch, Hans-Christian; Altenmüller, Eckart
2011-04-15
Skilled performance requires the ability to monitor ongoing behavior, detect errors in advance and modify the performance accordingly. The acquisition of fast predictive mechanisms might be possible due to the extensive training characterizing expertise performance. Recent EEG studies on piano performance reported a negative event-related potential (ERP) triggered in the ACC 70 ms before performance errors (pitch errors due to incorrect keypress). This ERP component, termed pre-error related negativity (pre-ERN), was assumed to reflect processes of error detection in advance. However, some questions remained to be addressed: (i) Does the electrophysiological marker prior to errors reflect an error signal itself or is it related instead to the implementation of control mechanisms? (ii) Does the posterior frontomedial cortex (pFMC, including ACC) interact with other brain regions to implement control adjustments following motor prediction of an upcoming error? (iii) Can we gain insight into the electrophysiological correlates of error prediction and control by assessing the local neuronal synchronization and phase interaction among neuronal populations? (iv) Finally, are error detection and control mechanisms defective in pianists with musician's dystonia (MD), a focal task-specific dystonia resulting from dysfunction of the basal ganglia-thalamic-frontal circuits? Consequently, we investigated the EEG oscillatory and phase synchronization correlates of error detection and control during piano performances in healthy pianists and in a group of pianists with MD. In healthy pianists, the main outcomes were increased pre-error theta and beta band oscillations over the pFMC and 13-15 Hz phase synchronization, between the pFMC and the right lateral prefrontal cortex, which predicted corrective mechanisms. In MD patients, the pattern of phase synchronization appeared in a different frequency band (6-8 Hz) and correlated with the severity of the disorder. The present findings shed new light on the neural mechanisms, which might implement motor prediction by means of forward control processes, as they function in healthy pianists and in their altered form in patients with MD. Copyright © 2010 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wissel, Tobias; Stüber, Patrick
2016-06-01
Purpose: To support surface registration in cranial radiation therapy by structural information. The risk for spatial ambiguities is minimized by using tissue thickness variations predicted from backscattered near-infrared (NIR) light from the forehead. Methods and Materials: In a pilot study we recorded NIR surface scans by laser triangulation from 30 volunteers of different skin type. A ground truth for the soft-tissue thickness was segmented from MR scans. After initially matching the NIR scans to the MR reference, Gaussian processes were trained to predict tissue thicknesses from NIR backscatter. Moreover, motion starting from this initial registration was simulated by 5000 random transformations of the NIR scan away from the MR reference. Re-registration to the MR scan was compared with and without tissue thickness support. Results: By adding prior knowledge to the backscatter features, such as incident angle and neighborhood information in the scanning grid, we showed that tissue thickness can be predicted with mean errors of <0.2 mm, irrespective of the skin type. With this additional information, the average registration error improved from 3.4 mm to 0.48 mm by a factor of 7. Misalignments of more than 1 mm were almost thoroughly (98.9%) pushed below 1 mm. Conclusions: For almost all cases tissue-enhanced matching achieved better results than purely spatial registration. Ambiguities can be minimized if the cutaneous structures do not agree. This valuable support for surface registration increases tracking robustness and avoids misalignment of tumor targets far from the registration site.
An assessment of air pollutant exposure methods in Mexico City, Mexico.
Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S
2015-05-01
Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated IDW and OK methods' ability to predict measured concentrations at monitors using cross validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than for the IDW method. OK standard errors varied considerably between pollutants and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors which can be incorporated in statistical models. The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
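As a sketch of one of the four methods, the inverse-distance-weighting estimate at a simulated residential location can be written as below; the monitor coordinates and concentrations are invented for illustration.

```python
import numpy as np

def idw(monitor_xy, monitor_conc, target_xy, power=2):
    """Inverse-distance-weighted concentration at a target location."""
    d = np.linalg.norm(monitor_xy - target_xy, axis=1)
    if np.any(d == 0):                       # exactly at a monitor: use its value
        return monitor_conc[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * monitor_conc) / np.sum(w)

monitors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # hypothetical monitor locations (km)
pm25 = np.array([18.0, 25.0, 22.0])                           # hypothetical daily PM2.5 values
exposure = idw(monitors, pm25, target_xy=np.array([3.0, 4.0]))
```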
Applying EVM to Satellite on Ground and In-Orbit Testing - Better Data in Less Time
NASA Technical Reports Server (NTRS)
Peters, Robert; Lebbink, Elizabeth-Klein; Lee, Victor; Model, Josh; Wezalis, Robert; Taylor, John
2008-01-01
Using Error Vector Magnitude (EVM) in satellite integration and test allows rapid verification of the Bit Error Rate (BER) performance of a satellite link and is particularly well suited to measurement of low bit rate satellite links where it can result in a major reduction in test time (about 3 weeks per satellite for the Geosynchronous Operational Environmental Satellite [GOES] satellites during ground test) and can provide diagnostic information. Empirical techniques developed to predict BER performance from EVM measurements and lessons learned about applying these techniques during GOES N, O, and P integration test and post launch testing, are discussed.
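A hedged illustration of the underlying idea, not the empirical GOES calibration: compute the RMS error vector magnitude from received versus ideal symbols and map it to an approximate BER via the textbook AWGN/QPSK relation BER ≈ Q(1/EVM_rms).

```python
import numpy as np
from math import erfc, sqrt

def evm_rms(received, ideal):
    """RMS error vector magnitude, normalized by the ideal constellation power."""
    return sqrt(np.mean(np.abs(received - ideal) ** 2) / np.mean(np.abs(ideal) ** 2))

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

rng = np.random.default_rng(3)
ideal = (rng.choice([-1, 1], 10000) + 1j * rng.choice([-1, 1], 10000)) / sqrt(2)   # QPSK symbols
received = ideal + (rng.normal(scale=0.1, size=10000) + 1j * rng.normal(scale=0.1, size=10000))

evm = evm_rms(received, ideal)
ber_estimate = qfunc(1.0 / evm)     # approximate BER under the AWGN/QPSK assumption
```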
Criticality of Adaptive Control Dynamics
NASA Astrophysics Data System (ADS)
Patzelt, Felix; Pawelzik, Klaus
2011-12-01
We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1980-01-01
A computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 μm) is discussed. The catalogue was used as a planning guide and as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances.
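A compact sketch of how a catalogue entry's predicted frequency and its estimated error can be propagated from fitted constants and their covariance; the toy rigid-rotor model f = 2B(J+1) and the numbers below are illustrative assumptions, not the catalogue's actual fits.

```python
import numpy as np

B, cov_B = 57635.96, 0.05 ** 2           # assumed rotational constant (MHz) and its variance
def predict_line(J_lower):
    f = 2.0 * B * (J_lower + 1)          # predicted transition frequency (MHz)
    jac = np.array([2.0 * (J_lower + 1)])
    sigma = np.sqrt(jac @ np.array([[cov_B]]) @ jac)   # 1-sigma error propagated from the covariance
    return f, float(sigma)

freq, freq_err = predict_line(J_lower=1)
```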
Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ming; Cygler,
The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, over the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.
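A sketch of the logfile-style analysis described above, using simulated per-axis compensation errors rather than patient data: combine the axes into a radial error, then report the mean and the 99th percentile.

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated compensation errors (mm) along superior/inferior, left/right, anterior/posterior axes
err_si, err_lr, err_ap = (rng.normal(scale=s, size=10000) for s in (1.1, 0.6, 0.6))
radial = np.sqrt(err_si ** 2 + err_lr ** 2 + err_ap ** 2)

mean_radial = radial.mean()
p99_radial = np.percentile(radial, 99)
```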
ERIC Educational Resources Information Center
Lee, Hongjoo J.; Gallagher, Michela; Holland, Peter C.
2010-01-01
The central amygdala nucleus (CeA) plays a critical role in cognitive processes beyond fear conditioning. For example, intact CeA function is essential for enhancing attention to conditioned stimuli (CSs). Furthermore, this enhanced attention depends on the CeA's connections to the nigrostriatal system. In the current study, we examined the role…
The Role of Testimony in Young Children's Solution of a Gravity-Driven Invisible Displacement Task
ERIC Educational Resources Information Center
Bascandziev, Igor; Harris, Paul L.
2010-01-01
Previous research has shown that young children make a perseverative, gravity-oriented error when asked to predict the final location of a ball dropped down an S-shaped opaque tube (Hood, 1995). We asked if providing children with verbal information concerning the role that the tubes play in determining the ball's trajectory would improve their…
Genomic Prediction Accounting for Residual Heteroskedasticity
Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.
2015-01-01
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
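A compact sketch (not the study's hierarchical Bayesian model) of the core idea: a ridge-type whole-genome prediction in which each observation is weighted by the inverse of its assumed residual variance, so heteroskedasticity is acknowledged rather than ignored. Marker data, effects, and variances are simulated.

```python
import numpy as np

rng = np.random.default_rng(10)
n, p = 200, 500
markers = rng.binomial(2, 0.3, size=(n, p)).astype(float)       # SNP genotypes coded 0/1/2
effects = rng.normal(scale=0.05, size=p)
resid_var = rng.uniform(0.5, 3.0, size=n)                        # environment-driven heterogeneity
phenotype = markers @ effects + rng.normal(scale=np.sqrt(resid_var))

W = np.diag(1.0 / resid_var)                                     # inverse-variance observation weights
lam = 10.0                                                       # assumed ridge penalty
beta_hat = np.linalg.solve(markers.T @ W @ markers + lam * np.eye(p),
                           markers.T @ W @ phenotype)
gebv = markers @ beta_hat                                        # genomic breeding value predictions
```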
Peterman, Robert J; Jiang, Shuying; Johe, Rene; Mukherjee, Padma M
2016-12-01
Dolphin® visual treatment objective (VTO) prediction software is routinely utilized by orthodontists during the treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered to be a vital tool, its accuracy is not well understood for two-jaw surgical procedures. The objective of this study was to quantify the accuracy of Dolphin Imaging's VTO soft tissue prediction software on class III patients treated with maxillary advancement and mandibular setback and to validate the efficacy of the software in such complex cases. This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue patient profile prediction. Prediction images were then compared to the actual post-treatment profile photos to determine differences. Dolphin Imaging's software was determined to be accurate within an error range of +/- 2 mm in the X-axis at most landmarks. The lower lip predictions were the most inaccurate. Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner. However, Dolphin should not be relied on for precise treatment planning of surgical movements. This program should be used with caution to prevent unrealistic patient expectations and dissatisfaction.
Naturalistic distraction and driving safety in older drivers.
Aksan, Nazan; Dawson, Jeffrey D; Emerson, Jamie L; Yu, Lixi; Uc, Ergun Y; Anderson, Steven W; Rizzo, Matthew
2013-08-01
In this study, we aimed to quantify and compare the performance of middle-aged and older drivers during a naturalistic distraction paradigm (visual search for roadside targets) and to predict older drivers' performance given functioning in visual, motor, and cognitive domains. Distracted driving can imperil healthy adults and may disproportionally affect the safety of older drivers with visual, motor, and cognitive decline. A total of 203 drivers, 120 healthy older (61 men and 59 women, ages 65 years and older) and 83 middle-aged drivers (38 men and 45 women, ages 40 to 64 years), participated in an on-road test in an instrumented vehicle. Outcome measures included performance in roadside target identification (traffic signs and restaurants) and concurrent driver safety. Differences in visual, motor, and cognitive functioning served as predictors. Older drivers identified fewer landmarks and drove slower but committed more safety errors than did middle-aged drivers. Greater familiarity with local roads benefited performance of middle-aged but not older drivers. Visual cognition predicted both traffic sign identification and safety errors, and executive function predicted traffic sign identification over and above vision. Older adults are susceptible to driving safety errors while distracted by common secondary visual search tasks that are inherent to driving. The findings underscore that age-related cognitive decline affects older drivers' management of driving tasks at multiple levels and can help inform the design of on-road tests and interventions for older drivers.
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2016-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
Economic optimization of operations for hybrid energy systems under variable markets
Chen, Jen; Garcia, Humberto E.
2016-05-21
We proposed a hybrid energy system (HES), which is an important element in enabling increasing penetration of clean energy. Our paper investigates the operations flexibility of HES and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. Economic advantages of such an operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. A sensitivity analysis with respect to market variability and prediction error is also performed.
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2018-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a “large-sample” approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances. PMID:29697706
Economic optimization of operations for hybrid energy systems under variable markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jen; Garcia, Humberto E.
We proposed a hybrid energy system (HES), which is an important element in enabling increasing penetration of clean energy. Our paper investigates the operations flexibility of HES and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. Economic advantages of such an operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. A sensitivity analysis with respect to market variability and prediction error is also performed.
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions.
Nearing, Grey S; Mocko, David M; Peters-Lidard, Christa D; Kumar, Sujay V; Xia, Youlong
2016-03-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a "large-sample" approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
The fate of memory: Reconsolidation and the case of Prediction Error.
Fernández, Rodrigo S; Boccia, Mariano M; Pedreira, María E
2016-09-01
The ability to make predictions based on stored information is a general coding strategy. A Prediction-Error (PE) is a mismatch between expected and current events. It was proposed as the process by which memories are acquired. But our memories, like ourselves, are subject to change. Thus, an acquired memory can become active and update its content or strength by a labilization-reconsolidation process. Within the reconsolidation framework, PE drives the updating of consolidated memories. Moreover, memory features, such as strength and age, are crucial boundary conditions that limit the initiation of the reconsolidation process. In order to disentangle these boundary conditions, we review the role of surprise, classical models of conditioning, and their neural correlates. Several forms of PE were found to be capable of inducing memory labilization-reconsolidation. Notably, many of the PE findings mirror those of memory reconsolidation, suggesting a strong link between these signals and memory processes. Altogether, the aim of the present work is to integrate a psychological and neuroscientific analysis of PE into a general framework for memory reconsolidation. Copyright © 2016 Elsevier Ltd. All rights reserved.
3D foot shape generation from 2D information.
Luximon, Ameersing; Goonetilleke, Ravindra S; Zhang, Ming
2005-05-15
Two methods to generate an individual 3D foot shape from 2D information are proposed. A standard foot shape was first generated and then scaled based on known 2D information. In the first method, the foot outline and the foot height were used, and in the second, the foot outline and the foot profile were used. The models were developed using 40 participants and then validated using a different set of 40 participants. Results show that each individual foot shape can be predicted within a mean absolute error of 1.36 mm for the left foot and 1.37 mm for the right foot using the first method, and within a mean absolute error of 1.02 mm for the left foot and 1.02 mm for the right foot using the second method. The second method shows somewhat improved accuracy even though it requires two images. Both the methods are relatively cheaper than using a scanner to determine the 3D foot shape for custom footwear design.
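A minimal sketch of the scaling idea under stated assumptions: a standard 3D foot point cloud is stretched so its overall length, width, and height match an individual's measurements, and accuracy is summarized as a mean absolute error in millimetres. All shapes here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
standard_foot = rng.normal(size=(2000, 3)) * [260.0, 95.0, 60.0]    # stand-in point cloud (x, y, z in mm)

def scale_to_individual(standard, length, width, height):
    """Scale the standard shape so its extents match the individual's 2D-derived dimensions."""
    ref = np.ptp(standard, axis=0)
    return standard * (np.array([length, width, height]) / ref)

individual_true = standard_foot * [1.05, 0.98, 1.02] + rng.normal(scale=1.0, size=standard_foot.shape)
predicted = scale_to_individual(standard_foot, *np.ptp(individual_true, axis=0))
mae_mm = np.mean(np.abs(predicted - individual_true))
```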
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
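A small synthetic sketch of the attenuation effect the abstract describes: adding measurement error to rainfall (the regression input) biases the fitted rainfall-runoff slope toward zero and inflates prediction error.

```python
import numpy as np

rng = np.random.default_rng(8)
rainfall = rng.gamma(shape=2.0, scale=10.0, size=500)
runoff = 0.6 * rainfall + rng.normal(scale=2.0, size=500)          # assumed true linear relation

noisy_rainfall = rainfall + rng.normal(scale=5.0, size=500)        # input measurement error

slope_true_input = np.polyfit(rainfall, runoff, 1)[0]              # close to 0.6
slope_noisy_input = np.polyfit(noisy_rainfall, runoff, 1)[0]       # attenuated (biased) below 0.6
```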
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC 0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC 0-t of saroglitazar. Only models with regression coefficients (R 2 ) >0.90 were screened for further evaluation. The best R 2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC 0-t of saroglitazar and verification of precision and bias using Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R 2 > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R 2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R 2 = 0.9323) and 0.75, 2, and 8 hours (R 2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC 0-t prediction of saroglitazar. The same models, when applied to the AUC 0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30% and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time points limited sampling model predicts the exposure of saroglitazar (ie, AUC 0-t ) within predefined acceptable bias and imprecision limit. Same model was also used to predict AUC 0-∞ . The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
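A hedged sketch of a three-time-point limited sampling model on simulated data: regress observed AUC(0-t) on concentrations at 0.5, 2, and 8 hours, then check R², mean prediction error, and RMSE on a held-out set. The concentration model and coefficients are assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
conc = rng.lognormal(mean=1.0, sigma=0.3, size=(100, 3))        # C(0.5 h), C(2 h), C(8 h) per subject
auc = conc @ np.array([2.0, 6.0, 20.0]) + rng.normal(scale=2.0, size=100)

X_tr, X_te, y_tr, y_te = train_test_split(conc, auc, test_size=0.25, random_state=0)
lsm = LinearRegression().fit(X_tr, y_tr)
pred = lsm.predict(X_te)

r2 = lsm.score(X_tr, y_tr)                                      # screened against R^2 > 0.90
mpe = np.mean((pred - y_te) / y_te) * 100                       # mean prediction error, %
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
```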
Complementary roles for amygdala and periaqueductal gray in temporal-difference fear learning.
Cole, Sindy; McNally, Gavan P
2009-01-01
Pavlovian fear conditioning is not a unitary process. At the neurobiological level multiple brain regions and neurotransmitters contribute to fear learning. At the behavioral level many variables contribute to fear learning including the physical salience of the events being learned about, the direction and magnitude of predictive error, and the rate at which these are learned about. These experiments used a serial compound conditioning design to determine the roles of basolateral amygdala (BLA) NMDA receptors and ventrolateral midbrain periaqueductal gray (vlPAG) mu-opioid receptors (MOR) in predictive fear learning. Rats received a three-stage design, which arranged for both positive and negative prediction errors producing bidirectional changes in fear learning within the same subjects during the test stage. Intra-BLA infusion of the NR2B receptor antagonist Ifenprodil prevented all learning. In contrast, intra-vlPAG infusion of the MOR antagonist CTAP enhanced learning in response to positive predictive error but impaired learning in response to negative predictive error--a pattern similar to Hebbian learning and an indication that fear learning had been divorced from predictive error. These findings identify complementary but dissociable roles for amygdala NMDA receptors and vlPAG MOR in temporal-difference predictive fear learning.
Laws of attraction: from perceptual forces to conceptual similarity.
Ziemkiewicz, Caroline; Kosara, Robert
2010-01-01
Many of the pressing questions in information visualization deal with how exactly a user reads a collection of visual marks as information about relationships between entities. Previous research has suggested that people see parts of a visualization as objects, and may metaphorically interpret apparent physical relationships between these objects as suggestive of data relationships. We explored this hypothesis in detail in a series of user experiments. Inspired by the concept of implied dynamics in psychology, we first studied whether perceived gravity acting on a mark in a scatterplot can lead to errors in a participant's recall of the mark's position. The results of this study suggested that such position errors exist, but may be more strongly influenced by attraction between marks. We hypothesized that such apparent attraction may be influenced by elements used to suggest relationship between objects, such as connecting lines, grouping elements, and visual similarity. We further studied what visual elements are most likely to cause this attraction effect, and whether the elements that best predicted attraction errors were also those which suggested conceptual relationships most strongly. Our findings show a correlation between attraction errors and intuitions about relatedness, pointing towards a possible mechanism by which the perception of visual marks becomes an interpretation of data relationships.
Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.
2014-01-01
Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
Wong, Aaron L; Shelhamer, Mark
2014-05-01
Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. Copyright © 2014 the American Physiological Society.
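As one hedged illustration of the fractal time-series tools referred to here, detrended fluctuation analysis (DFA) is a common way to quantify power-law intertrial correlations. The sketch below is a generic first-order DFA implementation, not the authors' analysis code, and the input series is a stand-in for saccade error data:

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """First-order detrended fluctuation analysis; returns the scaling exponent alpha."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))        # integrated (profile) series
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)    # local linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# alpha near 0.5 indicates uncorrelated intertrial errors; alpha > 0.5 indicates the
# persistent long-range correlations linked here to faster adaptation.
errors = np.random.default_rng(1).standard_normal(1024)    # stand-in for saccade errors
print(dfa_exponent(errors))
```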
Disambiguating ventral striatum fMRI-related bold signal during reward prediction in schizophrenia
Morris, R W; Vercammen, A; Lenroot, R; Moore, L; Langton, J M; Short, B; Kulkarni, J; Curtis, J; O'Donnell, M; Weickert, C S; Weickert, T W
2012-01-01
Reward detection, surprise detection and prediction-error signaling have all been proposed as roles for the ventral striatum (vStr). Previous neuroimaging studies of striatal function in schizophrenia have found attenuated neural responses to reward-related prediction errors; however, as prediction errors represent a discrepancy in mesolimbic neural activity between expected and actual events, it is critical to examine responses to both expected and unexpected rewards in conjunction with expected and unexpected reward omissions in order to clarify the nature of ventral striatal dysfunction in schizophrenia. In the present study, healthy adults and people with schizophrenia were tested with a reward-related prediction-error task during functional magnetic resonance imaging to determine whether schizophrenia is associated with altered neural responses in the vStr to rewards, surprise, prediction errors, or all three factors. In healthy adults, we found that neural responses in the vStr correlated more specifically with prediction errors than with surprising events or reward stimuli alone. People with schizophrenia did not display the normal differential activation between expected and unexpected rewards, which was partially due to exaggerated ventral striatal responses to expected rewards (right vStr) but also included blunted responses to unexpected outcomes (left vStr). This finding shows that neural responses which typically are elicited by surprise can also occur to well-predicted events in schizophrenia, and identifies aberrant activity in the vStr as a key node of dysfunction in the neural circuitry used to differentiate expected and unexpected feedback in schizophrenia. PMID:21709684
Sun, Libo; Wan, Ying
2018-04-22
Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on the information from the interim analysis. The observed value of the time-to-event endpoint at the interim analysis could be biased for the true treatment effect due to early censoring, leading to a biased estimate of conditional power and predictive power. In such cases, the estimates and inference for this right-censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution of the transformed primary variable and a correlated auxiliary variable. Simulation studies are conducted that not only show enhanced conditional power and predictive power but also provide a framework for a more efficient futility interim analysis, in terms of improved estimator accuracy, smaller inflation of the type II error, and optimal timing for such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.
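For context, here is a minimal sketch of the conventional B-value formula for conditional power at an interim look; this is the standard calculation, not the authors' auxiliary-variable enhancement, and the numbers are illustrative:

```python
from statistics import NormalDist

def conditional_power(z_interim, info_frac, drift, alpha=0.025):
    """Probability of final rejection given the interim Z statistic.

    z_interim : standardized test statistic at the interim look
    info_frac : fraction of total statistical information accrued (0 < t < 1)
    drift     : assumed expectation of the final Z statistic under the treatment effect
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)
    b = z_interim * info_frac ** 0.5                     # B-value at the interim
    num = z_alpha - b - drift * (1 - info_frac)
    return 1 - nd.cdf(num / (1 - info_frac) ** 0.5)

# Under the "current trend" assumption the drift is estimated from the interim data itself.
z_t, t = 1.2, 0.5
print(conditional_power(z_t, t, drift=z_t / t ** 0.5))
```

Early censoring biases z_interim for a time-to-event endpoint, which is why the abstract's auxiliary-variable approach is aimed at improving the interim estimate that feeds such a calculation.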
De Vries, A; Feleke, S
2008-12-01
This study assessed the accuracy of 3 methods that predict the uniform milk price in Federal Milk Marketing Order 6 (Florida). Predictions were made for 1 to 12 mo into the future. Data were from January 2003 to May 2007. The CURRENT method assumed that future uniform milk prices were equal to the last announced uniform milk price. The F+BASIS and F+UTIL methods were based on the milk futures markets because the futures prices reflect the market's expectation of the class III and class IV cash prices that are announced monthly by USDA. The F+BASIS method added an exponentially weighted moving average of the difference between the class III cash price and the historical uniform milk price (also known as basis) to the class III futures price. The F+UTIL method used the class III and class IV futures prices, the most recently announced butter price, and historical utilizations to predict the skim milk prices, butterfat prices, and utilizations in all 4 classes. Predictions of future utilizations were made with a Holt-Winters smoothing method. Federal Milk Marketing Order 6 had high class I utilization (85 +/- 4.8%). Mean and standard deviation of the class III and class IV cash prices were $13.39 +/- 2.40/cwt (1 cwt = 45.36 kg) and $12.06 +/- 1.80/cwt, respectively. The actual uniform price in Tampa, Florida, was $16.62 +/- 2.16/cwt. The basis was $3.23 +/- 1.23/cwt. The F+BASIS and F+UTIL predictions were generally too low during the period considered because the class III cash prices were greater than the corresponding class III futures prices. For the 1- to 6-mo-ahead predictions, the roots of the mean squared prediction errors from the F+BASIS method were $1.12, $1.20, $1.55, $1.91, $2.16, and $2.34/cwt, respectively. The root of the mean squared prediction errors ranged from $2.50 to $2.73/cwt for predictions up to 12 mo ahead. Results from the F+UTIL method were similar. The accuracies of the F+BASIS and F+UTIL methods for all 12 forecast horizons were not significantly different. Application of the modified Mariano-Diebold tests showed that no method included all the information contained in the other methods. In conclusion, both F+BASIS and F+UTIL methods tended to more accurately predict the future uniform milk prices than the CURRENT method, but prediction errors could be substantial even a few months into the future. The majority of the prediction error was caused by the inefficiency of the futures markets in predicting the class III cash prices.
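A hedged sketch of the F+BASIS idea as described: add an exponentially weighted moving average of the historical basis (read here as the uniform milk price minus the class III cash price) to the class III futures price. The smoothing constant and all prices below are illustrative placeholders, not values from the study:

```python
def ewma_basis_forecast(class3_cash, uniform_price, class3_futures, lam=0.3):
    """Forecast the uniform milk price as the futures price plus a smoothed basis.

    class3_cash, uniform_price : lists of announced historical prices ($/cwt)
    class3_futures             : class III futures price for the target month
    lam                        : EWMA smoothing constant (assumed value)
    """
    basis = None
    for cash, uniform in zip(class3_cash, uniform_price):
        b = uniform - cash                       # monthly basis
        basis = b if basis is None else lam * b + (1 - lam) * basis
    return class3_futures + basis

# Illustrative numbers only (not from the study).
print(ewma_basis_forecast([13.1, 13.6, 13.4], [16.3, 16.8, 16.7], class3_futures=13.9))
```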
2016-01-01
Modeling and prediction of polar organic chemical integrative sampler (POCIS) sampling rates (Rs) for 73 compounds using artificial neural networks (ANNs) is presented for the first time. Two models were constructed: the first was developed ab initio using a genetic algorithm (GSD-model) to shortlist 24 descriptors covering constitutional, topological, geometrical and physicochemical properties and the second model was adapted for Rs prediction from a previous chromatographic retention model (RTD-model). Mechanistic evaluation of descriptors showed that models did not require comprehensive a priori information to predict Rs. Average predicted errors for the verification and blind test sets were 0.03 ± 0.02 L d–1 (RTD-model) and 0.03 ± 0.03 L d–1 (GSD-model) relative to experimentally determined Rs. Prediction variability in replicated models was the same or less than for measured Rs. Networks were externally validated using a measured Rs data set of six benzodiazepines. The RTD-model performed best in comparison to the GSD-model for these compounds (average absolute errors of 0.0145 ± 0.008 L d–1 and 0.0437 ± 0.02 L d–1, respectively). Improvements to generalizability of modeling approaches will be reliant on the need for standardized guidelines for Rs measurement. The use of in silico tools for Rs determination represents a more economical approach than laboratory calibrations. PMID:27363449
Flow Mapping Based on the Motion-Integration Errors of Autonomous Underwater Vehicles
NASA Astrophysics Data System (ADS)
Chang, D.; Edwards, C. R.; Zhang, F.
2016-02-01
Knowledge of a flow field is crucial in the navigation of autonomous underwater vehicles (AUVs) since the motion of AUVs is affected by ambient flow. Due to the imperfect knowledge of the flow field, it is typical to observe a difference between the actual and predicted trajectories of an AUV, which is referred to as a motion-integration error (also known as a dead-reckoning error if an AUV navigates via dead-reckoning). The motion-integration error has been essential for an underwater glider to compute its flow estimate from the travel information of the last leg and to improve navigation performance by using the estimate for the next leg. However, the estimate by nature exhibits a phase difference compared to ambient flow experienced by gliders, prohibiting its application in a flow field with strong temporal and spatial gradients. In our study, to mitigate the phase problem, we have developed a local ocean model by combining the flow estimate based on the motion-integration error with flow predictions from a tidal ocean model. Our model has been used to create desired trajectories of gliders for guidance. Our method is validated by Long Bay experiments in 2012 and 2013 in which we deployed multiple gliders on the shelf of South Atlantic Bight and near the edge of Gulf Stream. In our recent study, the application of the motion-integration error is further extended to create a spatial flow map. Considering that the motion-integration errors of AUVs accumulate along their trajectories, the motion-integration error is formulated as a line integral of ambient flow which is then reformulated into algebraic equations. By solving an inverse problem for these algebraic equations, we obtain the knowledge of such flow in near real time, allowing more effective and precise guidance of AUVs in a dynamic environment. This method is referred to as motion tomography. We provide the results of non-parametric and parametric flow mapping from both simulated and experimental data.
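The line-integral formulation described here can be sketched as a linear inverse problem: each trajectory segment contributes an equation in which the motion-integration error equals the time spent in each flow cell multiplied by that cell's unknown flow. The grid, residence times, and error values below are illustrative assumptions, not the experimental setup:

```python
import numpy as np

# Suppose the survey area is divided into 4 flow cells with unknown eastward flow u (m/s).
# For each glider leg, A[i, j] holds the time (s) the glider spent in cell j, and
# d[i] is the observed eastward motion-integration error (m) of that leg.
A = np.array([
    [600.0, 300.0,   0.0,   0.0],
    [  0.0, 500.0, 400.0,   0.0],
    [  0.0,   0.0, 350.0, 550.0],
    [400.0,   0.0,   0.0, 500.0],
    [200.0, 200.0, 200.0, 300.0],
])
d = np.array([55.0, 62.0, 48.0, 51.0, 40.0])

# Least-squares solution of A u = d recovers a cell-averaged flow map.
u, residuals, rank, _ = np.linalg.lstsq(A, d, rcond=None)
print("estimated eastward flow per cell (m/s):", np.round(u, 3))
```

The same system can be written for the northward component, and new legs append rows so the map can be updated in near real time.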
Method and apparatus for sensor fusion
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)
1991-01-01
A method and apparatus for fusion of data from optical and radar sensors by an error-minimization procedure is presented. The method was applied to the problem of shape reconstruction of an unknown surface at a distance. The method involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross section (RCS) of the surface, comparing the predicted and observed values of the RCS, and improving the surface model from the results of the comparison. The theoretical RCS may be computed from the surface model in several ways. One RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides this independent information.
Hoffman, Paul; Jefferies, Elizabeth; Ralph, Matthew A Lambon
2011-02-01
More efficient processing of high frequency (HF) words is a ubiquitous finding in healthy individuals, yet frequency effects are often small or absent in stroke aphasia. We propose that some patients fail to show the expected frequency effect because processing of HF words places strong demands on semantic control and regulation processes, counteracting the usual effect. This may occur because HF words appear in a wide range of linguistic contexts, each associated with distinct semantic information. This theory predicts that in extreme circumstances, patients with impaired semantic control should show an outright reversal of the normal frequency effect. To test this prediction, we tested two patients with impaired semantic control with a delayed repetition task that emphasised activation of semantic representations. By alternating HF and low frequency (LF) trials, we demonstrated a significant repetition advantage for LF words, principally because of perseverative errors in which patients produced the previous LF response in place of the HF target. These errors indicated that HF words were more weakly activated than LF words. We suggest that when presented with no contextual information, patients generate a weak and unstable pattern of semantic activation for HF words because information relating to many possible contexts and interpretations is activated. In contrast, LF words are associated with more stable patterns of activation because similar semantic information is activated whenever they are encountered. Copyright © 2011 Elsevier Ltd. All rights reserved.
Morphodynamic data assimilation used to understand changing coasts
Plant, Nathaniel G.; Long, Joseph W.
2015-01-01
Morphodynamic data assimilation blends observations with model predictions and comes in many forms, including linear regression, Kalman filter, brute-force parameter estimation, variational assimilation, and Bayesian analysis. Importantly, data assimilation can be used to identify sources of prediction errors that lead to improved fundamental understanding. Overall, models incorporating data assimilation yield better information to the people who must make decisions impacting safety and wellbeing in coastal regions that experience hazards due to storms, sea-level rise, and erosion. We present examples of data assimilation associated with morphologic change. We conclude that enough morphodynamic predictive capability is available now to be useful to people, and that we will increase our understanding and the level of detail of our predictions through assimilation of observations and numerical-statistical models.
Feaster, Toby D.; Tasker, Gary D.
2002-01-01
Data from 167 streamflow-gaging stations in or near South Carolina with 10 or more years of record through September 30, 1999, were used to develop two methods for estimating the magnitude and frequency of floods in South Carolina for rural ungaged basins that are not significantly affected by regulation. Flood-frequency estimates for 54 gaged sites in South Carolina were computed by fitting the water-year peak flows for each site to a log-Pearson Type III distribution. As part of the computation of flood-frequency estimates for gaged sites, new values for generalized skew coefficients were developed. Flood-frequency analyses also were made for gaging stations that drain basins from more than one physiographic province. The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, updated these data from previous flood-frequency reports to aid officials who are active in floodplain management as well as those who design bridges, culverts, levees, or other structures near streams where flooding is likely to occur. Regional regression analysis, using generalized least squares regression, was used to develop a set of predictive equations that can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for rural ungaged basins in the Blue Ridge, Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The predictive equations are all functions of drainage area. Average errors of prediction for these regression equations ranged from -16 to 19 percent for the 2-year recurrence-interval flow in the upper Coastal Plain to -34 to 52 percent for the 500-year recurrence-interval flow in the lower Coastal Plain. A region-of-influence method also was developed that interactively estimates recurrence-interval flows for rural ungaged basins in the Blue Ridge of South Carolina. The region-of-influence method uses regression techniques to develop a unique relation between flow and basin characteristics for an individual watershed, which can then be used to estimate flows at ungaged sites. Because the computations required for this method are somewhat complex, a computer application was developed that performs the computations and compares the predictive errors for this method. The computer application includes the option of using the region-of-influence method or the generalized least squares regression equations from this report to compute estimated flows and errors of prediction specific to each ungaged site. From a comparison of predictive errors using the region-of-influence method with those computed using the regional regression method, the region-of-influence method performed systematically better only in the Blue Ridge and is, therefore, not recommended for use in the other physiographic provinces. Peak-flow data for the South Carolina stations used in the regionalization study are provided in appendix A, which contains gaging station information, log-Pearson Type III statistics, information on stage-flow relations, and water-year peak stages and flows. For informational purposes, water-year peak-flow data for stations on regulated streams in South Carolina also are provided in appendix D. Other information pertaining to the regulated streams is provided in the text of the report.
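For reference, a minimal sketch of a log-Pearson Type III fit to annual peak flows using the standard frequency-factor approach. The Wilson-Hilferty approximation below is a textbook method rather than the exact USGS Bulletin-style procedure, and the peak-flow values are made up:

```python
import numpy as np
from statistics import NormalDist

def lp3_quantile(peaks, return_period):
    """Flood quantile from a log-Pearson Type III fit by the method of moments."""
    logq = np.log10(np.asarray(peaks, float))
    mean, std = logq.mean(), logq.std(ddof=1)
    n = len(logq)
    skew = (n / ((n - 1) * (n - 2))) * np.sum(((logq - mean) / std) ** 3)  # sample skew

    z = NormalDist().inv_cdf(1 - 1 / return_period)
    k = skew / 6.0
    # Wilson-Hilferty frequency factor for the Pearson Type III distribution
    K = (2 / skew) * (((z - k) * k + 1) ** 3 - 1) if abs(skew) > 1e-6 else z
    return 10 ** (mean + K * std)

peaks = [420, 610, 380, 890, 540, 720, 460, 1010, 350, 660]   # hypothetical peaks (cfs)
print(f"100-year flow ~ {lp3_quantile(peaks, 100):.0f} cfs")
```

In the regional approach described above, quantiles from such at-site fits become the dependent variables regressed against drainage area.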
Mathematical foundations of hybrid data assimilation from a synchronization perspective
NASA Astrophysics Data System (ADS)
Penny, Stephen G.
2017-12-01
The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.
Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R
2014-01-01
We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
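A hedged sketch of classical (Torgerson) multidimensional scaling applied to a similarity-rating matrix, the general kind of analysis described here; the dissimilarity conversion and the toy matrix are assumptions, not the authors' pipeline:

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical MDS: embed items so Euclidean distances approximate the dissimilarities."""
    D2 = np.asarray(dissim, float) ** 2
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                        # double-centered matrix
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1][:n_dims]
    return evecs[:, order] * np.sqrt(np.maximum(evals[order], 0))

# Toy example: similarity ratings (0-10) among 4 vehicles, converted to dissimilarities.
similarity = np.array([[10, 8, 3, 2],
                       [ 8, 10, 4, 3],
                       [ 3, 4, 10, 7],
                       [ 2, 3, 7, 10]], float)
coords = classical_mds(10 - similarity)
print(np.round(coords, 2))   # items that land close together are the likeliest confusions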
Impact of SST Anomaly Events over the Kuroshio-Oyashio Extension on the "Summer Prediction Barrier"
NASA Astrophysics Data System (ADS)
Wu, Yujie; Duan, Wansuo
2018-04-01
The "summer prediction barrier" (SPB) of SST anomalies (SSTA) over the Kuroshio-Oyashio Extension (KOE) refers to the phenomenon that prediction errors of KOE-SSTA tend to increase rapidly during boreal summer, resulting in large prediction uncertainties. The fast error growth associated with the SPB occurs in the mature-to-decaying transition phase, which is usually during the August-September-October (ASO) season, of the KOE-SSTA events to be predicted. Thus, the role of KOE-SSTA evolutionary characteristics in the transition phase in inducing the SPB is explored by performing perfect model predictability experiments in a coupled model, indicating that the SSTA events with larger mature-to-decaying transition rates (Category-1) favor a greater possibility of yielding a more significant SPB than those events with smaller transition rates (Category-2). The KOE-SSTA events in Category-1 tend to have more significant anomalous Ekman pumping in their transition phase, resulting in larger prediction errors of vertical oceanic temperature advection associated with the SSTA events. Consequently, Category-1 events possess faster error growth and larger prediction errors. In addition, the anomalous Ekman upwelling (downwelling) in the ASO season also causes SSTA cooling (warming), accelerating the transition rates of warm (cold) KOE-SSTA events. Therefore, the SSTA transition rate and error growth rate are both related with the anomalous Ekman pumping of the SSTA events to be predicted in their transition phase. This may explain why the SSTA events transferring more rapidly from the mature to decaying phase tend to have a greater possibility of yielding a more significant SPB.
Huang, Haoqian; Chen, Xiyuan; Zhang, Bo; Wang, Jian
2017-01-01
The underwater navigation system, mainly consisting of MEMS inertial sensors, is a key technology for the wide application of underwater gliders and plays an important role in achieving high-accuracy navigation and positioning over long periods of time. However, navigation errors accumulate over time because of the inherent errors of inertial sensors, especially for the MEMS-grade IMU (Inertial Measurement Unit) generally used in gliders. A dead-reckoning module is added to compensate for these errors. In the complicated underwater environment, the performance of MEMS sensors degrades sharply and the errors become much larger. It is difficult to establish an accurate and fixed error model for the inertial sensors, so it is very hard to improve the accuracy of the navigation information calculated from them. To address this problem, a filter that integrates a multi-model method with an EKF approach can be designed according to the different error models to give an optimal estimate of the state. The key parameters of the error models can be used to determine the corresponding filter. The Adams explicit formula, which has the advantage of high-precision prediction, is fused into this filter to further improve attitude estimation accuracy. The proposed algorithm has been verified through theoretical analysis and has been tested by both vehicle experiments and lake trials. Results show that the proposed method has better accuracy and effectiveness in attitude estimation compared with the other methods mentioned in the paper for inertial navigation applied to underwater gliders. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Kalman Filtered MR Temperature Imaging for Laser Induced Thermal Therapies
Fuentes, D.; Yung, J.; Hazle, J. D.; Weinberg, J. S.; Stafford, R. J.
2013-01-01
The feasibility of using a stochastic form of the Pennes bioheat model within a 3D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L2 (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high-quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, Δt < 10 sec, until the data corruption subsides. In its present form, the KF-MRTI method fails to compensate for consecutive time periods of data loss Δt > 10 sec. PMID:22203706
NASA Astrophysics Data System (ADS)
Concha Larrauri, P.
2015-12-01
Orange production in Florida has experienced a decline over the past decade. Hurricanes in 2004 and 2005 greatly affected production, almost to the same degree as the strong freezes that occurred in the 1980s. The spread of citrus greening disease after the hurricanes has also contributed to the reduction in orange production in Florida. The occurrence of hurricanes and diseases cannot easily be predicted, but the additional effects of climate on orange yield can be studied and incorporated into existing production forecasts that are based on physical surveys, such as the October Citrus forecast issued every year by the USDA. Specific climate variables occurring before and after the October forecast is issued can have impacts on flowering, orange drop rates, growth, and maturation, and can contribute to the forecast error. Here we present a methodology to incorporate local climate variables to predict the error in the USDA's orange production forecast, and we study the local effects of climate on yield in different counties in Florida. This information can help farmers gain insight into what to expect during the orange production cycle, and can help supply chain managers better plan their strategy.
NASA Astrophysics Data System (ADS)
Chevallier, Frédéric; Broquet, Grégoire; Pierangelo, Clémence; Crisp, David
2017-07-01
The column-average dry air-mole fraction of carbon dioxide in the atmosphere (XCO2) is retrieved from scattered satellite measurements such as those from the Orbiting Carbon Observatory (OCO-2). We show that global continuous maps of XCO2 (corresponding to level 3 of the satellite data) at daily or coarser temporal resolution can be inferred from these data with a Kalman filter built on a model of persistence. Our application of this approach to 2 years of OCO-2 retrievals indicates that the filter provides better information than a climatology of XCO2 at both daily and monthly scales. Provided that the assigned observation uncertainty statistics are tuned in each grid cell of the XCO2 maps with an objective method (based on consistency diagnostics), the errors predicted by the filter at daily and monthly scales represent the true error statistics reasonably well, except for a bias in the high latitudes of the winter hemisphere and a lack of resolution (i.e., too little discrimination skill) in the predicted error standard deviations. Due to the sparse satellite sampling, the broad-scale patterns of XCO2 described by the filter seem to lag behind the real signals by a few weeks. Finally, the filter offers interesting insights into the quality of the retrievals, in terms of both random and systematic errors.
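A minimal per-grid-cell sketch of a Kalman filter built on a model of persistence (a random-walk state), in the spirit described here. The process and observation variances, initial state, and XCO2 values are illustrative placeholders, not the tuned values from the paper:

```python
import numpy as np

def persistence_kalman(obs, obs_var, q=0.05, x0=400.0, p0=25.0):
    """Scalar Kalman filter with a persistence (random-walk) state model.

    obs     : sequence of XCO2 retrievals for one grid cell (np.nan where no sounding)
    obs_var : observation error variance assigned to each retrieval
    q       : process-noise variance added per time step (assumed)
    """
    x, p, states, errors = x0, p0, [], []
    for y, r in zip(obs, obs_var):
        p = p + q                          # forecast step: persistence + growing uncertainty
        if not np.isnan(y):                # analysis step only where a retrieval exists
            k = p / (p + r)
            x = x + k * (y - x)
            p = (1 - k) * p
        states.append(x)
        errors.append(np.sqrt(p))
    return np.array(states), np.array(errors)

obs = np.array([401.2, np.nan, np.nan, 402.0, np.nan, 402.8])
xs, sd = persistence_kalman(obs, obs_var=np.full(obs.shape, 1.0))
print(np.round(xs, 2), np.round(sd, 2))
```

Between soundings the state simply persists while its predicted error standard deviation grows, which is the behavior the abstract's consistency diagnostics are meant to calibrate.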
Pilkington, Emma; Keidel, James; Kendrick, Luke T.; Saddy, James D.; Sage, Karen; Robson, Holly
2017-01-01
This study examined patterns of neologistic and perseverative errors during word repetition in fluent Jargon aphasia. The principal hypotheses accounting for Jargon production indicate that poor activation of a target stimulus leads to weakly activated target phoneme segments, which are outcompeted at the phonological encoding level. Voxel-lesion symptom mapping studies of word repetition errors suggest a breakdown in the translation from auditory-phonological analysis to motor activation. Behavioral analyses of repetition data were used to analyse the target relatedness (Phonological Overlap Index: POI) of neologistic errors and patterns of perseveration in 25 individuals with Jargon aphasia. Lesion-symptom analyses explored the relationship between neurological damage and jargon repetition in a group of 38 aphasia participants. Behavioral results showed that neologisms produced by 23 jargon individuals contained greater degrees of target lexico-phonological information than predicted by chance and that neologistic and perseverative production were closely associated. A significant relationship between jargon production and lesions to temporoparietal regions was identified. Region of interest regression analyses suggested that damage to the posterior superior temporal gyrus and superior temporal sulcus in combination was best predictive of a Jargon aphasia profile. Taken together, these results suggest that poor phonological encoding, secondary to impairment in sensory-motor integration, alongside impairments in self-monitoring result in jargon repetition. Insights for clinical management and future directions are discussed. PMID:28522967
From feedback- to response-based performance monitoring in active and observational learning.
Bellebaum, Christian; Colosio, Marco
2014-09-01
Humans can adapt their behavior by learning from the consequences of their own actions or by observing others. Gradual active learning of action-outcome contingencies is accompanied by a shift from feedback- to response-based performance monitoring. This shift is reflected by complementary learning-related changes of two ACC-driven ERP components, the feedback-related negativity (FRN) and the error-related negativity (ERN), which have both been suggested to signal events "worse than expected," that is, a negative prediction error. Although recent research has identified comparable components for observed behavior and outcomes (observational ERN and FRN), it is as yet unknown, whether these components are similarly modulated by prediction errors and thus also reflect behavioral adaptation. In this study, two groups of 15 participants learned action-outcome contingencies either actively or by observation. In active learners, FRN amplitude for negative feedback decreased and ERN amplitude in response to erroneous actions increased with learning, whereas observational ERN and FRN in observational learners did not exhibit learning-related changes. Learning performance, assessed in test trials without feedback, was comparable between groups, as was the ERN following actively performed errors during test trials. In summary, the results show that action-outcome associations can be learned similarly well actively and by observation. The mechanisms involved appear to differ, with the FRN in active learning reflecting the integration of information about own actions and the accompanying outcomes.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
Earthquake Prediction in a Big Data World
NASA Astrophysics Data System (ADS)
Kossobokov, V. G.
2016-12-01
The digital revolution that started about 15 years ago has already pushed global information storage capacity past 5000 exabytes (in optimally compressed bytes) per year. Open data in a Big Data World provides unprecedented opportunities for enhancing studies of the Earth System. However, it also opens wide avenues for deceptive associations in inter- and transdisciplinary data and for misleading predictions based on so-called "precursors". Earthquake prediction is not an easy task; it implies a delicate application of statistics. So far, none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Regrettably, in many cases of seismic hazard assessment (SHA), from term-less to time-dependent (probabilistic PSHA or deterministic DSHA), and short-term earthquake forecasting (StEF), the claims of a high potential of the method are based on a flawed application of statistics and, therefore, are hardly suitable for communication to decision makers. Self-testing must be done in advance of claiming prediction of hazardous areas and/or times. The necessity and possibility of applying simple tools of earthquake prediction strategies are evident; in particular, the Error Diagram, introduced by G.M. Molchan in the early 1990s, and the Seismic Roulette null hypothesis as a metric of the alerted space. The set of errors, i.e., the rates of failure and of the alerted space-time volume, can easily be compared to random guessing, and this comparison permits evaluating the effectiveness of an SHA method and determining the optimal choice of parameters with regard to a given cost-benefit function. This and other information obtained in such simple testing may supply us with realistic estimates of the confidence and accuracy of SHA predictions and, if reliable but not necessarily perfect, with related recommendations on the level of risk for decision making in regard to engineering design, insurance, and emergency management. Examples of independent expertise of "seismic hazard maps", "precursors", and "forecast/prediction methods" are provided.
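As a hedged sketch of the error-diagram tool mentioned (the Molchan diagram): for each alarm threshold one tabulates the fraction of target earthquakes missed against the fraction of space-time kept in alarm, and compares the curve to the random-guessing diagonal. The scoring function, weights, and event locations below are illustrative assumptions:

```python
import numpy as np

def error_diagram(alarm_score, cell_weight, quake_cells, thresholds):
    """Miss rate vs. alerted space-time fraction for a gridded alarm function.

    alarm_score : score per space-time cell (higher = more alarming)
    cell_weight : normalized measure of each cell (weights sum to 1)
    quake_cells : indices of cells containing target earthquakes
    """
    points = []
    for thr in thresholds:
        alerted = alarm_score >= thr
        tau = float(np.sum(cell_weight[alerted]))        # fraction of space-time in alarm
        nu = float(np.mean(~alerted[quake_cells]))       # fraction of target quakes missed
        points.append((tau, nu))
    return points   # random guessing lies on the line nu = 1 - tau

rng = np.random.default_rng(2)
score = rng.random(1000)
weight = np.full(1000, 1 / 1000)
quakes = rng.integers(0, 1000, size=20)
for tau, nu in error_diagram(score, weight, quakes, thresholds=[0.2, 0.5, 0.8]):
    print(f"alerted fraction {tau:.2f} -> miss rate {nu:.2f}")
```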
NASA Astrophysics Data System (ADS)
Botha, J. D. M.; Shahroki, A.; Rice, H.
2017-12-01
This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data and time dependent Computational Fluid Dynamics (CFD) calculations are carried out to solve for the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised and the location and mechanism of the primary sources is determined, inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions when compared to the analytic flow solution as well as showing that, for inflow-turbulence noise sources, blade generated turbulence dominates the atmospheric inflow turbulence.
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration using a wide range of measurements obtained at select sites.
Flight Evaluation of Center-TRACON Automation System Trajectory Prediction Process
NASA Technical Reports Server (NTRS)
Williams, David H.; Green, Steven M.
1998-01-01
Two flight experiments (Phase 1 in October 1992 and Phase 2 in September 1994) were conducted to evaluate the accuracy of the Center-TRACON Automation System (CTAS) trajectory prediction process. The Transport Systems Research Vehicle (TSRV) Boeing 737 based at Langley Research Center flew 57 arrival trajectories that included cruise and descent segments; at the same time, descent clearance advisories from CTAS were followed. Actual trajectories of the airplane were compared with the trajectories predicted by the CTAS trajectory synthesis algorithms and airplane Flight Management System (FMS). Trajectory prediction accuracy was evaluated over several levels of cockpit automation that ranged from a conventional cockpit to performance-based FMS vertical navigation (VNAV). Error sources and their magnitudes were identified and measured from the flight data. The major source of error during these tests was found to be the predicted winds aloft used by CTAS. The most significant effect related to flight guidance was the cross-track and turn-overshoot errors associated with conventional VOR guidance. FMS lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot error. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and airplane performance model errors.
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
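A hedged sketch of the general mechanics described: predict each result from its panel-mates by regression, convert the measured result, predicted result, and collection context into an error probability with logistic regression, and accumulate those probabilities in a one-sided CUSUM. The coefficients, allowance, and alarm threshold below are placeholders, not the published model:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def cslr_cusum(measured, predicted, hour, day, coef, reference=0.1, threshold=3.0):
    """Daily one-sided CUSUM over logistic-regression error scores.

    coef      : (intercept, b_measured, b_predicted, b_hour, b_day) -- placeholder values
    reference : allowance subtracted each step so the CUSUM drifts down when in control
    """
    cusum, alarms = 0.0, []
    for m, p, h, d in zip(measured, predicted, hour, day):
        score = logistic(coef[0] + coef[1] * m + coef[2] * p + coef[3] * h + coef[4] * d)
        cusum = max(0.0, cusum + score - reference)
        alarms.append(cusum > threshold)   # True once the CUSUM exceeds the alarm threshold
    return alarms

# Illustrative data: sodium results, their panel-based predictions, and collection context.
measured = np.array([139, 141, 152, 154, 150])
predicted = np.array([140, 140, 141, 140, 141])
print(cslr_cusum(measured, predicted, hour=[8, 9, 10, 11, 12], day=[1, 1, 1, 1, 1],
                 coef=(-8.0, 0.15, -0.12, 0.01, 0.02)))
```

Sustained discrepancies between measured and predicted results drive the score, and hence the CUSUM, upward; in-control data let it decay back toward zero.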
Surprise beyond prediction error
Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst
2014-01-01
Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with an optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model or VIC model) with heteroscedastic error variance as well as from a hydrologic model that exhibits a different structure than that of the candidate models (i.e., the "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single-model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally to all the models, whereas MM-O always assigns higher weights to the best-performing candidate model under the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
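A minimal sketch of the simpler optimal-combination idea (MM-O-style static weights found by least squares with weights summing to one); the dynamic, predictor-state-contingent weighting of MM-1 is not reproduced here, and the streamflow series are synthetic:

```python
import numpy as np

def combine_two_models(q1, q2, q_obs):
    """Optimal static weights for combining two model simulations (weights sum to 1)."""
    # Substitute w2 = 1 - w1 and solve the resulting one-dimensional least-squares problem.
    num = np.sum((q_obs - q2) * (q1 - q2))
    den = np.sum((q1 - q2) ** 2)
    w1 = float(np.clip(num / den, 0.0, 1.0))
    return w1, 1.0 - w1

rng = np.random.default_rng(3)
truth = 50 + 20 * np.sin(np.linspace(0, 6, 120))            # synthetic monthly flow
q_abcd = truth + rng.normal(0, 6, truth.size)               # stand-in for "abcd" output
q_vic = truth + rng.normal(3, 4, truth.size)                # stand-in for VIC output
w1, w2 = combine_two_models(q_abcd, q_vic, truth)
q_mm = w1 * q_abcd + w2 * q_vic
print(f"weights: {w1:.2f}, {w2:.2f}; "
      f"RMSE single={np.sqrt(np.mean((q_vic - truth) ** 2)):.2f} "
      f"multi={np.sqrt(np.mean((q_mm - truth) ** 2)):.2f}")
```

A state-contingent scheme like MM-1 would instead recompute such weights within bins of the predictor state rather than once over the whole calibration record.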
NASA Astrophysics Data System (ADS)
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.
2017-12-01
The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore Washington (NEC-B/W) project and the Indianapolis Flux Experiment (INFLUX), which aim to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods, including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric observations of greenhouse gases coupled to an atmospheric transport and dispersion model. The magnitude of the update is dependent upon the observed enhancement along with the assumed errors, such as those associated with the prior information and the atmospheric transport and dispersion model. These errors are specified within the inversion covariance matrices. The assumed structure and magnitude of the specified errors can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with prior errors to investigate whether the structure of the matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman filter is then run using these covariances along with maximum likelihood estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We also demonstrate that differences between the modeled and observed meteorology can be used to predict uncertainties associated with atmospheric transport and dispersion modeling, which can help improve the skill of an inversion at urban scales.
Flood loss model transfer: on the value of additional data
NASA Astrophysics Data System (ADS)
Schröter, Kai; Lüdtke, Stefan; Vogel, Kristin; Kreibich, Heidi; Thieken, Annegret; Merz, Bruno
2017-04-01
The transfer of models across geographical regions and flood events is a key challenge in flood loss estimation. Variations in local characteristics and continuous system changes require regional adjustments and continuous updating with current evidence. However, acquiring data on damage influencing factors is expensive and therefore assessing the value of additional data in terms of model reliability and performance improvement is of high relevance. The present study utilizes empirical flood loss data on direct damage to residential buildings available from computer aided telephone interviews that were carried out after the floods in 2002, 2005, 2006, 2010, 2011 and 2013 mainly in the Elbe and Danube catchments in Germany. Flood loss model performance is assessed for incrementally increased numbers of loss data which are differentiated according to region and flood event. Two flood loss modeling approaches are considered: (i) a multi-variable flood loss model approach using Random Forests and (ii) a uni-variable stage damage function. Both model approaches are embedded in a bootstrapping process which allows evaluating the uncertainty of model predictions. Predictive performance of both models is evaluated with regard to mean bias, mean absolute and mean squared errors, as well as hit rate and sharpness. Mean bias and mean absolute error give information about the accuracy of model predictions; mean squared error and sharpness about precision and hit rate is an indicator for model reliability. The results of incremental, regional and temporal updating demonstrate the usefulness of additional data to improve model predictive performance and increase model reliability, particularly in a spatial-temporal transfer setting.
Competition between learned reward and error outcome predictions in anterior cingulate cortex.
Alexander, William H; Brown, Joshua W
2010-02-15
The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.
Task relevance modulates the behavioural and neural effects of sensory predictions
Friston, Karl J.; Nobre, Anna C.
2017-01-01
The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants' brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence prediction and PE signalling. PMID:29206225
49 CFR Appendix D to Part 222 - Determining Risk Levels
Code of Federal Regulations, 2011 CFR
2011-10-01
Collision prediction formulas can be used to derive, for each crossing, the predicted number of collisions (PC), with checks for errors such as data entry errors. For the prediction and severity index formulas, see the referenced DOT documentation.
An improved procedure for the validation of satellite-based precipitation estimates
NASA Astrophysics Data System (ADS)
Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad
2015-09-01
The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics in precipitation estimates, and therefore provide limited guidance to both data producers and users. The proposed new validation procedure has two steps: 1) an error decomposition approach separates the total retrieval error into three independent components: hit error, false precipitation, and missed precipitation; and 2) the hit error is further analyzed based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the two most recent versions (Versions 6 and 7) of TRMM's Multi-sensor Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that winter total bias is dominated by the missed precipitation over the west coastal areas and the Rocky Mountains, and the false precipitation over large areas in the Midwest. The summer total bias comes largely from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, to varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows obvious improvements in reducing random errors in both winter and summer seasons, compared to its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on the results, we recommend the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure will work effectively for higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
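The two-step procedure can be illustrated with a small synthetic example. The rain/no-rain decomposition and the multiplicative form est = alpha * ref**beta * eps below follow the description in the abstract, but all data and parameter values are made up for illustration.

```python
# Illustrative sketch (not the authors' code): decompose total error into hit,
# missed, and false-precipitation components, then fit a multiplicative error
# model to the hit cases.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
ref = rng.gamma(shape=0.4, scale=5.0, size=n)           # reference daily rain (mm)
ref[rng.random(n) < 0.5] = 0.0                           # dry days
est = ref * np.exp(rng.normal(0.1, 0.4, n))              # satellite estimate with error
miss_mask = (ref > 0) & (rng.random(n) < 0.05)
est[miss_mask] = 0.0                                     # missed precipitation
false_mask = (ref == 0) & (rng.random(n) < 0.03)
est[false_mask] = rng.gamma(0.3, 2.0, size=false_mask.sum())  # false precipitation

hit = (ref > 0) & (est > 0)
miss = (ref > 0) & (est == 0)
false = (ref == 0) & (est > 0)

hit_bias = np.sum(est[hit] - ref[hit])
missed_p = -np.sum(ref[miss])                            # always negative
false_p = np.sum(est[false])                             # always positive
total_bias = np.sum(est - ref)
print(f"total bias {total_bias:.1f} = hit {hit_bias:.1f} "
      f"+ missed {missed_p:.1f} + false {false_p:.1f}")

# multiplicative error model on the hit cases: est = alpha * ref**beta * eps
x, y = np.log(ref[hit]), np.log(est[hit])
beta, ln_alpha = np.polyfit(x, y, 1)
eps = y - (ln_alpha + beta * x)
print(f"alpha = {np.exp(ln_alpha):.2f}, beta = {beta:.2f}, "
      f"sigma(log eps) = {eps.std():.2f}  (systematic vs. random error)")
```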
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework for solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and therefore can be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows a monotonic convergence behavior of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When the Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts a reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
Kranz, R
2015-01-01
Objective: To establish the prevalence of red dot markers in a sample of wrist radiographs and to identify any anatomical and/or pathological characteristics that predict “incorrect” red dot classification. Methods: Accident and emergency (A&E) wrist cases from a digital imaging and communications in medicine/digital teaching library were examined for red dot prevalence and for the presence of several anatomical and pathological features. Binary logistic regression analyses were run to establish if any of these features were predictors of incorrect red dot classification. Results: 398 cases were analysed. Red dot was “incorrectly” classified in 8.5% of cases; 6.3% were “false negatives” (“FNs”) and 2.3% false positives (FPs) (to one decimal place). Old fractures [odds ratio (OR), 5.070 (1.256–20.471)] and reported degenerative change [OR, 9.870 (2.300–42.359)] were found to predict FPs. Frykman V [OR, 9.500 (1.954–46.179)], Frykman VI [OR, 6.333 (1.205–33.283)] and non-Frykman positive abnormalities [OR, 4.597 (1.264–16.711)] predict “FNs”. Old fractures and Frykman VI were predictive of error at the 90% confidence interval (CI); the rest at the 95% CI. Conclusion: The five predictors of incorrect red dot classification may inform the image interpretation training of radiographers and other professionals to reduce diagnostic error. Verification with larger samples would reinforce these findings. Advances in knowledge: All healthcare providers strive to eradicate diagnostic error. By examining specific anatomical and pathological predictors on radiographs for such error, as well as extrinsic factors that may affect reporting accuracy, image interpretation training can focus on these “problem” areas and influence which radiographic abnormality detection schemes are appropriate to implement in A&E departments. PMID:25496373
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Prediction of transmission distortion for wireless video communication: analysis.
Chen, Zhifeng; Wu, Dapeng
2012-03-01
Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas
2018-07-01
This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference/calibration state can be used to improve the model prediction accuracies at a different structural state, e.g., a damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values are studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process instead of the commonly used zero-mean error functions can significantly reduce the prediction uncertainties.
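A heavily simplified sketch of the updating idea is given below: a single stiffness scaling parameter of a nominal model is calibrated against identified natural frequencies, with a Gaussian error function whose standard deviation (the model prediction uncertainty) is estimated jointly. The frequencies, the square-root scaling, and the grid posterior are illustrative assumptions, not the hierarchical framework of the paper.

```python
# Minimal illustrative sketch (not the authors' framework): calibrate one
# stiffness scaling parameter theta and the error-function standard deviation
# sigma from identified natural frequencies.
import numpy as np

f_nominal = np.array([1.10, 3.40, 5.90])        # Hz, nominal FE-model frequencies (assumed)
f_identified = np.array([1.02, 3.25, 5.55])     # Hz, identified from ambient vibration (assumed)

def f_model(theta):
    # global stiffness scaling: f ~ sqrt(k), so scaling k by theta scales f by sqrt(theta)
    return f_nominal * np.sqrt(theta)

thetas = np.linspace(0.6, 1.2, 301)
sigmas = np.linspace(0.01, 0.30, 300)           # std. dev. of the error function (Hz)
T, S = np.meshgrid(thetas, sigmas, indexing="ij")

# Gaussian log-likelihood of the frequency residuals, flat priors on theta and sigma
res = f_identified[None, None, :] - f_model(T[..., None])
loglik = -0.5 * np.sum(res**2, axis=-1) / S**2 - f_identified.size * np.log(S)
post = np.exp(loglik - loglik.max())
post /= post.sum()

theta_map, sigma_map = T.flat[post.argmax()], S.flat[post.argmax()]
print(f"MAP stiffness scaling: {theta_map:.3f}, error std: {sigma_map:.3f} Hz")
# The estimated sigma quantifies model prediction uncertainty and could be
# carried over when predicting frequencies at a different (e.g. damaged) state.
```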
Pharmacogenetic excitation of dorsomedial prefrontal cortex restores fear prediction error.
Yau, Joanna Oi-Yue; McNally, Gavan P
2015-01-07
Pavlovian conditioning involves encoding the predictive relationship between a conditioned stimulus (CS) and an unconditioned stimulus, so that synaptic plasticity and learning is instructed by prediction error. Here we used pharmacogenetic techniques to show a causal relation between activity of rat dorsomedial prefrontal cortex (dmPFC) neurons and fear prediction error. We expressed the excitatory hM3Dq designer receptor exclusively activated by a designer drug (DREADD) in dmPFC and isolated actions of prediction error by using an associative blocking design. Rats were trained to fear the visual CS (CSA) in stage I via pairings with footshock. Then in stage II, rats received compound presentations of visual CSA and auditory CS (CSB) with footshock. This prior fear conditioning of CSA reduced the prediction error during stage II to block fear learning to CSB. The group of rats that received AAV-hSYN-eYFP vector that was treated with clozapine-N-oxide (CNO; 3 mg/kg, i.p.) before stage II showed blocking when tested in the absence of CNO the next day. In contrast, the groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq that were treated with CNO before stage II training did not show blocking; learning toward CSB was restored. This restoration of prediction error and fear learning was specific to the injection of CNO because groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq that were injected with vehicle before stage II training did show blocking. These effects were not attributable to the DREADD manipulation enhancing learning or arousal, increasing fear memory strength or asymptotic levels of fear learning, or altering fear memory retrieval. Together, these results identify a causal role for dmPFC in a signature of adaptive behavior: using the past to predict future danger and learning from errors in these predictions. Copyright © 2015 the authors 0270-6474/15/350074-10$15.00/0.
Mallik, Saurav; Das, Smita; Kundu, Sudip
2016-01-01
Change in folding kinetics of globular proteins upon point mutation is crucial to a wide spectrum of biological research, such as protein misfolding, toxicity, and aggregation. Here we seek to address whether residue-level coevolutionary information of globular proteins can be informative about folding rate changes upon point mutations. Generating residue-level coevolutionary networks of globular proteins, we analyze three parameters: relative coevolution order (rCEO), network density (ND), and characteristic path length (CPL). A point mutation is considered to be equivalent to a node deletion in this network, and the respective percentage changes in rCEO, ND, and CPL are found to be linearly correlated (0.84, 0.73, and -0.61, respectively) with experimental folding rate changes. The three parameters predict the folding rate change upon a point mutation with 0.031, 0.045, and 0.059 standard errors, respectively. © 2015 Wiley Periodicals, Inc.
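The network calculation behind two of the predictors can be sketched with a toy graph: a point mutation is treated as a node deletion, and the percentage changes in network density and characteristic path length are computed. The graph and the mutated residue are assumptions for illustration; the study builds its networks from coevolutionary couplings.

```python
# Illustrative sketch (assumed data, not the authors' pipeline): node deletion
# in a residue-level coevolutionary network and the resulting changes in
# network density (ND) and characteristic path length (CPL).
import networkx as nx

# toy coevolutionary network: nodes are residues, edges are coevolving pairs
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (4, 6), (6, 7)]
G = nx.Graph(edges)

def nd_cpl(graph):
    return nx.density(graph), nx.average_shortest_path_length(graph)

nd0, cpl0 = nd_cpl(G)
mutated_residue = 5                      # hypothetical point mutation site
H = G.copy()
H.remove_node(mutated_residue)
# keep the largest connected component so CPL remains defined
H = H.subgraph(max(nx.connected_components(H), key=len)).copy()
nd1, cpl1 = nd_cpl(H)

print(f"delta ND  = {100 * (nd1 - nd0) / nd0:+.1f} %")
print(f"delta CPL = {100 * (cpl1 - cpl0) / cpl0:+.1f} %")
# These percentage changes would then be regressed against experimentally
# measured folding rate changes upon mutation.
```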
Modeling habitat dynamics accounting for possible misclassification
Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.
2012-01-01
Land cover data are widely used in ecology as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging because classification error can be confounded with true change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel) and when true and observed states are obtained at one level of resolution, but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling-up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
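A simplified moment-based correction illustrates why unadjusted transition estimates are biased; it is not the authors' multistate model, and it assumes classification errors are independent between the two dates. All matrices below are invented for illustration.

```python
# Simplified sketch: correct an observed habitat transition matrix for
# classification error using a known confusion matrix.
import numpy as np

states = ["forest", "grassland", "urban"]

# M[i, j] = P(observed state j | true state i), e.g. from ground-truth plots
M = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.02, 0.03, 0.95]])

# observed joint proportions of (state at t1, state at t2) from classified maps
J_obs = np.array([[0.40, 0.05, 0.03],
                  [0.06, 0.30, 0.02],
                  [0.01, 0.01, 0.12]])

# E[J_obs] = M.T @ J_true @ M  =>  J_true ~ inv(M.T) @ J_obs @ inv(M)
J_true = np.linalg.inv(M.T) @ J_obs @ np.linalg.inv(M)
J_true = np.clip(J_true, 0, None)
J_true /= J_true.sum()

# row-normalise to get transition probabilities P(state at t2 | state at t1)
P_naive = J_obs / J_obs.sum(axis=1, keepdims=True)
P_corr = J_true / J_true.sum(axis=1, keepdims=True)
np.set_printoptions(precision=3, suppress=True)
print("naive transitions:\n", P_naive)
print("error-corrected transitions:\n", P_corr)
# The naive matrix typically overstates habitat turnover, the bias noted above.
```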
Kumar, Poornima; Eickhoff, Simon B.; Dombrovski, Alexandre Y.
2015-01-01
Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments – prediction error – is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies suggest that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that employed algorithmic reinforcement learning models, across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, while instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually-estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. PMID:25665667
Reward positivity: Reward prediction error or salience prediction error?
Heydari, Sepideh; Holroyd, Clay B
2016-08-01
The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.
Assessment of precursory information in seismo-electromagnetic phenomena
NASA Astrophysics Data System (ADS)
Han, P.; Hattori, K.; Zhuang, J.
2017-12-01
Previous statistical studies showed that there were correlations between seismo-electromagnetic phenomena and sizeable earthquakes in Japan. In this study, utilizing Molchan's error diagram, we evaluate whether these phenomena contain precursory information and discuss how they can be used in short-term forecasting of large earthquake events. In practice, for given series of precursory signals and related earthquake events, each prediction strategy is characterized by the leading time of alarms, the length of alarm window, the alarm radius (area) and magnitude. The leading time is the time length between a detected anomaly and its following alarm, and the alarm window is the duration that an alarm lasts. The alarm radius and magnitude are maximum predictable distance and minimum predictable magnitude of earthquake events, respectively. We introduce the modified probability gain (PG') and the probability difference (D') to quantify the forecasting performance and to explore the optimal prediction parameters for a given electromagnetic observation. The above methodology is firstly applied to ULF magnetic data and GPS-TEC data. The results show that the earthquake predictions based on electromagnetic anomalies are significantly better than random guesses, indicating the data contain potential useful precursory information. Meanwhile, we reveal the optimal prediction parameters for both observations. The methodology proposed in this study could be also applied to other pre-earthquake phenomena to find out whether there is precursory information, and then on this base explore the optimal alarm parameters in practical short-term forecast.
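The Molchan-diagram evaluation can be sketched on a simplified one-dimensional time axis: each detected anomaly opens an alarm after a leading time and for a fixed window, and the forecast is summarised by the miss rate versus the fraction of time occupied by alarms. The synthetic anomaly and earthquake series, and the simple probability-gain definition (hit rate divided by alarm fraction), are illustrative assumptions.

```python
# Illustrative sketch (synthetic data, 1-D time axis): points of a Molchan
# error diagram for an anomaly-based alarm strategy.
import numpy as np

rng = np.random.default_rng(2)
days = 3000
anomaly_days = np.sort(rng.choice(days, size=40, replace=False))   # detected anomalies
quake_days = np.sort(rng.choice(days, size=15, replace=False))     # target earthquakes

def molchan_point(lead_time, window):
    """Fraction of time occupied by alarms (tau) and miss rate (nu)."""
    alarm = np.zeros(days, dtype=bool)
    for a in anomaly_days:
        start = a + lead_time
        alarm[start:start + window] = True
    tau = alarm.mean()
    hits = alarm[quake_days].sum()
    nu = 1.0 - hits / len(quake_days)
    return tau, nu

for window in (5, 15, 30, 60):
    tau, nu = molchan_point(lead_time=1, window=window)
    gain = (1.0 - nu) / tau if tau > 0 else float("nan")   # gain vs. random guessing
    print(f"window={window:3d} d  tau={tau:.3f}  miss rate={nu:.3f}  gain={gain:.2f}")
# On the Molchan diagram, points below the diagonal nu = 1 - tau indicate
# forecasts better than random guessing.
```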
NASA Astrophysics Data System (ADS)
De Felice, Matteo; Petitta, Marcello; Ruti, Paolo
2014-05-01
Photovoltaic diffusion is steadily growing in Europe, passing from a capacity of almost 14 GWp in 2011 to 21.5 GWp in 2012 [1]. Accurate forecasts are needed for planning and operational purposes, with the ability to model and predict solar variability at different time-scales. This study examines the predictability of daily surface solar radiation, comparing ECMWF operational forecasts with CM-SAF satellite measurements over the Meteosat (MSG) full-disk domain. The operational forecasts used are the IFS system up to 10 days and the System4 seasonal forecast up to three months. Forecasts are analysed considering the average and variance of errors, showing error maps and averages over specific domains with respect to prediction lead time. In all cases, forecasts are compared with predictions obtained using persistence and state-of-the-art time-series models. We observe a wide range of errors, with the performance of forecasts dramatically affected by orography and season. Lower errors occur over southern Italy and Spain, with errors in some areas consistently under 10% up to ten days ahead during summer (JJA). Finally, we conclude the study with some insight on how to "translate" the error in solar radiation into error in solar power production using available production data from solar power plants. [1] EurObserver, "Baromètre Photovoltaïque, Le journal des énergies renouvables, April 2012."
Cao, Hui; Stetson, Peter; Hripcsak, George
2003-01-01
Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors and these reported therapeutic-related errors were mainly medication errors. Keyword searches combined with manual review indicated some medical errors that were reported in medical records. It had a low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
Debiasing affective forecasting errors with targeted, but not representative, experience narratives.
Shaffer, Victoria A; Focella, Elizabeth S; Scherer, Laura D; Zikmund-Fisher, Brian J
2016-10-01
To determine whether representative experience narratives (describing a range of possible experiences) or targeted experience narratives (targeting the direction of forecasting bias) can reduce affective forecasting errors, or errors in predictions of experiences. In Study 1, participants (N=366) were surveyed about their experiences with 10 common medical events. Those who had never experienced the event provided ratings of predicted discomfort and those who had experienced the event provided ratings of actual discomfort. Participants making predictions were randomly assigned to either the representative experience narrative condition or the control condition in which they made predictions without reading narratives. In Study 2, participants (N=196) were again surveyed about their experiences with these 10 medical events, but participants making predictions were randomly assigned to either the targeted experience narrative condition or the control condition. Affective forecasting errors were observed in both studies. These forecasting errors were reduced with the use of targeted experience narratives (Study 2) but not representative experience narratives (Study 1). Targeted, but not representative, narratives improved the accuracy of predicted discomfort. Public collections of patient experiences should favor stories that target affective forecasting biases over stories representing the range of possible experiences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Hoos, Anne B.; Patel, Anant R.
1996-01-01
Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
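One simple form of model adjustment, a single multiplicative bias factor estimated from the local observations, is sketched below with synthetic data. The published report applies several formal model-adjustment procedures that differ in detail; this only shows how local data can reduce the standard error of an unadjusted regional prediction.

```python
# Minimal illustration (not the published procedures in detail): adjust an
# unadjusted regional regression prediction of storm-runoff load with a single
# multiplicative bias factor estimated from local observations. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
local_observed = rng.lognormal(mean=2.0, sigma=0.8, size=45)            # local loads
regional_predicted = local_observed * rng.lognormal(mean=0.9, sigma=0.6, size=45)

# bias correction factor from the mean log residual of the local data
bcf = np.exp(np.mean(np.log(local_observed / regional_predicted)))
adjusted = bcf * regional_predicted

def se_percent(pred, obs):
    """Standard error of estimate, expressed in percent of the mean observation."""
    return 100 * np.sqrt(np.mean((pred - obs) ** 2)) / obs.mean()

print(f"bias correction factor: {bcf:.2f}")
print(f"standard error, unadjusted: {se_percent(regional_predicted, local_observed):.0f} %")
print(f"standard error, adjusted  : {se_percent(adjusted, local_observed):.0f} %")
```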
Predictive accuracy of a ground-water model--Lessons from a postaudit
Konikow, Leonard F.
1986-01-01
Hydrogeologic studies commonly include the development, calibration, and application of a deterministic simulation model. To help assess the value of using such models to make predictions, a postaudit was conducted on a previously studied area in the Salt River and lower Santa Cruz River basins in central Arizona. A deterministic, distributed-parameter model of the ground-water system in these alluvial basins was calibrated by Anderson (1968) using about 40 years of data (1923–64). The calibrated model was then used to predict future water-level changes during the next 10 years (1965–74). Examination of actual water-level changes in 77 wells from 1965–74 indicates a poor correlation between observed and predicted water-level changes. The differences have a mean of 73 ft (that is, predicted declines consistently exceeded those observed) and a standard deviation of 47 ft. The bias in the predicted water-level change can be accounted for by the large error in the assumed total pumpage during the prediction period. However, the spatial distribution of errors in predicted water-level change does not correlate with the spatial distribution of errors in pumpage. Consequently, the lack of precision probably is not related only to errors in assumed pumpage, but may indicate the presence of other sources of error in the model, such as the two-dimensional representation of a three-dimensional problem or the lack of consideration of land-subsidence processes. This type of postaudit is a valuable method of verifying a model, and an evaluation of predictive errors can provide an increased understanding of the system and aid in assessing the value of undertaking development of a revised model.
Hill, Kaylin E; Samuel, Douglas B; Foti, Dan
2016-08-01
The error-related negativity (ERN) is a neural measure of error processing that has been implicated as a neurobehavioral trait and has transdiagnostic links with psychopathology. Few studies, however, have contextualized this traitlike component with regard to dimensions of personality that, as intermediate constructs, may aid in contextualizing links with psychopathology. Accordingly, the aim of this study was to examine the interrelationships between error monitoring and dimensions of personality within a large adult sample (N = 208). Building on previous research, we found that the ERN relates to a combination of negative affect, impulsivity, and conscientiousness. At low levels of conscientiousness, negative urgency (i.e., impulsivity in the context of negative affect) predicted an increased ERN; at high levels of conscientiousness, the effect of negative urgency was not significant. This relationship was driven specifically by the conscientiousness facets of competence, order, and deliberation. Links between personality measures and error positivity amplitude were weaker and nonsignificant. Post-error slowing was also related to conscientiousness, as well as a different facet of impulsivity: lack of perseverance. These findings suggest that, in the general population, error processing is modulated by the joint combination of negative affect, impulsivity, and conscientiousness (i.e., the profile across traits), perhaps more so than any one dimension alone. This work may inform future research concerning aberrant error processing in clinical populations. © 2016 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed that predicts weather with a distinctive output, particularly a GIS-based mapping process with information about the current weather status at given coordinates in each region and the capability to forecast seven days ahead. The data used in this research are retrieved in real time from the OpenWeatherMap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The MSE for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the MSE for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the better the accuracy.
Mulej Bratec, Satja; Xie, Xiyao; Schmid, Gabriele; Doll, Anselm; Schilbach, Leonhard; Zimmer, Claus; Wohlschläger, Afra; Riedl, Valentin; Sorg, Christian
2015-12-01
Cognitive emotion regulation is a powerful way of modulating emotional responses. However, despite the vital role of emotions in learning, it is unknown whether the effect of cognitive emotion regulation also extends to the modulation of learning. Computational models indicate prediction error activity, typically observed in the striatum and ventral tegmental area, as a critical neural mechanism involved in associative learning. We used model-based fMRI during aversive conditioning with and without cognitive emotion regulation to test the hypothesis that emotion regulation would affect prediction error-related neural activity in the striatum and ventral tegmental area, reflecting an emotion regulation-related modulation of learning. Our results show that cognitive emotion regulation reduced emotion-related brain activity, but increased prediction error-related activity in a network involving ventral tegmental area, hippocampus, insula and ventral striatum. While the reduction of response activity was related to behavioral measures of emotion regulation success, the enhancement of prediction error-related neural activity was related to learning performance. Furthermore, functional connectivity between the ventral tegmental area and ventrolateral prefrontal cortex, an area involved in regulation, was specifically increased during emotion regulation and likewise related to learning performance. Our data, therefore, provide first-time evidence that beyond reducing emotional responses, cognitive emotion regulation affects learning by enhancing prediction error-related activity, potentially via tegmental dopaminergic pathways. Copyright © 2015 Elsevier Inc. All rights reserved.
Tropical forecasting - Predictability perspective
NASA Technical Reports Server (NTRS)
Shukla, J.
1989-01-01
Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.
Generalized Variance Function Applications in Forestry
James Alegria; Charles T. Scott
1991-01-01
Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...
Set membership experimental design for biological systems.
Marvel, Skylar W; Williams, Cranos M
2012-03-21
Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.
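A crude sketch of the bounded-error idea follows, using a grid approximation of the consistent parameter set for an exponential-decay model rather than the interval-analysis machinery of the paper. Candidate measurement times are ranked by the width of the set-propagated output range; the model, error bounds, and data are assumptions for illustration.

```python
# Crude illustrative sketch of set membership (bounded-error) design: maintain
# the set of parameters consistent with interval-valued data for y = A*exp(-k*t),
# propagate it to candidate time points, and rank candidates by predicted range.
import numpy as np

def model(A, k, t):
    return A * np.exp(-k * t)

# grid approximation of the parameter set (interval analysis would use boxes)
A_grid, k_grid = np.meshgrid(np.linspace(0.5, 2.0, 200), np.linspace(0.05, 0.5, 200))

# existing measurements with bounded error +/- 0.1 (synthetic)
data = [(1.0, 0.95), (4.0, 0.55)]        # (time, measured value)
err = 0.1
consistent = np.ones_like(A_grid, dtype=bool)
for t, y in data:
    pred = model(A_grid, k_grid, t)
    consistent &= (pred >= y - err) & (pred <= y + err)

print("consistent parameter fraction:", consistent.mean())

# rank candidate measurement times by the predicted output range over the set;
# wide ranges mean a new bounded measurement there can cut away more of the set
for t_cand in (0.5, 2.0, 6.0, 10.0, 15.0):
    pred = model(A_grid[consistent], k_grid[consistent], t_cand)
    width = pred.max() - pred.min()
    print(f"t = {t_cand:5.1f}  predicted range width = {width:.3f} "
          f"(informative if well above the measurement error bound {2*err})")
```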
SEC proton prediction model: verification and analysis.
Balch, C C
1999-06-01
This paper describes a model that has been used at the NOAA Space Environment Center since the early 1970s as a guide for the prediction of solar energetic particle events. The algorithms for proton event probability, peak flux, and rise time are described. The predictions are compared with observations. The current model shows some ability to distinguish between proton event associated flares and flares that are not associated with proton events. The comparisons of predicted and observed peak flux show considerable scatter, with an rms error of almost an order of magnitude. Rise time comparisons also show scatter, with an rms error of approximately 28 h. The model algorithms are analyzed using historical data and improvements are suggested. Implementation of the algorithm modifications reduces the rms error in the log10 of the flux prediction by 21%, and the rise time rms error by 31%. Improvements are also realized in the probability prediction by deriving the conditional climatology for proton event occurrence given flare characteristics.
Predictability of the Arctic sea ice edge
NASA Astrophysics Data System (ADS)
Goessling, H. F.; Tietsche, S.; Day, J. J.; Hawkins, E.; Jung, T.
2016-02-01
Skillful sea ice forecasts from days to years ahead are becoming increasingly important for the operation and planning of human activities in the Arctic. Here we analyze the potential predictability of the Arctic sea ice edge in six climate models. We introduce the integrated ice-edge error (IIEE), a user-relevant verification metric defined as the area where the forecast and the "truth" disagree on the ice concentration being above or below 15%. The IIEE lends itself to decomposition into an absolute extent error, corresponding to the common sea ice extent error, and a misplacement error. We find that the often-neglected misplacement error makes up more than half of the climatological IIEE. In idealized forecast ensembles initialized on 1 July, the IIEE grows faster than the absolute extent error. This means that the Arctic sea ice edge is less predictable than sea ice extent, particularly in September, with implications for the potential skill of end-user relevant forecasts.
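The IIEE and its decomposition follow directly from the definitions in the abstract and can be computed from two binary ice/no-ice fields, as in the sketch below; the concentration fields and grid-cell areas are synthetic.

```python
# Sketch of the integrated ice-edge error (IIEE) and its decomposition into an
# absolute extent error and a misplacement error.
import numpy as np

rng = np.random.default_rng(4)
forecast_conc = rng.random((60, 80))               # forecast sea ice concentration
truth_conc = np.clip(forecast_conc + rng.normal(0, 0.15, (60, 80)), 0, 1)
cell_area = np.full((60, 80), 25.0)                # km^2 per grid cell (assumed)

f_ice = forecast_conc > 0.15                       # 15% concentration threshold
t_ice = truth_conc > 0.15

overforecast = np.sum(cell_area[f_ice & ~t_ice])   # ice forecast where there is none
underforecast = np.sum(cell_area[~f_ice & t_ice])  # ice missed by the forecast

iiee = overforecast + underforecast
absolute_extent_error = abs(overforecast - underforecast)
misplacement_error = iiee - absolute_extent_error  # = 2 * min(over, under)

print(f"IIEE                = {iiee:10.0f} km^2")
print(f"absolute extent err = {absolute_extent_error:10.0f} km^2")
print(f"misplacement error  = {misplacement_error:10.0f} km^2")
```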
On the predictive information criteria for model determination in seismic hazard analysis
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata
2016-04-01
Many statistical tools have been developed for evaluating, understanding, and comparing models, from both frequentist and Bayesian perspectives. In particular, the problem of model selection can be addressed according to whether the primary goal is explanation or, alternatively, prediction. In the former case, the criteria for model selection are defined over the parameter space, whose physical interpretation can be difficult; in the latter case, they are defined over the space of the observations, which has a more direct physical meaning. In the frequentist approaches, model selection is generally based on an asymptotic approximation which may be poor for small data sets (e.g. the F-test, the Kolmogorov-Smirnov test, etc.); moreover, these methods often apply under specific assumptions on models (e.g. models have to be nested in the likelihood ratio test). In the Bayesian context, among the criteria for explanation, the ratio of the observed marginal densities for two competing models, named the Bayes Factor (BF), is commonly used for both model choice and model averaging (Kass and Raftery, J. Am. Stat. Ass., 1995). But BF does not apply to improper priors and, even when the prior is proper, it is not robust to the specification of the prior. These limitations extend to two well-known penalized likelihood methods, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), since both are proved to be approximations of -2 log BF. In the perspective that a model is as good as its predictions, the predictive information criteria aim at evaluating the predictive accuracy of Bayesian models or, in other words, at estimating the expected out-of-sample prediction error using a bias-correction adjustment of within-sample error (Gelman et al., Stat. Comput., 2014). In particular, the Watanabe criterion is fully Bayesian because it averages the predictive distribution over the posterior distribution of parameters rather than conditioning on a point estimate, but it is hardly applicable to data which are not independent given the parameters (Watanabe, J. Mach. Learn. Res., 2010). A solution is given by the Ando and Tsay criterion, in which the joint density may be decomposed into the product of the conditional densities (Ando and Tsay, Int. J. Forecast., 2010). The above-mentioned criteria are global summary measures of model performance, but a more detailed analysis could be required to discover the reasons for poor global performance. In this latter case, a retrospective predictive analysis is performed on each individual observation. In this study we performed the Bayesian analysis of Italian data sets using four versions of a long-term hazard model known as the stress release model (Vere-Jones, J. Physics Earth, 1978; Bebbington and Harte, Geophys. J. Int., 2003; Varini and Rotondi, Environ. Ecol. Stat., 2015). Then we illustrate their performance as evaluated by the Bayes Factor, the predictive information criteria, and retrospective predictive analysis.
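The contrast between explanation-oriented penalized-likelihood criteria and prediction-oriented criteria can be made concrete with a toy Gaussian model: AIC and BIC are computed from the maximised likelihood, while WAIC averages the pointwise predictive density over (here crudely approximated) posterior draws, following Gelman et al. (2014). The data, priors, and posterior approximation are assumptions for illustration only.

```python
# Toy sketch: AIC and BIC from the maximised likelihood, WAIC from posterior
# samples, for a simple Gaussian model of made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
y = rng.normal(2.0, 1.0, 50)                      # observed data
n, k = y.size, 2                                  # k parameters: mean and sd

# maximum-likelihood point estimates -> AIC / BIC
mu_hat, sd_hat = y.mean(), y.std()
loglik_hat = stats.norm(mu_hat, sd_hat).logpdf(y).sum()
aic = 2 * k - 2 * loglik_hat
bic = k * np.log(n) - 2 * loglik_hat

# crude posterior samples (flat priors, asymptotic normal posterior)
mu_s = rng.normal(mu_hat, sd_hat / np.sqrt(n), 4000)
sd_s = np.abs(rng.normal(sd_hat, sd_hat / np.sqrt(2 * n), 4000))
logp = stats.norm(mu_s[:, None], sd_s[:, None]).logpdf(y[None, :])  # (draws, n)

# WAIC: log pointwise predictive density minus the effective number of parameters
lppd = np.log(np.exp(logp).mean(axis=0)).sum()
p_waic = logp.var(axis=0, ddof=1).sum()
waic = -2 * (lppd - p_waic)

print(f"AIC = {aic:.1f}  BIC = {bic:.1f}  WAIC = {waic:.1f}")
```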
The prediction of speech intelligibility in classrooms using computer models
NASA Astrophysics Data System (ADS)
Dance, Stephen; Dentoni, Roger
2005-04-01
Two classrooms were measured and modeled using the industry standard CATT model and the Web model CISM. Sound levels, reverberation times and speech intelligibility were predicted in these rooms using data for 7 octave bands. It was found that overall sound levels could be predicted to within 2 dB by both models. However, overall reverberation time was found to be accurately predicted by CATT (14% prediction error) but not by CISM (41% prediction error). This compared to a 30% prediction error using classical theory. As for STI, CATT predicted to within 11%, CISM to within 3%, and Sabine to within 28% of the measured value. It should be noted that CISM took approximately 15 seconds to calculate, while CATT took 15 minutes. CISM is freely available on-line at www.whyverne.co.uk/acoustics/Pages/cism/cism.html
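The classical estimate that both computer models are benchmarked against is Sabine's reverberation-time formula, RT60 = 0.161 V / A, with V the room volume and A the total absorption area. The room dimensions and absorption coefficients in the sketch are assumed example values, not the measured classrooms.

```python
# Quick sketch of the classical (Sabine) reverberation-time estimate.

def sabine_rt60(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)  # m^2 sabins
    return 0.161 * volume_m3 / total_absorption

classroom = dict(volume_m3=7.0 * 9.0 * 3.0,
                 surfaces=[(7 * 9, 0.03),                 # floor
                           (7 * 9, 0.60),                 # acoustic ceiling
                           (2 * (7 + 9) * 3.0, 0.05)])    # walls

rt = sabine_rt60(classroom["volume_m3"], classroom["surfaces"])
print(f"Sabine RT60 ~ {rt:.2f} s")
# Shorter reverberation generally raises speech intelligibility; the 28-30%
# classical-theory errors quoted above show why geometrical models such as
# CATT or CISM are preferred for room-specific prediction.
```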
Analysis Resilient Algorithm on Artificial Neural Network Backpropagation
NASA Astrophysics Data System (ADS)
Saputra, Widodo; Tulus; Zarlis, Muhammad; Widia Sembiring, Rahmat; Hartama, Dedy
2017-12-01
Prediction is required by decision makers to anticipate future planning. Artificial Neural Network (ANN) backpropagation is one such method. This method, however, still has a weakness: long training times. This motivates improvements that accelerate training. One variant of ANN backpropagation is the resilient method, which changes the network weights and biases through a direct adaptation process based on local gradient information from every learning iteration. With this method, prediction results on Istanbul Stock Exchange data improve during training: the Mean Square Error (MSE) decreases and accuracy increases.
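The resilient (Rprop) update that replaces the plain gradient-descent step can be sketched as follows: only the sign of each local gradient is used, and every weight keeps its own step size that grows while the gradient sign is stable and shrinks when it flips. The hyperparameters are the commonly used Rprop defaults and the toy regression problem is illustrative; this is not the paper's network or data.

```python
# Sketch of a resilient (Rprop-style) weight update on a toy regression problem.
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    """One resilient iteration for a flat parameter vector."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)       # skip the update after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

# toy problem: fit y = X @ w_true with squared error
rng = np.random.default_rng(6)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
prev_grad = np.zeros(3)
step = np.full(3, 0.1)
for it in range(100):
    grad = 2 * X.T @ (X @ w - y) / len(y)             # gradient of the MSE
    w, prev_grad, step = rprop_update(w, grad, prev_grad, step)

mse = float(np.mean((X @ w - y) ** 2))
print("recovered weights:", np.round(w, 3), " MSE:", round(mse, 6))
```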
A Study of the Groundwater Level Spatial Variability in the Messara Valley of Crete
NASA Astrophysics Data System (ADS)
Varouchakis, E. A.; Hristopulos, D. T.; Karatzas, G. P.
2009-04-01
The island of Crete (Greece) has a dry sub-humid climate and marginal groundwater resources, which are extensively used for agricultural activities and human consumption. The Messara valley is located in the south of the Heraklion prefecture, it covers an area of 398 km2, and it is the largest and most productive valley of the island. Over-exploitation during the past thirty (30) years has led to a dramatic decrease of thirty five (35) meters in the groundwater level. Possible future climatic changes in the Mediterranean region, potential desertification, population increase, and extensive agricultural activity generate concern over the sustainability of the water resources of the area. The accurate estimation of the water table depth is important for an integrated groundwater resource management plan. This study focuses on the Mires basin of the Messara valley for reasons of hydro-geological data availability and geological homogeneity. The research goal is to model and map the spatial variability of the basin's groundwater level accurately. The data used in this study consist of seventy (70) piezometric head measurements for the hydrological year 2001-2002. These are unevenly distributed and mostly concentrated along a temporary river that crosses the basin. The range of piezometric heads varies from an extreme low value of 9.4 meters above sea level (masl) to 62 masl, for the wet period of the year (October to April). An initial goal of the study is to develop spatial models for the accurate generation of static maps of groundwater level. At a second stage, these maps should extend the models to dynamic (space-time) situations for the prediction of future water levels. Preliminary data analysis shows that the piezometric head variations are not normally distributed. Several methods including Box-Cox transformation and a modified version of it, transgaussian Kriging, and Gaussian anamorphosis have been used to obtain a spatial model for the piezometric head. A trend model was constructed that accounted for the distance of the wells from the river bed. The spatial dependence of the fluctuations was studied by fitting isotropic and anisotropic empirical variograms with classical models, the Matérn model and the Spartan variogram family (Hristopulos, 2003; Hristopoulos and Elogne, 2007). The most accurate results, mean absolute prediction error of 4.57 masl, were obtained using the modified Box-Cox transform of the original data. The exponential and the isotropic Spartan variograms provided the best fits to the experimental variogram. Using Ordinary Kriging with either variogram function gave a mean absolute estimation error of 4.57 masl based on leave-one-out cross validation. The bias error of the predictions was calculated equal to -0.38 masl and the correlation coefficient of the predictions with respect of the original data equal to 0.8. The estimates located on the borders of the study domain presented a higher prediction error that varies from 8 to 14 masl due to the limited number of neighbor data. The maximum estimation error, observed at the extreme low value calculation, was 23 masl. The method of locally weighted regression (LWR), (NIST/SEMATECH 2009) was also investigated as an alternative approach for spatial modeling. The trend calculated from a second order LWR method showed a remarkable fit to the original data marked by a mean absolute estimation error of 4.4 masl. 
The bias of the prediction error was calculated to be -0.16 masl, and the correlation coefficient between predicted and original data was 0.88. Higher estimation errors were found at the same locations and varied within the same range. The error at the extreme low value improved to 21 masl. Plans for future research include the incorporation of spatial anisotropy in the kriging algorithm, the investigation of kernel functions other than the tricube in LWR, as well as the use of locally adapted bandwidth values. Furthermore, pumping rates for fifty eight (58) of the seventy (70) wells are available and display a correlation coefficient of -0.6 with the respective ground water levels. A Digital Elevation Model (DEM) of the area will provide additional information about the unsampled locations of the basin. The pumping rates and the DEM will be used as secondary information in a co-kriging approach, leading to more accurate estimation of the basin's water table. NIST/SEMATECH e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/, 12/01/09. D.T. Hristopulos, "Spartan Gibbs random field models for geostatistical applications," SIAM J. Scient. Comput., vol. 24, no. 6, pp. 2125-2162, 2003. D.T. Hristopulos and S. Elogne, "Analytic properties and covariance functions for a new class of generalized Gibbs random fields," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4667-4679, 2007.
Maltreated children's memory: accuracy, suggestibility, and psychopathology.
Eisen, Mitchell L; Goodman, Gail S; Qin, Jianjian; Davis, Suzanne; Crayton, John
2007-11-01
Memory, suggestibility, stress arousal, and trauma-related psychopathology were examined in 328 3- to 16-year-olds involved in forensic investigations of abuse and neglect. Children's memory and suggestibility were assessed for a medical examination and venipuncture. Being older and scoring higher in cognitive functioning were related to fewer inaccuracies. In addition, cortisol level and trauma symptoms in children who reported more dissociative tendencies were associated with increased memory error, whereas cortisol level and trauma symptoms were not associated with increased error for children who reported fewer dissociative tendencies. Sexual and/or physical abuse predicted greater accuracy. The study contributes important new information to scientific understanding of maltreatment, psychopathology, and eyewitness memory in children. (c) 2007 APA.
Measurement of pH in whole blood by near-infrared spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, M. Kathleen; Maynard, John D.; Robinson, M. Ries
1999-03-01
Whole blood pH has been determined in vitro by using near-infrared spectroscopy over the wavelength range of 1500 to 1785 nm with multivariate calibration modeling of the spectral data obtained from two different sample sets. In the first sample set, the pH of whole blood was varied without controlling cell size and oxygen saturation (O2 Sat) variation. The result was that the red blood cell (RBC) size and O2 Sat correlated with pH. Although the partial least-squares (PLS) multivariate calibration of these data produced a good pH prediction (cross-validation standard error of prediction (CVSEP) = 0.046, R^2 = 0.982), the spectral data were dominated by scattering changes due to changing RBC size that correlated with the pH changes. A second experiment was carried out where the RBC size and O2 Sat were varied orthogonally to the pH variation. A PLS calibration of the spectral data obtained from these samples produced a pH prediction with an R^2 of 0.954 and a cross-validated standard error of prediction of 0.064 pH units. The robustness of the PLS calibration models was tested by predicting the data obtained from the other sets. The predicted pH values obtained from both data sets yielded R^2 values greater than 0.9 once the data were corrected for differences in hemoglobin concentration. For example, with the use of the calibration produced from the second sample set, the pH values from the first sample set were predicted with an R^2 of 0.92 after the predictions were corrected for bias and slope. It is shown that spectral information specific to pH-induced chemical changes in the hemoglobin molecule is contained within the PLS loading vectors developed for both the first and second data sets. It is this pH-specific information that allows the spectra dominated by pH-correlated scattering changes to provide robust pH predictive ability in the uncorrelated data, and vice versa. © 1999 Society for Applied Spectroscopy.
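A minimal sketch of the calibration step, with synthetic spectra rather than the measured whole-blood data, shows how a PLS model is cross-validated to obtain the CVSEP and R^2 figures quoted above; the number of latent components and the simulated pH-sensitive band are assumptions.

```python
# Illustrative sketch (synthetic spectra): PLS calibration of pH from
# near-infrared spectra with leave-one-out cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(7)
n_samples, n_wavelengths = 60, 150
pH = rng.uniform(6.9, 7.7, n_samples)

# synthetic spectra: a pH-sensitive band plus unrelated baseline variation and noise
band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 75) / 8.0) ** 2)
spectra = (np.outer(pH - 7.3, band)
           + np.outer(rng.normal(0, 1, n_samples), np.linspace(0, 1, n_wavelengths))
           + rng.normal(0, 0.02, (n_samples, n_wavelengths)))

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, pH, cv=LeaveOneOut()).ravel()

cvsep = np.sqrt(np.mean((pred - pH) ** 2))
r2 = np.corrcoef(pred, pH)[0, 1] ** 2
print(f"CVSEP = {cvsep:.3f} pH units, R^2 = {r2:.3f}")
```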
The bias of the log power spectrum for discrete surveys
NASA Astrophysics Data System (ADS)
Repp, Andrew; Szapudi, István
2018-03-01
A primary goal of galaxy surveys is to tighten constraints on cosmological parameters, and the power spectrum P(k) is the standard means of doing so. However, at translinear scales P(k) is blind to much of these surveys' information - information which the log density power spectrum recovers. For discrete fields (such as the galaxy density), A* denotes the statistic analogous to the log density: A* is a 'sufficient statistic' in that its power spectrum (and mean) capture virtually all of a discrete survey's information. However, the power spectrum of A* is biased with respect to the corresponding log spectrum for continuous fields, and to use P_{A*}(k) to constrain the values of cosmological parameters, we require some means of predicting this bias. Here, we present a prescription for doing so; for Euclid-like surveys (with cubical cells 16 h^-1 Mpc across) our bias prescription's error is less than 3 per cent. This prediction will facilitate optimal utilization of the information in future galaxy surveys.
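For orientation only, the following sketch measures the power spectrum of the continuous-field log density A = log(1 + delta) on a gridded mock field with numpy FFTs; it does not reproduce the paper's bias prescription for the discrete statistic A*, and the grid size, box size, and lognormal mock field are assumptions.

```python
# Spherically averaged power spectrum of the log density (illustrative only).
import numpy as np

def log_power_spectrum(delta, box_size, n_bins=8):
    """P_A(k) for A = log(1 + delta) on a cubic grid of side box_size."""
    n = delta.shape[0]
    cell = box_size / n
    A = np.log1p(delta)
    A -= A.mean()
    Ak = np.fft.rfftn(A) * cell**3                       # FFT with volume normalization
    power = np.abs(Ak) ** 2 / box_size**3                # raw spectrum estimate
    kx = 2 * np.pi * np.fft.fftfreq(n, d=cell)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=cell)
    KX, KY, KZ = np.meshgrid(kx, kx, kz, indexing="ij")
    k = np.sqrt(KX**2 + KY**2 + KZ**2)
    edges = np.linspace(k[k > 0].min(), k.max(), n_bins + 1)
    idx = np.digitize(k, edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    Pk = np.array([power[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, n_bins + 1)])
    return centers, Pk

# Mock lognormal overdensity field: 256 h^-1 Mpc box, 16 h^-1 Mpc cells.
rng = np.random.default_rng(2)
delta = np.exp(rng.normal(0.0, 0.5, (16, 16, 16))) - 1.0
k_bins, P_A = log_power_spectrum(delta, box_size=256.0)
print(np.c_[k_bins, P_A])
```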
Automated body weight prediction of dairy cows using 3-dimensional vision.
Song, X; Bokkers, E A M; van der Tol, P P J; Groot Koerkamp, P W G; van Mourik, S
2018-05-01
The objectives of this study were to quantify the error of body weight prediction using automatically measured morphological traits in a 3-dimensional (3-D) vision system and to assess the influence of various sources of uncertainty on body weight prediction. In this case study, an image acquisition setup was created in a cow selection box equipped with a top-view 3-D camera. Morphological traits of hip height, hip width, and rump length were automatically extracted from the raw 3-D images taken of the rump area of dairy cows (n = 30). These traits combined with days in milk, age, and parity were used in multiple linear regression models to predict body weight. To find the best prediction model, an exhaustive feature selection algorithm was used to build intermediate models (n = 63). Each model was validated by leave-one-out cross-validation, giving the root mean square error and mean absolute percentage error. The model consisting of hip width (measurement variability of 0.006 m), days in milk, and parity was the best model, with the lowest errors of 41.2 kg of root mean square error and 5.2% mean absolute percentage error. Our integrated system, including the image acquisition setup, image analysis, and the best prediction model, predicted the body weights with a performance similar to that achieved using semi-automated or manual methods. Moreover, the variability of our simplified morphological trait measurement showed a negligible contribution to the uncertainty of body weight prediction. We suggest that dairy cow body weight prediction can be improved by incorporating more predictive morphological traits and by improving the prediction model structure. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
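A compact sketch of the model-search procedure described above (exhaustive subset selection over six predictors, scored by leave-one-out RMSE and mean absolute percentage error) is given below; the trait values are synthetic and the column names are assumptions, so it only illustrates the workflow, not the reported model.

```python
# Exhaustive feature-subset search with leave-one-out cross-validation.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
n = 30
features = {
    "hip_width_m":   rng.normal(0.55, 0.03, n),
    "hip_height_m":  rng.normal(1.45, 0.05, n),
    "rump_length_m": rng.normal(0.50, 0.03, n),
    "days_in_milk":  rng.integers(5, 300, n).astype(float),
    "age_months":    rng.integers(24, 96, n).astype(float),
    "parity":        rng.integers(1, 5, n).astype(float),
}
body_weight = 600 + 400 * (features["hip_width_m"] - 0.55) + rng.normal(0, 35, n)

names = list(features)
X_all = np.column_stack([features[k] for k in names])
best = None
for k in range(1, len(names) + 1):                  # 63 candidate subsets in total
    for subset in combinations(range(len(names)), k):
        pred = cross_val_predict(LinearRegression(), X_all[:, subset],
                                 body_weight, cv=LeaveOneOut())
        rmse = np.sqrt(np.mean((pred - body_weight) ** 2))
        mape = np.mean(np.abs(pred - body_weight) / body_weight) * 100
        if best is None or rmse < best[0]:
            best = (rmse, mape, [names[i] for i in subset])
print(best)   # (RMSE in kg, MAPE in %, selected predictors)
```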
Nakling, Jakob; Buhaug, Harald; Backe, Bjorn
2005-10-01
The aims were, in a large unselected population of normal spontaneous pregnancies, to estimate the biologic variation of the interval from the first day of the last menstrual period to the start of pregnancy and the biologic variation of gestational length to delivery, and to estimate the random error of routine ultrasound assessment of gestational age in the mid-second trimester. Cohort study of 11,238 singleton pregnancies with spontaneous onset of labour and a reliable last menstrual period. The day of delivery was predicted with two independent methods: according to the rule of Nägele, and based on ultrasound examination in gestational weeks 17-19. For both methods, the mean difference between the observed and predicted day of delivery was calculated. The variances of the differences were combined to estimate the variances of the two partitions of pregnancy. The biologic variation of the time from last menstrual period to pregnancy start was estimated at 7.0 days (standard deviation), and the standard deviation of the time to spontaneous delivery was estimated at 12.4 days. The estimated standard deviation of the random error of ultrasound-assessed foetal age was 5.2 days. Even when the last menstrual period is reliable, the biologic variation of the time from the last menstrual period to the real start of pregnancy is substantial and must be taken into account. Reliable information about the first day of the last menstrual period is not equivalent to reliable information about the start of pregnancy.
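One plausible reading of the variance-combination step, stated here as an assumption rather than the authors' derivation, writes each method's prediction error in terms of a shared and a method-specific component:

```latex
% Assumed decomposition: D = observed minus predicted day of delivery,
% with T1, T2 and the ultrasound error taken as mutually independent.
\begin{aligned}
D_{\mathrm{LMP}} &= (T_1-\bar{T}_1) + (T_2-\bar{T}_2), &
\operatorname{Var}(D_{\mathrm{LMP}}) &= \sigma_1^{2} + \sigma_2^{2},\\
D_{\mathrm{US}} &= (T_2-\bar{T}_2) - \varepsilon_{\mathrm{US}}, &
\operatorname{Var}(D_{\mathrm{US}}) &= \sigma_2^{2} + \sigma_{\mathrm{US}}^{2},\\
& & \operatorname{Cov}(D_{\mathrm{LMP}},D_{\mathrm{US}}) &= \sigma_2^{2}.
\end{aligned}
```

Here T1 is the last-menstrual-period-to-conception interval and T2 the conception-to-delivery interval; under these independence assumptions the three relations separate the three variances, and the reported 7.0, 12.4 and 5.2 days correspond to sigma_1, sigma_2 and sigma_US.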
Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories
NASA Technical Reports Server (NTRS)
Green, S.; Grace, M.; Williams, D.
1999-01-01
The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). The major source of error during these tests was found to be the predicted winds aloft used by CTAS. Position and velocity estimates of the airplane provided to CTAS by the ATC Host radar tracker were found to be a relatively insignificant error source for the trajectory conditions evaluated. Airplane performance modeling errors within CTAS were found not to significantly affect arrival time errors when the constrained descent procedures were used. The most significant effect related to the flight guidance was observed to be the cross-track and turn-overshoot errors associated with conventional VOR guidance. Lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot error. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and aircraft performance model errors.
NASA Astrophysics Data System (ADS)
Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto
2017-12-01
Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the International Terrestrial Reference System (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas in polar motion and 0.053 ms in UT1-UTC. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The methods used for orbit integration and frame transformation in orbit prediction with introduced ERP errors dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the Geocentric Celestial Reference System (GCRS) to the ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits free of ERP-related errors are predicted on the basis of the observed (ITRS) part of the ultra-rapid orbit, for use as a reference. Then, the corresponding predicted orbit is transformed from the GCRS to the ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in the ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improves the accuracy of ultra-rapid orbit prediction (except for the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50% (for the error related to ERPs) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed in this study can optimize ultra-rapid orbit prediction.
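A back-of-envelope sketch of why one-day ERP prediction errors matter at the orbit level (an illustration under simple assumptions, not the paper's correction method): the GCRS-to-ITRS rotation is wrong by the same small angles, so the induced position error is roughly the angular error times the orbital radius.

```python
# Order-of-magnitude effect of the quoted one-day ERP errors on a MEO GNSS orbit.
import numpy as np

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)   # milliarcseconds to radians
OMEGA_EARTH = 7.2921150e-5                        # Earth rotation rate, rad/s
r_orbit_km = 26_560.0                             # GPS-like orbital radius (assumed)

pm_err_rad  = 0.15 * MAS_TO_RAD                   # 0.15 mas polar motion error
ut1_err_rad = 0.053e-3 * OMEGA_EARTH              # 0.053 ms of UT1-UTC as a rotation angle

print(f"polar-motion-induced position error ~ {pm_err_rad * r_orbit_km * 1e6:.1f} mm")
print(f"UT1-UTC-induced position error      ~ {ut1_err_rad * r_orbit_km * 1e6:.1f} mm")
```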
Genomic Prediction Accounting for Residual Heteroskedasticity.
Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M
2015-11-12
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. Copyright © 2016 Ou et al.
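The sketch below illustrates the basic point with a weighted ridge/GBLUP-style estimator rather than the authors' hierarchical Bayesian machinery: down-weighting high-residual-variance records in whole-genome prediction can improve accuracy when residual variance differs between environments. The marker matrix, variance levels, and shrinkage parameter are assumptions for illustration.

```python
# Ridge-type whole-genome prediction with and without residual-variance weights.
import numpy as np

def ridge_blup(X, y, lam, resid_var=None):
    """Marker effects from (X'R^{-1}X + lam*I)^{-1} X'R^{-1} y, R = diag(resid_var)."""
    n, p = X.shape
    r = np.ones(n) if resid_var is None else np.asarray(resid_var)
    XtR = X.T / r                                  # X' R^{-1}
    return np.linalg.solve(XtR @ X + lam * np.eye(p), XtR @ y)

rng = np.random.default_rng(4)
n, p = 400, 1000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)      # SNP genotypes coded 0/1/2
beta_true = rng.normal(0, 0.05, p)
env_var = np.where(rng.random(n) < 0.5, 1.0, 9.0)        # two environments, unequal variance
y = X @ beta_true + rng.normal(0, np.sqrt(env_var))

beta_hom = ridge_blup(X, y, lam=50.0)                    # assumes homogeneous residuals
beta_het = ridge_blup(X, y, lam=50.0, resid_var=env_var) # accounts for heteroskedasticity
for name, b in [("homoskedastic", beta_hom), ("heteroskedastic", beta_het)]:
    print(name, "accuracy:", np.corrcoef(X @ b, X @ beta_true)[0, 1].round(3))
```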
Aircraft noise prediction program propeller analysis system IBM-PC version user's manual version 2.0
NASA Technical Reports Server (NTRS)
Nolan, Sandra K.
1988-01-01
The IBM-PC version of the Aircraft Noise Prediction Program (ANOPP) Propeller Analysis System (PAS) is a set of computational programs for predicting the aerodynamics, performance, and noise of propellers. The ANOPP-PAS is a subset of a larger version of ANOPP which can be executed on CDC or VAX computers. This manual provides a description of the IBM-PC version of the ANOPP-PAS and its prediction capabilities, and instructions on how to use the system on an IBM-XT or IBM-AT personal computer. Sections within the manual document installation, system design, ANOPP-PAS usage, data entry preprocessors, and ANOPP-PAS functional modules and procedures. Appendices to the manual include a glossary of ANOPP terms and information on error diagnostics and recovery techniques.
Error-related negativities elicited by monetary loss and cues that predict loss.
Dunning, Jonathan P; Hajcak, Greg
2007-11-19
Event-related potential studies have reported error-related negativity following both error commission and feedback indicating errors or monetary loss. The present study examined whether error-related negativities could be elicited by a predictive cue presented prior to both the decision and subsequent feedback in a gambling task. Participants were presented with a cue that indicated the probability of reward on the upcoming trial (0, 50, and 100%). Results showed a negative deflection in the event-related potential in response to loss cues compared with win cues; this waveform shared a similar latency and morphology with the traditional feedback error-related negativity.
The Regionalization of National-Scale SPARROW Models for Stream Nutrients
Schwarz, G.E.; Alexander, R.B.; Smith, R.A.; Preston, S.D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models. © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.
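The cross-region constraint can be thought of as partial pooling; the following generic sketch (not the SPARROW software or the authors' exact estimator) shrinks a region's coefficients toward the national estimate with a single penalty, so a very large penalty reproduces the national model and a zero penalty gives an independent regional model.

```python
# Regional coefficients shrunk toward a national estimate (ridge-toward-prior).
import numpy as np

def hybrid_regional_fit(X_r, y_r, beta_national, penalty):
    """Minimize ||y_r - X_r b||^2 + penalty * ||b - beta_national||^2."""
    p = X_r.shape[1]
    lhs = X_r.T @ X_r + penalty * np.eye(p)
    rhs = X_r.T @ y_r + penalty * beta_national
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(5)
beta_nat = np.array([2.0, -0.5, 1.2])                    # assumed national coefficients
X = rng.normal(size=(60, 3))                             # watershed attributes, one region
y = X @ (beta_nat + np.array([0.3, 0.0, -0.2])) + rng.normal(0, 0.5, 60)
for pen in (0.0, 10.0, 1e6):                             # none, moderate, full pooling
    print(pen, hybrid_regional_fit(X, y, beta_nat, pen).round(2))
```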
1991-07-01
[Table residue from the source document: analyzer calibration error, expressed as percent of span, computed by comparing each calibration gas concentration with the concentration predicted from the chart response; pretest and posttest chart divisions and drift recorded per calibration gas. Original table layout not recoverable.]
Dopamine reward prediction-error signalling: a two-component response
Schultz, Wolfram
2017-01-01
Environmental stimuli and objects, including rewards, are often processed sequentially in the brain. Recent work suggests that the phasic dopamine reward prediction-error response follows a similar sequential pattern. An initial brief, unselective and highly sensitive increase in activity unspecifically detects a wide range of environmental stimuli, then quickly evolves into the main response component, which reflects subjective reward value and utility. This temporal evolution allows the dopamine reward prediction-error signal to optimally combine speed and accuracy. PMID:26865020
NASA Technical Reports Server (NTRS)
Holms, A. G.
1974-01-01
Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
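An illustrative Monte Carlo sketch in the spirit described above (not Holms' original procedure) is given below: simulated experiments are drawn from a known response surface, terms are backward-deleted by t-statistic, and the reduced model's predictions are compared with the true population values. The population model, noise level, and deletion threshold are assumptions.

```python
# Monte Carlo evaluation of a term-deletion strategy on a response surface model.
import numpy as np

rng = np.random.default_rng(8)
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
X = np.column_stack([np.ones(x1.size), x1.ravel(), x2.ravel(),
                     x1.ravel()**2, x2.ravel()**2, (x1 * x2).ravel()])
beta_true = np.array([10.0, 2.0, -1.5, 0.8, 0.0, 0.0])   # two terms truly absent
mu = X @ beta_true                                        # population values

def fit_and_delete(y, t_crit=2.0):
    """Fit the full model, then drop the weakest term with |t| < t_crit (keep intercept)."""
    keep = list(range(X.shape[1]))
    while True:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        s2 = resid @ resid / (len(y) - len(keep))
        se = np.sqrt(s2 * np.diag(np.linalg.inv(Xk.T @ Xk)))
        t = np.abs(beta / se)
        candidates = [i for i in range(1, len(keep)) if t[i] < t_crit]
        if not candidates:
            return keep, beta
        keep.pop(candidates[int(np.argmin([t[i] for i in candidates]))])

errs = []
for _ in range(200):                                      # simulated experiments
    y = mu + rng.normal(0, 1.0, mu.size)                  # pseudo-random normal errors
    keep, beta = fit_and_delete(y)
    errs.append(np.mean((X[:, keep] @ beta - mu) ** 2))   # prediction error vs. population
print("mean squared prediction error of reduced models:", np.mean(errs).round(3))
```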
Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction
NASA Astrophysics Data System (ADS)
Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.
2012-12-01
The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect their system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of this data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with ARMA(1,2) errors were fit to the observations. Preliminary model validation exercises at a 30-day forecast horizon show that the ARMA error models generally improve the predictive skill of the linear regression rating curves. Skill seems to vary based on the ambient hydrologic conditions at the onset of the forecast. For example, ARMA error model forecasts issued before a high flow/turbidity event do not show significant improvements over the rating curve approach. However, ARMA error model forecasts issued during the "falling limb" of the hydrograph are significantly more accurate than rating curves for both single day and accumulated event predictions. In order to assist in reservoir operations decisions associated with turbidity events and general water supply reliability, DEP has initiated design of an Operations Support Tool (OST). OST integrates a reservoir operations model with 2D hydrodynamic water quality models and a database compiling near-real-time data sources and hydrologic forecasts. Currently, OST uses conventional flow-turbidity rating curves and hydrologic forecasts for predictive turbidity inputs. Given the improvements in predictive skill over traditional rating curves, the ARMA error models are currently being evaluated as an addition to DEP's Operations Support Tool.
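For readers who want to reproduce the general idea, the sketch below fits a flow-turbidity regression with ARMA(1,2) errors using statsmodels' SARIMAX and compares its 30-day forecasts with a plain rating curve; the synthetic daily log-flow and log-turbidity series, model settings, and train/forecast split are assumptions, not Esopus Creek data or DEP's implementation.

```python
# Rating curve vs. regression with ARMA(1,2) errors for turbidity forecasting.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 365
log_flow = np.cumsum(rng.normal(0, 0.1, n)) + 3.0             # daily log streamflow
arma_err = np.zeros(n)
eps = rng.normal(0, 0.15, n)
for t in range(2, n):                                          # ARMA(1,2) disturbance
    arma_err[t] = 0.7 * arma_err[t - 1] + eps[t] + 0.3 * eps[t - 1] + 0.1 * eps[t - 2]
log_turb = 0.5 + 1.2 * log_flow + arma_err                     # synthetic log turbidity

train, horizon = 335, 30
model = sm.tsa.SARIMAX(log_turb[:train], exog=log_flow[:train, None],
                       order=(1, 0, 2), trend="c")             # regression with ARMA errors
fit = model.fit(disp=False)

# 30-day forecast given (observed or forecast) flows, versus a plain rating curve.
fc = fit.forecast(steps=horizon, exog=log_flow[train:train + horizon, None])
ols = sm.OLS(log_turb[:train], sm.add_constant(log_flow[:train])).fit()
rc = ols.predict(sm.add_constant(log_flow[train:train + horizon]))
for name, pred in [("ARMA-error model", fc), ("rating curve", rc)]:
    print(name, "MAE:", np.mean(np.abs(pred - log_turb[train:])).round(3))
```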
Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model
Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu
2018-01-01
Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm as the prediction step increases; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing technology. PMID:29351254
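A compact sketch of the Kalman-then-ARIMA-then-GARCH chain is given below, assuming numpy, statsmodels, and the arch package; the scalar random-walk Kalman filter, the model orders, and the synthetic deformation series are illustrative assumptions rather than the authors' configuration.

```python
# Kalman denoising -> ARIMA mean model -> GARCH on residuals (illustrative pipeline).
import numpy as np
import statsmodels.api as sm
from arch import arch_model

def kalman_denoise(z, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter; state = smoothed deformation (mm)."""
    x, p, out = z[0], 1.0, np.empty_like(z)
    for t, zt in enumerate(z):
        p += q                        # predict step
        k = p / (p + r)               # Kalman gain
        x += k * (zt - x)             # update with the new GNSS sample
        p *= (1 - k)
        out[t] = x
    return out

rng = np.random.default_rng(7)
n = 500
deform = np.cumsum(rng.normal(0, 0.2, n)) + rng.normal(0, 1.0, n)   # noisy GNSS series, mm

smooth = kalman_denoise(deform)                            # 1) Kalman denoising
arima = sm.tsa.ARIMA(smooth, order=(2, 1, 1)).fit()        # 2) linear ARIMA structure
garch = arch_model(arima.resid, vol="GARCH", p=1, q=1).fit(disp="off")  # 3) heteroskedastic residuals

ahead = 5
mean_fc = arima.forecast(steps=ahead)                      # mean deformation forecast (mm)
var_fc = garch.forecast(horizon=ahead).variance.values[-1] # conditional variance per step
print(mean_fc, np.sqrt(var_fc))
```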
Cullen, Kathleen E; Brooks, Jessica X
2015-02-01
During self-motion, the vestibular system makes essential contributions to postural stability and self-motion perception. To ensure accurate perception and motor control, it is critical to distinguish between vestibular sensory inputs that are the result of externally applied motion (exafference) and that are the result of our own actions (reafference). Indeed, although the vestibular sensors encode vestibular afference and reafference with equal fidelity, neurons at the first central stage of sensory processing selectively encode vestibular exafference. The mechanism underlying this reafferent suppression compares the brain's motor-based expectation of sensory feedback with the actual sensory consequences of voluntary self-motion, effectively computing the sensory prediction error (i.e., exafference). It is generally thought that sensory prediction errors are computed in the cerebellum, yet it has been challenging to explicitly demonstrate this. We have recently addressed this question and found that deep cerebellar nuclei neurons explicitly encode sensory prediction errors during self-motion. Importantly, in everyday life, sensory prediction errors occur in response to changes in the effector or world (muscle strength, load, etc.), as well as in response to externally applied sensory stimulation. Accordingly, we hypothesize that altering the relationship between motor commands and the actual movement parameters will result in the updating in the cerebellum-based computation of exafference. If our hypothesis is correct, under these conditions, neuronal responses should initially be increased--consistent with a sudden increase in the sensory prediction error. Then, over time, as the internal model is updated, response modulation should decrease in parallel with a reduction in sensory prediction error, until vestibular reafference is again suppressed. The finding that the internal model predicting the sensory consequences of motor commands adapts for new relationships would have important implications for understanding how responses to passive stimulation endure despite the cerebellum's ability to learn new relationships between motor commands and sensory feedback.
Naturalistic distraction and driving safety in older drivers
Aksan, Nazan; Dawson, Jeffrey D.; Emerson, Jamie L.; Yu, Lixi; Uc, Ergun Y.; Anderson, Steven W.; Rizzo, Matthew
2013-01-01
Objective: This study aimed to quantify and compare performance of middle-aged and older drivers during a naturalistic distraction paradigm (visual search for roadside targets) and predict older driver performance given functioning in visual, motor, and cognitive domains. Background: Distracted driving can imperil healthy adults and may disproportionally affect the safety of older drivers with visual, motor, and cognitive decline. Methods: Two hundred and three drivers, 120 healthy older (61 men and 59 women, ages 65 years or greater) and 83 middle-aged drivers (38 men and 45 women, ages 40–64 years), participated in an on-road test in an instrumented vehicle. Outcome measures included performance in roadside target identification (traffic signs and restaurants) and concurrent driver safety. Differences in visual, motor, and cognitive functioning served as predictors. Results: Older drivers identified fewer landmarks and drove slower but committed more safety errors than middle-aged drivers. Greater familiarity with local roads benefited performance of middle-aged but not older drivers. Visual cognition predicted both traffic sign identification and safety errors while executive function predicted traffic sign identification over and above vision. Conclusion: Older adults are susceptible to driving safety errors while distracted by common secondary visual search tasks that are inherent to driving. The findings underscore that age-related cognitive decline affects older driver management of driving tasks at multiple levels, and can help inform the design of on-road tests and interventions for older drivers. PMID:23964422