Decision-making in schizophrenia: A predictive-coding perspective.
Sterzer, Philipp; Voss, Martin; Schlagenhauf, Florian; Heinz, Andreas
2018-05-31
Dysfunctional decision-making has been implicated in the positive and negative symptoms of schizophrenia. Decision-making can be conceptualized within the framework of hierarchical predictive coding as the result of a Bayesian inference process that uses prior beliefs to infer states of the world. According to this idea, prior beliefs encoded at higher levels in the brain are fed back as predictive signals to lower levels. Whenever these predictions are violated by the incoming sensory data, a prediction error is generated and fed forward to update beliefs encoded at higher levels. Well-documented impairments in cognitive decision-making support the view that these neural inference mechanisms are altered in schizophrenia. There is also extensive evidence relating the symptoms of schizophrenia to aberrant signaling of prediction errors, especially in the domain of reward and value-based decision-making. Moreover, the idea of altered predictive coding is supported by evidence for impaired low-level sensory mechanisms and motor processes. We review behavioral and neural findings from these research areas and provide an integrated view suggesting that schizophrenia may be related to a pervasive alteration in predictive coding at multiple hierarchical levels, including cognitive and value-based decision-making processes as well as sensory and motor systems. We relate these findings to decision-making processes and propose that varying degrees of impairment in the implicated brain areas contribute to the variety of psychotic experiences. Copyright © 2018 Elsevier Inc. All rights reserved.
Predicting intensity ranks of peptide fragment ions.
Frank, Ari M
2009-05-01
Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal multiple reaction monitoring (MRM) transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html.
Mateen, Bilal Akhter; Bussas, Matthias; Doogan, Catherine; Waller, Denise; Saverino, Alessia; Király, Franz J; Playford, E Diane
2018-05-01
To determine whether tests of cognitive function and patient-reported outcome measures of motor function can be used to create a machine-learning-based predictive tool for falls. Prospective cohort study. Tertiary neurological and neurosurgical center. In all, 337 in-patients receiving neurosurgical, neurological, or neurorehabilitation-based care. Binary (Y/N) for falling during the in-patient episode, the Trail Making Test (a measure of attention and executive function), and the Walk-12 (a patient-reported measure of physical function). The principal outcome was a fall during the in-patient stay (n = 54). The Trail Making Test was identified as the best predictor of falls; moreover, the addition of other variables did not improve the prediction (Wilcoxon signed-rank P < 0.001). Classical linear statistical modeling methods were then compared with more recent machine-learning-based strategies, for example, random forests, neural networks, and support vector machines. The random forest was the best modeling strategy when utilizing just the Trail Making Test data (Wilcoxon signed-rank P < 0.001), with 68% (±7.7) sensitivity and 90% (±2.3) specificity. This study identifies a simple yet powerful machine learning (random forest) based predictive model for an in-patient neurological population, utilizing a single neuropsychological test of cognitive function, the Trail Making Test.
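Bootstrap aggregation of simple threshold rules captures the gist of the random-forest strategy described above. The sketch below is a toy illustration, not the authors' model: it bags depth-one decision stumps over a single hypothetical Trail Making Test completion-time feature, and all scores and fall labels are invented for demonstration.

```python
import random

def stump_fit(samples):
    # samples: (trail_seconds, fell) pairs. Pick the threshold that
    # minimizes training errors when predicting a fall above it.
    best_t, best_err = None, float("inf")
    for t, _ in samples:
        err = sum((score > t) != fell for score, fell in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def forest_fit(samples, n_trees=25, seed=0):
    # Bootstrap aggregation: each stump trains on a resampled data set.
    rng = random.Random(seed)
    return [stump_fit([rng.choice(samples) for _ in samples])
            for _ in range(n_trees)]

def forest_predict(thresholds, trail_seconds):
    votes = sum(trail_seconds > t for t in thresholds)
    return 2 * votes > len(thresholds)  # majority vote: fall predicted?

# Fabricated, cleanly separable demo data: slower times -> more falls.
non_fallers = [(s, False) for s in range(30, 62, 4)]
fallers = [(s, True) for s in range(90, 122, 4)]
forest = forest_fit(non_fallers + fallers)
slow_risky = forest_predict(forest, 110)
fast_safe = forest_predict(forest, 40)
```

A production model would of course use a full random-forest implementation with multi-way splits, feature subsampling, and calibrated probabilities; this sketch only shows why bagging noisy threshold rules yields a stable vote.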
Peters, S A; Laham, S M; Pachter, N; Winship, I M
2014-04-01
When clinicians facilitate and patients make decisions about predictive genetic testing, they often base their choices on the predicted emotional consequences of positive and negative test results. Research from psychology and decision making suggests that such predictions may often be biased. Work on affective forecasting-predicting one's future emotional states-shows that people tend to overestimate the impact of (especially negative) emotional events on their well-being; a phenomenon termed the impact bias. In this article, we review the causes and consequences of the impact bias in medical decision making, with a focus on applying such findings to predictive testing in clinical genetics. We also recommend strategies for reducing the impact bias and consider the ethical and practical implications of doing so. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Category-based predictions: influence of uncertainty and feature associations.
Ross, B H; Murphy, G L
1996-05-01
Four experiments examined how people make inductive inferences using categories. Subjects read stories in which 2 categories were mentioned as possible identities of an object. The less likely category was varied to determine if people were using it, as well as the most likely category, in making predictions about the object. Experiment 1 showed that even when categorization uncertainty was emphasized, subjects used only 1 category as the basis for their prediction. Experiments 2-4 examined whether people would use multiple categories for making predictions when the feature to be predicted was associated to the less likely category. Multiple categories were used in this case, but only in limited circumstances; furthermore, using multiple categories in 1 prediction did not cause subjects to use them for subsequent predictions. The results increase the understanding of how categories are used in inductive inference.
Comparing predictions of extinction risk using models and subjective judgement
NASA Astrophysics Data System (ADS)
McCarthy, Michael A.; Keith, David; Tietjen, Justine; Burgman, Mark A.; Maunder, Mark; Master, Larry; Brook, Barry W.; Mace, Georgina; Possingham, Hugh P.; Medellin, Rodrigo; Andelman, Sandy; Regan, Helen; Regan, Tracey; Ruckelshaus, Mary
2004-10-01
Models of population dynamics are commonly used to predict risks in ecology, particularly risks of population decline. There is often considerable uncertainty associated with these predictions. However, alternatives to predictions based on population models have not been assessed. We used simulation models of hypothetical species to generate the kinds of data that might typically be available to ecologists and then invited other researchers to predict risks of population declines using these data. The accuracy of the predictions was assessed by comparison with the forecasts of the original model. The researchers used either population models or subjective judgement to make their predictions. Predictions made using models were only slightly more accurate than subjective judgements of risk. However, predictions using models tended to be unbiased, while subjective judgements were biased towards over-estimation. Psychology literature suggests that the bias of subjective judgements is likely to vary somewhat unpredictably among people, depending on their stake in the outcome. This will make subjective predictions more uncertain and less transparent than those based on models.
Vassena, Eliana; Deraeve, James; Alexander, William H
2017-10-01
Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. 
Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
Dual Processes in Decision Making and Developmental Neuroscience: A Fuzzy-Trace Model.
Reyna, Valerie F; Brainerd, Charles J
2011-09-01
From Piaget to the present, traditional and dual-process theories have predicted improvement in reasoning from childhood to adulthood, and improvement has been observed. However, developmental reversals—that reasoning biases emerge with development—have also been observed in a growing list of paradigms. We explain how fuzzy-trace theory predicts both improvement and developmental reversals in reasoning and decision making. Drawing on research on logical and quantitative reasoning, as well as on risky decision making in the laboratory and in life, we illustrate how the same small set of theoretical principles applies to typical neurodevelopment, encompassing childhood, adolescence, and adulthood, and to neurological conditions such as autism and Alzheimer's disease. For example, framing effects—that risk preferences shift when the same decisions are phrased in terms of gains versus losses—emerge in early adolescence as gist-based intuition develops. In autistic individuals, who rely less on gist-based intuition and more on verbatim-based analysis, framing biases are attenuated (i.e., they outperform typically developing control subjects). In adults, simple manipulations based on fuzzy-trace theory can make framing effects appear and disappear depending on whether gist-based intuition or verbatim-based analysis is induced. These theoretical principles are summarized and integrated in a new mathematical model that specifies how dual modes of reasoning combine to produce predictable variability in performance. In particular, we show how the most popular and extensively studied model of decision making—prospect theory—can be derived from fuzzy-trace theory by combining analytical (verbatim-based) and intuitive (gist-based) processes.
Neural Activity Reveals Preferences Without Choices
Smith, Alec; Bernheim, B. Douglas; Camerer, Colin
2014-01-01
We investigate the feasibility of inferring the choices people would make (if given the opportunity) based on their neural responses to the pertinent prospects when they are not engaged in actual decision making. The ability to make such inferences is of potential value when choice data are unavailable, or limited in ways that render standard methods of estimating choice mappings problematic. We formulate prediction models relating choices to “non-choice” neural responses and use them to predict out-of-sample choices for new items and for new groups of individuals. The predictions are sufficiently accurate to establish the feasibility of our approach. PMID:25729468
Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Zhang, Li-jie
2017-10-01
Measurement error of a sensor can be effectively compensated with prediction. Aiming at the large random drift error of MEMS (Micro-Electro-Mechanical System) gyroscopes, an improved learning algorithm for Radial Basis Function (RBF) Neural Networks (NN), based on K-means clustering and Orthogonal Least Squares (OLS), is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then determines candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-007 °/s and a prediction time of 2.4169e-006 s.
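The pipeline above (cluster the inputs, place Gaussian basis functions at the centers, fit the output weights by least squares) can be sketched in a few dozen lines. This is an illustrative toy on a one-dimensional synthetic signal; the paper's typical-sample initialization and OLS-based center optimization are omitted, and all numbers are invented.

```python
import math

def kmeans_1d(xs, iters=50):
    # Lloyd's algorithm with two centers, deterministically initialized
    # at the data extremes.
    centers = [min(xs), max(xs)]
    for _ in range(iters):
        clusters = [[], []]
        for x in xs:
            nearest = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (small dense systems).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def features(x, centers, width):
    # Bias term plus one Gaussian basis function per center.
    return [1.0] + [math.exp(-((x - c) ** 2) / (2 * width ** 2))
                    for c in centers]

def fit(xs, ys, centers, width):
    # Ordinary least squares on the RBF design matrix (normal equations).
    Phi = [features(x, centers, width) for x in xs]
    m = len(Phi[0])
    A = [[sum(row[p] * row[q] for row in Phi) for q in range(m)]
         for p in range(m)]
    b = [sum(row[p] * y for row, y in zip(Phi, ys)) for p in range(m)]
    return solve(A, b)

def predict(x, w, centers, width):
    return sum(wi * fi for wi, fi in zip(w, features(x, centers, width)))

# Synthetic slowly varying "drift" signal.
xs = [0.1 * i for i in range(40)]
ys = [0.5 + math.sin(x) for x in xs]
centers = kmeans_1d(xs)
w = fit(xs, ys, centers, width=1.0)
mse = sum((predict(x, w, centers, 1.0) - y) ** 2
          for x, y in zip(xs, ys)) / len(xs)
```

Because the design matrix includes a bias column, the least-squares fit can never do worse on the training data than a constant predictor, which is a convenient sanity check on a sketch like this.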
Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research
Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi
2016-01-01
The effect of traffic flow prediction plays an important role in routing selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time-series analysis methods. However, all of them have some shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton Interior-Point Method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to have good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, which makes it usable for real-time traffic flow prediction. PMID:27872637
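The transfer-probability idea reduces to a weighted sum: the predicted flow entering the target road is each upstream road's current flow times the fraction of its traffic that historically turns onto the target. The sketch below uses invented counts, and the Newton interior-point calibration described in the abstract is omitted.

```python
def transfer_probabilities(leaving_counts, into_target_counts):
    # Historical fraction of vehicles leaving each upstream road
    # that transferred onto the target road.
    return [into / leaving
            for into, leaving in zip(into_target_counts, leaving_counts)]

def predict_target_flow(upstream_flows, probs):
    # Flow conservation: each upstream road contributes its current
    # flow weighted by its transfer probability.
    return sum(f * p for f, p in zip(upstream_flows, probs))

# Invented demo: three upstream roads feeding the target road.
probs = transfer_probabilities([200, 160, 50], [100, 40, 10])  # 0.5, 0.25, 0.2
flow_next = predict_target_flow([120, 80, 60], probs)          # ~92 vehicles/interval
```

In a real deployment the probabilities would be re-estimated continuously, since turning fractions drift with time of day and incidents.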
Prostate Cancer Probability Prediction By Machine Learning Technique.
Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena
2017-11-26
The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. In order to improve the survival probability of prostate cancer patients, it is essential to build suitable prediction models of prostate cancer. If one makes a relevant prediction of prostate cancer, it is easy to create a suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models. Therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed. It was concluded that machine learning techniques could be used for the relevant prediction of prostate cancer.
Lin, Frank P Y; Pokorny, Adrian; Teng, Christina; Dear, Rachel; Epstein, Richard J
2016-12-01
Multidisciplinary team (MDT) meetings are used to optimise expert decision-making about treatment options, but such expertise is not digitally transferable between centres. To help standardise medical decision-making, we developed a machine learning model designed to predict MDT decisions about adjuvant breast cancer treatments. We analysed MDT decisions regarding adjuvant systemic therapy for 1065 breast cancer cases over eight years. Machine learning classifiers with and without bootstrap aggregation were correlated with MDT decisions (recommended, not recommended, or discussable) regarding adjuvant cytotoxic, endocrine and biologic/targeted therapies, then tested for predictability using stratified ten-fold cross-validations. The predictions so derived were duly compared with those based on published (ESMO and NCCN) cancer guidelines. Machine learning more accurately predicted adjuvant chemotherapy MDT decisions than did simple application of guidelines. No differences were found between MDT- vs. ESMO/NCCN- based decisions to prescribe either adjuvant endocrine (97%, p = 0.44/0.74) or biologic/targeted therapies (98%, p = 0.82/0.59). In contrast, significant discrepancies were evident between MDT- and guideline-based decisions to prescribe chemotherapy (87%, p < 0.01, representing 43% and 53% variations from ESMO/NCCN guidelines, respectively). Using ten-fold cross-validation, the best classifiers achieved areas under the receiver operating characteristic curve (AUC) of 0.940 for chemotherapy (95% C.I., 0.922-0.958), 0.899 for the endocrine therapy (95% C.I., 0.880-0.918), and 0.977 for trastuzumab therapy (95% C.I., 0.955-0.999) respectively. Overall, bootstrap aggregated classifiers performed better among all evaluated machine learning models. A machine learning approach based on clinicopathologic characteristics can predict MDT decisions about adjuvant breast cancer drug therapies. 
The discrepancy between MDT- and guideline-based decisions regarding adjuvant chemotherapy implies that certain non-clinicopathologic criteria, such as patient preference and resource availability, are factored into clinical decision-making by local experts but not captured by guidelines.
DOT National Transportation Integrated Search
2009-08-01
Federal Aviation Administration (FAA) air traffic flow management (TFM) decision-making is based primarily on a comparison of deterministic predictions of demand and capacity at National Airspace System (NAS) elements such as airports, fixes and ...
Parallel constraint satisfaction in memory-based decisions.
Glöckner, Andreas; Hodges, Sara D
2011-01-01
Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation
Dayan, Peter; Berridge, Kent C.
2014-01-01
Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659
Market mechanisms protect the vulnerable brain.
Ramchandran, Kanchna; Nayakankuppam, Dhananjay; Berg, Joyce; Tranel, Daniel; Denburg, Natalie L
2011-07-01
Markets are mechanisms of social exchange, intended to facilitate trading. However, the question remains as to whether markets would help or hurt individuals with decision-making deficits, as is frequently encountered in the case of cognitive aging. Essential for predicting future gains and losses in monetary and social domains, the striatal nuclei in the brain undergo structural, neurochemical, and functional decline with age. We correlated the efficacy of market mechanisms with dorsal striatal decline in an aging population, by using market-based trading in the context of the 2008 U.S. Presidential Elections (primary cycle). Impaired decision-makers displayed higher prediction error (the difference between their prediction and the actual outcome). Lower in vivo caudate volume was also associated with higher prediction error. Importantly, market-based trading protected older adults with lower caudate volume to a greater extent from their own poorly calibrated predictions. Counterintuitive to the traditional public perception of the market as a fickle, risky proposition where vulnerable traders are most likely to be burned, we suggest that market-based mechanisms protect individuals with brain-based decision-making vulnerabilities. Copyright © 2011 Elsevier Ltd. All rights reserved.
Practical quantum mechanics-based fragment methods for predicting molecular crystal properties.
Wen, Shuhao; Nanda, Kaushik; Huang, Yuanhang; Beran, Gregory J O
2012-06-07
Significant advances in fragment-based electronic structure methods have created a real alternative to force-field and density functional techniques in condensed-phase problems such as molecular crystals. This perspective article highlights some of the important challenges in modeling molecular crystals and discusses techniques for addressing them. First, we survey recent developments in fragment-based methods for molecular crystals. Second, we use examples from our own recent research on a fragment-based QM/MM method, the hybrid many-body interaction (HMBI) model, to analyze the physical requirements for a practical and effective molecular crystal model chemistry. We demonstrate that it is possible to predict molecular crystal lattice energies to within a couple of kJ mol⁻¹ and lattice parameters to within a few percent in small-molecule crystals. Fragment methods provide a systematically improvable approach to making predictions in the condensed phase, which is critical to making robust predictions regarding the subtle energy differences found in molecular crystals.
Optimal Predictions in Everyday Cognition: The Wisdom of Individuals or Crowds?
ERIC Educational Resources Information Center
Mozer, Michael C.; Pashler, Harold; Homaei, Hadjar
2008-01-01
Griffiths and Tenenbaum (2006) asked individuals to make predictions about the duration or extent of everyday events (e.g., cake baking times), and reported that predictions were optimal, employing Bayesian inference based on veridical prior distributions. Although the predictions conformed strikingly to statistics of the world, they reflect…
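The Bayesian recipe behind such optimality claims is compact: treat the observed elapsed time as a uniform draw over the event's total duration, multiply the prior by the resulting 1/T likelihood, and report the posterior median. A minimal sketch with an invented discrete prior:

```python
def predict_total(prior, elapsed):
    # prior: {total_duration: probability}. Observing an elapsed time t,
    # assumed uniform over the event's lifetime, gives likelihood
    # 1/T for each total duration T >= t, and zero otherwise.
    post = {T: p / T for T, p in prior.items() if T >= elapsed}
    norm = sum(post.values())
    acc = 0.0
    for T in sorted(post):  # posterior median as the point prediction
        acc += post[T] / norm
        if acc >= 0.5:
            return T

# Invented prior over total cake-baking times (minutes).
prior = {30: 0.25, 45: 0.25, 60: 0.25, 90: 0.25}
guess = predict_total(prior, elapsed=35)  # 30 is ruled out; median of the rest
```

With the elapsed time 35 ruling out the 30-minute hypothesis, the posterior puts weight proportional to 1/45, 1/60, and 1/90 on the survivors, and the median lands on 60.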
NASA Astrophysics Data System (ADS)
Yung, L. Y. Aaron; Somerville, Rachel S.
2017-06-01
The well-established Santa Cruz semi-analytic galaxy formation framework has been shown to be quite successful at explaining observations in the local Universe, as well as making predictions for low-redshift observations. Recently, metallicity-based gas partitioning and H2-based star formation recipes have been implemented in our model, replacing the legacy cold-gas-based recipe. We then use our revised model to explore the high-redshift Universe and make predictions up to z = 15. Although our model is calibrated only to observations of the local Universe, our predictions match remarkably well the mid- to high-redshift observational constraints available to date, including rest-frame UV luminosity functions and the reionization history as constrained by CMB and IGM observations. We provide predictions for individual and statistical galaxy properties at a wide range of redshifts (z = 4-15), including objects that are too far or too faint to be detected with current facilities. Using our model predictions, we also provide forecasted luminosity functions and other observables for upcoming studies with JWST.
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating
Lee, Young-Joo; Cho, Soojin
2016-01-01
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125
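In the single-degree-of-freedom limit, the model-updating step reduces to matching the model's natural frequency to the identified one, and the fatigue step to a Miner's-rule damage accumulation. The sketch below is a drastically simplified stand-in for the paper's FE-based, probabilistic procedure: all numbers are invented and the uncertainty treatment is omitted.

```python
import math

def updated_stiffness(mass_kg, measured_freq_hz):
    # Model-updating analogue: choose the stiffness k so the 1-DOF model's
    # natural frequency f = sqrt(k/m) / (2*pi) matches the frequency
    # identified from ambient vibration data.
    return mass_kg * (2 * math.pi * measured_freq_hz) ** 2

def miner_life_years(cycles_per_year, cycles_to_failure):
    # Fatigue analogue: Miner's rule, yearly damage = sum(n_i / N_i),
    # remaining life = 1 / (yearly damage).
    damage = sum(n / N for n, N in zip(cycles_per_year, cycles_to_failure))
    return 1.0 / damage

# Invented numbers: effective modal mass and identified frequency,
# then two stress-range bins of vehicle-induced cycles.
k = updated_stiffness(mass_kg=1000.0, measured_freq_hz=2.0)
life = miner_life_years([1e5, 2e4], [1e7, 5e5])  # years to damage = 1
```

The real method propagates SHM-updated FE stresses through a probabilistic fatigue model rather than a deterministic Miner sum; this sketch only shows how the identified modal data feeds the life estimate.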
Category vs. Object Knowledge in Category-Based Induction
ERIC Educational Resources Information Center
Murphy, Gregory L.; Ross, Brian H.
2010-01-01
In one form of category-based induction, people make predictions about unknown properties of objects. There is a tension between predictions made based on the object's specific features (e.g., objects above a certain size tend not to fly) and those made by reference to category-level knowledge (e.g., birds fly). Seven experiments with artificial…
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2008-03-01
Inductive bias is the set of assumptions that a person or procedure makes in making a prediction based on data. Different methods for ligand-based predictive modeling have different inductive biases, with a particularly sharp contrast between 2D and 3D similarity methods. A unique aspect of ligand design is that the data that exist to test methodology have been largely man-made, and that this process of design involves prediction. By analyzing the molecular similarities of known drugs, we show that the inductive bias of the historic drug discovery process has a very strong 2D bias. In studying the performance of ligand-based modeling methods, it is critical to account for this issue in dataset preparation, use of computational controls, and in the interpretation of results. We propose specific strategies to explicitly address the problems posed by inductive bias considerations.
Dual Processes in Decision Making and Developmental Neuroscience: A Fuzzy-Trace Model
Reyna, Valerie F.; Brainerd, Charles J.
2011-01-01
From Piaget to the present, traditional and dual-process theories have predicted improvement in reasoning from childhood to adulthood, and improvement has been observed. However, developmental reversals—that reasoning biases emerge with development—have also been observed in a growing list of paradigms. We explain how fuzzy-trace theory predicts both improvement and developmental reversals in reasoning and decision making. Drawing on research on logical and quantitative reasoning, as well as on risky decision making in the laboratory and in life, we illustrate how the same small set of theoretical principles applies to typical neurodevelopment, encompassing childhood, adolescence, and adulthood, and to neurological conditions such as autism and Alzheimer's disease. For example, framing effects—that risk preferences shift when the same decisions are phrased in terms of gains versus losses—emerge in early adolescence as gist-based intuition develops. In autistic individuals, who rely less on gist-based intuition and more on verbatim-based analysis, framing biases are attenuated (i.e., they outperform typically developing control subjects). In adults, simple manipulations based on fuzzy-trace theory can make framing effects appear and disappear depending on whether gist-based intuition or verbatim-based analysis is induced. These theoretical principles are summarized and integrated in a new mathematical model that specifies how dual modes of reasoning combine to produce predictable variability in performance. In particular, we show how the most popular and extensively studied model of decision making—prospect theory—can be derived from fuzzy-trace theory by combining analytical (verbatim-based) and intuitive (gist-based) processes. PMID:22096268
Working memory capacity as controlled attention in tactical decision making.
Furley, Philip A; Memmert, Daniel
2012-06-01
The controlled attention theory of working memory capacity (WMC; Engle, 2002) suggests that WMC represents a domain-free limitation in the ability to control attention and is predictive of an individual's capability of staying focused and avoiding distraction and impulsive errors. In the present paper we test the predictive power of WMC in computer-based sport decision-making tasks. Experiment 1 demonstrated that high-WMC athletes were better at focusing their attention on tactical decision making while blocking out irrelevant auditory distraction. Experiment 2 showed that high-WMC athletes were more successful at adapting their tactical decision making to the situation instead of relying on prepotent but inappropriate decisions. The present results provide additional but also unique support for the controlled attention theory of WMC by demonstrating that WMC is predictive of controlling attention in complex settings across different modalities, and they highlight the importance of working memory in tactical decision making.
A collaborative filtering-based approach to biomedical knowledge discovery.
Lever, Jake; Gakkhar, Sitanshu; Gottlieb, Michael; Rashnavadi, Tahereh; Lin, Santina; Siu, Celia; Smith, Maia; Jones, Martin R; Krzywinski, Martin; Jones, Steven J M; Wren, Jonathan
2018-02-15
The increase in publication rates makes it challenging for an individual researcher to stay abreast of all relevant research in order to find novel research hypotheses. Literature-based discovery methods make use of knowledge graphs built using text mining and can infer future associations between biomedical concepts that will likely occur in new publications. These predictions are a valuable resource for researchers to explore a research topic. Current methods for prediction are based on the local structure of the knowledge graph. A method that uses global knowledge from across the knowledge graph needs to be developed in order to make knowledge discovery a frequently used tool by researchers. We propose an approach based on the singular value decomposition (SVD) that is able to combine data from across the knowledge graph through a reduced representation. Using cooccurrence data extracted from published literature, we show that SVD performs better than the leading methods for scoring discoveries. We also show the diminishing predictive power of knowledge discovery as we compare our predictions with real associations that appear further into the future. Finally, we examine the strengths and weaknesses of the SVD approach against another well-performing system using several predicted associations. All code and results files for this analysis can be accessed at https://github.com/jakelever/knowledgediscovery. sjones@bcgsc.ca. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
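The core idea above—scoring candidate associations from a low-rank SVD reconstruction of the cooccurrence matrix—can be sketched as follows. The matrix, the concepts, and the chosen rank are toy assumptions for illustration, not the paper's data or implementation:

```python
import numpy as np

# toy symmetric concept-concept cooccurrence matrix (hypothetical counts)
C = np.array([
    [0, 4, 3, 0],
    [4, 0, 5, 1],
    [3, 5, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

# a rank-k truncated SVD gives a dense low-rank reconstruction; large values
# at entries that are zero in C are predicted (not yet co-published) associations
U, s, Vt = np.linalg.svd(C)
k = 2
C_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# score the unobserved pair (concept 0, concept 3) using global structure
print(float(C_hat[0, 3]))
```

Because the reconstruction draws on the whole matrix, two concepts that never cooccur can still receive a high score if they share neighbors, which is exactly the global-knowledge property the abstract contrasts with local-structure methods.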
NASA Technical Reports Server (NTRS)
Mullen, C. R.; Bender, R. L.; Bevill, R. L.; Reardon, J.; Hartley, L.
1972-01-01
A handbook containing a summary of model and flight test base heating data from the S-1, S-1B, S-4, S-1C, and S-2 stages is presented. A review of the available prediction methods is included. Experimental data are provided to make the handbook a single source of Saturn base heating data which can be used for preliminary base heating design predictions of launch vehicles.
Predicting U.S. food demand in the 20th century: a new look at system dynamics
NASA Astrophysics Data System (ADS)
Moorthy, Mukund; Cellier, Francois E.; LaFrance, Jeffrey T.
1998-08-01
The paper describes a new methodology for predicting the behavior of macroeconomic variables. The approach is based on System Dynamics and Fuzzy Inductive Reasoning. A four-layer pseudo-hierarchical model is proposed. The bottom layer makes predictions about population dynamics, age distributions among the populace, as well as demographics. The second layer makes predictions about the general state of the economy, including such variables as inflation and unemployment. The third layer makes predictions about the demand for certain goods or services, such as milk products, used cars, mobile telephones, or internet services. The fourth and top layer makes predictions about the supply of such goods and services, in terms of their prices. Each layer can be influenced by control variables whose values are only determined at higher levels. In this sense, the model is not strictly hierarchical. For example, the demand for goods at level three depends on the prices of these goods, which are only determined at level four. Yet, the prices are themselves influenced by the expected demand. The methodology is exemplified by means of a macroeconomic model that makes predictions about US food demand during the 20th century.
Hua, Hong-Li; Zhang, Fa-Zhan; Labena, Abraham Alemayehu; Dong, Chuan; Jin, Yan-Ting; Guo, Feng-Biao
Investigating essential genes is important for understanding the minimal gene set of a cell and for discovering potential drug targets. In this study, a novel approach based on multiple homology mapping and machine learning was introduced to predict essential genes. We focused on 25 bacteria with characterized essential genes. The predictions yielded the highest area under the receiver operating characteristic (ROC) curve (AUC) of 0.9716 in a tenfold cross-validation test. Proper features were utilized to construct models to make predictions in distantly related bacteria. The accuracy of predictions was evaluated via the consistency between predictions and the known essential genes of the target species. The highest AUC of 0.9552 and an average AUC of 0.8314 were achieved when making predictions across organisms. An independent dataset from Synechococcus elongatus, which was released recently, was obtained for further assessment of the performance of our model. The AUC score of these predictions is 0.7855, higher than that of other methods. This research shows that features obtained by homology mapping alone can achieve results as good as, or better than, integrated features. Meanwhile, the work indicates that a machine learning-based method can assign more efficient weight coefficients than an empirical formula based on biological knowledge.
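As a side note on the evaluation metric used above, the AUC can be computed directly from ranked scores as the probability that a random positive outscores a random negative. A minimal sketch, with invented labels and scores:

```python
def roc_auc(labels, scores):
    """Rank-based ROC AUC: probability that a random positive outscores
    a random negative, with ties counting half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# essential (1) vs. non-essential (0) genes with invented model scores
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))  # 11 of 12 positive/negative pairs ranked correctly
```

This pairwise definition is equivalent to the area under the ROC curve and is what cross-validated AUC figures such as 0.9716 summarize.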
A Drought Cyberinfrastructure System for Improving Water Resource Management and Policy Making
NASA Astrophysics Data System (ADS)
AghaKouchak, Amir
2015-04-01
Development of reliable monitoring and prediction indices and tools are fundamental to drought preparedness, management, and response decision making. This presentation provides an overview of the Global Integrated Drought Monitoring and Prediction System (GIDMaPS) which offers near real-time drought information using both remote sensing observations and model simulations. Designed as a cyberinfrastructure system, GIDMaPS provides drought information based on a wide range of model simulations and satellite observations from different space agencies. Numerous indices have been developed for drought monitoring based on various indicator variables (e.g., precipitation, soil moisture, water storage). Defining droughts based on a single variable (e.g., precipitation, soil moisture or runoff) may not be sufficient for reliable risk assessment and decision making. GIDMaPS provides drought information based on multiple indices including Standardized Precipitation Index (SPI), Standardized Soil Moisture Index (SSI) and the Multivariate Standardized Drought Index (MSDI) which combines SPI and SSI probabilistically. In other words, MSDI incorporates the meteorological and agricultural drought conditions for overall characterization of droughts, and better management and distribution of water resources among and across different users. The seasonal prediction component of GIDMaPS is based on a persistence model which requires historical data and near-past observations. The seasonal drought prediction component is designed to provide drought information for water resource management, and short-term decision making. In this presentation, both monitoring and prediction components of GIDMaPS will be discussed, and the results from several major droughts including the 2013 Namibia, 2012-2013 United States, 2011-2012 Horn of Africa, and 2010 Amazon Droughts will be presented. 
The presentation will highlight how this drought cyberinfrastructure system can be used to improve water resource management in California. Furthermore, the presentation provides an overview of the information farmers need for better decision making and how GIDMaPS can be used to improve decision making and reducing drought impacts. Further Reading Hao Z., AghaKouchak A., Nakhjiri N., Farahmand A., 2014, Global Integrated Drought Monitoring and Prediction System, Scientific Data, 1:140001, 1-10, doi: 10.1038/sdata.2014.1. Momtaz F., Nakhjiri N., AghaKouchak A., 2014, Toward a Drought Cyberinfrastructure System, Eos, Transactions American Geophysical Union, 95(22), 182-183, doi:10.1002/2014EO220002. AghaKouchak A., 2014, A Baseline Probabilistic Drought Forecasting Framework Using Standardized Soil Moisture Index: Application to the 2012 United States Drought, Hydrology and Earth System Sciences, 18, 2485-2492, doi: 10.5194/hess-18-2485-2014.
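A standardized index of the SPI/SSI family described above can be sketched empirically: convert each observation's non-exceedance probability into a standard normal quantile. The precipitation values below are invented and distinct-valued, and a Weibull plotting position stands in for the fitted probability distributions used operationally:

```python
from statistics import NormalDist

def standardized_index(series):
    """Empirical SPI/SSI-style index: map each value's empirical
    non-exceedance probability to a standard normal quantile.
    Assumes distinct values; operational SPI fits a distribution instead."""
    n = len(series)
    rank = {v: r for r, v in enumerate(sorted(series), start=1)}
    nd = NormalDist()
    # Weibull plotting position p = rank / (n + 1) keeps p away from 0 and 1
    return [nd.inv_cdf(rank[v] / (n + 1)) for v in series]

precip = [32.0, 55.0, 41.0, 12.0, 60.0, 47.0, 28.0, 39.0]  # invented totals
spi = standardized_index(precip)
print(min(spi) == spi[3])  # the driest period gets the most negative index
```

Applying the same transform to soil moisture gives an SSI, and a multivariate index such as MSDI combines the two joint probabilities before the inverse-normal step.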
Gabbett, Tim J; Carius, Josh; Mulvey, Mike
2008-11-01
This study investigated the effects of video-based perceptual training on pattern recognition and pattern prediction ability in elite field sport athletes and determined whether enhanced perceptual skills influenced the physiological demands of game-based activities. Sixteen elite women soccer players (mean +/- SD age, 18.3 +/- 2.8 years) were allocated to either a video-based perceptual training group (N = 8) or a control group (N = 8). The video-based perceptual training group watched video footage of international women's soccer matches. Twelve training sessions, each 15 minutes in duration, were conducted during a 4-week period. Players performed assessments of speed (5-, 10-, and 20-m sprint), repeated-sprint ability (6 x 20-m sprints, with active recovery on a 15-second cycle), estimated maximal aerobic power (VO2max, multistage fitness test), and a game-specific video-based perceptual test of pattern recognition and pattern prediction before and after the 4 weeks of video-based perceptual training. The on-field assessments included time-motion analysis completed on all players during a standardized 45-minute small-sided training game, and assessments of passing, shooting, and dribbling decision-making ability. No significant changes were detected in speed, repeated-sprint ability, or estimated VO2max during the training period. However, video-based perceptual training improved decision accuracy and reduced the number of recall errors, indicating improved game awareness and decision-making ability. Importantly, the improvements in pattern recognition and prediction ability transferred to on-field improvements in passing, shooting, and dribbling decision-making skills. No differences were detected between groups for the time spent standing, walking, jogging, striding, and sprinting during the small-sided training game. 
These findings demonstrate that video-based perceptual training can be used effectively to enhance the decision-making ability of field sport athletes; however, it has no effect on the physiological demands of game-based activities.
Teeguarden, Justin G; Tan, Yu-Mei; Edwards, Stephen W; Leonard, Jeremy A; Anderson, Kim A; Corley, Richard A; Kile, Molly L; Simonich, Staci M; Stone, David; Tanguay, Robert L; Waters, Katrina M; Harper, Stacey L; Williams, David E
2016-05-03
Driven by major scientific advances in analytical methods, biomonitoring, computation, and a newly articulated vision for a greater impact in public health, the field of exposure science is undergoing a rapid transition from a field of observation to a field of prediction. Deployment of an organizational and predictive framework for exposure science analogous to the "systems approaches" used in the biological sciences is a necessary step in this evolution. Here we propose the aggregate exposure pathway (AEP) concept as the natural and complementary companion in the exposure sciences to the adverse outcome pathway (AOP) concept in the toxicological sciences. Aggregate exposure pathways offer an intuitive framework to organize exposure data within individual units of prediction common to the field, setting the stage for exposure forecasting. Looking farther ahead, we envision direct linkages between aggregate exposure pathways and adverse outcome pathways, completing the source to outcome continuum for more meaningful integration of exposure assessment and hazard identification. Together, the two frameworks form and inform a decision-making framework with the flexibility for risk-based, hazard-based, or exposure-based decision making.
Differentiated Instruction in a Data-Based Decision-Making Context
ERIC Educational Resources Information Center
Faber, Janke M.; Glas, Cees A. W.; Visscher, Adrie J.
2018-01-01
In this study, the relationship between differentiated instruction, as an element of data-based decision making, and student achievement was examined. Classroom observations (n = 144) were used to measure teachers' differentiated instruction practices and to predict the mathematical achievement of 2nd- and 5th-grade students (n = 953). The…
Making Predictions in a Changing World: The Benefits of Individual-Based Ecology
Stillman, Richard A.; Railsback, Steven F.; Giske, Jarl; Berger, Uta; Grimm, Volker
2014-01-01
Ecologists urgently need a better ability to predict how environmental change affects biodiversity. We examine individual-based ecology (IBE), a research paradigm that promises a better predictive ability by using individual-based models (IBMs) to represent ecological dynamics as arising from how individuals interact with their environment and with each other. A key advantage of IBMs is that the basis for predictions—fitness maximization by individual organisms—is more general and reliable than the empirical relationships that other models depend on. Case studies illustrate the usefulness and predictive success of long-term IBE programs. The pioneering programs had three phases: conceptualization, implementation, and diversification. Continued validation of models runs throughout these phases. The breakthroughs that make IBE more productive include standards for describing and validating IBMs, improved and standardized theory for individual traits and behavior, software tools, and generalized instead of system-specific IBMs. We provide guidelines for pursuing IBE and a vision for future IBE research. PMID:26955076
A burnout prediction model based around char morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao Wu; Edward Lester; Michael Cloke
Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.
Consumer Decision-Making Abilities and Long-Term Care Insurance Purchase.
McGarry, Brian E; Tempkin-Greener, Helena; Grabowski, David C; Chapman, Benjamin P; Li, Yue
2018-04-16
To determine the impact of consumer decision-making abilities on making a long-term care insurance (LTCi) purchasing decision that is consistent with normative economic predictions regarding policy ownership. Using data from the Health and Retirement Study, multivariate analyses are implemented to estimate the effect of decision-making ability factors on owning LTCi. Stratified multivariate analyses are used to examine the effect of decision-making abilities on the likelihood of adhering to economic predictions of LTCi ownership. In the full sample, better cognitive capacity was found to significantly increase the odds of ownership. When the sample was stratified based on expected LTCi ownership status, cognitive capacity was positively associated with ownership among those predicted to own and negatively associated with ownership among those predicted not to own who could likely afford a policy. Consumer decision-making abilities, specifically cognitive capacity, are an important determinant of LTCi decision outcomes. Deficits in this ability may prevent individuals from successfully preparing for future long-term care expenses. Policy makers should consider changes that reduce the cognitive burden of this choice, including the standardization of the LTCi market, the provision of consumer decision aids, and alternatives to voluntary and private insuring mechanisms.
NASA Astrophysics Data System (ADS)
Hao, Zengchao; Xia, Youlong; Luo, Lifeng; Singh, Vijay P.; Ouyang, Wei; Hao, Fanghua
2017-08-01
Disastrous impacts of recent drought events around the world have led to extensive efforts in drought monitoring and prediction. Various drought information systems have been developed with different indicators to provide early drought warning. The climate forecast from the North American Multimodel Ensemble (NMME) has been among the most salient advances in climate prediction, and its application to drought prediction has grown considerably. Since its development in 1999, the U.S. Drought Monitor (USDM) has played a critical role in drought monitoring, using different drought categories to characterize drought severity; it has been employed to aid decision making by a wealth of users such as natural resource managers and authorities. Given the wide application of the USDM, drought predictions expressed in USDM drought categories would greatly aid decision making. This study presents a categorical drought prediction system for predicting USDM drought categories in the U.S., based on initial conditions from the USDM and seasonal climate forecasts from NMME. Results of predicting USDM drought categories in the U.S. demonstrate the potential of the prediction system, which is expected to contribute to operational early drought warning in the U.S.
Prediction of Recidivism in Juvenile Offenders Based on Discriminant Analysis.
ERIC Educational Resources Information Center
Proefrock, David W.
The recent development of strong statistical techniques has made accurate predictions of recidivism possible. To investigate the utility of discriminant analysis methodology in making predictions of recidivism in juvenile offenders, the court records of 271 male and female juvenile offenders, aged 12-16, were reviewed. A cross validation group…
An analysis of a digital variant of the Trail Making Test using machine learning techniques.
Dahmen, Jessamyn; Cook, Diane; Fellows, Robert; Schmitter-Edgecombe, Maureen
2017-01-01
The goal of this work is to develop a digital version of a standard cognitive assessment, the Trail Making Test (TMT), and assess its utility. This paper introduces a novel digital version of the TMT and a machine learning based approach to assess its capabilities. Using digital Trail Making Test (dTMT) data collected from (N = 54) older adult participants as feature sets, we use machine learning techniques to analyze the utility of the dTMT and evaluate the insights provided by the digital features. Predicted TMT scores correlate well with clinical digital test scores (r = 0.98) and paper time to completion scores (r = 0.65). Predicted TICS scores exhibited a small correlation with clinically derived TICS scores (r = 0.12 for Part A, r = 0.10 for Part B). Predicted FAB scores exhibited a small correlation with clinically derived FAB scores (r = 0.13 for Part A, r = 0.29 for Part B). Digitally derived features were also used to predict diagnosis (AUC of 0.65). Our findings indicate that the dTMT is capable of measuring the same aspects of cognition as the paper-based TMT. Furthermore, the dTMT's additional data may be able to help monitor other cognitive processes not captured by the paper-based TMT alone.
Corbin, Jonathan C.; Reyna, Valerie F.; Weldon, Rebecca B.; Brainerd, Charles J.
2015-01-01
Fuzzy-trace theory distinguishes verbatim (literal, exact) from gist (meaningful) representations, predicting that reliance on gist increases with experience and expertise. Thus, many judgment-and-decision-making biases increase with development, such that cognition is colored by context in ways that violate logical coherence and probability theories. Nevertheless, this increase in gist-based intuition is adaptive: Gist is stable, less sensitive to interference, and easier to manipulate. Moreover, gist captures the functionally significant essence of information, supporting healthier and more robust decision processes. We describe how fuzzy-trace theory accounts for judgment-and-decision making phenomena, predicting the paradoxical arc of these processes with the development of experience and expertise. We present data linking gist memory processes to gist processing in decision making and provide illustrations of gist reliance in medicine, public health, and intelligence analysis. PMID:26664820
A new model to improve aggregate air traffic demand predictions
DOT National Transportation Integrated Search
2007-08-20
Federal Aviation Administration (FAA) air traffic flow management (TFM) decision-making is based primarily on a comparison of predictions of traffic demand and available capacity at various National Airspace System (NAS) elements such as airports...
Prediction using patient comparison vs. modeling: a case study for mortality prediction.
Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter
2016-08-01
Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
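The patient-similarity approach that the study above compares against feature-based modeling can be sketched as a nearest-neighbor predictor. The features, outcomes, and Euclidean metric below are toy assumptions, not MIMIC-II data:

```python
import math

# toy patient records: (normalized age, normalized mean heart rate) -> outcome;
# values are invented for illustration, not drawn from MIMIC-II
patients = [
    ((0.8, 0.9), 1), ((0.7, 0.8), 1), ((0.9, 0.7), 1),
    ((0.2, 0.3), 0), ((0.1, 0.2), 0), ((0.3, 0.1), 0),
]

def predict_by_similarity(x, k=3):
    """Predict the outcome as the mean outcome of the k most similar patients."""
    nearest = sorted(patients, key=lambda p: math.dist(p[0], x))[:k]
    return sum(y for _, y in nearest) / k

print(predict_by_similarity((0.85, 0.8)))  # -> 1.0, near the high-risk cluster
```

The scaling problem the paper reports is visible even here: every prediction requires a distance to every stored patient, whereas a fitted model pays that cost once at training time.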
Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa; Ravindran, Balaraman; Moustafa, Ahmed A.
2014-01-01
Although empirical and neural studies show that serotonin (5HT) plays many functional roles in the brain, prior computational models mostly focus on its role in behavioral inhibition. In this study, we present a model of risk based decision making in a modified Reinforcement Learning (RL)-framework. The model depicts the roles of dopamine (DA) and serotonin (5HT) in Basal Ganglia (BG). In this model, the DA signal is represented by the temporal difference error (δ), while the 5HT signal is represented by a parameter (α) that controls risk prediction error. This formulation that accommodates both 5HT and DA reconciles some of the diverse roles of 5HT particularly in connection with the BG system. We apply the model to different experimental paradigms used to study the role of 5HT: (1) Risk-sensitive decision making, where 5HT controls risk assessment, (2) Temporal reward prediction, where 5HT controls time-scale of reward prediction, and (3) Reward/Punishment sensitivity, in which the punishment prediction error depends on 5HT levels. Thus the proposed integrated RL model reconciles several existing theories of 5HT and DA in the BG. PMID:24795614
Short-term PV/T module temperature prediction based on PCA-RBF neural network
NASA Astrophysics Data System (ADS)
Li, Jiyong; Zhao, Zhendong; Li, Yisheng; Xiao, Jing; Tang, Yunfeng
2018-02-01
Aiming at the nonlinearity and large inertia of temperature control in PV/T systems, short-term temperature prediction of the PV/T module is proposed so that the PV/T system controller can act on the short-term forecast and improve control performance. Based on an analysis of the correlation between PV/T module temperature and meteorological factors, as well as the temperature of adjacent time series, the principal component analysis (PCA) method is used to pre-process the original input sample data. Combined with RBF neural network theory, simulation results show that PCA preprocessing gives the network model higher prediction accuracy and stronger generalization than an RBF neural network without principal component extraction.
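The PCA-then-RBF pipeline described above can be sketched with synthetic data: project centered inputs onto the leading principal components, then fit an RBF network with fixed centers by least squares. The feature meanings, Gaussian width, and center selection below are all assumptions for illustration, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic inputs: irradiance, ambient temperature, and a third factor
# made redundant with irradiance so that PCA has something to compress
X = rng.normal(size=(200, 3))
X[:, 2] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=200)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]   # stand-in for module temperature

# step 1: PCA via SVD of the centered data; keep the top-2 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# step 2: RBF network with fixed centers and a bias term, fit by least squares
centers = Z[::10]                      # 20 centers sampled from the data
def design(P):
    d2 = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.hstack([np.exp(-d2 / 2.0), np.ones((len(P), 1))])

w, *_ = np.linalg.lstsq(design(Z), y, rcond=None)
mse = float(np.mean((design(Z) @ w - y) ** 2))
print(mse < float(np.var(y)))          # True: the fit beats a constant predictor
```

Discarding the weakest principal component removes the redundant direction in the inputs, which is the mechanism by which PCA preprocessing can improve generalization in a model like this.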
Patient Similarity in Prediction Models Based on Health Data: A Scoping Review
Sharafoddini, Anis; Dubin, Joel A
2017-01-01
Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients’ future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts then analyzing full-texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results. Results After duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. 
In addition to raw/coded health data, wavelet transform and term frequency-inverse document frequency methods were employed to extract predictors. Selecting predictors with potential to highlight special cases and defining new patient similarity metrics were among the gaps identified in the existing literature that provide starting points for future work. Patient status prediction models based on patient similarity and health data offer exciting potential for personalizing and ultimately improving health care, leading to better patient outcomes. PMID:28258046
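The neighborhood-based approaches that dominate this literature reduce, at their core, to finding the k patients most similar to the index patient and aggregating their outcomes. A minimal sketch (entirely illustrative; the cohort, features, and risk values below are invented, not drawn from any reviewed study):

```python
import numpy as np

def predict_by_similarity(index_patient, patients, outcomes, k=2):
    """Predict an outcome for an index patient as the mean outcome of the
    k most similar patients (Euclidean distance over numeric features)."""
    dists = np.linalg.norm(patients - index_patient, axis=1)
    nearest = np.argsort(dists)[:k]
    return outcomes[nearest].mean()

# Toy cohort: rows are patients, columns are numeric predictors
cohort = np.array([[60, 1.2], [62, 1.1], [45, 0.4], [47, 0.5]])
risk = np.array([0.9, 0.8, 0.2, 0.1])
print(round(predict_by_similarity(np.array([61, 1.15]), cohort, risk), 2))  # → 0.85
```

Here similarity is plain Euclidean distance over numeric predictors; the reviewed systems would additionally weight features, handle missing data, and use clinically motivated similarity metrics.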
Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A
2018-05-01
Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop the QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q² values of 0.25-0.45, mean model errors of 0.70-1.11 log10 units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.
Research on reverse logistics location under uncertainty environment based on grey prediction
NASA Astrophysics Data System (ADS)
Zhenqiang, Bao; Congwei, Zhu; Yuqin, Zhao; Quanke, Pan
This article constructs a reverse logistics network under an uncertain environment, integrates the reverse logistics network with the distribution network, and forms a closed-loop network. A cost-based optimization model is established to help the intermediate center, manufacturing center and remanufacturing center make location decisions. A grey model GM(1,1) is used to predict the product holdings of the collection points; the prediction results are then fed into the cost optimization model and a solution is obtained. Finally, an example is given to verify the effectiveness and feasibility of the model.
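The GM(1,1) model referenced above has a standard closed-form construction: accumulate the series (AGO), fit the two grey parameters by least squares over the background values, then extrapolate and difference back. A sketch (the collection-point holdings are hypothetical, not from the paper):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey forecast: fit a first-order grey model to a short
    series x0 and extrapolate `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.diff(x1_hat, prepend=0.0)    # inverse AGO recovers the series
    return x0_hat[n:]

holdings = [120, 130, 141, 153]              # hypothetical collection-point holdings
print(gm11_forecast(holdings, steps=1))      # next-period holdings estimate
```

GM(1,1) is designed for exactly this situation: very short, roughly exponential series where classical time-series models have too little data to fit.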
Depmann, Martine; Broer, Simone L; van der Schouw, Yvonne T; Tehrani, Fahimeh R; Eijkemans, Marinus J; Mol, Ben W; Broekmans, Frank J
2016-02-01
This review aimed to appraise data on prediction of age at natural menopause (ANM) based on antimüllerian hormone (AMH), antral follicle count (AFC), and mother's ANM to evaluate clinical usefulness and to identify directions for further research. We conducted three systematic reviews of the literature to identify studies of menopause prediction based on AMH, AFC, or mother's ANM, corrected for baseline age. Six studies selected in the search for AMH all consistently demonstrated AMH as being capable of predicting ANM (hazard ratio, 5.6-9.2). The sole study reporting on mother's ANM indicated that mother's ANM was capable of predicting ANM (hazard ratio, 9.1-9.3). Two studies provided analyses of AFC and yielded conflicting results, making this marker less strong. AMH is currently the most promising marker for ANM prediction. The predictive capacity of mother's ANM demonstrated in a single study makes this marker a promising contributor to AMH for menopause prediction. Models, however, do not predict the extremes of menopause age very well and have wide prediction intervals. These markers clearly need improvement before they can be used for individual prediction of menopause in the clinical setting. Moreover, potential limitations for such use include variations in the AMH assays used and a lack of correction for factors or diseases affecting AMH levels or ANM. Future studies should include women of a broad age range (irrespective of cycle regularity) and should base predictions on repeated AMH measurements. Furthermore, currently unknown candidate predictors need to be identified.
An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks
Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime
2014-01-01
We propose an adaptive handover prediction (AHP) scheme for seamless mobility-based wireless networks. The AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, the mobile node's relative direction towards the access points in the vicinity, and access point load, are collected and considered as inputs to the fuzzy decision-making system in order to select the most preferable AP among the available WLANs. The handover decision, which is based on a quality cost calculated with the fuzzy inference system, also relies on adaptable rather than fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from the available WLANs, are obtained adaptively. Accordingly, they are applied as statistical information to adjust the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representative prediction approaches. PMID:25574490
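Stripped of the fuzzy membership machinery, the core idea — normalize each metric adaptively by the statistics observed across the candidate APs, then combine them with a weight vector — can be sketched as a crisp stand-in (the metric values and weights below are invented, and this is a simplification of the paper's fuzzy inference system, not its implementation):

```python
import numpy as np

def handover_scores(metrics, weights):
    """Rank candidate APs by a weighted quality score. Each metric is
    z-normalized using the mean/std observed across the candidates,
    mimicking AHP's adaptive (statistics-driven) coefficients.
    Columns: RSS (higher better), direction alignment (higher better),
    AP load (lower better, so it is negated before normalization)."""
    m = np.asarray(metrics, dtype=float)
    m[:, 2] = -m[:, 2]                        # invert load so higher = better
    z = (m - m.mean(axis=0)) / m.std(axis=0)  # adaptive normalization
    return z @ np.asarray(weights)

aps = [[-60, 0.9, 0.7],   # AP0: strong signal, well aligned, but loaded
       [-75, 0.4, 0.2],   # AP1: weak signal, lightly loaded
       [-65, 0.8, 0.3]]   # AP2: balanced
scores = handover_scores(aps, [0.5, 0.3, 0.2])
print(int(np.argmax(scores)))  # index of the preferred AP
```

Because the normalization statistics are recomputed from whatever APs are currently visible, the weighting adapts to the local radio environment rather than relying on fixed thresholds.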
Weighted hybrid technique for recommender system
NASA Astrophysics Data System (ADS)
Suriati, S.; Dwiastuti, Meisyarah; Tulus, T.
2017-12-01
Recommender systems have become very popular and play an important role in information systems and webpages nowadays. A recommender system tries to predict which items a user may like based on his activity on the system. There are some familiar techniques for building a recommender system, such as content-based filtering and collaborative filtering. Content-based filtering does not involve human opinions to make the prediction, while collaborative filtering does, so collaborative filtering can predict more accurately. However, collaborative filtering cannot give predictions for items that have never been rated by any user. In order to cover the drawbacks of each approach with the advantages of the other, both approaches can be combined into what is known as a hybrid technique. The hybrid technique used in this work is a weighted technique in which the prediction score is a linear combination of the scores produced by the combined techniques. The purpose of this work is to show how a weighted hybrid technique combining content-based filtering and item-based collaborative filtering can work in a movie recommender system, and to compare the performance when both approaches are combined and when each approach works alone. Three experiments were done in this work, combining both techniques with different parameters. The results show that the weighted hybrid technique does not substantially boost performance, but it helps to give prediction scores for unrated movies that could not be recommended using collaborative filtering alone.
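The weighted hybrid described above is just a linear blend with a cold-start fallback. A minimal sketch (the weight w=0.6 and the scores are illustrative, not taken from the paper's experiments):

```python
def hybrid_score(content_score, collab_score, w=0.6):
    """Weighted hybrid prediction: a linear blend of content-based and
    collaborative-filtering scores. When CF has no rating for an item
    (cold start), fall back to the content-based score alone."""
    if collab_score is None:
        return content_score
    return w * collab_score + (1 - w) * content_score

print(hybrid_score(3.0, 4.5))   # both scores available -> 0.6*4.5 + 0.4*3.0 = 3.9
print(hybrid_score(3.0, None))  # unrated movie -> content-based score only
```

This fallback is exactly why the hybrid can score movies that pure collaborative filtering cannot: the content-based component needs no ratings history.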
Teeguarden, Justin. G.; Tan, Yu-Mei; Edwards, Stephen W.; Leonard, Jeremy A.; Anderson, Kim A.; Corley, Richard A.; Harding, Anna K; Kile, Molly L.; Simonich, Staci M; Stone, David; Tanguay, Robert L.; Waters, Katrina M.; Harper, Stacey L.; Williams, David E.
2016-01-01
Synopsis Driven by major scientific advances in analytical methods, biomonitoring, computational tools, and a newly articulated vision for a greater impact in public health, the field of exposure science is undergoing a rapid transition from a field of observation to a field of prediction. Deployment of an organizational and predictive framework for exposure science analogous to the “systems approaches” used in the biological sciences is a necessary step in this evolution. Here we propose the Aggregate Exposure Pathway (AEP) concept as the natural and complementary companion in the exposure sciences to the Adverse Outcome Pathway (AOP) concept in the toxicological sciences. Aggregate exposure pathways offer an intuitive framework to organize exposure data within individual units of prediction common to the field, setting the stage for exposure forecasting. Looking farther ahead, we envision direct linkages between aggregate exposure pathways and adverse outcome pathways, completing the source to outcome continuum for more efficient integration of exposure assessment and hazard identification. Together, the two pathways form and inform a decision-making framework with the flexibility for risk-based, hazard-based, or exposure-based decision making. PMID:26759916
An Autonomous Flight Safety System
2008-11-01
are taken. AFSS can take vehicle navigation data from redundant onboard sensors and make flight termination decisions using software-based rules ... implemented on redundant flight processors. By basing these decisions on actual Instantaneous Impact Predictions and by providing for an arbitrary ... number of mission rules, it is the contention of the AFSS development team that the decision making process used by Missile Flight Control Officers
Yude Pan; John Hom; Jennifer Jenkins; Richard Birdsey
2004-01-01
To assess what difference it might make to include spatially defined estimates of foliar nitrogen in the regional application of a forest ecosystem model (PnET-II), we compared model predictions of wood production with extensive ground-based forest inventory analysis data across the Mid-Atlantic region. Spatial variation in foliar N concentration was assigned based on...
An Analysis of a Digital Variant of the Trail Making Test Using Machine Learning Techniques
Dahmen, Jessamyn; Cook, Diane; Fellows, Robert; Schmitter-Edgecombe, Maureen
2017-01-01
BACKGROUND The goal of this work is to develop a digital version of a standard cognitive assessment, the Trail Making Test (TMT), and assess its utility. OBJECTIVE This paper introduces a novel digital version of the TMT and introduces a machine learning based approach to assess its capabilities. METHODS Using digital Trail Making Test (dTMT) data collected from (N=54) older adult participants as feature sets, we use machine learning techniques to analyze the utility of the dTMT and evaluate the insights provided by the digital features. RESULTS Predicted TMT scores correlate well with clinical digital test scores (r=0.98) and paper time to completion scores (r=0.65). Predicted TICS exhibited a small correlation with clinically-derived TICS scores (r=0.12 Part A, r=0.10 Part B). Predicted FAB scores exhibited a small correlation with clinically-derived FAB scores (r=0.13 Part A, r=0.29 for Part B). Digitally-derived features were also used to predict diagnosis (AUC of 0.65). CONCLUSION Our findings indicate that the dTMT is capable of measuring the same aspects of cognition as the paper-based TMT. Furthermore, the dTMT’s additional data may be able to help monitor other cognitive processes not captured by the paper-based TMT alone. PMID:27886019
Eppinger, Ben; Walter, Maik; Li, Shu-Chen
2017-04-01
In this study, we investigated the interplay of habitual (model-free) and goal-directed (model-based) decision processes by using a two-stage Markov decision task in combination with event-related potentials (ERPs) and computational modeling. To manipulate the demands on model-based decision making, we applied two experimental conditions with different probabilities of transitioning from the first to the second stage of the task. As we expected, when the stage transitions were more predictable, participants showed greater model-based (planning) behavior. Consistent with this result, we found that stimulus-evoked parietal (P300) activity at the second stage of the task increased with the predictability of the state transitions. However, the parietal activity also reflected model-free information about the expected values of the stimuli, indicating that at this stage of the task both types of information are integrated to guide decision making. Outcome-related ERP components only reflected reward-related processes: Specifically, a medial prefrontal ERP component (the feedback-related negativity) was sensitive to negative outcomes, whereas a component that is elicited by reward (the feedback-related positivity) increased as a function of positive prediction errors. Taken together, our data indicate that stimulus-locked parietal activity reflects the integration of model-based and model-free information during decision making, whereas feedback-related medial prefrontal signals primarily reflect reward-related decision processes.
Kathleen L. Kavanaugh; Matthew B. Dickinson; Anthony S. Bova
2010-01-01
Current operational methods for predicting tree mortality from fire injury are regression-based models that only indirectly consider underlying causes and, thus, have limited generality. A better understanding of the physiological consequences of tree heating and injury are needed to develop biophysical process models that can make predictions under changing or novel...
A control-theory model for human decision-making
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.
1971-01-01
A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.
State-based versus reward-based motivation in younger and older adults.
Worthy, Darrell A; Cooper, Jessica A; Byrne, Kaileigh A; Gorlick, Marissa A; Maddox, W Todd
2014-12-01
Recent decision-making work has focused on a distinction between a habitual, model-free neural system that is motivated toward actions that lead directly to reward and a more computationally demanding goal-directed, model-based system that is motivated toward actions that improve one's future state. In this article, we examine how aging affects motivation toward reward-based versus state-based decision making. Participants performed tasks in which one type of option provided larger immediate rewards but the alternative type of option led to larger rewards on future trials, or improvements in state. We predicted that older adults would show a reduced preference for choices that led to improvements in state and a greater preference for choices that maximized immediate reward. We also predicted that fits from a hybrid reinforcement-learning model would indicate greater model-based strategy use in younger than in older adults. In line with these predictions, older adults selected the options that maximized reward more often than did younger adults in three of the four tasks, and modeling results suggested reduced model-based strategy use. In the task where older adults showed similar behavior to younger adults, our model-fitting results suggested that this was due to the utilization of a win-stay-lose-shift heuristic rather than a more complex model-based strategy. Additionally, within older adults, we found that model-based strategy use was positively correlated with memory measures from our neuropsychological test battery. We suggest that this shift from state-based to reward-based motivation may be due to age related declines in the neural structures needed for more computationally demanding model-based decision making.
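Hybrid reinforcement-learning models of the kind fitted here typically mix model-based and model-free action values through a single weighting parameter, and the fitted weight indexes each participant's degree of model-based strategy use. A generic sketch (not the authors' exact model; the Q-values and weights are invented for illustration):

```python
def hybrid_q(q_model_based, q_model_free, w):
    """Hybrid RL valuation: weighting parameter w mixes goal-directed
    (model-based) and habitual (model-free) action values; a high fitted
    w indicates greater model-based strategy use."""
    return {a: w * q_model_based[a] + (1 - w) * q_model_free[a]
            for a in q_model_based}

q_mb = {'state_option': 0.8, 'reward_option': 0.4}   # favors future state
q_mf = {'state_option': 0.2, 'reward_option': 0.9}   # favors immediate reward
younger = hybrid_q(q_mb, q_mf, w=0.7)   # high w: more model-based
older = hybrid_q(q_mb, q_mf, w=0.2)     # low w: more reward-driven
print(max(younger, key=younger.get), max(older, key=older.get))
```

With these illustrative values, the high-w agent prefers the state-improving option while the low-w agent prefers the immediately rewarding one, mirroring the age difference the study reports.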
Prediction of high-energy radiation belt electron fluxes using a combined VERB-NARMAX model
NASA Astrophysics Data System (ADS)
Pakhotin, I. P.; Balikhin, M. A.; Shprits, Y.; Subbotin, D.; Boynton, R.
2013-12-01
This study is concerned with the modelling and forecasting of energetic electron fluxes that endanger satellites in space. By combining data-driven predictions from the NARMAX methodology with the physics-based VERB code, it becomes possible to predict electron fluxes with a high level of accuracy and across a radial distance from inside the local acceleration region to out beyond geosynchronous orbit. The model coupling also makes it possible to avoid accounting for seed electron variations at the outer boundary. Conversely, combining a convection code with the VERB and NARMAX models has the potential to provide even greater accuracy in forecasting that is not limited to geostationary orbit but makes predictions across the entire outer radiation belt region.
Laser-Based Trespassing Prediction in Restrictive Environments: A Linear Approach
Cheein, Fernando Auat; Scaglia, Gustavo
2012-01-01
Stationary range laser sensors for intruder monitoring, restricted-space violation detection and workspace determination are extensively used in risky environments. In this work we present a linear approach for predicting the presence of moving agents before they trespass a laser-based restricted space. Our approach is based on a Taylor series expansion of the detected objects' movements, which makes our proposal suitable for embedded applications. In the experimental results (carried out in different scenarios) presented herein, our proposal shows 100% effectiveness in predicting trespassing situations. Several implementation results and statistical analyses showing the performance of our proposal are included in this work.
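A second-order Taylor expansion of an agent's trajectory gives the position prediction directly from estimated position, velocity, and acceleration. A sketch (the boundary and motion values are invented, and the paper's estimator and boundary geometry may differ):

```python
def predict_position(p, v, a, dt):
    """Second-order Taylor expansion of an agent's trajectory:
    p(t+dt) ≈ p + v*dt + 0.5*a*dt**2, applied per coordinate."""
    return [pi + vi * dt + 0.5 * ai * dt * dt for pi, vi, ai in zip(p, v, a)]

def will_trespass(p, v, a, boundary_x, dt):
    """Flag a trespass if the predicted x-coordinate reaches the
    laser-defined boundary within the look-ahead horizon dt."""
    return predict_position(p, v, a, dt)[0] >= boundary_x

# Agent at x=1.0 m accelerating toward a restricted line at x=2.0 m
print(will_trespass(p=[1.0, 0.0], v=[0.8, 0.0], a=[0.4, 0.0],
                    boundary_x=2.0, dt=1.0))  # → True
```

Because the prediction is a fixed polynomial in dt, it needs only a handful of multiply-adds per update, which is what makes such an approach attractive for embedded hardware.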
Prediction of main factors’ values of air transportation system safety based on system dynamics
NASA Astrophysics Data System (ADS)
Spiridonov, A. Yu; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikova, E. V.; Shulga, T. E.; Tverdokhlebov, V. A.; Kushnikov, O. V.; Fominykh, D. S.
2018-05-01
On the basis of the system-dynamic approach [1-8], a set of models has been developed that makes it possible to analyse and predict the values of the main safety indicators for the operation of aviation transport systems.
ERIC Educational Resources Information Center
Yang, Yang; Hu, Jun; Lv, Yingchun; Zhang, Mu
2013-01-01
As the tourism industry has gradually become the strategic mainstay industry of the national economy, the scope of the tourism discipline has developed rigorously. This paper makes a predictive study on the development of the scope of Guangdong provincial tourism discipline based on the artificial neural network BP model in order to find out how…
Predicting missing links in complex networks based on common neighbors and distance
Yang, Jinxuan; Zhang, Xiao-Dong
2016-01-01
Algorithms based on the common-neighbors metric to predict missing links in complex networks are very popular, but most of them do not account for missing links between nodes with no common neighbors. Reconstructing networks with these methods is therefore not accurate enough in some cases, especially when node pairs have few common neighbors. In this paper we propose a new algorithm based on common neighbors and distance to improve the accuracy of link prediction. Our algorithm is remarkably effective in predicting missing links between nodes with no common neighbors and performs better than most existing methods on a variety of real-world networks without increasing complexity. PMID:27905526
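One simple way to combine the two signals is to add a small distance-dependent term to the common-neighbors count, so that node pairs without any common neighbors still receive a graded, nonzero score. This is an illustrative form only; the paper's exact index may differ:

```python
from collections import deque

def shortest_path_len(adj, u, v):
    """BFS shortest-path length between u and v; 0 if unreachable."""
    seen, frontier = {u}, deque([(u, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == v:
            return d
        for nb in adj[node] - seen:
            seen.add(nb)
            frontier.append((nb, d + 1))
    return 0

def cn_distance_score(adj, u, v, eps=0.1):
    """Link-prediction score: |common neighbors| plus a small term that
    decays with shortest-path distance, so pairs with no common
    neighbors are still ranked rather than all scored zero."""
    cn = len(adj[u] & adj[v])
    d = shortest_path_len(adj, u, v)
    return cn + (eps / d if d else 0.0)

# Path graph 0-1-2-3: nodes 0 and 3 share no common neighbors
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(cn_distance_score(adj, 0, 2))   # one common neighbor (node 1)
print(cn_distance_score(adj, 0, 3))   # no CN, but distance keeps it > 0
```

A plain common-neighbors index would score the (0, 3) pair exactly zero, which is the failure mode the distance term is meant to repair.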
Ganga, G M D; Esposto, K F; Braatz, D
2012-01-01
The occupational exposure limits of different risk factors for the development of low back disorders (LBDs) have not yet been established. One of the main problems in setting such guidelines is the limited understanding of how different risk factors for LBDs interact in causing injury, since the nature and mechanism of these disorders are relatively unknown phenomena. The industrial ergonomist's role becomes further complicated because the potential risk factors that may contribute to the onset of LBDs interact in a complex manner, which makes it difficult to discriminate in detail among the jobs that place workers at high or low risk of LBDs. The purpose of this paper was to compare predictions from the neural network-based model proposed by Zurada, Karwowski & Marras (1997) with those from a linear discriminant analysis model for classifying industrial jobs according to their potential risk of low back disorders due to workplace design. The results obtained by applying the discriminant analysis-based model proved it to be as effective as the neural network-based model. Moreover, the discriminant analysis-based model proved to be more advantageous regarding cost and time savings for future data gathering.
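Linear discriminant analysis in the two-class case reduces to Fisher's discriminant: project jobs onto the direction that best separates the class means relative to the within-class scatter, then threshold at the midpoint. A sketch with invented job features (the lift-rate and trunk-flexion numbers are hypothetical, not the study's data):

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Fisher's linear discriminant: direction w = Sw^-1 (m1 - m0) and
    the midpoint threshold between the projected class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) + np.cov(X1.T)          # pooled within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = w @ (m0 + m1) / 2
    return w, thresh

# Hypothetical job features: [lifts per hour, trunk flexion (deg)]
low_risk = np.array([[2, 10], [3, 12], [2, 14], [4, 11]], float)
high_risk = np.array([[9, 40], [11, 42], [10, 38], [12, 45]], float)
w, t = fisher_discriminant(low_risk, high_risk)
new_job = np.array([10.0, 41.0])
print('high' if w @ new_job > t else 'low')   # → high
```

Unlike a trained neural network, the fitted direction w is directly interpretable — each coefficient weights one measured risk factor — which is part of the cost and transparency advantage the paper reports.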
Predictive sufficiency and the use of stored internal state
NASA Technical Reports Server (NTRS)
Musliner, David J.; Durfee, Edmund H.; Shin, Kang G.
1994-01-01
In all embedded computing systems, some delay exists between sensing and acting. By choosing an action based on sensed data, a system is essentially predicting that there will be no significant changes in the world during this delay. However, the dynamic and uncertain nature of the real world can make these predictions incorrect, and thus, a system may execute inappropriate actions. Making systems more reactive by decreasing the gap between sensing and action leaves less time for predictions to err, but still provides no principled assurance that they will be correct. Using the concept of predictive sufficiency described in this paper, a system can prove that its predictions are valid, and that it will never execute inappropriate actions. In the context of our CIRCA system, we also show how predictive sufficiency allows a system to guarantee worst-case response times to changes in its environment. Using predictive sufficiency, CIRCA is able to build real-time reactive control plans which provide a sound basis for performance guarantees that are unavailable with other reactive systems.
ESB-based Sensor Web integration for the prediction of electric power supply system vulnerability.
Stoimenov, Leonid; Bogdanovic, Milos; Bogdanovic-Dinic, Sanja
2013-08-15
Electric power supply companies increasingly rely on enterprise IT systems to provide them with a comprehensive view of the state of the distribution network. Within a utility-wide network, enterprise IT systems collect data from various metering devices. Such data can be effectively used for the prediction of power supply network vulnerability. The purpose of this paper is to present the Enterprise Service Bus (ESB)-based Sensor Web integration solution that we have developed with the purpose of enabling prediction of power supply network vulnerability, in terms of a prediction of defect probability for a particular network element. We will give an example of its usage and demonstrate our vulnerability prediction model on data collected from two different power supply companies. The proposed solution is an extension of the GinisSense Sensor Web-based architecture for collecting, processing, analyzing, decision making and alerting based on the data received from heterogeneous data sources. In this case, GinisSense has been upgraded to be capable of operating in an ESB environment and combines Sensor Web and GIS technologies to enable prediction of electric power supply system vulnerability. Aside from electrical values, the proposed solution gathers ambient values from additional sensors installed in the existing power supply network infrastructure. GinisSense aggregates gathered data according to an adapted Omnibus data fusion model and applies decision-making logic on the aggregated data. Detected vulnerabilities are visualized to end-users by means of a specialized Web GIS application.
ERIC Educational Resources Information Center
Bahadir, Elif
2016-01-01
The purpose of this study is to examine a neural network based approach to predict achievement in graduate education for Elementary Mathematics prospective teachers. With the help of this study, it can be possible to make an effective prediction regarding the students' achievement in graduate education with Artificial Neural Networks (ANN). Two…
An evaluation of NASA's program in human factors research: Aircrew-vehicle system interaction
NASA Technical Reports Server (NTRS)
1982-01-01
Research in human factors in the aircraft cockpit and a proposed program augmentation were reviewed. The dramatic growth of microprocessor technology makes it entirely feasible to automate increasingly more functions in the aircraft cockpit; the promise of improved vehicle performance, efficiency, and safety through automation makes highly automated flight inevitable. However, an organized data base and validated methodology for predicting the effects of automation on human performance, and thus on safety, are lacking; without them, increased automation may introduce new risks. Efforts should be concentrated on developing methods and techniques for analyzing man-machine interactions, including human workload and the prediction of performance.
Yang, Xin-Hua; Huang, Jia; Zhu, Cui-Ying; Wang, Ye-Fei; Cheung, Eric F C; Chan, Raymond C K; Xie, Guang-Rong
2014-12-30
Anhedonia is a hallmark symptom of major depressive disorder (MDD). Preliminary findings suggest that anhedonia is characterized by reduced reward anticipation and motivation to obtain reward. However, relatively little is known about reward-based decision-making in depression. We tested the hypothesis that anhedonia in MDD may reflect specific impairments of motivation in reward-based decision-making and that the deficits might be associated with depressive symptom severity. In study 1, individuals with and without depressive symptoms performed a modified version of the Effort Expenditure for Rewards Task (EEfRT), a behavioral measure of cost/benefit decision-making. In study 2, MDD patients, remitted MDD patients and healthy controls were recruited for the same procedures. We found evidence for decreased willingness to make effort for rewards among individuals with subsyndromal depression; the effect was amplified in MDD patients, but dissipated in patients with remitted depression. We also found that reduced anticipatory and consummatory pleasure predicted decreased willingness to expend effort to obtain rewards in MDD patients. For individuals with subsyndromal depression, the impairments were correlated with anticipatory anhedonia but not consummatory anhedonia. These data offer novel evidence that motivational deficits in MDD are correlated with depression severity and predicted by self-reported anhedonia. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
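The cost/benefit trade-off probed by the EEfRT can be sketched as a simple expected-value comparison. This is only an illustration of the logic, not the study's model; the effort-cost parameter standing in for reduced motivation, and all trial values, are invented.

```python
# Illustrative sketch of the cost/benefit logic probed by the EEfRT:
# choose the hard, high-reward option only if its expected value still
# beats the easy option after subtracting a subjective effort cost.
# The effort-cost parameter is an assumption for illustration, not a
# quantity estimated in the study.

def choose_hard(p_win, hard_reward, easy_reward, effort_cost):
    """True if the hard option is worth the extra effort."""
    return p_win * hard_reward - effort_cost > p_win * easy_reward

trial = dict(p_win=0.5, hard_reward=4.00, easy_reward=1.00)
print(choose_hard(**trial, effort_cost=0.5))   # intact motivation
print(choose_hard(**trial, effort_cost=2.0))   # elevated effort cost
```

On this reading, higher anhedonia acts like a larger effort cost, shifting choices away from the hard, high-reward option.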
Nemes, Szilard; Rolfson, Ola; Garellick, Göran
2018-02-01
Clinicians considering improvements in health-related quality of life (HRQoL) after total hip replacement (THR) must account for multiple pieces of information. Evidence-based decisions are important to best assess the effect of THR on HRQoL. This work aims at constructing a shared decision-making tool that helps clinicians assess the future benefits of THR by offering predictions of 1-year postoperative HRQoL of THR patients. We used data from the Swedish Hip Arthroplasty Register. Data from 2008 were used as the training set and data from 2009 to 2012 as the validation set. We adopted two approaches. First, we assumed a continuous distribution for the EQ-5D index and modelled the postoperative EQ-5D index with regression models. Second, we modelled the five dimensions of the EQ-5D and weighted together the predictions using the UK Time Trade-Off value set. As predictors, we used preoperative EQ-5D dimensions and the EQ-5D index, EQ visual analogue scale, visual analogue scale pain, Charnley classification, age, gender, body mass index, American Society of Anesthesiologists classification, surgical approach and prosthesis type. Additionally, the tested algorithms were combined into a single predictive tool by stacking. The best predictive power was obtained by multivariate adaptive regression splines (R² = 0.158). However, this was not significantly better than the predictive power of linear regression (R² = 0.157). The stacked model had a predictive power of 17%. Successful implementation of a shared decision-making tool that can aid clinicians and patients in understanding expected improvement in HRQoL following THR would require higher predictive power than we achieved. For a shared decision-making tool to succeed, further variables, such as socioeconomics, need to be considered. © 2016 John Wiley & Sons, Ltd.
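The stacking idea and the R² evaluation used above can be sketched in a few lines. The data and base-model predictions below are invented illustration values, not register data, and the fixed-weight blend is a simplification of learned stacking.

```python
# Hypothetical sketch of model stacking: combine the predictions of
# several base regressors into one predictive tool and score it with
# R^2. Numbers are invented, not from the Swedish Hip Arthroplasty
# Register.

def r_squared(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Out-of-sample predictions of postoperative EQ-5D from two base models.
y_true  = [0.70, 0.80, 0.65, 0.90, 0.75]
linear  = [0.72, 0.78, 0.70, 0.85, 0.74]   # e.g. linear regression
splines = [0.68, 0.82, 0.66, 0.88, 0.77]   # e.g. adaptive splines

# Simple stacking: a fixed-weight blend of the base predictions.
stacked = [0.5 * a + 0.5 * b for a, b in zip(linear, splines)]
print(round(r_squared(y_true, stacked), 3))
```

In practice the stacking weights would themselves be fit on held-out data rather than fixed at 0.5.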
Fechner, Hanna B; Pachur, Thorsten; Schooler, Lael J; Mehlhorn, Katja; Battal, Ceren; Volz, Kirsten G; Borst, Jelmer P
2016-12-01
How do people use memories to make inferences about real-world objects? We tested three strategies based on predicted patterns of response times and blood-oxygen-level-dependent (BOLD) responses: one strategy that relies solely on recognition memory, a second that retrieves additional knowledge, and a third, lexicographic (i.e., sequential) strategy, that considers knowledge conditionally on the evidence obtained from recognition memory. We implemented the strategies as computational models within the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture, which allowed us to derive behavioral and neural predictions that we then compared to the results of a functional magnetic resonance imaging (fMRI) study in which participants inferred which of two cities is larger. Overall, versions of the lexicographic strategy, according to which knowledge about many but not all alternatives is searched, provided the best account of the joint patterns of response times and BOLD responses. These results provide insights into the interplay between recognition and additional knowledge in memory, hinting at an adaptive use of these two sources of information in decision making. The results highlight the usefulness of implementing models of decision making within a cognitive architecture to derive predictions on the behavioral and neural level. Copyright © 2016 Elsevier B.V. All rights reserved.
Virtual Beach (VB) is a decision support tool that constructs site-specific statistical models to predict fecal indicator bacteria (FIB) at recreational beaches. Although primarily designed for making decisions regarding beach closures or issuance of swimming advisories based on...
NASA Astrophysics Data System (ADS)
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combination free replacement and pro rata warranty policy is analysed as warranty model for one type of light bulbs. Since operating conditions have a great impact on product reliability, they need to be considered in such analysis. A neural network model is used to predict light bulb reliability characteristics based on the data from the tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation for the prediction of times to failure needed for warranty cost calculation. The results of the analysis make possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In such a way, the manufacturer can lower the costs and increase the profit.
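The Monte Carlo step of the analysis can be sketched as follows, for a combined free-replacement and pro-rata policy. As a simplifying assumption, exponential lifetimes stand in for the neural-network reliability estimates, and all prices and horizons are invented.

```python
import random

# Monte Carlo sketch of warranty-cost prediction under a combined
# policy: free replacement up to time w1, then a pro-rata rebate that
# declines linearly to zero at time w. Exponential lifetimes are an
# assumption standing in for the paper's NN reliability model.

def warranty_cost(price, w1, w, mean_life, n=100_000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = rng.expovariate(1.0 / mean_life)   # sampled time to failure
        if t < w1:                             # free replacement
            total += price
        elif t < w:                            # pro-rata rebate
            total += price * (w - t) / (w - w1)
    return total / n                           # expected cost per unit

print(warranty_cost(price=5.0, w1=0.5, w=1.5, mean_life=3.0))
```

Comparing this expected cost across candidate (w1, w) pairs is how the manufacturer would choose a warranty policy for given operating conditions.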
Binedell, J; Soldan, J R; Scourfield, J; Harper, P S
1996-01-01
Adolescents who are actively requesting Huntington's predictive testing of their own accord pose a dilemma to those providing testing. In the absence of empirical evidence as regards the impact of genetic testing on minors, current policy and guidelines, based on the ethical principles of non-maleficence and respect for individual autonomy and confidentiality, generally exclude the testing of minors. It is argued that adherence to an age based exclusion criterion in Huntington's disease predictive testing protocols is out of step with trends in UK case law concerning minors' consent to medical treatment. Furthermore, contributions from developmental psychology and research into adolescents' decision making competence suggest that adolescents can make informed choices about their health and personal lives. Criteria for developing an assessment approach to such requests are put forward and the implications of a case by case evaluation of competence to consent in terms of clinicians' tolerance for uncertainty are discussed. PMID:8950670
NASA Astrophysics Data System (ADS)
Asencio-Cortés, G.; Morales-Esteban, A.; Shang, X.; Martínez-Álvarez, F.
2018-06-01
Earthquake magnitude prediction is a challenging problem that has been widely studied during the last decades. Statistical, geophysical and machine learning approaches can be found in the literature, with no particularly satisfactory results. In recent years, powerful computational techniques to analyze big data have emerged, making possible the analysis of massive datasets. These new methods make use of physical resources like cloud-based architectures. California is known for being one of the regions with the highest seismic activity in the world, and abundant data are available. In this work, the use of several regression algorithms combined with ensemble learning is explored in the context of big data (a 1 GB catalog is used), in order to predict earthquake magnitudes within the next seven days. The Apache Spark framework, the H2O library in R and Amazon cloud infrastructure were used, reporting very promising results.
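The ensemble-learning component can be illustrated apart from the big-data infrastructure: averaging the outputs of several base regressors reduces error when the models' individual errors partly cancel. All magnitudes below are invented numbers, not catalog data.

```python
# Toy illustration of regression ensembling: the averaged prediction
# can have lower mean squared error than either base model when their
# errors point in opposite directions.

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

true = [4.0, 5.2, 3.1]                       # observed magnitudes
m1 = [4.3, 5.0, 3.4]                         # base regressor 1
m2 = [3.8, 5.5, 2.9]                         # base regressor 2
ens = [(a + b) / 2 for a, b in zip(m1, m2)]  # averaged ensemble

print(mse(m1, true), mse(m2, true), mse(ens, true))
```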
Forecasting Construction Cost Index based on visibility graph: A network approach
NASA Astrophysics Data System (ADS)
Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong
2018-03-01
Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties of CCI occasionally lead to unreliable estimates. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are forecast based on link prediction. According to the experimental results, the proposed method shows satisfactory performance, with acceptable error measures. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with smaller errors. The results suggest that the proposed method can efficiently provide considerably accurate CCI predictions, contributing to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
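The first step of the approach, converting a time series into a natural visibility graph, can be sketched directly: two time points are linked if the straight line between them clears every intermediate point (forecasting then proceeds by link prediction on this graph). The series values below are invented, not CCI data.

```python
# Natural visibility graph: node c blocks the edge (a, b) if its value
# reaches the straight line from (a, series[a]) to (b, series[b]).

def visibility_edges(series):
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            def line(c):
                return series[a] + (series[b] - series[a]) * (c - a) / (b - a)
            if all(series[c] < line(c) for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

print(sorted(visibility_edges([1, 5, 2, 3, 4])))
```

Note that the peak at index 1 sees every later point, while the edge (2, 4) is blocked because the point at index 3 touches the connecting line.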
Maintaining homeostasis by decision-making.
Korn, Christoph W; Bach, Dominik R
2015-05-01
Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that--in both the foraging and the casino frames--participants' choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization.
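The endpoint distributions derived from the mathematics of random walks can be sketched with a short dynamic program over energy levels with an absorbing barrier at zero. The step size, gain probability and horizon below are invented illustration values, not the task's actual statistics.

```python
# Random walk over energy with an absorbing "starvation" barrier at 0:
# each step gains +1 energy with probability p, else loses 1.

def endpoint_distribution(start, p, steps):
    """Distribution over energy levels after `steps` steps."""
    dist = {start: 1.0}
    for _ in range(steps):
        nxt = {}
        for energy, prob in dist.items():
            if energy == 0:                    # starved: absorbing state
                nxt[0] = nxt.get(0, 0.0) + prob
            else:
                nxt[energy + 1] = nxt.get(energy + 1, 0.0) + prob * p
                nxt[energy - 1] = nxt.get(energy - 1, 0.0) + prob * (1 - p)
        dist = nxt
    return dist

dist = endpoint_distribution(start=2, p=0.5, steps=4)
print(dist.get(0, 0.0))    # probability of starvation
```

Running this for each foraging environment yields both the starvation probability and the expected endpoint value on which, per the model comparison, participants' choices jointly depend.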
Heuristic-based information acquisition and decision making among pilots.
Wiggins, Mark W; Bollwerk, Sandra
2006-01-01
This research was designed to examine the impact of heuristic-based approaches to the acquisition of task-related information on the selection of an optimal alternative during simulated in-flight decision making. The work integrated features of naturalistic and normative decision making and strategies of information acquisition within a computer-based, decision support framework. The study comprised two phases, the first of which involved familiarizing pilots with three different heuristic-based strategies of information acquisition: frequency, elimination by aspects, and majority of confirming decisions. The second phase enabled participants to choose one of the three strategies of information acquisition to resolve a fourth (choice) scenario. The results indicated that task-oriented experience, rather than the information acquisition strategies, predicted the selection of the optimal alternative. It was also evident that of the three strategies available, the elimination by aspects information acquisition strategy was preferred by most participants. It was concluded that task-oriented experience, rather than the process of information acquisition, predicted task accuracy during the decision-making task. It was also concluded that pilots have a preference for one particular approach to information acquisition. Applications of outcomes of this research include the development of decision support systems that adapt to the information-processing capabilities and preferences of users.
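The elimination-by-aspects strategy that most participants preferred can be sketched as follows: examine cues in order of importance and drop alternatives that fail each cue until one option remains. The diversion-airport options and cues are invented for illustration.

```python
# Illustrative elimination-by-aspects strategy. A cue never eliminates
# every remaining option; the search stops when one option is left.

def eliminate_by_aspects(options, cues):
    """options: name -> {cue: passes?}; cues: most important first."""
    remaining = list(options)
    for cue in cues:
        passed = [o for o in remaining if options[o][cue]]
        if passed:
            remaining = passed
        if len(remaining) == 1:
            break
    return remaining

airports = {
    "ALPHA":   {"weather_ok": True,  "fuel_ok": True,  "long_runway": False},
    "BRAVO":   {"weather_ok": True,  "fuel_ok": False, "long_runway": True},
    "CHARLIE": {"weather_ok": False, "fuel_ok": True,  "long_runway": True},
}
print(eliminate_by_aspects(airports, ["weather_ok", "fuel_ok", "long_runway"]))
```

Because the strategy is lexicographic, the cue ordering fully determines the outcome; it never trades a failed important cue against passed minor ones.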
ERIC Educational Resources Information Center
Hourigan, Mairéad; Leavy, Aisling
2016-01-01
As part of Japanese Lesson study research focusing on "comparing and describing likelihoods", fifth grade elementary students used real-world data in decision-making. Sporting statistics facilitated opportunities for informal inference, where data were used to make and justify predictions.
GOING DUTCH WHY THE DUTCH DO NOT SPEND 2% GDP ON DEFENSE
2017-04-06
considers four major changes in the external environment that influence coalition decision-making. Combined with the party programs and the Advocation...benefit their constituents, rather than make unfavorable decisions to heighten the Defense budget. Additionally, this paper will predict that based...explains the dynamics in coalition decision-making and has three major elements: the policy subsystem, advocacy coalitions that act within the
Sun, Xiyang; Miao, Jiacheng; Wang, You; Luo, Zhiyuan; Li, Guang
2017-01-01
An estimate of the reliability of predictions in electronic nose applications is essential, but this issue has not received enough attention. An algorithm framework called conformal prediction is introduced in this work for discriminating different kinds of ginsengs with a home-made electronic nose instrument. A nonconformity measure based on k-nearest neighbors (KNN) is implemented separately as the underlying algorithm of conformal prediction. In offline mode, the conformal predictor achieves a classification rate of 84.44% based on 1NN and 80.63% based on 3NN, which is better than that of simple KNN. In addition, it provides an estimate of reliability for each prediction. In online mode, the validity of predictions is guaranteed, which means that the error rate of region predictions never exceeds the significance level set by the user. The potential of this framework for detecting borderline examples and outliers in E-nose applications is also investigated. The results show that conformal prediction is a promising framework for electronic nose applications that require predictions with reliability and validity. PMID:28805721
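A minimal sketch of conformal prediction with a nearest-neighbor nonconformity measure is shown below: the score is the distance to the nearest same-label example divided by the distance to the nearest other-label example, and a label enters the prediction region when its p-value exceeds the significance level. The one-dimensional "sensor" readings are invented stand-ins for e-nose features, and the leave-one-out calibration is one simple variant, not necessarily the paper's exact protocol.

```python
# Conformal region prediction with a 1NN-ratio nonconformity measure.

def nonconformity(x, label, examples):
    same = min(abs(x - xi) for xi, yi in examples if yi == label)
    other = min(abs(x - xi) for xi, yi in examples if yi != label)
    return same / other

def region_prediction(x, train, labels, eps):
    """All labels whose conformal p-value exceeds significance eps."""
    # Leave-one-out calibration scores for the training examples.
    cal = [nonconformity(xi, yi, [t for t in train if t != (xi, yi)])
           for xi, yi in train]
    out = []
    for lab in labels:
        s = nonconformity(x, lab, train)
        p = (sum(1 for c in cal if c >= s) + 1) / (len(cal) + 1)
        if p > eps:
            out.append(lab)
    return out

train = [(1.0, "A"), (1.2, "A"), (3.0, "B"), (3.3, "B")]
print(region_prediction(1.1, train, ["A", "B"], eps=0.2))
```

Lowering eps makes the region more cautious (possibly multi-label), which is exactly how validity is traded against efficiency in conformal prediction.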
Cabrera, Daniel; Thomas, Jonathan F; Wiswell, Jeffrey L; Walston, James M; Anderson, Joel R; Hess, Erik P; Bellolio, M Fernanda
2015-09-01
Current cognitive science describes decision-making using the dual-process theory, where System 1 is intuitive and System 2 is hypothetico-deductive. We aim to compare the performance of these systems in determining patient acuity, disposition and diagnosis. Prospective observational study of emergency physicians assessing patients in the emergency department of an academic center. Physicians were provided the patient's chief complaint and vital signs and allowed to observe the patient briefly. They were then asked to predict acuity, final disposition (home, intensive care unit (ICU), non-ICU bed) and diagnosis. A patient was classified as sick by the investigators using previously published objective criteria. We obtained 662 observations from 289 patients. For acuity, the observers had a sensitivity of 73.9% (95% CI [67.7-79.5%]), specificity 83.3% (95% CI [79.5-86.7%]), positive predictive value 70.3% (95% CI [64.1-75.9%]) and negative predictive value 85.7% (95% CI [82.0-88.9%]). For final disposition, the observers made a correct prediction in 80.8% (95% CI [76.1-85.0%]) of the cases. For ICU admission, emergency physicians had a sensitivity of 33.9% (95% CI [22.1-47.4%]) and a specificity of 96.9% (95% CI [94.0-98.7%]). The correct diagnosis was made 54% of the time with the limited data available. System 1 decision-making based on limited information had a sensitivity close to 80% for acuity and disposition prediction, but the performance was lower for predicting ICU admission and diagnosis. System 1 decision-making appears insufficient for final decisions in these domains but likely provides a cognitive framework for System 2 decision-making.
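The reported operating characteristics all follow from a standard 2x2 confusion table. The counts below are invented so that the derived values approximate the percentages in the abstract; they are not the study's actual data.

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 confusion table.
# Counts are illustrative reconstructions, not the study's data.

def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

d = diagnostics(tp=170, fp=72, fn=60, tn=360)   # 662 observations total
for name, value in d.items():
    print(f"{name}: {value:.1%}")
```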
Predicting explorative motor learning using decision-making and motor noise.
Chen, Xiuli; Mohr, Kieran; Galea, Joseph M
2017-04-01
A fundamental problem faced by humans is learning to select motor actions based on noisy sensory information and incomplete knowledge of the world. Recently, a number of authors have asked whether this type of motor learning problem might be very similar to a range of higher-level decision-making problems. If so, participant behaviour on a high-level decision-making task could be predictive of their performance during a motor learning task. To investigate this question, we studied performance during an explorative motor learning task and a decision-making task which had a similar underlying structure with the exception that it was not subject to motor (execution) noise. We also collected an independent measurement of each participant's level of motor noise. Our analysis showed that explorative motor learning and decision-making could be modelled as the (approximately) optimal solution to a Partially Observable Markov Decision Process bounded by noisy neural information processing. The model was able to predict participant performance in motor learning by using parameters estimated from the decision-making task and the separate motor noise measurement. This suggests that explorative motor learning can be formalised as a sequential decision-making process that is adjusted for motor noise, and raises interesting questions regarding the neural origin of explorative motor learning.
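The core of a partially observable decision process is Bayesian belief updating under observation noise followed by expected-reward action selection. The sketch below is far simpler than the bounded-optimal POMDP model in the study (two states, one observation, no sequential planning), and all probabilities and rewards are invented.

```python
# Minimal partially-observable sketch: update a belief over two hidden
# states with Bayes' rule, then pick the action with highest expected
# reward under the updated belief.

def bayes_update(belief, likelihoods):
    post = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

belief = [0.5, 0.5]              # prior over hidden states s0, s1
obs_lik = [0.8, 0.3]             # P(observation | state)
belief = bayes_update(belief, obs_lik)

rewards = {"left": [1.0, 0.0],   # reward of each action per state
           "right": [0.0, 1.0]}
best = max(rewards, key=lambda a: sum(b * r for b, r in zip(belief, rewards[a])))
print(best)
```

In the full model, motor (execution) noise would additionally corrupt the mapping from the chosen action to the realized one, which is what the decision-making task in the study strips away.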
Evidence for an Explanation Advantage in Naive Biological Reasoning
ERIC Educational Resources Information Center
Legare, Cristine H.; Wellman, Henry M.; Gelman, Susan A.
2009-01-01
The present studies compare young children's explanations and predictions for the biological phenomenon of contamination. In Study 1, 36 preschoolers and 24 adults heard vignettes concerning contamination, and were asked either to make a prediction or to provide an explanation. Even 3-year-olds readily supplied contamination-based explanations,…
Virtual Beach (VB) is a decision support tool that constructs site-specific statistical models to predict fecal indicator bacteria (FIB) at locations of exposure. Although primarily designed for making decisions regarding beach closures or issuance of swimming advisories based on...
A review of propeller noise prediction methodology: 1919-1994
NASA Technical Reports Server (NTRS)
Metzger, F. Bruce
1995-01-01
This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available which vary markedly in complexity. Deficiencies in accuracy of methods in many cases may be related, not to the methods themselves, but to the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy-to-use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods against the data base; and (7) make the methods widely available and provide training in their use.
The complex contribution of sociodemographics to decision-making power in gay male couples
Perry, Nicholas S.; Huebner, David M.; Baucom, Brian R. W.; Hoff, Colleen C.
2016-01-01
Relationship power is an important dyadic construct in close relationships that is associated with relationship health and partner’s individual health. Understanding what predicts power in heterosexual couples has proven difficult, and even less is known about gay couples. Resource models of power posit that demographic characteristics associated with social status (e.g., age, income) confer power within the relationship, which in turn shapes relationship outcomes. We tested this model in a sample of gay male couples (N=566 couples), and extended it by examining race and HIV status. Multilevel modeling was used to test associations between demographic bases of power and decision-making power. We also examined relative associations among demographic bases and decision-making power with relationship satisfaction, given the literature on power imbalances and overall relationship functioning. Results showed that individual income was positively associated with decision-making power, as was participant’s HIV status, with HIV-positive men reporting greater power. Age differences within the relationship interacted with relationship length to predict decision-making power, but not satisfaction. HIV-concordant positive couples were less satisfied than concordant negative couples. Higher power partners were less satisfied than lower power partners. Demographic factors contributing to decision-making power among same-sex male couples appear to share some similarities with heterosexual couples (e.g., income is associated with power), as well as have unique features (e.g., HIV status influences power). However, these same demographics did not reliably predict relationship satisfaction in the manner that existing power theories suggest. Findings indicate important considerations for theories of power among same-sex male couples. PMID:27606937
The syntactic complexity of Russian relative clauses
Fedorenko, Evelina; Gibson, Edward
2012-01-01
Although syntactic complexity has been investigated across dozens of studies, the available data still greatly underdetermine relevant theories of processing difficulty. Memory-based and expectation-based theories make opposite predictions regarding fine-grained time course of processing difficulty in syntactically constrained contexts, and each class of theory receives support from results on some constructions in some languages. Here we report four self-paced reading experiments on the online comprehension of Russian relative clauses together with related corpus studies, taking advantage of Russian’s flexible word order to disentangle predictions of competing theories. We find support for key predictions of memory-based theories in reading times at RC verbs, and for key predictions of expectation-based theories in processing difficulty at RC-initial accusative noun phrase (NP) objects, which corpus data suggest should be highly unexpected. These results suggest that a complete theory of syntactic complexity must integrate insights from both expectation-based and memory-based theories. PMID:24711687
Overlapping Risky Decision-Making and Olfactory Processing Ability in HIV-Infected Individuals.
Jackson, Christopher; Rai, Narayan; McLean, Charlee K; Hipolito, Maria Mananita S; Hamilton, Flora Terrell; Kapetanovic, Suad; Nwulia, Evaristus A
2017-09-01
Given neuroimaging evidence of overlap in the circuitries for decision-making and olfactory processing, we examined the hypothesis that impairment in psychophysical tasks of olfaction would independently predict poor performance on the Iowa Gambling Task (IGT), a laboratory task that closely mimics real-life decision-making, in a US cohort of HIV-infected (HIV+) individuals. The IGT and psychophysical tasks of olfaction were administered to a Washington DC-based cohort of largely African American HIV+ subjects (N=100), and to a small number of demographically-matched non-HIV healthy controls (N=43) from a different study. Constructs of olfactory ability and decision-making were examined through confirmatory factor analysis (CFA). Structural equation models (SEMs) were used to evaluate the validity of the path relationship between these two constructs. The 100 HIV+ participants (56% female; 96% African Americans; median age = 48 years) had a median CD4 count of 576 cells/μl and median HIV RNA viral load <48 copies per milliliter. The majority of HIV+ participants performed randomly throughout the course of the IGT tasks, and failed to demonstrate a learning curve. Confirmatory factor analysis provided support for a unidimensional factor underlying poor performance on the IGT. Nomological validity for correlations between olfactory ability and IGT performance was confirmed through SEM. Finally, factor scores of olfactory ability and IGT performance strongly predicted 6-month history of drug use, while olfaction additionally predicted hallucinogen use. This study suggests that a combination of simple, office-based tasks of olfaction and decision-making may identify those HIV+ individuals who are more prone to risky decision-making. This finding may have significant clinical and public health value if joint impairment in olfaction and IGT performance correlates with reduced activity in brain regions relevant to decision-making.
Analysis of Phoenix Anomalies and IV and V Findings Applied to the GRAIL Mission
NASA Technical Reports Server (NTRS)
Larson, Steve
2012-01-01
Analysis of patterns in IV&V findings and their correlation with post-launch anomalies allowed GRAIL to make more efficient use of IV&V services: fewer issues, a higher fix rate, better communication, and an increased volume of potential issues vetted at lower cost. It is hard to make predictions of post-launch performance based on IV&V findings. Phoenix made sound fix/use-as-is decisions: the things that were fixed eliminated some problems, though the effect is hard to quantify. There was broad predictive success in one area, but an inverse relationship in others.
Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro
2017-05-01
Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues concerning the use of simple heuristics and the underlying psychological processes are discussed. Copyright © 2016 Cognitive Science Society, Inc.
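The attribute-substitution prediction can be sketched as a conditional rule: when the knowledge-based comparison feels subjectively hard (cue values are close), the inference falls back on a familiarity cue. All cue values, city names and the difficulty threshold below are invented.

```python
# Sketch of attribute substitution in binary choice inference:
# knowledge is used when the item is subjectively easy, familiarity
# when it is subjectively hard. Values and threshold are invented.

def infer_larger(a, b, knowledge, familiarity, threshold=0.2):
    ka, kb = knowledge[a], knowledge[b]
    if abs(ka - kb) >= threshold:             # subjectively easy item
        return (a if ka > kb else b), "knowledge"
    fa, fb = familiarity[a], familiarity[b]   # subjectively hard item
    return (a if fa > fb else b), "familiarity"

knowledge = {"X": 0.9, "Y": 0.3, "Z": 0.85}
familiarity = {"X": 0.95, "Y": 0.4, "Z": 0.6}
print(infer_larger("X", "Y", knowledge, familiarity))
print(infer_larger("X", "Z", knowledge, familiarity))
```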
Risk prediction model: Statistical and artificial neural network approach
NASA Astrophysics Data System (ADS)
Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim
2017-04-01
Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, and the development and validation process of, such models. A qualitative review of the aims, methods, and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was performed. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
NASA Astrophysics Data System (ADS)
Hardinata, Lingga; Warsito, Budi; Suparti
2018-05-01
The complexity of bankruptcy makes accurate bankruptcy prediction difficult to achieve. Various prediction models have been developed to improve the accuracy of bankruptcy predictions. Machine learning has been widely used for prediction because of its adaptive capabilities. Artificial Neural Networks (ANN) are a branch of machine learning that has proved able to perform inference tasks such as prediction and classification, especially in data mining. In this paper, we propose the implementation of Jordan Recurrent Neural Networks (JRNN) to classify and predict corporate bankruptcy based on financial ratios. The feedback interconnection in a JRNN enables the network to retain important information, allowing it to work more effectively. The analysis showed that JRNN performs very well in bankruptcy prediction, with an average success rate of 81.3785%.
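The Jordan feedback loop described above (the previous output fed back into the hidden layer as a context input) can be sketched in a few lines. This is an illustrative toy forward pass with random weights and hypothetical financial-ratio inputs, not the trained network or data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class JordanRNN:
    """Minimal Jordan recurrent network: the context unit feeds the
    previous *output* (not the hidden state) back into the hidden layer."""
    def __init__(self, n_in, n_hidden, n_out=1):
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_in))
        self.W_ctx = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b_h = np.zeros(n_hidden)
        self.W_out = rng.normal(0, 0.5, (n_out, n_hidden))
        self.b_o = np.zeros(n_out)

    def forward(self, sequence):
        context = np.zeros(self.W_out.shape[0])   # previous output y_{t-1}
        outputs = []
        for x in sequence:
            h = np.tanh(self.W_in @ x + self.W_ctx @ context + self.b_h)
            y = sigmoid(self.W_out @ h + self.b_o)
            outputs.append(y)
            context = y                            # Jordan feedback loop
        return np.array(outputs)

# Hypothetical sequence of yearly financial-ratio vectors for one firm
ratios = rng.normal(size=(5, 4))                   # 5 years, 4 ratios
net = JordanRNN(n_in=4, n_hidden=8)
p_bankrupt = net.forward(ratios)[-1, 0]            # probability in (0, 1)
print(round(float(p_bankrupt), 3))
```

In a real application the weights would of course be trained (e.g., by backpropagation through time) against labeled bankrupt/non-bankrupt firms.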
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in only 2.87% of trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward-predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
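The null-hypothesis comparison above can be illustrated with a binomial tail probability: if alarms were placed at random, how often would they "catch" at least k of n earthquakes? The alarm fraction below is a made-up placeholder, since the abstract does not state the fraction of space-time covered by M8 alarms:

```python
from math import comb

def tail_prob(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that random alarms
    covering a fraction p of space-time 'predict' at least k of n quakes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical alarm fraction: suppose alarms cover 50% of space-time.
print(round(tail_prob(10, 8, 0.5), 4))   # P(>= 8 of 10 hits by chance) -> 0.0547
```

The published 2.87% and 53% figures come from the actual (and more involved) randomization over the real alarm geometry; this sketch only shows the shape of the calculation.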
Andersson, Jesper L.R.; Sotiropoulos, Stamatios N.
2015-01-01
Diffusion MRI offers great potential in studying the human brain microstructure and connectivity. However, diffusion images are marred by technical problems, such as image distortions and spurious signal loss. Correcting for these problems is non-trivial and relies on having a mechanism that predicts what to expect. In this paper we describe a novel way to represent and make predictions about diffusion MRI data. It is based on a Gaussian process on one or several spheres similar to the Geostatistical method of “Kriging”. We present a choice of covariance function that allows us to accurately predict the signal even from voxels with complex fibre patterns. For multi-shell data (multiple non-zero b-values) the covariance function extends across the shells which means that data from one shell is used when making predictions for another shell. PMID:26236030
QSAR Classification Model for Antibacterial Compounds and Its Use in Virtual Screening
2012-09-26
test set molecules that were not used to train the models. This allowed us to more accurately estimate the prediction power of the models. As... pathogens and deposited in PubChem Bioassays. Ultimately, the main purpose of this model is to make predictions, based on known antibacterial and non... the model built from the remaining compounds is used to predict the left-out compound. Once all the compounds pass through this cycle of prediction, a
Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary
2013-01-16
Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.
Making predictions skill level analysis
NASA Astrophysics Data System (ADS)
Katarína, Krišková; Marián, Kireš
2017-01-01
The current trend in education is focused on skills that are cross-subject and of great importance for pupils' future lives. Pupils should acquire different types of skills during their education to be prepared for future careers and life in the 21st century. Physics as a subject offers many opportunities for developing pupils' skills. One of the skills expected to be developed in physics, and also in the other sciences, is making predictions. Prediction, in the sense of an argument about what may happen in the future, is an integral part of empirical cognition, in which students confront existing knowledge and experience with new, hitherto unknown and surprising phenomena. The highest level of the skill is the formulation of hypotheses, which is required in upper secondary physics education. In this contribution, the prediction skill is specified and its possible levels are classified. The authors focus on tools for determining skill level based on the analysis of pupils' worksheets. The worksheets are part of the educational activities conducted within the Inquiry Science Laboratory Steelpark. From the formulation of pupils' predictions, their thinking can be seen, along with their understanding of the topic as well as their preconceptions and misconceptions.
Trabanino, Rene J.; Hall, Spencer E.; Vaidehi, Nagarajan; Floriano, Wely B.; Kam, Victor W. T.; Goddard, William A.
2004-01-01
G-protein-coupled receptors (GPCRs) are involved in cell communication processes and with mediating such senses as vision, smell, taste, and pain. They constitute a prominent superfamily of drug targets, but an atomic-level structure is available for only one GPCR, bovine rhodopsin, making it difficult to use structure-based methods to design receptor-specific drugs. We have developed the MembStruk first principles computational method for predicting the three-dimensional structure of GPCRs. In this article we validate the MembStruk procedure by comparing its predictions with the high-resolution crystal structure of bovine rhodopsin. The crystal structure of bovine rhodopsin has the second extracellular (EC-II) loop closed over the transmembrane regions by making a disulfide linkage between Cys-110 and Cys-187, but we speculate that opening this loop may play a role in the activation process of the receptor through the cysteine linkage with helix 3. Consequently we predicted two structures for bovine rhodopsin from the primary sequence (with no input from the crystal structure)—one with the EC-II loop closed as in the crystal structure, and the other with the EC-II loop open. The MembStruk-predicted structure of bovine rhodopsin with the closed EC-II loop deviates from the crystal by 2.84 Å coordinate root mean-square (CRMS) in the transmembrane region main-chain atoms. The predicted three-dimensional structures for other GPCRs can be validated only by predicting binding sites and energies for various ligands. For such predictions we developed the HierDock first principles computational method. We validate HierDock by predicting the binding site of 11-cis-retinal in the crystal structure of bovine rhodopsin. Scanning the whole protein without using any prior knowledge of the binding site, we find that the best scoring conformation in rhodopsin is 1.1 Å CRMS from the crystal structure for the ligand atoms. 
This predicted conformation has the carbonyl O only 2.82 Å from the N of Lys-296. Making this Schiff base bond and minimizing leads to a final conformation only 0.62 Å CRMS from the crystal structure. We also used HierDock to predict the binding site of 11-cis-retinal in the MembStruk-predicted structure of bovine rhodopsin (closed loop). Scanning the whole protein structure leads to a structure in which the carbonyl O is only 2.85 Å from the N of Lys-296. Making this Schiff base bond and minimizing leads to a final conformation only 2.92 Å CRMS from the crystal structure. The good agreement of the ab initio-predicted protein structures and ligand binding site with experiment validates the use of the MembStruk and HierDock first principles methods. Since these methods are generic and applicable to any GPCR, they should be useful in predicting the structures of other GPCRs and the binding sites of ligands to these proteins. PMID:15041637
Vucicevic, Darko; Honoris, Lily; Raia, Federica; Deng, Mario
2018-01-01
Heart failure (HF) is a complex clinical syndrome that results from structural or functional cardiovascular disorders causing a mismatch between demand and supply of oxygenated blood and consecutive failure of the body’s organs. For those patients with stage D HF, advanced therapies, such as mechanical circulatory support (MCS) or heart transplantation (HTx), are potentially life-saving options. The role of risk stratification of patients with stage D HF in a value-based healthcare framework is to predict which subset might benefit from advanced HF (AdHF) therapies, to improve outcomes related to the individual patient including mortality, morbidity and patient experience as well as to optimize health care delivery system outcomes such as cost-effectiveness. Risk stratification and subsequent outcome prediction as well as therapeutic recommendation-making need to be based on the comparative survival benefit rationale. A robust model needs to (I) have the power to discriminate (i.e., to correctly risk stratify patients); (II) calibrate (i.e., to show agreement between the predicted and observed risk); (III) to be applicable to the general population; and (IV) provide good external validation. The Seattle Heart Failure Model (SHFM) and the Heart Failure Survival Score (HFSS) are two of the most widely utilized scores. However, outcomes for patients with HF are highly variable which make clinical predictions challenging. Despite our clinical expertise and current prediction tools, the best short- and long-term survival for the individual patient, particularly the sickest patient, is not easy to identify because among the most severely ill, elderly and frail patients, most preoperative prediction tools have the tendency to be imprecise in estimating risk. 
They should be used as a guide in a clinical encounter grounded in a culture of shared decision-making, with the expert healthcare professional team as consultants and the patient as an empowered decision-maker in a trustful safe therapeutic relationship. PMID:29492383
Model-based influences on humans’ choices and striatal prediction errors
Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.
2011-01-01
The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. PMID:21435563
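The model-free prediction error at issue, delta = r - Q(a), can be sketched with a toy bandit learner. Parameters and task are illustrative, not the multistep task or model fit of the study:

```python
# Model-free temporal-difference learning on a two-armed bandit:
# the reward prediction error delta is the quantity that ventral-striatal
# BOLD is commonly assumed to track.
import random

random.seed(0)
q = [0.0, 0.0]          # model-free action values
alpha = 0.1             # learning rate
p_reward = [0.8, 0.2]   # hypothetical reward probabilities

for t in range(1000):
    a = random.randrange(2)                     # random exploration
    r = 1.0 if random.random() < p_reward[a] else 0.0
    delta = r - q[a]                            # reward prediction error
    q[a] += alpha * delta                       # value update
print([round(v, 2) for v in q])                 # approaches [0.8, 0.2]
```

A "hybrid" agent of the kind the paper tests would additionally maintain model-based values and weight the two, e.g. Q = w * Q_MB + (1 - w) * Q_MF.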
Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research
ERIC Educational Resources Information Center
He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne
2018-01-01
In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…
Naïve Bayes classification in R.
Zhang, Zhongheng
2016-06-01
Naïve Bayes classification is a simple probabilistic classification method based on Bayes' theorem with the assumption of independence between features. The model is trained on a training dataset and makes predictions via the predict() function. This article introduces the two functions naiveBayes() and train() for performing Naïve Bayes classification.
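Under the hood, such a classifier combines per-class Gaussian likelihoods under the feature-independence assumption. A from-scratch sketch of that logic (not the R functions the article describes):

```python
import math
from collections import defaultdict

def fit(X, y):
    """Per-class feature means/variances plus log class priors."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    n, stats = len(X), {}
    for c, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                 for col, m in zip(zip(*rows), means)]
        stats[c] = (math.log(len(rows) / n), means, varis)
    return stats

def predict(stats, x):
    """Class with the highest posterior, assuming independent features."""
    def log_post(c):
        prior, means, varis = stats[c]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, varis))
    return max(stats, key=log_post)

# Toy data: two well-separated classes
X = [[1.0, 2.1], [1.2, 1.9], [0.9, 2.0], [5.0, 8.1], [5.2, 7.9], [4.9, 8.0]]
y = ["a", "a", "a", "b", "b", "b"]
model = fit(X, y)
print(predict(model, [1.1, 2.0]))  # -> a
print(predict(model, [5.1, 8.0]))  # -> b
```

The R naiveBayes() function in e1071 follows the same idea, with additional handling for categorical features and Laplace smoothing.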
Intentional strategies that make co-actors more predictable: the case of signaling.
Pezzulo, Giovanni; Dindo, Haris
2013-08-01
Pickering & Garrod (P&G) explain dialogue dynamics in terms of forward modeling and prediction-by-simulation mechanisms. Their theory dissolves a strict segregation between production and comprehension processes, and it links dialogue to action-based theories of joint action. We propose that the theory can also incorporate intentional strategies that increase communicative success: for example, signaling strategies that help co-actors remain predictable and form common ground.
Making detailed predictions makes (some) predictions worse
NASA Astrophysics Data System (ADS)
Kelly, Theresa F.
In this paper, we investigate whether making detailed predictions about an event makes other predictions worse. Across 19 experiments, 10,895 participants, and 415,960 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes information that is relatively useless for predicting the winning team more readily accessible in memory and therefore incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of games will and will not be susceptible to the negative effect of making detailed predictions.
El Hage Chehade, Hiba; Wazir, Umar; Mokbel, Kinan; Kasem, Abdul; Mokbel, Kefah
2018-01-01
Decision-making regarding adjuvant chemotherapy has been based on clinical and pathological features. However, such decisions are seldom consistent. Web-based predictive models have been developed using data from cancer registries to help determine the need for adjuvant therapy. More recently, with the recognition of the heterogeneous nature of breast cancer, genomic assays have been developed to aid in therapeutic decision-making. We have carried out a comprehensive literature review regarding online prognostication tools and genomic assays to assess whether online tools could be used as valid alternatives to genomic profiling in decision-making regarding adjuvant therapy in early breast cancer. Breast cancer has recently been recognized as a heterogeneous disease based on variations in molecular characteristics. Online tools are valuable in guiding adjuvant treatment, especially in resource-constrained countries. However, in the era of personalized therapy, molecular profiling appears to be superior in predicting clinical outcome and guiding therapy. Copyright © 2017 Elsevier Inc. All rights reserved.
ATLAS trigger operations: Upgrades to "Xmon" rate prediction system
NASA Astrophysics Data System (ADS)
Myers, Ava; Aukerman, Andrew; Hong, Tae Min; Atlas Collaboration
2017-01-01
We present "Xmon," a tool to monitor trigger rates in the Control Room of the ATLAS Experiment. We discuss Xmon's recent (1) updates, (2) upgrades, and (3) operations. (1) Xmon was updated to adapt the tool written for the three-level trigger architecture in Run-1 (2009-2012) to the new two-level system for Run-2 (2015-current). The tool takes the beam luminosity as input to make a rate prediction, which is compared with incoming rates to detect anomalies that occur both globally throughout a run and locally within a run. Global offsets are more commonly caught by predictions based upon past runs, where offline processing allows for function adjustments and fit quality through outlier rejection. (2) Xmon was upgraded to detect local offsets using on-the-fly predictions, which use a sliding window of in-run rates to make predictions. (3) Examples of Xmon operations are given. Future work involves further automation of the steps that provide the predictive functions and of alerting shifters.
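The on-the-fly, sliding-window idea can be sketched as a linear fit of rate versus luminosity over the most recent points, flagging an incoming rate that deviates too far from the fit. All numbers and thresholds below are hypothetical, not the actual Xmon code:

```python
# Sliding-window rate prediction: fit rate vs. luminosity over the last
# few points and flag the incoming rate if it deviates beyond a tolerance.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def check(lumi_hist, rate_hist, lumi_now, rate_now, window=5, tol=0.2):
    slope, icept = linear_fit(lumi_hist[-window:], rate_hist[-window:])
    predicted = slope * lumi_now + icept
    return abs(rate_now - predicted) / predicted > tol   # True = anomaly

lumi = [1.0, 1.1, 1.2, 1.3, 1.4]
rate = [10.0, 11.1, 11.9, 13.2, 14.0]      # roughly 10 Hz per unit lumi
print(check(lumi, rate, 1.5, 15.1))         # in line with the trend -> False
print(check(lumi, rate, 1.5, 25.0))         # far above the trend -> True
```

A prediction from past runs (the "global" case in the abstract) would instead fit a function offline over many runs, with outlier rejection, and compare live rates against that fixed curve.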
Prediction of ECS and SSC Models for Flux-Limited Samples of Gamma-Ray Blazars
NASA Technical Reports Server (NTRS)
Lister, Matthew L.; Marscher, Alan P.
1999-01-01
The external Compton scattering (ECS) and synchrotron self-Compton (SSC) models make distinct predictions for the amount of Doppler boosting of high-energy gamma-rays emitted by blazars. We examine how these differences affect the predicted properties of active galactic nucleus (AGN) samples selected on the basis of gamma-ray emission. We create simulated flux-limited samples based on the ECS and SSC models, and compare their properties to those of identified EGRET blazars. We find that for small gamma-ray-selected samples, the two models make very similar predictions, and cannot be reliably distinguished. This is primarily due to the fact that not only the Doppler factor, but also the cosmological distance and intrinsic luminosity play a role in determining whether an AGN is included in a flux-limited gamma-ray sample.
Scaffolding across the lifespan in history-dependent decision-making.
Cooper, Jessica A; Worthy, Darrell A; Gorlick, Marissa A; Maddox, W Todd
2013-06-01
We examined the relationship between pressure and age-related changes in decision-making using a task for which currently available rewards depend on the participant's previous history of choices. Optimal responding in this task requires the participant to learn how his or her current choices affect changes in the future rewards given for each option. Building on the scaffolding theory of aging and cognition, we predicted that when additional frontal resources are available, compensatory recruitment leads to increased monitoring and increased use of heuristic-based strategies, ultimately leading to better performance. Specifically, we predicted that scaffolding would result in an age-related performance advantage under no pressure conditions. We also predicted that, although younger adults would engage in scaffolding under pressure, older adults would not have additional resources available for increased scaffolding under pressure-packed conditions, leading to an age-related performance deficit. Both predictions were supported by the data. In addition, computational models were used to evaluate decision-making strategies employed by each participant group. As expected, older adults under no pressure conditions and younger adults under pressure conditions showed increased use of heuristic-based strategies relative to older adults under pressure and younger adults under no pressure, respectively. These results are consistent with the notion that scaffolding can occur across the life span in the face of an environmental challenge. PsycINFO Database Record (c) 2013 APA, all rights reserved.
The salt marsh vegetation spread dynamics simulation and prediction based on conditions optimized CA
NASA Astrophysics Data System (ADS)
Guan, Yujuan; Zhang, Liquan
2006-10-01
The biodiversity conservation and management of salt marsh vegetation relies on processing its spatial information. Nowadays, more attention is focused on classification surveys and qualitative descriptions of dynamics based on interpreted remote sensing (RS) images, rather than on quantitatively simulating and predicting those dynamics, which is of greater importance for managing and planning salt marsh vegetation. In this paper, our aim is to build a large-scale dynamic model and to provide a virtual laboratory in which researchers can run it according to their requirements. First, the characteristics of cellular automata (CA) were analyzed, leading to the conclusion that a CA model must be extended geographically under varying space-time conditions in order to make its results match the facts accurately. Based on the conventional cellular automata model, the authors introduced several new conditions to optimize it for simulating the vegetation objectively, such as elevation, growth speed, invading ability, variation, and inheritance. Hence the CA cells and remote sensing image pixels, cell neighbors and pixel neighbors, and cell rules and the nature of the plants were unified, respectively. JiuDuanSha was taken as the test site, which mainly holds Phragmites australis (P. australis), Scirpus mariqueter (S. mariqueter), and Spartina alterniflora (S. alterniflora) communities. The paper explored the process of simulating and predicting these salt marsh vegetation changes with the conditions-optimized CA (COCA) model, and examined the links among data, statistical models, and ecological predictions. This study exploited the potential of applying the conditions-optimized CA model technique to solve this problem.
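A conditions-optimized CA of this kind can be sketched as an occupancy grid whose colonization rule is scaled by a local suitability layer (standing in for elevation). All rules and numbers below are illustrative, not the calibrated COCA model:

```python
import random
random.seed(42)

SIZE = 20
# Hypothetical elevation suitability in [0, 1]: higher = better habitat
suit = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 1                     # initial colonizing patch

def neighbors(i, j):
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0) and 0 <= i + di < SIZE and 0 <= j + dj < SIZE:
                yield i + di, j + dj

def step(grid):
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j] == 0:
                occ = sum(grid[a][b] for a, b in neighbors(i, j))
                # Rule: colonization chance grows with occupied neighbors,
                # scaled by the local elevation suitability.
                if random.random() < 0.2 * occ * suit[i][j]:
                    new[i][j] = 1
    return new

for _ in range(10):
    grid = step(grid)
print(sum(map(sum, grid)))   # vegetated cells after 10 steps
```

A multi-species version would carry one state per community (P. australis, S. mariqueter, S. alterniflora) and add invasion rules between states.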
Word of Mouth: An Agent-based Approach to Predictability of Stock Prices
NASA Astrophysics Data System (ADS)
Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko
This paper addresses how communication processes among investors affect stock price formation, especially the emerging predictability of stock prices, in financial markets. An agent-based model, called the word-of-mouth model, is introduced to analyze the problem. This model provides a simple, but sufficiently versatile, description of the informational diffusion process and succeeds in lucidly explaining the predictability of small-sized stocks, which is a stylized fact in financial markets but difficult to resolve with traditional models. Our model also provides a rigorous examination of the underreaction hypothesis to informational shocks.
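The qualitative mechanism, gradual diffusion of information producing a drawn-out (and hence predictable) price adjustment, can be sketched with a deliberately tiny agent model. Everything below is a made-up illustration of the idea, not the authors' model:

```python
# Toy word-of-mouth diffusion: news spreads over a ring of agents;
# informed agents trade in the news direction, so the price adjusts
# gradually, creating short-horizon predictability (drift).
import random
random.seed(7)

N, price, news = 50, 100.0, +1.0        # agents, initial price, news sign
informed = {0}                           # agent 0 hears the news first
history = []
for day in range(15):
    # Each informed agent tells one random ring neighbor
    for a in list(informed):
        informed.add((a + random.choice([-1, 1])) % N)
    # Price moves in proportion to the informed fraction
    price += news * len(informed) / N
    history.append(price)
print(round(history[-1] - 100.0, 2))     # cumulative drift after 15 days
```

Because the informed fraction grows slowly, past returns carry information about future returns, which is the predictability (and underreaction) the paper examines; for small stocks, slower diffusion would strengthen the effect.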
NASA Technical Reports Server (NTRS)
Mercer, Joey S.; Bienert, Nancy; Gomez, Ashley; Hunt, Sarah; Kraut, Joshua; Martin, Lynne; Morey, Susan; Green, Steven M.; Prevot, Thomas; Wu, Minghong G.
2013-01-01
A Human-In-The-Loop air traffic control simulation investigated the impact of uncertainties in trajectory predictions on NextGen Trajectory-Based Operations concepts, seeking to understand when the automation would become unacceptable to controllers or when performance targets could no longer be met. Retired air traffic controllers staffed two en route transition sectors, delivering arrival traffic to the northwest corner-post of Atlanta approach control under time-based metering operations. Using trajectory-based decision-support tools, the participants worked the traffic under varying levels of wind forecast error and aircraft performance model error, impacting the ground automation's ability to make accurate predictions. Results suggest that the controllers were able to maintain high levels of performance, despite even the highest levels of trajectory prediction errors.
Spam comments prediction using stacking with ensemble learning
NASA Astrophysics Data System (ADS)
Mehmood, Arif; On, Byung-Won; Lee, Ingyu; Ashraf, Imran; Choi, Gyu Sang
2018-01-01
Deceptive comments about products or services are misleading for people making decisions. Current methodologies for predicting deceptive comments rely on feature design with a single training model. Such handcrafted features can capture some linguistic phenomena but are hard-pressed to reveal the latent semantic meaning of the comments. We propose a prediction model on general document features using stacking with ensemble learning. Term Frequency/Inverse Document Frequency (TF/IDF) features are the inputs to a stack of Random Forest and Gradient Boosted Trees, and the outputs of these base learners are combined by a decision tree for the final training of the model. The results show that our approach achieves an accuracy of 92.19%, which outperforms the state-of-the-art method.
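The stacking mechanics, where the base learners' outputs become the meta-learner's inputs, can be shown with deliberately tiny threshold "stumps." This toy is not the Random Forest/Gradient Boosted Trees pipeline of the paper:

```python
# Stacking sketch: two threshold "stumps" as base learners; a third stump
# trained on their outputs plays the role of the meta-learner.
def train_stump(X, y, feat):
    """Best threshold on one feature (brute force over observed values)."""
    best = None
    for t in sorted({x[feat] for x in X}):
        preds = [1 if x[feat] >= t else 0 for x in X]
        acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
        if best is None or acc > best[0]:
            best = (acc, t)
    return best[1]

def stump_predict(t, X, feat):
    return [1 if x[feat] >= t else 0 for x in X]

# Toy "document" features (e.g., two aggregated TF/IDF scores)
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1],
     [0.15, 0.85], [0.85, 0.15]]
y = [0, 0, 1, 1, 0, 1]   # 1 = spam

t0 = train_stump(X, y, 0)
t1 = train_stump(X, y, 1)
# Meta-level dataset: each row is the pair of base-learner predictions
meta_X = list(zip(stump_predict(t0, X, 0), stump_predict(t1, X, 1)))
mt = train_stump(meta_X, y, 0)            # meta-learner on base outputs
final = stump_predict(mt, meta_X, 0)
print(sum(p == yi for p, yi in zip(final, y)) / len(y))   # -> 1.0
```

In practice the meta-learner is trained on out-of-fold base predictions to avoid leakage; this toy skips that step for brevity.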
Neurobiological and memory models of risky decision making in adolescents versus young adults.
Reyna, Valerie F; Estrada, Steven M; DeMarinis, Jessica A; Myers, Regina M; Stanisz, Janine M; Mills, Britain A
2011-09-01
Predictions of fuzzy-trace theory and neurobiological approaches are examined regarding risk taking in a classic decision-making task--the framing task--as well as in the context of real-life risk taking. We report the 1st study of framing effects in adolescents versus adults, varying risk and reward, and relate choices to individual differences, sexual behavior, and behavioral intentions. As predicted by fuzzy-trace theory, adolescents modulated risk taking according to risk and reward. Adults showed standard framing, reflecting greater emphasis on gist-based (qualitative) reasoning, but adolescents displayed reverse framing when potential gains for risk taking were high, reflecting greater emphasis on verbatim-based (quantitative) reasoning. Reverse framing signals a different way of thinking compared with standard framing (reverse framing also differs from simply choosing the risky option). Measures of verbatim- and gist-based reasoning about risk, sensation seeking, behavioral activation, and inhibition were used to extract dimensions of risk proneness: Sensation seeking increased and then decreased, whereas inhibition increased from early adolescence to young adulthood, predicted by neurobiological theories. Two additional dimensions, verbatim- and gist-based reasoning about risk, loaded separately and predicted unique variance in risk taking. Importantly, framing responses predicted real-life risk taking. Reasoning was the most consistent predictor of real-life risk taking: (a) Intentions to have sex, sexual behavior, and number of partners decreased when gist-based reasoning was triggered by retrieval cues in questions about perceived risk, whereas (b) intentions to have sex and number of partners increased when verbatim-based reasoning was triggered by different retrieval cues in questions about perceived risk. (c) 2011 APA, all rights reserved.
Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.
Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O
2017-08-01
To investigate whether more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management compared to the limited clinical monitoring typically applied today. Daily ANC values in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with a reduced number of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated for forecasting Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be forecast well 6 days (±1 day) before the typical recovery occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk of severe neutropenia and predicting when the next cycle could be initiated.
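The transit-compartment idea behind such semi-mechanistic myelosuppression models (a proliferating pool feeding delayed transit compartments into circulation, with feedback from circulating counts) can be sketched as follows. This is an illustrative forward-Euler simulation with invented parameter values and a toy one-compartment drug profile, not the published docetaxel model estimates.

```python
import math

def simulate_anc(days=25.0, dt=0.01, circ0=5.0, mtt_days=3.75, gamma=0.2,
                 slope=10.0, kel_per_h=0.1, conc0=1.0):
    """Forward-Euler simulation of a Friberg-type myelosuppression model."""
    ktr = 4.0 / mtt_days                 # 3 transit compartments -> 4 rate steps
    prol = t1 = t2 = t3 = circ = circ0   # start every compartment at baseline
    series, t = [], 0.0
    while t < days:
        conc = conc0 * math.exp(-kel_per_h * 24.0 * t)   # toy mono-exponential PK
        edrug = min(slope * conc, 1.0)                   # linear drug effect, capped
        feedback = (circ0 / circ) ** gamma               # rebound feedback term
        dprol = ktr * prol * ((1.0 - edrug) * feedback - 1.0)
        d1, d2, d3 = ktr * (prol - t1), ktr * (t1 - t2), ktr * (t2 - t3)
        dcirc = ktr * (t3 - circ)
        prol += dt * dprol
        t1, t2, t3 = t1 + dt * d1, t2 + dt * d2, t3 + dt * d3
        circ += dt * dcirc
        series.append((t, circ))
        t += dt
    return series
```

Scanning the simulated series for its minimum gives a predicted nadir, and the time at which `circ` returns near `circ0` gives a predicted recovery day, the two quantities forecast in the study.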
NASA Technical Reports Server (NTRS)
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining-useful-life prediction as a stochastic process, and how this relates to uncertainty representation and management and to the role of prognostics in decision-making. A distinction between two interpretations of the estimated remaining-useful-life probability density function is explained, and a cautionary argument is provided against conflating the two when using prognostics to make critical decisions.
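As a toy illustration of the filter-then-extrapolate pattern (not the authors' implementation), a one-dimensional Kalman filter can track a degrading health parameter and extrapolate its mean to a failure threshold for a point remaining-useful-life (RUL) estimate. The decay rate, noise levels, threshold, and data below are all invented.

```python
import random

def kalman_rul(measurements, dt=1.0, rate=-0.5, q=0.01, r=0.25, threshold=2.0):
    """Track health with a scalar Kalman filter, then extrapolate to failure."""
    x, p = measurements[0], 1.0                # initial state mean and variance
    for z in measurements[1:]:
        x, p = x + rate * dt, p + q            # predict: assumed constant decay
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p  # update with the new measurement
    rul = max(0.0, (x - threshold) / (-rate))  # mean time left until threshold
    return x, p, rul

# synthetic noisy measurements of a health parameter decaying at -0.5/unit time
rng = random.Random(0)
meas = [10.0 - 0.5 * t + rng.gauss(0.0, 0.5) for t in range(10)]
health, var, rul = kalman_rul(meas)
```

The filtered variance `p` is what would seed a full RUL distribution; returning only the point estimate `rul` is exactly the kind of interpretation shortcut the article cautions about.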
The complex contribution of sociodemographics to decision-making power in gay male couples.
Perry, Nicholas S; Huebner, David M; Baucom, Brian R W; Hoff, Colleen C
2016-12-01
Relationship power is an important dyadic construct in close relationships that is associated with relationship health and partners' individual health. Understanding what predicts power in heterosexual couples has proven difficult, and even less is known about gay couples. Resource models of power posit that demographic characteristics associated with social status (e.g., age, income) confer power within the relationship, which in turn shapes relationship outcomes. We tested this model in a sample of gay male couples (N = 566 couples) and extended it by examining race and HIV status. Multilevel modeling was used to test associations between demographic bases of power and decision-making power. We also examined relative associations among demographic bases and decision-making power with relationship satisfaction given the literature on power imbalances and overall relationship functioning. Results showed that individual income was positively associated with decision-making power, as was participant's HIV status, with HIV-positive men reporting greater power. Age differences within the relationship interacted with relationship length to predict decision-making power, but not satisfaction. HIV-concordant positive couples were less satisfied than concordant negative couples. Higher power partners were less satisfied than lower power partners. Demographic factors contributing to decision-making power among same-sex male couples appear to share some similarities with heterosexual couples (e.g., income is associated with power) and have unique features (e.g., HIV status influences power). However, these same demographics did not reliably predict relationship satisfaction in the manner that existing power theories suggest. Findings indicate important considerations for theories of power among same-sex male couples. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Network of listed companies based on common shareholders and the prediction of market volatility
NASA Astrophysics Data System (ADS)
Li, Jie; Ren, Da; Feng, Xu; Zhang, Yongjie
2016-11-01
In this paper, we build a network of listed companies in the Chinese stock market based on common shareholding data from 2003 to 2013. We analyze the evolution of topological characteristics of the network (e.g., average degree, diameter, average path length and clustering coefficient) with respect to the time sequence. Additionally, we consider the economic implications of topological characteristic changes on market volatility and use them to make future predictions. Our study finds that the network diameter significantly predicts volatility. After adding control variables used in traditional financial studies (volume, turnover and previous volatility), network topology still significantly influences volatility and improves the predictive ability of the model.
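The diameter statistic this study highlights is simply the longest shortest path in the network. A minimal breadth-first-search computation looks like the following, with a toy four-company graph standing in for the real common-shareholder network:

```python
from collections import deque

def eccentricity(graph, src):
    """Longest shortest-path distance from src (graph assumed connected)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter(graph):
    # diameter = maximum eccentricity over all nodes
    return max(eccentricity(graph, s) for s in graph)

# Toy network: an edge joins two listed companies sharing a large shareholder.
net = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
```

Average path length and clustering coefficient, the other topological series tracked in the paper, can be computed from the same adjacency structure.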
Alterations in choice behavior by manipulations of world model.
Green, C S; Benson, C; Kersten, D; Schrater, P
2010-09-14
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
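For intuition on why probability matching is suboptimal under a fixed outcome probability, the two decision rules can be contrasted in a toy simulation. This illustrates only the behavioral signature; the paper's account derives matching-like behavior from a Bayesian learner with mistaken generative beliefs, which is not modeled here.

```python
import random

def accuracy(choose, p=0.7, trials=20000, seed=1):
    """Fraction of trials on which the chosen outcome actually occurred."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        outcome = 'A' if rng.random() < p else 'B'
        hits += (choose(rng, p) == outcome)
    return hits / trials

def max_rule(rng, p):
    # always predict the likelier outcome
    return 'A'

def matching(rng, p):
    # predict 'A' with the same probability it occurs
    return 'A' if rng.random() < p else 'B'
```

With p = 0.7 the max rule scores about 0.70, while matching scores about p² + (1 − p)² = 0.58, which is why matching is the textbook example of suboptimal choice.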
Sturm, Marc; Quinten, Sascha; Huber, Christian G.; Kohlbacher, Oliver
2007-01-01
We propose a new model for predicting the retention time of oligonucleotides. The model is based on ν support vector regression using features derived from base sequence and predicted secondary structure of oligonucleotides. Because of the secondary structure information, the model is applicable even at relatively low temperatures where the secondary structure is not suppressed by thermal denaturing. This makes the prediction of oligonucleotide retention time for arbitrary temperatures possible, provided that the target temperature lies within the temperature range of the training data. We describe different possibilities of feature calculation from base sequence and secondary structure, present the results and compare our model to existing models. PMID:17567619
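A sketch of the kind of sequence-derived features such a regression model could consume: base composition, length, and a paired-base fraction read off a dot-bracket secondary-structure string. This simplified feature set is a stand-in for the paper's, not its actual feature calculation.

```python
def oligo_features(seq, structure=None):
    """Simple features from an oligonucleotide sequence and, optionally,
    a dot-bracket secondary-structure string ('.' means unpaired)."""
    seq = seq.upper()
    n = len(seq)
    feats = {'length': n}
    for base in 'ACGT':
        feats['frac_' + base] = seq.count(base) / n  # base composition
    if structure is not None:
        # fraction of bases predicted to be paired at the given temperature
        feats['paired_frac'] = sum(ch != '.' for ch in structure) / n
    return feats
```

Feature vectors like these, computed at the training temperatures, are what a ν-support-vector regressor would map to retention times.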
Using gaze patterns to predict task intent in collaboration.
Huang, Chien-Ming; Andrist, Sean; Sauppé, Allison; Mutlu, Bilge
2015-01-01
In everyday interactions, humans naturally exhibit behavioral cues, such as gaze and head movements, that signal their intentions while interpreting the behavioral cues of others to predict their intentions. Such intention prediction enables each partner to adapt their behaviors to the intent of others, serving a critical role in joint action where parties work together to achieve a common goal. Among behavioral cues, eye gaze is particularly important in understanding a person's attention and intention. In this work, we seek to quantify how gaze patterns may indicate a person's intention. Our investigation was contextualized in a dyadic sandwich-making scenario in which a "worker" prepared a sandwich by adding ingredients requested by a "customer." In this context, we investigated the extent to which the customers' gaze cues serve as predictors of which ingredients they intend to request. Predictive features were derived to represent characteristics of the customers' gaze patterns. We developed a support vector machine-based (SVM-based) model that achieved 76% accuracy in predicting the customers' intended requests based solely on gaze features. Moreover, the predictor made correct predictions approximately 1.8 s before the spoken request from the customer. We further analyzed several episodes of interactions from our data to develop a deeper understanding of the scenarios where our predictor succeeded and failed in making correct predictions. These analyses revealed additional gaze patterns that may be leveraged to improve intention prediction. This work highlights gaze cues as a significant resource for understanding human intentions and informs the design of real-time recognizers of user intention for intelligent systems, such as assistive robots and ubiquitous devices, that may enable more complex capabilities and improved user experience.
ERIC Educational Resources Information Center
Hilbig, Benjamin E.; Pohl, Rudiger F.
2009-01-01
According to part of the adaptive toolbox notion of decision making known as the recognition heuristic (RH), the decision process in comparative judgments--and its duration--is determined by whether recognition discriminates between objects. By contrast, some recently proposed alternative models predict that choices largely depend on the amount of…
USDA-ARS?s Scientific Manuscript database
Forecasting peak standing crop (PSC) for the coming grazing season can help ranchers make appropriate stocking decisions to reduce enterprise risks. Previously developed PSC predictors were based on short-term experimental data (<15 yr) and limited stocking rates (SR) without including the effect of...
USDA-ARS?s Scientific Manuscript database
Crop yield estimates have a strong impact on dealing with food shortages and on market demand and supply; these estimates are critical for decision-making processes by the U.S. Government, policy makers, stakeholders, etc. Most of the decision making is based on forecasts provided by the U.S. Depart...
Facilitating Adoption of NEWS Tool to Develop Clinical Decision Making
ERIC Educational Resources Information Center
Brown, Robin T.
2017-01-01
This scholarly project was a non-experimental, pre/post-test design to (a) facilitate the voluntary adoption of the National Early Warning Score (NEWS), and (b) develop clinical decision making (CDM) in one cohort of junior level nursing students participating in a simulation lab. NEWS is an evidence-based predictive scoring tool developed by the…
NASA Astrophysics Data System (ADS)
Pandremmenou, K.; Shahid, M.; Kondi, L. P.; Lövström, B.
2015-03-01
In this work, we propose a No-Reference (NR) bitstream-based model for predicting the quality of H.264/AVC video sequences affected by both compression artifacts and transmission impairments. The proposed model is based on a feature extraction procedure, where a large number of features are calculated from the packet-loss-impaired bitstream. Many of the features are proposed here for the first time, and this specific set of features as a whole has not previously been applied to making NR video quality predictions. All feature observations are taken as input to the Least Absolute Shrinkage and Selection Operator (LASSO) regression method. LASSO indicates the most important features, and using only these, it is possible to estimate the Mean Opinion Score (MOS) with high accuracy. Indicatively, we point out that only 13 features are able to produce a Pearson Correlation Coefficient of 0.92 with the MOS. Interestingly, the performance statistics we computed in order to assess our method for predicting the Structural Similarity Index and the Video Quality Metric are equally good. Thus, the obtained experimental results verify the suitability of the features selected by LASSO as well as the ability of LASSO to make accurate predictions through sparse modeling.
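The role LASSO plays here, driving coefficients of unhelpful features exactly to zero via an L1 penalty, can be sketched with a minimal cyclic coordinate-descent implementation on toy data (not the paper's model or feature set):

```python
def lasso(X, y, lam, iters=100):
    """Minimize (1/2n)*||y - Xw||^2 + lam*||w||_1 by coordinate descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i]
                                 - sum(w[k] * X[i][k] for k in range(d))
                                 + w[j] * X[i][j])
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # soft-thresholding keeps weak correlations at exactly zero
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

# y depends only on the first feature; the second is pure distraction
X = [[1, 1], [2, -1], [3, 1], [4, -1], [-1, 1], [-2, -1], [-3, 1], [-4, -1]]
y = [2 * row[0] for row in X]
w = lasso(X, y, lam=0.1)
```

The irrelevant coefficient lands at exactly zero, which is the mechanism behind the paper's selection of a 13-feature subset.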
Lossless Compression of Data into Fixed-Length Packets
NASA Technical Reports Server (NTRS)
Kiely, Aaron B.; Klimesh, Matthew A.
2009-01-01
A computer program effects lossless compression of data samples from a one-dimensional source into fixed-length data packets. The software makes use of adaptive prediction: it exploits the data structure in such a way as to increase the efficiency of compression beyond that otherwise achievable. Adaptive linear filtering is used to predict each sample value based on past sample values. The difference between predicted and actual sample values is encoded using a Golomb code.
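The final entropy-coding step can be sketched as a Rice code (Golomb coding with a power-of-two parameter m = 2^k), with signed prediction residuals first zigzag-mapped to nonnegative integers. This is a generic illustration of the technique, not the program's actual implementation.

```python
def zigzag(e):
    """Map a signed residual to a nonnegative int: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(n, k):
    """Golomb-Rice code: unary quotient, then k binary remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = '1' * q + '0'
    if k > 0:
        bits += format(r, '0' + str(k) + 'b')
    return bits

def rice_decode(bits, k):
    q = bits.index('0')                        # length of the unary run
    r = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
    return (q << k) | r
```

Small residuals (good predictions) get short codewords; choosing k adaptively from recent residual magnitudes is what lets such a coder stay efficient as the data statistics drift.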
A Bayesian network model for predicting type 2 diabetes risk based on electronic health records
NASA Astrophysics Data System (ADS)
Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen
2017-07-01
An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). The accumulation of electronic health records (EHRs) makes it possible to model nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analyses, and inconsistencies among physical examination items mean that risk factors are easily lost, which motivates the study of novel machine learning approaches for risk-model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with these quantitative analyses of DBRF, we adopt EHRs and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.
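A toy discrete Bayesian network, with two invented risk factors and made-up probabilities queried by enumeration, illustrates the kind of quantitative link between risk factors and T2D such an approach produces. A real model would learn both the structure and the conditional probability tables from the EHR data.

```python
# Prior probabilities of the two (hypothetical) binary risk factors
P_BMI_HIGH, P_AGE_OLD = 0.3, 0.4
# P(T2D = 1 | bmi_high, age_old) -- made-up conditional probability table
CPT_T2D = {(0, 0): 0.05, (0, 1): 0.10, (1, 0): 0.15, (1, 1): 0.30}

def p_t2d(bmi=None, age=None):
    """P(T2D = 1), optionally conditioned on bmi and/or age, by enumeration."""
    total = 0.0
    for b in (0, 1):
        if bmi is not None and b != bmi:
            continue
        wb = 1.0 if bmi is not None else (P_BMI_HIGH if b else 1 - P_BMI_HIGH)
        for a in (0, 1):
            if age is not None and a != age:
                continue
            wa = 1.0 if age is not None else (P_AGE_OLD if a else 1 - P_AGE_OLD)
            total += wb * wa * CPT_T2D[(b, a)]
    return total
```

Conditioning on observed examination items, e.g. `p_t2d(bmi=1)`, is how such a network turns partial records into individual risk estimates.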
Modelling Complexity: Making Sense of Leadership Issues in 14-19 Education
ERIC Educational Resources Information Center
Briggs, Ann R. J.
2008-01-01
Modelling of statistical data is a well established analytical strategy. Statistical data can be modelled to represent, and thereby predict, the forces acting upon a structure or system. For the rapidly changing systems in the world of education, modelling enables the researcher to understand, to predict and to enable decisions to be based upon…
Schönberg, Tom; Daw, Nathaniel D; Joel, Daphna; O'Doherty, John P
2007-11-21
The computational framework of reinforcement learning has been used to forward our understanding of the neural mechanisms underlying reward learning and decision-making behavior. It is known that humans vary widely in their performance in decision-making tasks. Here, we used a simple four-armed bandit task in which subjects are almost evenly split into two groups on the basis of their performance: those who do learn to favor choice of the optimal action and those who do not. Using models of reinforcement learning we sought to determine the neural basis of these intrinsic differences in performance by scanning both groups with functional magnetic resonance imaging. We scanned 29 subjects while they performed the reward-based decision-making task. Our results suggest that these two groups differ markedly in the degree to which reinforcement learning signals in the striatum are engaged during task performance. While the learners showed robust prediction error signals in both the ventral and dorsal striatum during learning, the nonlearner group showed a marked absence of such signals. Moreover, the magnitude of prediction error signals in a region of dorsal striatum correlated significantly with a measure of behavioral performance across all subjects. These findings support a crucial role of prediction error signals, likely originating from dopaminergic midbrain neurons, in enabling learning of action selection preferences on the basis of obtained rewards. Thus, spontaneously observed individual differences in decision making performance demonstrate the suggested dependence of this type of learning on the functional integrity of the dopaminergic striatal system in humans.
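The reward-prediction-error learning rule at the heart of such models (a Rescorla-Wagner/Q-learning update with softmax choice) can be sketched on a four-armed bandit. Reward probabilities, learning rate, and inverse temperature are illustrative, not the values fit in the study.

```python
import math
import random

def run_bandit(probs=(0.2, 0.4, 0.6, 0.8), alpha=0.1, beta=5.0,
               trials=2000, seed=7):
    """Q-learning with softmax action selection on a four-armed bandit."""
    rng = random.Random(seed)
    q = [0.0] * len(probs)
    choices = []
    for _ in range(trials):
        # softmax choice with inverse temperature beta
        expq = [math.exp(beta * v) for v in q]
        s = sum(expq)
        u, choice, cum = rng.random(), len(q) - 1, 0.0
        for i, e in enumerate(expq):
            cum += e / s
            if u < cum:
                choice = i
                break
        reward = 1.0 if rng.random() < probs[choice] else 0.0
        delta = reward - q[choice]        # the reward prediction error
        q[choice] += alpha * delta        # value update driven by the error
        choices.append(choice)
    return q, choices
```

The trial-by-trial `delta` series is the model-derived prediction-error signal that studies like this one regress against striatal BOLD responses; a "learner" increasingly favors the best arm, while a flat choice pattern resembles the non-learner group.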
Probabilistic prediction of barrier-island response to hurricanes
Plant, Nathaniel G.; Stockdon, Hilary F.
2012-01-01
Prediction of barrier-island response to hurricane attack is important for assessing the vulnerability of communities, infrastructure, habitat, and recreational assets to the impacts of storm surge, waves, and erosion. We have demonstrated that a conceptual model intended to make qualitative predictions of the type of beach response to storms (e.g., beach erosion, dune erosion, dune overwash, inundation) can be reformulated in a Bayesian network to make quantitative predictions of the morphologic response. In an application of this approach at Santa Rosa Island, FL, predicted dune-crest elevation changes in response to Hurricane Ivan explained about 20% to 30% of the observed variance. An extended Bayesian network based on the original conceptual model, which included dune elevations, storm surge, and swash, but with the addition of beach and dune widths as input variables, showed improved skill compared to the original model, explaining 70% of dune elevation change variance and about 60% of dune and shoreline position change variance. This probabilistic approach accurately represented prediction uncertainty (measured with the log likelihood ratio), and it outperformed the baseline prediction (i.e., the prior distribution based on the observations). Finally, sensitivity studies demonstrated that degrading the resolution of the Bayesian network or removing data from the calibration process reduced the skill of the predictions by 30% to 40%. The reduction in skill did not change conclusions regarding the relative importance of the input variables, and the extended model's skill always outperformed the original model.
Study of Earthquake Disaster Prediction System of Langfang city Based on GIS
NASA Astrophysics Data System (ADS)
Huang, Meng; Zhang, Dian; Li, Pan; Zhang, YunHui; Zhang, RuoFei
2017-07-01
In view of China's need to improve its earthquake disaster prevention capability, this paper presents an implementation plan for an earthquake disaster prediction system for Langfang city based on GIS. Built on a GIS spatial database and using coordinate transformation technology, GIS spatial analysis technology, and PHP development technology, the system applies a seismic damage factor algorithm to predict the damage to the city under earthquake disasters of different intensities. The system adopts a B/S (browser/server) architecture and provides two-dimensional visualization of damage degree and its spatial distribution, comprehensive query and analysis, and efficient decision-support functions to identify seismically weak areas of the city and enable rapid warning. The system has transformed the city's earthquake disaster reduction work from static planning to dynamic management and improved the city's earthquake and disaster prevention capability.
Rhodes, Louisa; Naumann, Ulrike M.
2011-01-01
Objective: To identify how decisions about treatment are being made in secondary services for anxiety disorders and depression and, specifically, whether it was possible to predict the decisions to refer for evidence-based treatments. Method: Post hoc classification tree analysis was performed using a sample from an audit on implementation of the National Institute for Health and Clinical Excellence Guidelines for Depression and Anxiety Disorders. The audit was of 5 teams offering secondary care services; they included psychiatrists, psychologists, community psychiatric nurses, social workers, dual-diagnosis workers, and vocational workers. The patient sample included all of those with a primary problem of depression (n = 56) or an anxiety disorder (n = 16) who were offered treatment from February 16 to April 3, 2009. The outcome variable was whether or not evidence-based treatments were offered, and the predictor variables were presenting problem, risk, comorbid problem, social problems, and previous psychiatric history. Results: Treatment decisions could be more accurately predicted for anxiety disorders (93% correct) than for depression (55%). For anxiety disorders, the presence or absence of social problems was a good predictor for whether evidence-based or non–evidence-based treatments were offered; 44% (4/9) of those with social problems vs 100% (6/6) of those without social problems were offered evidence-based treatments. For depression, patients’ risk rating had the largest impact on treatment decisions, although no one variable could be identified as individually predictive of all treatment decisions. Conclusions: Treatment decisions were generally consistent for anxiety disorders but more idiosyncratic for depression, making the development of a decision-making model very difficult for depression. The lack of clarity of some terms in the clinical guidelines and the more complex nature of depression could be factors contributing to this difficulty. 
Further research is needed to understand the complex nature of decision making with depressed patients. PMID:22295255
Application of agent-based system for bioprocess description and process improvement.
Gao, Ying; Kipling, Katie; Glassey, Jarka; Willis, Mark; Montague, Gary; Zhou, Yuhong; Titchener-Hooker, Nigel J
2010-01-01
Modeling plays an important role in bioprocess development for design and scale-up. Predictive models can also be used in biopharmaceutical manufacturing to assist decision-making either to maintain process consistency or to identify optimal operating conditions. To predict the whole bioprocess performance, the strong interactions present in a processing sequence must be adequately modeled. Traditionally, bioprocess modeling considers process units separately, which makes it difficult to capture the interactions between units. In this work, a systematic framework is developed to analyze the bioprocesses based on a whole process understanding and considering the interactions between process operations. An agent-based approach is adopted to provide a flexible infrastructure for the necessary integration of process models. This enables the prediction of overall process behavior, which can then be applied during process development or once manufacturing has commenced, in both cases leading to the capacity for fast evaluation of process improvement options. The multi-agent system comprises a process knowledge base, process models, and a group of functional agents. In this system, agent components co-operate with each other in performing their tasks. These include the description of the whole process behavior, evaluating process operating conditions, monitoring of the operating processes, predicting critical process performance, and providing guidance to decision-making when coping with process deviations. During process development, the system can be used to evaluate the design space for process operation. During manufacture, the system can be applied to identify abnormal process operation events and then to provide suggestions as to how best to cope with the deviations. In all cases, the function of the system is to ensure an efficient manufacturing process. 
The implementation of the agent-based approach is illustrated via selected application scenarios, which demonstrate how such a framework may enable the better integration of process operations by providing a plant-wide process description to facilitate process improvement. Copyright 2009 American Institute of Chemical Engineers
Single well productivity prediction of carbonate reservoir
NASA Astrophysics Data System (ADS)
Le, Xu
2018-06-01
Predicting single-well productivity is very important for the development of oilfields. The fracture structure of carbonate fractured-cavity reservoirs is complex, and single-well productivity behaves differently from that of sandstone reservoirs, so establishing productivity-prediction methods for carbonate oil wells is essential. Based on reservoir conditions, four methods for predicting the productivity of carbonate reservoirs are established for different reservoir types: (1) qualitatively analyze the single-well productivity relations corresponding to different reservoir types and predict productivity according to the reservoir type encountered by each well; (2) predict the productivity of carbonate reservoir wells using numerical simulation technology; (3) fit a productivity formula to the historical production data of an oil well and use it to make single-well productivity predictions; (4) predict productivity using an oil-well productivity formula for carbonate reservoirs.
Lateral orbitofrontal cortex links social impressions to political choices.
Xia, Chenjie; Stolle, Dietlind; Gidengil, Elisabeth; Fellows, Lesley K
2015-06-03
Recent studies of political behavior suggest that voting decisions can be influenced substantially by "first-impression" social attributions based on physical appearance. Separate lines of research have implicated the orbitofrontal cortex (OFC) in the judgment of social traits on the one hand and economic decision-making on the other, making this region a plausible candidate for linking social attributions to voting decisions. Here, we asked whether OFC lesions in humans disrupted the ability to judge traits of political candidates or affected how these judgments influenced voting decisions. Seven patients with lateral OFC damage, 18 patients with frontal damage sparing the lateral OFC, and 53 matched healthy participants took part in a simulated election paradigm, in which they voted for real-life (but unknown) candidates based only on photographs of their faces. Consistent with previous work, attributions of "competence" and "attractiveness" based on candidate appearance predicted voting behavior in the healthy control group. Frontal damage did not substantially affect the ability to make competence or attractiveness judgments, but patients with damage to the lateral OFC differed from other groups in how they applied this information when voting. Only attractiveness ratings had any predictive power for voting choices after lateral OFC damage, whereas other frontal patients and healthy controls relied on information about both competence and attractiveness in making their choice. An intact lateral OFC may not be necessary for judgment of social traits based on physical appearance, but it seems to be crucial in applying this information in political decision-making. Copyright © 2015 the authors 0270-6474/15/358507-08$15.00/0.
Attention and choice: a review on eye movements in decision making.
Orquin, Jacob L; Mueller Loose, Simone
2013-09-01
This paper reviews studies on eye movements in decision making and compares their observations to theoretical predictions concerning the role of attention in decision making. Four decision theories are examined: rational models, bounded rationality, evidence accumulation, and parallel constraint satisfaction models. Although most theories were confirmed with regard to certain predictions, none of the theories adequately accounted for the role of attention during decision making. Several observations emerged concerning the drivers and downstream effects of attention on choice, suggesting that attention processes play an active role in constructing decisions. So far, decision theories have largely ignored the constructive role of attention by assuming that it is entirely determined by heuristics, or that it consists of stochastic information sampling. The empirical observations reveal that these assumptions are implausible, and that more accurate assumptions could have been made based on prior attention and eye movement research. Future decision making research would benefit from greater integration with attention research. Copyright © 2013 Elsevier B.V. All rights reserved.
Prediction, dynamics, and visualization of antigenic phenotypes of seasonal influenza viruses
Neher, Richard A.; Bedford, Trevor; Daniels, Rodney S.; Shraiman, Boris I.
2016-01-01
Human seasonal influenza viruses evolve rapidly, enabling the virus population to evade immunity and reinfect previously infected individuals. Antigenic properties are largely determined by the surface glycoprotein hemagglutinin (HA), and amino acid substitutions at exposed epitope sites in HA mediate loss of recognition by antibodies. Here, we show that antigenic differences measured through serological assay data are well described by a sum of antigenic changes along the path connecting viruses in a phylogenetic tree. This mapping onto the tree allows prediction of antigenicity from HA sequence data alone. The mapping can further be used to make predictions about the makeup of the future A(H3N2) seasonal influenza virus population, and we compare predictions between models with serological and sequence data. To make timely model output readily available, we developed a web browser-based application that visualizes antigenic data on a continuously updated phylogeny. PMID:26951657
Tactile communication, cooperation, and performance: an ethological study of the NBA.
Kraus, Michael W; Huang, Cassey; Keltner, Dacher
2010-10-01
Tactile communication, or physical touch, promotes cooperation between people, communicates distinct emotions, soothes in times of stress, and is used to make inferences of warmth and trust. Based on this conceptual analysis, we predicted that in group competition, physical touch would predict increases in both individual and group performance. In an ethological study, we coded the touch behavior of players from the National Basketball Association (NBA) during the 2008-2009 regular season. Consistent with hypotheses, early season touch predicted greater performance for individuals as well as teams later in the season. Additional analyses confirmed that touch predicted improved performance even after accounting for player status, preseason expectations, and early season performance. Moreover, coded cooperative behaviors between teammates explained the association between touch and team performance. Discussion focused on the contributions touch makes to cooperative groups and the potential implications for other group settings. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Kulkarni, Chetan S.
2016-01-01
As batteries become increasingly prevalent in complex systems such as aircraft and electric cars, monitoring and predicting battery state of charge and state of health becomes critical. In order to accurately predict the remaining battery power to support system operations for informed operational decision-making, age-dependent changes in dynamics must be accounted for. Using an electrochemistry-based model, we investigate how key parameters of the battery change as aging occurs, and develop models to describe aging through these key parameters. Using these models, we demonstrate how we can (i) accurately predict end-of-discharge for aged batteries, and (ii) predict the end-of-life of a battery as a function of anticipated usage. The approach is validated through an experimental set of randomized discharge profiles.
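The two prediction targets, end-of-discharge for a given (aged) capacity and end-of-life under anticipated cycling, can be caricatured with a constant-load, constant-fade sketch. A single per-cycle fade rate stands in for the paper's electrochemistry-based aging of key parameters, and all numbers are invented.

```python
def end_of_discharge(capacity_ah, load_a):
    """Hours until the aged capacity is drained at a constant current load."""
    return capacity_ah / load_a

def end_of_life(c0=2.0, fade_per_cycle=0.001, threshold=0.8):
    """Cycles until capacity falls below threshold * initial capacity."""
    c, cycles = c0, 0
    while c > threshold * c0:
        c *= (1.0 - fade_per_cycle)  # multiplicative capacity fade per cycle
        cycles += 1
    return cycles
```

Re-estimating the fade-affected parameters online, rather than fixing them as here, is what lets end-of-discharge predictions stay accurate as the battery ages.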
Leathers, Marvin L; Olson, Carl R
2017-04-01
Neurons in the lateral intraparietal (LIP) area of macaque monkey parietal cortex respond to cues predicting rewards and penalties of variable size in a manner that depends on the motivational salience of the predicted outcome (strong for both large reward and large penalty) rather than on its value (positive for large reward and negative for large penalty). This finding suggests that LIP mediates the capture of attention by salient events and does not encode value in the service of value-based decision making. It leaves open the question whether neurons elsewhere in the brain encode value in the identical task. To resolve this issue, we recorded neuronal activity in the amygdala in the context of the task employed in the LIP study. We found that responses to reward-predicting cues were similar between areas, with the majority of reward-sensitive neurons responding more strongly to cues that predicted large reward than to those that predicted small reward. Responses to penalty-predicting cues were, however, markedly different. In the amygdala, unlike LIP, few neurons were sensitive to penalty size, few penalty-sensitive neurons favored large over small penalty, and the dependence of firing rate on penalty size was negatively correlated with its dependence on reward size. These results indicate that amygdala neurons encoded cue value under circumstances in which LIP neurons exhibited sensitivity to motivational salience. However, the representation of negative value, as reflected in sensitivity to penalty size, was weaker than the representation of positive value, as reflected in sensitivity to reward size.
NEW & NOTEWORTHY: This is the first study to characterize amygdala neuronal responses to cues predicting rewards and penalties of variable size in monkeys making value-based choices. Manipulating reward and penalty size allowed distinguishing activity dependent on motivational salience from activity dependent on value.
This approach revealed in a previous study that neurons of the lateral intraparietal (LIP) area encode motivational salience. Here, it reveals that amygdala neurons encode value. The results establish a sharp functional distinction between the two areas. Copyright © 2017 the American Physiological Society.
White, Stuart F.; Geraci, Marilla; Lewis, Elizabeth; Leshin, Joseph; Teng, Cindy; Averbeck, Bruno; Meffert, Harma; Ernst, Monique; Blair, James R.; Grillon, Christian; Blair, Karina S.
2017-01-01
Objective: Deficits in reinforcement-based decision-making have been reported in generalized anxiety disorder (GAD). However, the pathophysiology of these deficits is largely unknown: extant studies have mainly examined youth, and the integrity of the core functional processes underpinning decision-making remains undetermined. In particular, it is unclear whether the representation of reinforcement prediction error (PE: the difference between received and expected reinforcement) is disrupted in GAD. The current study addresses these issues in adults with the disorder.
Methods: Forty-six unmedicated individuals with GAD and 32 healthy controls, group-matched on IQ, gender, and age, completed a passive avoidance task while undergoing functional MRI.
Results: Behaviorally, individuals with GAD showed impaired reinforcement-based decision-making. Imaging results revealed that, during feedback, individuals with GAD relative to healthy controls showed a reduced correlation between PE and activity within the ventromedial prefrontal cortex, ventral striatum, and other structures implicated in decision-making. In addition, individuals with GAD relative to healthy participants showed a reduced correlation between punishment, but not reward, PEs and activity within the bilateral lentiform nucleus/putamen.
Conclusions: This is the first study to identify computational impairments during decision-making in GAD. PE signaling is significantly disrupted in individuals with the disorder and may underpin the decision-making deficits observed in patients with GAD. PMID:27631963
Relating accumulator model parameters and neural dynamics
Purcell, Braden A.; Palmeri, Thomas J.
2016-01-01
Accumulator models explain decision-making as an accumulation of evidence to a response threshold. Specific model parameters are associated with specific model mechanisms, such as the time when accumulation begins, the average rate of evidence accumulation, and the threshold. These mechanisms determine both the within-trial dynamics of evidence accumulation and the predicted behavior. Cognitive modelers usually infer what mechanisms vary during decision-making by seeing what parameters vary when a model is fitted to observed behavior. The recent identification of neural activity with evidence accumulation suggests that it may be possible to directly infer what mechanisms vary from an analysis of how neural dynamics vary. However, evidence accumulation is often noisy, and noise complicates the relationship between accumulator dynamics and the underlying mechanisms leading to those dynamics. To understand what kinds of inferences can be made about decision-making mechanisms based on measures of neural dynamics, we measured simulated accumulator model dynamics while systematically varying model parameters. In some cases, decision-making mechanisms can be directly inferred from dynamics, allowing us to distinguish between models that make identical behavioral predictions. In other cases, however, different parameterized mechanisms produce surprisingly similar dynamics, limiting the inferences that can be made based on measuring dynamics alone. Analyzing neural dynamics can provide a powerful tool to resolve model mimicry at the behavioral level, but we caution against drawing inferences based solely on neural analyses. Instead, simultaneous modeling of behavior and neural dynamics provides the most powerful approach to understand decision-making and likely other aspects of cognition and perception. PMID:28392584
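A minimal simulation in the spirit of the approach described above: a single noisy accumulator integrates evidence to a threshold, and mean response time (RT) is read off from repeated trials. Drift rate, threshold, and noise values are illustrative choices, not fits to any dataset.

```python
import random

def mean_rt(drift, threshold, n_trials=2000, noise=0.1, dt=0.001, t0=0.2, seed=1):
    """Simulate a noisy accumulator racing to threshold; return mean RT in seconds."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:
            # Euler step: deterministic drift plus Gaussian diffusion noise
            x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
            t += dt
        total += t0 + t
    return total / n_trials

# Raising drift speeds responses; raising the threshold slows them. Because
# both changes move mean RT, behavior alone can leave the mechanisms
# indistinguishable -- the model mimicry the abstract discusses.
assert mean_rt(2.0, 0.05) < mean_rt(1.0, 0.05)
assert mean_rt(1.0, 0.10) > mean_rt(1.0, 0.05)
```

Recording the simulated trajectories themselves (rather than only RTs) is what lets some, but not all, of these parameterizations be told apart.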
Recommendation Techniques for Drug-Target Interaction Prediction and Drug Repositioning.
Alaimo, Salvatore; Giugno, Rosalba; Pulvirenti, Alfredo
2016-01-01
The use of computational methods in drug discovery is common practice. More recently, by exploiting the wealth of biological knowledge bases, a novel approach called drug repositioning has arisen. Several computational methods are available, and these try to integrate all of this knowledge at a high level in order to discover unknown mechanisms. In this chapter, we review drug-target interaction prediction methods based on recommendation systems. We also describe some extensions that go beyond the bipartite network case.
NASA Astrophysics Data System (ADS)
Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.
1996-02-01
Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.
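The comparison step of such a blind validation can be sketched as a simple coverage check: for each predicted feature of the catchment response, did the observation fall inside the pre-declared uncertainty bounds? The feature names, bounds, and numbers below are hypothetical, purely for illustration.

```python
def coverage(pred_bounds, observed):
    """Fraction of observed response features lying inside the 'blind'
    prediction bounds declared before the observations were seen."""
    hits = sum(1 for (lo, hi), obs in zip(pred_bounds, observed) if lo <= obs <= hi)
    return hits / len(observed)

# Hypothetical bounds for, say, peak discharge, monthly runoff, total runoff.
score = coverage([(10.0, 25.0), (120.0, 180.0), (900.0, 1400.0)],
                 [18.0, 200.0, 1100.0])
assert abs(score - 2 / 3) < 1e-9  # two of the three observations fell in range
```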
Fukunishi, Yoshifumi
2010-01-01
For fragment-based drug development (FBDD), both hit (active) compound prediction and docking-pose (protein-ligand complex structure) prediction of the hit compound are important, since chemical modification (fragment linking, fragment evolution) subsequent to the hit discovery must be performed based on the protein-ligand complex structure. However, the naïve protein-compound docking calculation shows poor accuracy in terms of docking-pose prediction, so post-processing of the protein-compound docking is necessary; several methods for such post-processing have recently been proposed. In FBDD, the compounds are smaller than those used in conventional drug screening, which makes the protein-compound docking calculation difficult; a method to avoid this problem has been reported. Protein-ligand binding free-energy estimation is useful for reducing the effort involved in the chemical modification of the hit fragment, and several methods have been proposed for high-accuracy estimation of protein-ligand binding free energy. This paper summarizes the various computational methods proposed for docking-pose prediction and their usefulness in FBDD.
Discrepancy-based and anticipated emotions in behavioral self-regulation.
Brown, Christina M; McConnell, Allen R
2011-10-01
Discrepancies between one's current and desired states evoke negative emotions, which presumably guide self-regulation. In the current work we evaluated the function of discrepancy-based emotions in behavioral self-regulation. Contrary to classic theories of self-regulation, discrepancy-based emotions did not predict the degree to which people engaged in self-regulatory behavior. Instead, expectations about how future self-discrepancies would make one feel (i.e., anticipated emotions) predicted self-regulation. However, anticipated emotions were influenced by previous discrepancy-based emotional experiences, suggesting that the latter do not directly motivate self-regulation but rather guide expectations. These findings are consistent with the perspective that emotions do not necessarily direct immediate behavior, but rather have an indirect effect by guiding expectations, which in turn predict goal-directed action.
Adaptive MPC based on MIMO ARX-Laguerre model.
Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais
2017-03-01
This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model, obtained by projecting the ARX model onto Laguerre bases. The resulting model, termed MIMO ARX-Laguerre, is characterized by an easy recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors, identified directly from experimental input/output data. The model is tuned at each iteration by online identification algorithms for both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize, and test in numerical simulations, adaptive predictive controllers for the CSTR process benchmark. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Kowinsky, Amy M; Shovel, Judith; McLaughlin, Maribeth; Vertacnik, Lisa; Greenhouse, Pamela K; Martin, Susan Christie; Minnier, Tamra E
2012-01-01
Predictable and unpredictable patient care tasks compete for caregiver time and attention, making it difficult for patient care staff to reliably and consistently meet patient needs. We have piloted a redesigned care model that separates the work of patient care technicians based on task predictability and creates role specificity. This care model shows promise in improving the ability of staff to reliably complete tasks in a more consistent and timely manner.
The Density Functional Theory of Flies: Predicting distributions of interacting active organisms
NASA Astrophysics Data System (ADS)
Kinkhabwala, Yunus; Valderrama, Juan; Cohen, Itai; Arias, Tomas
On October 2nd, 2016, 52 people were crushed in a stampede when a crowd panicked at a religious gathering in Ethiopia. The ability to predict the state of a crowd, and whether it is susceptible to such transitions, could help prevent such catastrophes. While current techniques such as agent-based models can predict transitions in the emergent behaviors of crowds, the assumptions used to describe the agents are often ad hoc and the simulations are computationally expensive, making their application to real-time crowd prediction challenging. Here, we pursue an orthogonal approach and ask whether a reduced set of variables, such as the local densities, is sufficient to describe the state of a crowd. Inspired by the theoretical framework of Density Functional Theory, we have developed a system that uses only measurements of local densities to extract two independent crowd behavior functions: (1) preferences for locations and (2) interactions between individuals. With these two functions, we have accurately predicted how a model system of walking Drosophila melanogaster distributes itself in an arbitrary 2D environment. In addition, this density-based approach measures properties of the crowd from observations of the crowd itself, without any knowledge of the detailed interactions, and thus can make predictions about the resulting distributions of these flies in arbitrary environments in real time. This research was supported in part by ARO W911NF-16-1-0433.
Computational chemistry in 25 years
NASA Astrophysics Data System (ADS)
Abagyan, Ruben
2012-01-01
Here we make some predictions based on three methods: straightforward extrapolation of existing trends; a self-fulfilling prophecy; and picking some current grievances and predicting that they will be addressed or solved. We predict the growth of multicore computing and dramatic growth of data, as well as improvements in force fields and sampling methods. We also predict that the effects of therapeutic and environmental molecules on the human body, as well as complex natural chemical signalling, will be understood in terms of three-dimensional models of their binding to specific pockets.
The Evidential Basis of Decision Making in Plant Disease Management.
Hughes, Gareth
2017-08-04
The evidential basis for disease management decision making is provided by data relating to risk factors. The decision process involves an assessment of the evidence, leading to taking (or refraining from) action on the basis of a prediction. The primary objective of the decision process is to identify, at the time the decision is made, the control action that provides the best predicted end-of-season outcome, calculated in terms of revenue or another appropriate metric. Data relating to disease risk factors may take a variety of forms (e.g., continuous, discrete, categorical) on measurement scales in a variety of units. Log10-likelihood ratios provide a principled basis for the accumulation of evidence based on such data and allow predictions to be made via Bayesian updating of prior probabilities.
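The final sentence above can be made concrete: evidence from several risk factors accumulates additively on the log10-odds scale, and the posterior probability is recovered at the end. The prior and likelihood-ratio values below are hypothetical.

```python
import math

def update_probability(prior, log10_lrs):
    """Bayesian updating on the log-odds scale: posterior log10-odds equal
    prior log10-odds plus the sum of log10-likelihood ratios of the evidence."""
    post_log_odds = math.log10(prior / (1.0 - prior)) + sum(log10_lrs)
    post_odds = 10.0 ** post_log_odds
    return post_odds / (1.0 + post_odds)

# Hypothetical example: prior disease risk 0.10, two observed risk factors
# with likelihood ratios 4 and 2.5 (combined LR = 10).
p = update_probability(0.10, [math.log10(4.0), math.log10(2.5)])
assert abs(p - 10.0 / 19.0) < 1e-9  # posterior odds = (1/9) * 10 = 10/9
```

The additive form is what makes heterogeneous risk factors, each on its own scale and units, combinable in a principled way.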
K. E. Gibos; A. Slijepcevic; T. Wells; L. Fogarty
2015-01-01
Wildland fire managers must frequently make meaning from chaos in order to protect communities and infrastructure from the negative impacts of fire. Fire management personnel are increasingly turning to science to support their experience-based decision-making processes and to provide clear, confident leadership for communities frequently exposed to risk from wildfire...
This presentation describes EPA efforts to collect, model, and measure publically available consumer product data for use in exposure assessment. The development of the ORD Chemicals and Products database will be described, as will machine-learning based models for predicting ch...
Individual differences in bodily freezing predict emotional biases in decision making
Ly, Verena; Huys, Quentin J. M.; Stins, John F.; Roelofs, Karin; Cools, Roshan
2014-01-01
Instrumental decision making has long been argued to be vulnerable to emotional responses. Literature on multiple decision making systems suggests that this emotional biasing might reflect effects of a system that regulates innately specified, evolutionarily preprogrammed responses. To test this hypothesis directly, we investigated whether effects of emotional faces on instrumental action can be predicted by effects of emotional faces on bodily freezing, an innately specified response to aversive relative to appetitive cues. We tested 43 women using a novel emotional decision making task combined with posturography, which involves a force platform to detect small oscillations of the body to accurately quantify postural control in upright stance. On the platform, participants learned whole body approach-avoidance actions based on monetary feedback, while being primed by emotional faces (angry/happy). Our data evidence an emotional biasing of instrumental action. Thus, angry relative to happy faces slowed instrumental approach relative to avoidance responses. Critically, individual differences in this emotional biasing effect were predicted by individual differences in bodily freezing. This result suggests that emotional biasing of instrumental action involves interaction with a system that controls innately specified responses. Furthermore, our findings help bridge (animal and human) decision making and emotion research to advance our mechanistic understanding of decision making anomalies in daily encounters as well as in a wide range of psychopathology. PMID:25071491
Clarke, M G; Kennedy, K P; MacDonagh, R P
2009-01-01
To develop a clinical prediction model enabling the calculation of an individual patient's life expectancy (LE) and survival probability, based on age, sex, and comorbidity, for use in the joint decision-making process regarding medical treatment. A software program was developed by a team of 3 clinicians, 2 professional actuaries, and 2 professional computer programmers, incorporating statistical spreadsheet and database-access design methods. Data sources included life insurance industry actuarial rating-factor tables (public and private domain), UK Government Actuary's Department life tables, professional actuarial sources, and the evidence-based medical literature. The main outcome measures were numerical and graphical display of comorbidity-adjusted LE and of 5-, 10-, and 15-year survival probability, in addition to generic UK population LE. Nineteen medical conditions that impacted significantly on LE in actuarial terms and were commonly encountered in clinical practice were incorporated in the final model. Numerical and graphical representations of statistical predictions of LE and survival probability were successfully generated for patients with either no comorbidity or a combination of the 19 medical conditions included. Validation and testing, including actuarial peer review, confirmed consistency with the data sources utilized. The evidence-based actuarial data utilized in this program represent a valuable resource for use in the clinical decision-making process, where an accurate, objective assessment of patient LE can so often make the difference between patients being offered or denied medical and surgical treatment. Ongoing development to incorporate additional comorbidities and enable Web-based access will enhance its use further.
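A sketch of the actuarial core under stated assumptions: a Gompertz survival curve with a multiplicative comorbidity rating factor applied to the hazard, in the style of life-insurance rating tables. The parameters a, b, and the rating values are illustrative, not the tables used in the study.

```python
import math

def survival_prob(age, years, a=3e-5, b=0.09, rating=1.0):
    """Gompertz survival with a comorbidity rating factor multiplying the
    hazard: S(t) = exp(-rating * (a/b) * e^(b*age) * (e^(b*t) - 1)).
    All parameter values are illustrative assumptions."""
    return math.exp(-rating * (a / b) * math.exp(b * age) * (math.exp(b * years) - 1.0))

# A comorbidity that doubles the hazard (rating=2.0) lowers 10-year survival,
# and survival falls as the horizon lengthens.
assert survival_prob(65, 10, rating=2.0) < survival_prob(65, 10)
assert survival_prob(65, 15) < survival_prob(65, 10)
```

Life expectancy follows by summing (integrating) the survival curve over the horizon, which is how a comorbidity-adjusted LE figure would be displayed alongside the 5-, 10-, and 15-year probabilities.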
Healthcare provider perceptions of clinical prediction rules
Richardson, Safiya; Khan, Sundas; McCullagh, Lauren; Kline, Myriam; Mann, Devin; McGinn, Thomas
2015-01-01
Objectives: To examine internal medicine and emergency medicine healthcare provider perceptions of the usefulness of specific clinical prediction rules.
Setting: The study took place in two academic medical centres. A web-based survey was distributed and completed by participants between 1 January and 31 May 2013.
Participants: Medical doctors, doctors of osteopathy, or nurse practitioners employed in the internal medicine or emergency medicine departments at either institution.
Primary and secondary outcome measures: The primary outcome was to identify the clinical prediction rules perceived as most useful by healthcare providers specialising in internal medicine and emergency medicine. Secondary outcomes included comparing usefulness scores of specific clinical prediction rules based on provider specialty, and evaluating associations between usefulness scores and perceived characteristics of these clinical prediction rules.
Results: Of the 401 healthcare providers asked to participate, a total of 263 (66%) completed the survey. The CHADS2 score was chosen by most internal medicine providers (72%), and the Pulmonary Embolism Rule-Out Criteria (PERC) score by most emergency medicine providers (45%), as one of the top three most useful from a list of 24 clinical prediction rules. Emergency medicine providers rated their top three significantly more positively than internal medicine providers as having a better fit into their workflow (p=0.004), helping more with decision-making (p=0.037), and better fitting into their thought process when diagnosing patients (p=0.001), and rated them, on a 10-point scale, more useful overall (p=0.009). For all providers, the perceived qualities of useful at point of care, helps with decision-making, saves time diagnosing, fits into thought process, and should be the standard of clinical care correlated highly (≥0.65) with overall 10-point usefulness scores.
Conclusions: Healthcare providers describe clear preferences for certain clinical prediction rules, based on medical specialty. PMID:26338684
Rajabi, Mohamadreza; Mansourian, Ali; Bazmani, Ahad
2012-11-01
Visceral leishmaniasis (VL) is a vector-borne disease, highly influenced by environmental factors, which is an increasing public health problem in Iran, especially in the north-western part of the country. A geographical information system was used to extract data and map environmental variables for all villages in the districts of Kalaybar and Ahar in the province of East Azerbaijan. An attempt to predict VL prevalence based on an analytical hierarchy process (AHP) module combined with ordered weighted averaging (OWA) with fuzzy quantifiers indicated that the south-eastern part of Ahar is particularly prone to high VL prevalence. With the main objective of locating the villages most at risk, the opinions of experts and specialists were generalised into a group decision-making process by means of fuzzy weighting methods and induced OWA. The prediction model was applied throughout the entire study area (even where the disease is prevalent and where data already exist), and the predicted data were compared with registered VL incidence records in each area. The results suggest that linguistic fuzzy quantifiers, guided by an AHP-OWA model, are capable of predicting susceptible locations for VL prevalence with an accuracy exceeding 80%. The group decision-making process indicated that people in 15 villages live under particularly high risk of VL contagion, i.e. villages where the disease is highly prevalent. The findings of this study are relevant for the planning of effective control strategies for VL in northwest Iran.
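The OWA aggregation with linguistic fuzzy quantifiers mentioned above can be sketched as follows: a regular increasing monotone (RIM) quantifier Q(r) = r**alpha generates rank weights, which are then applied to the criterion scores sorted in descending order. The scores and the alpha value are invented for illustration; the study derives its weights from expert linguistic quantifiers such as "most".

```python
def rim_weights(n, alpha):
    """OWA weights from a RIM quantifier Q(r) = r**alpha:
    w_i = Q(i/n) - Q((i-1)/n), which always sum to 1."""
    return [(i / n) ** alpha - ((i - 1) / n) ** alpha for i in range(1, n + 1)]

def owa(values, weights):
    """Ordered weighted averaging: weights attach to rank positions of the
    descending-sorted scores, not to the criteria themselves."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Three hypothetical environmental risk scores for one village, alpha = 2
# (an "andlike" quantifier that emphasizes the lower-ranked scores).
risk = owa([0.9, 0.4, 0.7], rim_weights(3, 2.0))
assert abs(risk - 5.0 / 9.0) < 1e-9
```

With alpha = 1 the OWA reduces to the plain arithmetic mean, which is a convenient sanity check when wiring such a model up.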
Big data learning and suggestions in modern apps
NASA Astrophysics Data System (ADS)
Sharma, G.; Nadesh, R. K.; ArivuSelvan, K.
2017-11-01
Among the many tasks involved in emergent location-based applications, such as recommending touring places or destination-based advertising, destination prediction is vital. Destination prediction involves determining the probability of a location (destination) based on historical trajectories. In this paper, a destination-prediction approach based on a probabilistic machine-learning model (a feed-forward neural network) is presented, which works by observing a driver's habits. Some individuals drive to the same locations, such as work, via the same route every day of the working week. Here, real-time driving data are streamed through a Kafka queue into Apache Storm for real-time processing, with the data finally stored in MongoDB.
Model-based influences on humans' choices and striatal prediction errors.
Daw, Nathaniel D; Gershman, Samuel J; Seymour, Ben; Dayan, Peter; Dolan, Raymond J
2011-03-24
The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. Copyright © 2011 Elsevier Inc. All rights reserved.
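A minimal sketch of the hybrid model class referred to above, with illustrative names and values: a model-free TD update yields a reward prediction error (RPE), and choice values mix model-based and model-free estimates with a weight w, which is the quantity such studies fit to choices and compare against striatal BOLD signals.

```python
def td_update(q_mf, action, reward, alpha=0.3):
    """Model-free TD(0) update driven by the reward prediction error (RPE)."""
    rpe = reward - q_mf[action]
    q_mf[action] += alpha * rpe
    return rpe

def hybrid_values(q_mb, q_mf, w):
    """Weighted mixture of model-based and model-free action values;
    w is the model-based weight. Names and numbers are illustrative."""
    return {a: w * q_mb[a] + (1.0 - w) * q_mf[a] for a in q_mb}

q_mf = {"left": 0.5, "right": 0.2}
rpe = td_update(q_mf, "left", 1.0)  # RPE = 1.0 - 0.5 = 0.5
q = hybrid_values({"left": 0.8, "right": 0.1}, q_mf, w=0.6)
```

At w = 0 the hybrid collapses to a pure model-free learner; the study's finding is that neither extreme fits choices or the striatal signal as well as an intermediate mixture.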
Real-Time Analysis of a Sensor's Data for Automated Decision Making in an IoT-Based Smart Home.
Khan, Nida Saddaf; Ghani, Sayeed; Haider, Sajjad
2018-05-25
IoT devices frequently generate large volumes of streaming data, and in order to take advantage of these data, their temporal patterns must be learned and identified. Streaming data analysis has become popular after being successfully used in many applications, including forecasting electricity load, stock market prices, and weather conditions. Artificial Neural Networks (ANNs) have been successfully utilized in learning the interesting patterns/behaviors embedded in the data and forecasting future values based on them. One such pattern is modelled and learned in the present study to identify the occurrence of a specific pattern in a Water Management System (WMS). This prediction supports an automated decision system that switches OFF a hydraulic suction pump at the appropriate time. Three types of ANN, namely Multi-Input Multi-Output (MIMO), Multi-Input Single-Output (MISO), and Recurrent Neural Network (RNN), have been compared for multi-step-ahead forecasting on a sensor's streaming data. Experiments have shown that the RNN has the best performance among the three models, and based on its predictions, a system can be implemented to make the best decision with 86% accuracy.
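Multi-step-ahead forecasting of the kind compared above can be illustrated with a deliberately simple stand-in: a least-squares AR(1) model (in place of the paper's ANNs) applied recursively, feeding each prediction back as the next input. All data and names are invented for illustration.

```python
def fit_ar1(series):
    """Least-squares AR(1) fit: x[t+1] ~ a * x[t] + b. A simple stand-in
    for the MIMO/MISO/RNN models compared in the study."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def forecast_recursive(series, steps, model):
    """Recursive multi-step-ahead forecasting: each prediction is fed
    back as the input for the next step."""
    a, b = model
    x, out = series[-1], []
    for _ in range(steps):
        x = a * x + b
        out.append(x)
    return out

model = fit_ar1([1.0, 2.0, 3.0, 4.0, 5.0])
preds = forecast_recursive([1.0, 2.0, 3.0, 4.0, 5.0], 3, model)
```

A MIMO-style model would instead emit all future steps in one shot; the recursive (MISO-like) scheme above is simpler but lets one-step errors compound across the horizon.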
2016-10-01
Reports an error in "When Does Making Detailed Predictions Make Predictions Worse?" by Theresa F. Kelly and Joseph P. Simmons (Journal of Experimental Psychology: General, advanced online publication, Aug 8, 2016, np). In the article, the symbols in Figure 2 were inadvertently altered in production. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-37952-001.) In this article, we investigate whether making detailed predictions about an event worsens other predictions of the event. Across 19 experiments, 10,896 participants, and 407,045 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes useless or redundant information more accessible and thus more likely to be incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of events will and will not be susceptible to the negative effect of making detailed predictions.
Network approaches for expert decisions in sports.
Glöckner, Andreas; Heinen, Thomas; Johnson, Joseph G; Raab, Markus
2012-04-01
This paper focuses on a model comparison to explain choices based on gaze behavior via simulation procedures. We tested two classes of models, a parallel constraint satisfaction (PCS) artificial neural network model and an accumulator model, in a handball decision-making task from a lab experiment. Both models predict action in an option-generation task in which options can be chosen from the perspective of a playmaker in handball (i.e., passing to another player or shooting at the goal). Model simulations are based on a dataset of generated options together with gaze behavior measurements from 74 expert handball players for 22 pieces of video footage. We implemented both classes of models as deterministic vs. probabilistic models, including and excluding fitted parameters. Results indicated that both classes of models can fit and predict participants' initially generated options based on gaze behavior data, and that overall, the classes of models performed about equally well. Early fixations were particularly predictive of choices. We conclude that the analysis of complex environments via network approaches can be successfully applied to the field of experts' decision making in sports and provides perspectives for further theoretical developments. Copyright © 2011 Elsevier B.V. All rights reserved.
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on an active learning kriging model that need only predict the sign of the performance function correctly is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of the failure probability from the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
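The key claim above, that only the sign of the performance function matters, can be seen in a plain (non-interval) Monte Carlo sketch: the failure-probability estimator never uses the magnitude of g, only whether it is negative. The toy limit state and names below are assumptions for illustration, not the paper's examples.

```python
import random

def failure_probability(g, sample, n=20000, seed=0):
    """Plain Monte Carlo estimate of Pf = P(g(X) < 0). Only the SIGN of the
    performance function g is ever used, which is why a surrogate that merely
    classifies the sign correctly can replace g without loss of accuracy."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if g(sample(rng)) < 0) / n

# Toy limit state: g(x) = x with X ~ Uniform(-1, 1), so Pf is about 0.5.
pf = failure_probability(lambda x: x, lambda rng: rng.uniform(-1.0, 1.0))
assert abs(pf - 0.5) < 0.03
```

Under evidence theory one would run this over each focal element's interval to get belief/plausibility bounds on Pf, which is where the interval Monte Carlo and KKT-based optimization in the paper come in.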
Drifter-based estimate of the 5 year dispersal of Fukushima-derived radionuclides
NASA Astrophysics Data System (ADS)
Rypina, I. I.; Jayne, S. R.; Yoshida, S.; Macdonald, A. M.; Buesseler, K.
2014-11-01
Employing some 40 years of North Pacific drifter-track observations from the Global Drifter Program database, statistics defining the horizontal spread of radionuclides from Fukushima nuclear power plant into the Pacific Ocean are investigated over a time scale of 5 years. A novel two-iteration method is employed to make the best use of the available drifter data. Drifter-based predictions of the temporal progression of the leading edge of the radionuclide distribution are compared to observed radionuclide concentrations from research surveys occupied in 2012 and 2013. Good agreement between the drifter-based predictions and the observations is found.
Modelling ecological systems in a changing world
Evans, Matthew R.
2012-01-01
The world is changing at an unprecedented rate. In such a situation, we need to understand the nature of the change and to make predictions about the way in which it might affect systems of interest; often we may also wish to understand what might be done to mitigate the predicted effects. In ecology, we usually make such predictions (or forecasts) by making use of mathematical models that describe the system and projecting them into the future, under changed conditions. Approaches emphasizing the desirability of simple models with analytical tractability and those that use assumed causal relationships derived statistically from data currently dominate ecological modelling. Although such models are excellent at describing the way in which a system has behaved, they are poor at predicting its future state, especially in novel conditions. In order to address questions about the impact of environmental change, and to understand what, if any, action might be taken to ameliorate it, ecologists need to develop the ability to project models into novel, future conditions. This will require the development of models based on understanding the processes that result in a system behaving the way it does, rather than relying on a description of the system, as a whole, remaining valid indefinitely. PMID:22144381
Henriques, D. A.; Ladbury, J. E.; Jackson, R. M.
2000-01-01
The prediction of binding energies from the three-dimensional (3D) structure of a protein-ligand complex is an important goal of biophysics and structural biology. Here, we critically assess the use of empirical, solvent-accessible surface area-based calculations for predicting the binding of the Src SH2 domain to a series of tyrosyl phosphopeptides based on the high-affinity ligand from the hamster middle T antigen (hmT), in which the residue in the pY+3 position has been changed. Two other peptides, based on the C-terminal regulatory site of the Src protein and on the platelet-derived growth factor receptor (PDGFR), are also investigated. We take into account the effects of proton linkage on binding, and test five different surface area-based models with different treatments of the contributions of conformational change and protein solvation. These differences relate to the treatment of conformational flexibility in the peptide ligand and to the inclusion of proximal ordered solvent molecules in the surface area calculations. This allowed a range of thermodynamic state functions (ΔCp, ΔS, ΔH, and ΔG) to be calculated directly from structure. Comparison with the experimentally derived data shows little agreement for the interaction of the Src SH2 domain with the range of tyrosyl phosphopeptides. Furthermore, the choice of model for conformational change and solvation has a dramatic effect on the calculated thermodynamic functions, making the predicted binding energies highly model dependent. While empirical, solvent-accessible surface area-based calculations are becoming widely adopted to interpret thermodynamic data, this study highlights potential problems with the application and interpretation of this type of approach. There is undoubtedly some agreement between predicted and experimentally determined thermodynamic parameters; however, the tolerance of this approach is not sufficient to make it ubiquitously applicable.
PMID:11106171
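For context (a textbook form, not taken from the abstract): calculations of this kind typically parameterize heat-capacity changes as a linear function of the change in apolar and polar solvent-accessible surface area on binding,

```latex
\Delta C_p \;\approx\; \Delta c_{ap}\,\Delta\mathrm{ASA}_{ap} \;+\; \Delta c_{pol}\,\Delta\mathrm{ASA}_{pol}
```

where the coefficients are empirically fitted constants and ΔH and ΔS are parameterized analogously. Because ΔASA depends directly on which ligand conformations and ordered waters enter the surface area calculation, the computed thermodynamics inherit exactly the model dependence the study reports.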
Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture
Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang
2016-01-01
The remaining useful life (RUL) of a lithium-ion battery is closely tied to its capacity degradation trajectory. Because of self-recharge and capacity-regeneration effects, these trajectories are multimodal, a property that traditional prediction models such as support vector machines (SVM) or Gaussian process regression (GPR) cannot accurately characterize. This paper proposes a novel RUL prediction method based on a Gaussian process mixture (GPM), which handles multimodality by fitting different segments of a trajectory with separate GPR models, so that the small differences among segments can be captured. The method is shown to be effective by its predictive results in experiments on two commercial rechargeable 18650-type lithium-ion batteries provided by NASA. A performance comparison among the models shows that the GPM is more accurate than the SVM and the GPR. In addition, the GPM yields a predictive confidence interval, which makes its predictions more reliable than those of the traditional models. PMID:27632176
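A minimal sketch of the segment-wise idea behind a GP mixture (the synthetic data, the known change point, and scikit-learn's GPR are assumptions for illustration; the paper's GPM learns the mixture structure from data):

```python
# Hypothetical sketch: fit separate Gaussian Process regressors to segments of
# a battery capacity trajectory, approximating the Gaussian Process Mixture idea.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
cycles = np.arange(0, 120)
# Synthetic capacity: two decay segments with a regeneration jump at cycle 60
capacity = np.where(cycles < 60, 1.0 - 0.002 * cycles,
                    0.92 - 0.003 * (cycles - 60))
capacity = capacity + rng.normal(0, 0.002, size=cycles.size)

segments = [cycles < 60, cycles >= 60]  # change point assumed known here
models = []
for mask in segments:
    gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1e-4),
                                  normalize_y=True)
    gp.fit(cycles[mask].reshape(-1, 1), capacity[mask])
    models.append(gp)

# Extrapolate with the last segment's model; the GP also gives a confidence
# interval, mirroring the reliability argument in the abstract
mean, std = models[-1].predict(np.array([[130]]), return_std=True)
lower, upper = mean[0] - 1.96 * std[0], mean[0] + 1.96 * std[0]
```

Fitting each regime separately keeps the regeneration jump from being smoothed away by a single global model.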
Comprehensible knowledge model creation for cancer treatment decision making.
Afzal, Muhammad; Hussain, Maqbool; Ali Khan, Wajahat; Ali, Taqdir; Lee, Sungyoung; Huh, Eui-Nam; Farooq Ahmad, Hafiz; Jamshed, Arif; Iqbal, Hassan; Irfan, Muhammad; Abbas Hydari, Manzar
2017-03-01
A wealth of clinical data exists in clinical documents in the form of electronic health records (EHRs). This data can be used for developing knowledge-based recommendation systems that can assist clinicians in clinical decision making and education. One of the big hurdles in developing such systems is the lack of automated mechanisms for knowledge acquisition to enable and educate clinicians in informed decision making. An automated knowledge acquisition methodology with a comprehensible knowledge model for cancer treatment (CKM-CT) is proposed. With the CKM-CT, clinical data are acquired automatically from documents. Quality of data is ensured by correcting errors and transforming various formats into a standard data format. Data preprocessing involves dimensionality reduction and missing value imputation. Predictive algorithm selection is performed on the basis of the ranking score of the weighted sum model. The knowledge builder prepares knowledge for knowledge-based services: clinical decisions and education support. Data is acquired from 13,788 head and neck cancer (HNC) documents for 3447 patients, including 1526 patients of the oral cavity site. In the data quality task, 160 staging values are corrected. In the preprocessing task, 20 attributes and 106 records are eliminated from the dataset. The Classification and Regression Trees (CRT) algorithm is selected and provides 69.0% classification accuracy in predicting HNC treatment plans, consisting of 11 decision paths that yield 11 decision rules. Our proposed methodology, CKM-CT, is helpful to find hidden knowledge in clinical documents. In CKM-CT, the prediction models are developed to assist and educate clinicians for informed decision making. The proposed methodology is generalizable to apply to data of other domains such as breast cancer with a similar objective to assist clinicians in decision making and education. Copyright © 2017 Elsevier Ltd. All rights reserved.
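The weighted sum model used for predictive-algorithm selection can be sketched as follows (the criteria, weights, and all scores except the CRT accuracy figure are invented for illustration):

```python
# Hypothetical sketch: rank candidate algorithms by a weighted sum of
# normalized criterion scores, as in weighted-sum-model selection.
criteria_weights = {"accuracy": 0.5, "interpretability": 0.3, "speed": 0.2}
algorithms = {
    "CRT": {"accuracy": 0.69, "interpretability": 0.9, "speed": 0.8},
    "SVM": {"accuracy": 0.72, "interpretability": 0.3, "speed": 0.5},
    "NB":  {"accuracy": 0.65, "interpretability": 0.7, "speed": 0.9},
}

# Weighted sum score per algorithm; highest score wins the ranking
scores = {name: sum(criteria_weights[c] * v for c, v in vals.items())
          for name, vals in algorithms.items()}
best = max(scores, key=scores.get)
```

Under these invented weights an interpretable tree model can outrank a slightly more accurate black box, which is consistent with a clinical-education use case.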
A Bayesian-Based System to Assess Wave-Driven Flooding Hazards on Coral Reef-Lined Coasts
NASA Astrophysics Data System (ADS)
Pearson, S. G.; Storlazzi, C. D.; van Dongeren, A. R.; Tissier, M. F. S.; Reniers, A. J. H. M.
2017-12-01
Many low-elevation, coral reef-lined, tropical coasts are vulnerable to the effects of climate change, sea level rise, and wave-induced flooding. The considerable morphological diversity of these coasts and the variability of the hydrodynamic forcing that they are exposed to make predicting wave-induced flooding a challenge. A process-based wave-resolving hydrodynamic model (XBeach Non-Hydrostatic, "XBNH") was used to create a large synthetic database for use in a "Bayesian Estimator for Wave Attack in Reef Environments" (BEWARE), relating incident hydrodynamics and coral reef geomorphology to coastal flooding hazards on reef-lined coasts. Building on previous work, BEWARE improves system understanding of reef hydrodynamics by examining the intrinsic reef and extrinsic forcing factors controlling runup and flooding on reef-lined coasts. The Bayesian estimator has high predictive skill for the XBNH model outputs that are flooding indicators, and was validated for a number of available field cases. It was found that, in order to accurately predict flooding hazards, water depth over the reef flat, incident wave conditions, and reef flat width are the most essential factors, whereas other factors such as beach slope and bed friction due to the presence or absence of corals are less important. BEWARE is a potentially powerful tool for use in early warning systems or risk assessment studies, and can be used to make projections about how wave-induced flooding on coral reef-lined coasts may change due to climate change.
Nakao, Takashi; Ohira, Hideki; Northoff, Georg
2012-01-01
Most experimental studies of decision-making have specifically examined situations in which a single less-predictable correct answer exists (externally guided decision-making under uncertainty). Along with such externally guided decision-making, there are instances of decision-making in which no correct answer based on external circumstances is available for the subject (internally guided decision-making). Such decisions are usually made in the context of moral decision-making as well as in preference judgment, where the answer depends on the subject’s own, i.e., internal, preferences rather than on external, i.e., circumstantial, criteria. The neuronal and psychological mechanisms that allow guidance of decisions based on more internally oriented criteria in the absence of external ones remain unclear. This study was undertaken to compare decision-making of these two kinds empirically and theoretically. First, we reviewed studies of decision-making to clarify experimental–operational differences between externally guided and internally guided decision-making. Second, using multi-level kernel density analysis, a whole-brain-based quantitative meta-analysis of neuroimaging studies was performed. Our meta-analysis revealed that the neural network used predominantly for internally guided decision-making differs from that for externally guided decision-making under uncertainty. This result suggests that studying only externally guided decision-making under uncertainty is insufficient to account for decision-making processes in the brain. Finally, based on the review and results of the meta-analysis, we discuss the differences and relations between decision-making of these two types in terms of their operational, neuronal, and theoretical characteristics. PMID:22403525
Why significant variables aren't automatically good predictors.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2015-11-10
Thus far, genome-wide association studies (GWAS) have been disappointing: investigators have been unable to use the identified, statistically significant variants in complex diseases to make predictions useful for personalized medicine. Why are significant variables not leading to good prediction of outcomes? We point out that this problem is prevalent in simple as well as complex data, in the sciences as well as the social sciences. We offer a brief explanation and some statistical insight into why higher significance cannot automatically imply stronger predictivity, and we illustrate this through simulations and a real breast cancer example. We also demonstrate that highly predictive variables do not necessarily appear highly significant, and thus evade researchers using significance-based methods. What makes variables good for prediction versus significance depends on different properties of the underlying distributions. If prediction is the goal, we must lay aside significance as the only selection standard. We suggest that progress in prediction requires a new research agenda: searching for a criterion that retrieves highly predictive, rather than highly significant, variables. We offer an alternative approach that was not designed for significance, the partition retention method, which was very effective in predicting on a long-studied breast cancer data set, reducing the classification error rate from 30% to 8%.
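The paper's central point can be shown with a minimal simulation (all numbers here are illustrative and are not from the study): with a large enough sample, a variable with a tiny effect is overwhelmingly "significant" yet nearly useless for prediction.

```python
# Illustrative simulation: high significance without predictivity.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.normal(size=n)
y = 0.02 * x + rng.normal(size=n)  # true effect is tiny

r = np.corrcoef(x, y)[0, 1]
t = r * np.sqrt((n - 2) / (1 - r**2))  # t-statistic for H0: no association
r_squared = r**2                       # variance explained, a proxy for predictivity

# t is far beyond any significance threshold, yet x explains well under
# 1% of the variance in y, so it is a poor predictor on its own.
```

The significance grows with sample size while the predictive value stays fixed by the effect size, which is exactly why the two criteria diverge.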
NASA Astrophysics Data System (ADS)
Totani, Tomonori; Takeuchi, Tsutomu T.
2002-05-01
We give an explanation for the origin of various properties observed in local infrared galaxies and make predictions for galaxy counts and cosmic background radiation (CBR) using a new model extended from that for optical/near-infrared galaxies. Important new characteristics of this study are that (1) mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies and that (2) the large-grain dust temperature Tdust is calculated based on a physical consideration for energy balance rather than by using the empirical relation between Tdust and total infrared luminosity LIR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, LIR-Tdust correlation, and infrared luminosity function are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR using this model. We found results considerably different from those of most previous works based on the empirical LIR-Tdust relation; especially, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K), as often seen in starburst galaxies or ultraluminous infrared galaxies in the local and high-z universe. This indicates that intense starbursts of forming elliptical galaxies should have occurred at z~2-3, in contrast to the previous results that significant starbursts beyond z~1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. 
On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The intergalactic optical depth of TeV gamma rays based on our model is also presented.
Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Mengshoel, Ole
2008-01-01
Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty such as sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is then used to compute the most probable fault candidates.
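A toy sketch of the underlying idea, scoring fault candidates by a probabilistic measure of fit rather than a binary consistency check (the sensors, faults, and Gaussian noise model are invented; the paper uses automatically generated Bayesian networks rather than this direct likelihood scoring):

```python
# Hypothetical sketch: rank fault candidates by the log-likelihood of the
# observed sensor values under each candidate's predicted behavior.
import math

def gaussian_loglik(obs, pred, sigma):
    # Log-density of the observation under Gaussian sensor noise
    return -0.5 * math.log(2 * math.pi * sigma**2) - (obs - pred)**2 / (2 * sigma**2)

# Assumed model predictions for each fault candidate, per sensor
candidates = {
    "nominal":     {"temp": 50.0, "pressure": 1.00},
    "valve_stuck": {"temp": 50.0, "pressure": 1.40},
    "heater_fail": {"temp": 35.0, "pressure": 1.00},
}
noise = {"temp": 2.0, "pressure": 0.05}     # per-sensor noise levels
observed = {"temp": 49.0, "pressure": 1.38}

scores = {fault: sum(gaussian_loglik(observed[s], preds[s], noise[s])
                     for s in preds)
          for fault, preds in candidates.items()}
best = max(scores, key=scores.get)  # most probable fault candidate
```

Graded likelihoods let a noisy but informative sensor contribute to candidate ranking instead of being forced through a consistent/inconsistent threshold.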
Chronic Motivational State Interacts with Task Reward Structure in Dynamic Decision-Making
Cooper, Jessica A.; Worthy, Darrell A.; Maddox, W. Todd
2015-01-01
Research distinguishes between a habitual, model-free system motivated toward immediately rewarding actions, and a goal-directed, model-based system motivated toward actions that improve future state. We examined the balance of processing in these two systems during state-based decision-making. We tested a regulatory fit hypothesis (Maddox & Markman, 2010) that predicts that global trait motivation affects the balance of habitual- vs. goal-directed processing but only through its interaction with the task framing as gain-maximization or loss-minimization. We found support for the hypothesis that a match between an individual’s chronic motivational state and the task framing enhances goal-directed processing, and thus state-based decision-making. Specifically, chronic promotion-focused individuals under gain-maximization and chronic prevention-focused individuals under loss-minimization both showed enhanced state-based decision-making. Computational modeling indicates that individuals in a match between global chronic motivational state and local task reward structure engaged more goal-directed processing, whereas those in a mismatch engaged more habitual processing. PMID:26520256
HUMAN DECISIONS AND MACHINE PREDICTIONS.
Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil
2018-02-01
Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; and these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals. JEL Codes: C10 (Econometric and statistical methods and methodology), C55 (Large datasets: Modeling and analysis), K40 (Legal procedure, the legal system, and illegal behavior).
PMID:29755141
Wright, Julie A.; Velicer, Wayne F.; Prochaska, James O.
2009-01-01
This study evaluated how well predictions from the transtheoretical model (TTM) generalized from smoking to diet. Longitudinal data were used from a randomized control trial on reducing dietary fat consumption in adults (n = 1207) recruited from primary care practices. Predictive power was evaluated by making a priori predictions of the magnitude of change expected in the TTM constructs of temptation, pros and cons, and 10 processes of change when an individual transitions between the stages of change. Generalizability was evaluated by testing predictions based on smoking data. Three sets of predictions were made for each stage: Precontemplation (PC), Contemplation (C) and Preparation (PR), based on stage transition categories of no progress, progress and regression determined by stage at baseline versus stage at the 12-month follow-up. Univariate analysis of variance between stage transition groups was used to calculate the effect size [omega squared (ω²)]. For diet predictions based on diet data, there was a high degree of confirmation: 92%, 95% and 92% for PC, C and PR, respectively. For diet predictions based on smoking data, 77%, 79% and 85% were confirmed, respectively, suggesting a moderate degree of generalizability. This study revised effect size estimates for future theory testing on the TTM applied to dietary fat. PMID:18400785
Thermal breakage of a discrete one-dimensional string.
Lee, Chiu Fan
2009-09-01
We study the thermal breakage of a discrete one-dimensional string, with open and fixed ends, in the heavily damped regime. Basing our analysis on the multidimensional Kramers escape theory, we are able to make analytical predictions on the mean breakage rate and on the breakage propensity with respect to the breakage location on the string. We then support our predictions with numerical simulations.
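For reference (a textbook result, not taken from the abstract), the overdamped Kramers escape rate underlying such mean-breakage-rate predictions has the form

```latex
k \;\approx\; \frac{\omega_0\,\omega_b}{2\pi\gamma}\,
\exp\!\left(-\frac{\Delta E}{k_B T}\right)
```

where ω₀ and ω_b are the angular frequencies at the potential minimum and at the barrier top, γ is the damping coefficient, and ΔE is the barrier height. Each bond's breakage rate follows this Arrhenius-like form, and the string's overall breakage rate and location-dependent propensity then aggregate such per-bond rates.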
Predicting the Timing and Location of the next Hawaiian Volcano
ERIC Educational Resources Information Center
Russo, Joseph; Mattox, Stephen; Kildau, Nicole
2010-01-01
The wealth of geologic data on Hawaiian volcanoes makes them ideal for study by middle school students. In this paper the authors use existing data on the age and location of Hawaiian volcanoes to predict the location of the next Hawaiian volcano and when it will begin to grow on the floor of the Pacific Ocean. An inquiry-based lesson is also…
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Celaya, Jose R.; Goebel, Kai; Biswas, Gautam
2012-01-01
Electrolytic capacitors are used in several applications, ranging from power supplies for safety-critical avionics equipment to power drivers for electro-mechanical actuators. Past experience shows that capacitors tend to degrade and fail faster when subjected to high electrical or thermal stress during operation, which makes them good candidates for prognostics and health management. Model-based prognostics captures system knowledge in the form of physics-based component models in order to obtain accurate end-of-life predictions based on a component's current state of health and its anticipated future use and operational conditions. The focus of this paper is on deriving first-principles degradation models for thermal stress conditions and implementing a Bayesian framework for making remaining-useful-life predictions. Data collected from simultaneous experiments are used to validate the models. Our overall goal is to derive accurate models of capacitor degradation and to use them to predict remaining useful life in DC-DC converters.
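A simplified sketch of Bayesian degradation-parameter estimation of the kind described (the exponential decay model, grid prior, noise level, and end-of-life threshold are all assumptions for illustration, not the paper's first-principles model):

```python
# Hypothetical sketch: Bayesian grid update of a degradation-rate parameter
# from noisy capacitance measurements, then projecting end of life.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.001                       # assumed degradation rate per hour
t = np.arange(0.0, 200.0, 10.0)
obs = np.exp(-true_rate * t) + rng.normal(0, 0.01, t.size)  # normalized capacitance

rates = np.linspace(0.0001, 0.003, 300)  # grid prior over the rate parameter
log_post = np.array([-0.5 * np.sum((obs - np.exp(-lam * t))**2) / 0.01**2
                     for lam in rates])
post = np.exp(log_post - log_post.max())
post /= post.sum()                       # normalized posterior on the grid

rate_hat = rates[post.argmax()]          # MAP estimate of the degradation rate
eol = np.log(1 / 0.8) / rate_hat         # time until capacitance drops to 80%
```

The posterior over the rate translates directly into an uncertainty band on the end-of-life prediction, which is the practical payoff of the Bayesian framing.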
Does warmer China land attract more super typhoons?
Xu, Xiangde; Peng, Shiqiu; Yang, Xiangjing; Xu, Hongxiong; Tong, Daniel Q.; Wang, Dongxiao; Guo, Yudi; Chan, Johnny C. L.; Chen, Lianshou; Yu, Wei; Li, Yineng; Lai, Zhijuan; Zhang, Shengjun
2013-01-01
Accurate prediction of where and when typhoons (called hurricanes when they form over the North Atlantic Ocean) will make landfall is critical to protecting human lives and property. Although the traditional method of typhoon track prediction based on steering flow theory has proven effective in most situations, it has failed in some cases. Our analysis of long-term Chinese typhoon records reveals that typhoons, and especially super typhoons (those with maximum sustained surface winds greater than 51 m s−1), have tended to make landfall toward warmer land in China over the past 50 years (1960–2009). Numerical sensitivity experiments using an advanced atmospheric model further confirm this finding. Our finding suggests an alternative approach to predicting the landfall tracks of the most devastating typhoons in southeastern China. PMID:23519311
Information theory of adaptation in neurons, behavior, and mood.
Sharpee, Tatyana O; Calhoun, Adam J; Chalasani, Sreekanth H
2014-04-01
The ability to make accurate predictions about future stimuli and about the consequences of one's actions is crucial for survival and appropriate decision-making. Such predictions are constantly being made at different levels of the nervous system, as evidenced by adaptation to stimulus parameters in sensory coding and by learning of an up-to-date model of the environment at the behavioral level. This review discusses recent findings that the actions of neurons and animals are selected based on detailed stimulus history in such a way as to maximize information for achieving the task at hand. Information maximization dictates not only how sensory coding should adapt to various statistical aspects of stimuli, but also that the reward function should adapt to match the predictive information from past to future. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hilbig, Benjamin E; Pohl, Rüdiger F
2009-09-01
According to part of the adaptive toolbox notion of decision making known as the recognition heuristic (RH), the decision process in comparative judgments-and its duration-is determined by whether recognition discriminates between objects. By contrast, some recently proposed alternative models predict that choices largely depend on the amount of evidence speaking for each of the objects and that decision times thus depend on the evidential difference between objects, or the degree of conflict between options. This article presents 3 experiments that tested predictions derived from the RH against those from alternative models. All experiments used naturally recognized objects without teaching participants any information and thus provided optimal conditions for application of the RH. However, results supported the alternative, evidence-based models and often conflicted with the RH. Recognition was not the key determinant of decision times, whereas differences between objects with respect to (both positive and negative) evidence predicted effects well. In sum, alternative models that allow for the integration of different pieces of information may well provide a better account of comparative judgments. (c) 2009 APA, all rights reserved.
Cooper, Jeffrey C.; Dunne, Simon; Furey, Teresa; O’Doherty, John P.
2012-01-01
Humans frequently make real-world decisions based on rapid evaluations of minimal information – for example, should we talk to an attractive stranger at a party? Little is known, however, about how the brain makes rapid evaluations with real and immediate social consequences. To address this question, we scanned participants with FMRI while they viewed photos of individuals that they subsequently met at real-life “speed-dating” events. Neural activity in two areas of dorsomedial prefrontal cortex, paracingulate cortex and rostromedial prefrontal cortex (RMPFC), was predictive of whether each individual would be ultimately pursued for a romantic relationship or rejected. Activity in these areas was attributable to two distinct components of romantic evaluation: either consensus judgments about physical beauty (paracingulate cortex) or individualized preferences based on a partner’s perceived personality (RMPFC). These data identify novel computational roles for these regions of the dorsomedial prefrontal cortex in even very rapid social evaluations. Even a first glance, then, can accurately predict romantic desire, but that glance involves a mix of physical and psychological judgments that depend on specific regions of dorsomedial prefrontal cortex. PMID:23136406
Zhou, Xiuze; Lin, Fan; Yang, Lvqing; Nie, Jing; Tan, Qian; Zeng, Wenhua; Zhang, Nian
2016-01-01
With the continuous expansion of cloud computing platforms and the rapid growth in users and applications, how to efficiently use system resources to improve overall performance has become a crucial issue. To address it, this paper proposes a method that uses an analytic hierarchy process group decision (AHPGD) to evaluate the load state of server nodes. Training was carried out using a hybrid hierarchical genetic algorithm (HHGA) to optimize a radial basis function neural network (RBFNN). The AHPGD produces an aggregate load indicator for the virtual machines in the cloud, which serves as the input to the predictive RBFNN. This paper also proposes a new dynamic load-balancing scheduling algorithm combined with weighted round-robin: it uses the periodically predicted load values of nodes, based on AHPGD and the HHGA-optimized RBFNN, to calculate the corresponding node weights and update them continually. This retains the advantages of the static weighted round-robin algorithm while avoiding its shortcomings.
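The scheduling idea can be sketched as follows (the load values stand in for the RBFNN's predictions, and the weight formula is an invented placeholder for the paper's weight calculation):

```python
# Hypothetical sketch: derive round-robin weights from predicted node load
# and dispatch requests by weighted round-robin.
from itertools import cycle

def weights_from_predicted_load(predicted_load, scale=10):
    # Lower predicted load -> higher weight (more requests routed to the node)
    return {node: max(1, round(scale * (1 - load)))
            for node, load in predicted_load.items()}

def weighted_round_robin(weights):
    # Expand each node 'weight' times and cycle through the schedule
    schedule = [node for node, w in weights.items() for _ in range(w)]
    return cycle(schedule)

predicted = {"node1": 0.2, "node2": 0.5, "node3": 0.9}  # from the load predictor
w = weights_from_predicted_load(predicted)
dispatcher = weighted_round_robin(w)
first_16 = [next(dispatcher) for _ in range(16)]  # lightly loaded node1 dominates
```

Recomputing the weights each prediction period is what makes the scheme dynamic rather than static weighted round-robin.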
SVM and SVM Ensembles in Breast Cancer Prediction.
Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong
2017-01-01
Breast cancer is an all too common disease in women, making its effective prediction an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct an SVM classifier, it is first necessary to choose the kernel function, and different kernel functions can result in different prediction performance. However, very few studies have examined the prediction performance of SVM with different kernel functions. Moreover, it is unknown whether SVM classifier ensembles, which have been proposed to improve the performance of single classifiers, can outperform single SVM classifiers in breast cancer prediction. The aim of this paper is therefore to fully assess the prediction performance of SVM and SVM ensembles over small- and large-scale breast cancer datasets. The classification accuracy, ROC, F-measure, and training times of SVM and SVM ensembles are compared. The experimental results show that linear-kernel SVM ensembles with bagging and RBF-kernel SVM ensembles with boosting are the better choices for a small-scale dataset, where feature selection should be performed in the data pre-processing stage. For a large-scale dataset, RBF-kernel SVM ensembles with boosting perform better than the other classifiers.
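As an illustration of the ensemble setup (using scikit-learn's built-in breast cancer dataset as a stand-in for the paper's datasets; this is not the authors' code or their evaluation protocol):

```python
# Hypothetical sketch: a bagging ensemble of linear-kernel SVMs versus a
# single linear-kernel SVM on a small breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = make_pipeline(StandardScaler(), SVC(kernel="linear"))
ensemble = BaggingClassifier(
    make_pipeline(StandardScaler(), SVC(kernel="linear")),
    n_estimators=10, random_state=0)

single.fit(X_tr, y_tr)
ensemble.fit(X_tr, y_tr)
acc_single = single.score(X_te, y_te)      # held-out accuracy, single SVM
acc_ensemble = ensemble.score(X_te, y_te)  # held-out accuracy, bagged SVMs
```

Swapping `SVC(kernel="linear")` for an RBF kernel and bagging for a boosting wrapper reproduces the other configurations the paper compares.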
PMID:28060807
SAbPred: a structure-based antibody prediction server
Dunbar, James; Krawczyk, Konrad; Leem, Jinwoo; Marks, Claire; Nowak, Jaroslaw; Regep, Cristian; Georges, Guy; Kelm, Sebastian; Popovic, Bojana; Deane, Charlotte M.
2016-01-01
SAbPred is a server that makes predictions of the properties of antibodies focusing on their structures. Antibody informatics tools can help improve our understanding of immune responses to disease and aid in the design and engineering of therapeutic molecules. SAbPred is a single platform containing multiple applications which can: number and align sequences; automatically generate antibody variable fragment homology models; annotate such models with estimated accuracy alongside sequence and structural properties including potential developability issues; predict paratope residues; and predict epitope patches on protein antigens. The server is available at http://opig.stats.ox.ac.uk/webapps/sabpred. PMID:27131379
Thomas, John M; Fried, Terri R
2018-05-01
Studies examining the attitudes of clinicians toward prognostication for older adults have focused on life expectancy prediction. Little is known about whether clinicians approach prognostication in other ways. To describe how clinicians approach prognostication for older adults, defined broadly as making projections about patients' future health. In five focus groups, 30 primary care clinicians from community-based, academic-affiliated, and Veterans Affairs primary care practices were given open-ended questions about how they make projections about their patients' future health and how this informs the approach to care. Content analysis was used to organize responses into themes. Clinicians spoke about future health in terms of a variety of health outcomes in addition to life expectancy, including independence in activities and decision making, quality of life, avoiding hospitalization, and symptom burden. They described approaches in predicting these health outcomes, including making observations about the overall trajectory of patients to predict health outcomes and recognizing increased risk for adverse health outcomes. Clinicians expressed reservations about using estimates of mortality risk and life expectancy to think about and communicate patients' future health. They discussed ways in which future research might help them in thinking about and discussing patients' future health to guide care decisions, including identifying when and whether interventions might impact future health. The perspectives of primary care clinicians in this study confirm that prognostic considerations can go beyond precise estimates of mortality risk and life expectancy to include a number of outcomes and approaches to predicting those outcomes. Published by Elsevier Inc.
Influence of Emotionally Charged Information on Category-Based Induction
Zhu, Jennifer; Murphy, Gregory L.
2013-01-01
Categories help us make predictions, or inductions, about new objects. However, we cannot always be certain that a novel object belongs to the category we are using to make predictions. In such cases, people should use multiple categories to make inductions. Past research finds that people often use only the most likely category to make inductions, even if it is not certain. In two experiments, subjects read stories and answered questions about items whose categorization was uncertain. In Experiment 1, the less likely category was either emotionally neutral or dangerous (emotionally charged or likely to pose a threat). Subjects used multiple categories in induction when one of the categories was dangerous but not when they were all neutral. In Experiment 2, the most likely category was dangerous. Here, people used multiple categories, but there was also an effect of avoidance, in which people denied that dangerous categories were the most likely. The attention-grabbing power of dangerous categories may be balanced by a higher-level strategy to reject them. PMID:23372700
What is adaptive about adaptive decision making? A parallel constraint satisfaction account.
Glöckner, Andreas; Hilbig, Benjamin E; Jekel, Marc
2014-12-01
There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. Herein, we contrast two frameworks which account for adaptive decision making, namely broad and general single-mechanism accounts vs. multi-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes (PCS-DM) and contrast it theoretically and empirically against a multi-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns - as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking), and from cross-predicting choice consistently corroborate single-mechanisms accounts in general, and the proposed parallel constraint satisfaction model for decision making in particular. Copyright © 2014 Elsevier B.V. All rights reserved.
Continuous track paths reveal additive evidence integration in multistep decision making.
Buc Calderon, Cristian; Dewulf, Myrtille; Gevers, Wim; Verguts, Tom
2017-10-03
Multistep decision making pervades daily life, but its underlying mechanisms remain obscure. We distinguish four prominent models of multistep decision making, namely serial stage, hierarchical evidence integration, hierarchical leaky competing accumulation (HLCA), and probabilistic evidence integration (PEI). To empirically disentangle these models, we design a two-step reward-based decision paradigm and implement it in a reaching task experiment. In a first step, participants choose between two potential upcoming choices, each associated with two rewards. In a second step, participants choose between the two rewards selected in the first step. Strikingly, as predicted by the HLCA and PEI models, the first-step decision dynamics were initially biased toward the choice representing the highest sum/mean before being redirected toward the choice representing the maximal reward (i.e., initial dip). Only HLCA and PEI predicted this initial dip, suggesting that first-step decision dynamics depend on additive integration of competing second-step choices. Our data suggest that potential future outcomes are progressively unraveled during multistep decision making.
ERIC Educational Resources Information Center
Lereya, Suzet Tanya; Wolke, Dieter
2013-01-01
Background: Prenatal stress has been shown to predict persistent behavioural abnormalities in offspring. Unknown is whether prenatal stress makes children more vulnerable to peer victimisation. Methods: The current study is based on the Avon Longitudinal Study of Parents and Children, a prospective community-based study. Family adversity, maternal…
Application of Algebra Curriculum-Based Measurements for Decision Making in Middle and High School
ERIC Educational Resources Information Center
Johnson, Evelyn S.; Galow, Patricia A.; Allenger, Robert
2013-01-01
This article reports the results of a study examining the utility of curriculum-based measurement (CBM) in algebra for predicting performance on a state math assessment and informing instructional placement decisions for students in seventh, eighth, and tenth grades. Students completed six Basic Skills algebra probes across different time…
An Observation on the Spontaneous Noticing of Prospective Memory Event-Based Cues
ERIC Educational Resources Information Center
Knight, Justin B.; Meeks, J. Thadeus; Marsh, Richard L.; Cook, Gabriel I.; Brewer, Gene A.; Hicks, Jason L.
2011-01-01
In event-based prospective memory, current theories make differing predictions as to whether intention-related material can be spontaneously noticed (i.e., noticed without relying on preparatory attentional processes). In 2 experiments, participants formed an intention that was contextually associated to the final phase of the experiment, and…
A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding
ERIC Educational Resources Information Center
Cuevas, Joshua; Dawson, Bryan L.
2018-01-01
This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
Decision making under uncertainty in a spiking neural network model of the basal ganglia.
Héricé, Charlotte; Khalil, Radwa; Moftah, Marie; Boraud, Thomas; Guthrie, Martin; Garenne, André
2016-12-01
The mechanisms of decision-making and action selection are generally thought to be under the control of parallel cortico-subcortical loops connecting back to distinct areas of cortex through the basal ganglia and processing motor, cognitive and limbic modalities of decision-making. We have used these properties to develop and extend a connectionist model at a spiking neuron level based on a previous rate model approach. This model is demonstrated on decision-making tasks that have been studied in primates and the electrophysiology interpreted to show that the decision is made in two steps. To model this, we have used two parallel loops, each of which performs decision-making based on interactions between positive and negative feedback pathways. This model is able to perform two-level decision-making as in primates. We show here that, before learning, synaptic noise is sufficient to drive the decision-making process and that, after learning, the decision is based on the choice that has proven most likely to be rewarded. The model is then submitted to lesion tests, reversal learning and extinction protocols. We show that, under these conditions, it behaves in a consistent manner and provides predictions in accordance with observed experimental data.
NASA Astrophysics Data System (ADS)
Baek, Seung Ki; Minnhagen, Petter; Kim, Beom Jun
2011-07-01
In Korean culture, the names of family members are recorded in special family books. This makes it possible to follow the distribution of Korean family names far back in history. It is shown here that these name distributions are well described by a simple null model, the random group formation (RGF) model. This model makes it possible to predict how the name distributions change and these predictions are shown to be borne out. In particular, the RGF model predicts that for married women entering a collection of family books in a certain year, the occurrence of the most common family name 'Kim' should be directly proportional to the total number of married women with the same proportionality constant for all the years. This prediction is also borne out to a high degree. We speculate that it reflects some inherent social stability in the Korean culture. In addition, we obtain an estimate of the total population of the Korean culture down to the year 500 AD, based on the RGF model, and find about ten thousand Kims.
PANNZER2: a rapid functional annotation web server.
Törönen, Petri; Medlar, Alan; Holm, Liisa
2018-05-08
The unprecedented growth of high-throughput sequencing has led to an ever-widening annotation gap in protein databases. While computational prediction methods are available to make up the shortfall, a majority of public web servers are hindered by practical limitations and poor performance. Here, we introduce PANNZER2 (Protein ANNotation with Z-scoRE), a fast functional annotation web server that provides both Gene Ontology (GO) annotations and free text description predictions. PANNZER2 uses SANSparallel to perform high-performance homology searches, making bulk annotation based on sequence similarity practical. PANNZER2 can output GO annotations from multiple scoring functions, enabling users to see which predictions are robust across predictors. Finally, PANNZER2 predictions scored within the top 10 methods for molecular function and biological process in the CAFA2 NK-full benchmark. The PANNZER2 web server is updated on a monthly schedule and is accessible at http://ekhidna2.biocenter.helsinki.fi/sanspanz/. The source code is available under the GNU Public Licence v3.
Hoffmann, Janina A; von Helversen, Bettina; Rieskamp, Jörg
2014-12-01
Making accurate judgments is an essential skill in everyday life. Although how different memory abilities relate to categorization and judgment processes has been hotly debated, the question is far from resolved. We contribute to the solution by investigating how individual differences in memory abilities affect judgment performance in 2 tasks that induced rule-based or exemplar-based judgment strategies. In a study with 279 participants, we investigated how working memory and episodic memory affect judgment accuracy and strategy use. As predicted, participants switched strategies between tasks. Furthermore, structural equation modeling showed that the ability to solve rule-based tasks was predicted by working memory, whereas episodic memory predicted judgment accuracy in the exemplar-based task. Last, the probability of choosing an exemplar-based strategy was related to better episodic memory, but strategy selection was unrelated to working memory capacity. In sum, our results suggest that different memory abilities are essential for successfully adopting different judgment strategies. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Laurent, Vincent; Balleine, Bernard W
2015-04-20
The capacity to extract causal knowledge from the environment allows us to predict future events and to use those predictions to decide on a course of action. Although evidence of such causal reasoning has long been described, recent evidence suggests that using predictive knowledge to guide decision-making in this way is predicated on reasoning about causes in two quite distinct ways: choosing an action can be based on the interaction between predictive information and the consequences of that action, or, alternatively, actions can be selected based on the consequences that they do not produce. The latter counterfactual reasoning is highly adaptive because it allows us to use information about both present and absent events to guide decision-making. Nevertheless, although there is now evidence to suggest that animals other than humans, including rats and birds, can engage in causal reasoning of one kind or another, there is currently no evidence that they use counterfactual reasoning to guide choice. To assess this question, we gave rats the opportunity to learn new action-outcome relationships, after which we probed the structure of this learning by presenting excitatory and inhibitory cues predicting that the specific outcomes of their actions would either occur or would not occur. Whereas the excitors biased choice toward the action delivering the predicted outcome, the inhibitory cues selectively elevated actions predicting the absence of the inhibited outcome, suggesting that rats encoded the counterfactual action-outcome mappings and were able to use them to guide choice. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dai, Zi-Ru; Ai, Chun-Zhi; Ge, Guang-Bo; He, Yu-Qi; Wu, Jing-Jing; Wang, Jia-Yue; Man, Hui-Zi; Jia, Yan; Yang, Ling
2015-06-30
Early prediction of xenobiotic metabolism is essential for drug discovery and development. As the most important human drug-metabolizing enzyme, cytochrome P450 3A4 has a large active cavity and metabolizes a broad spectrum of substrates. The poor substrate specificity of CYP3A4 makes it a huge challenge to predict the metabolic site(s) on its substrates. This study aimed to develop a mechanism-based prediction model based on two key parameters, the binding conformation and the reaction activity of ligands, which could reveal the process of real metabolic reaction(s) and the site(s) of modification. The newly established model was applied to predict the metabolic site(s) of steroids, a class of CYP3A4-preferred substrates. Thirty-eight steroids and 12 non-steroids were randomly divided into training and test sets. Two major metabolic reactions, aliphatic hydroxylation and N-dealkylation, were involved in this study. At least one of the top three predicted metabolic sites was validated by the experimental data. The overall accuracies for the training and test sets were 82.14% and 86.36%, respectively. In summary, a mechanism-based prediction model was established for the first time, which could be used to predict the CYP3A4 metabolic site(s) on steroids with high predictive accuracy.
Ge, Shufan; Tu, Yifan; Hu, Ming
2017-01-01
Glucuronidation is the most important phase II metabolic pathway which is responsible for the clearance of many endogenous and exogenous compounds. To better understand the elimination process for compounds undergoing glucuronidation and identify compounds with desirable in vivo pharmacokinetic properties, many efforts have been made to predict in vivo glucuronidation using in vitro data. In this article, we reviewed typical approaches used in previous predictions. The problems and challenges in prediction of glucuronidation were discussed. Besides that different incubation conditions can affect the prediction accuracy, other factors including efflux / uptake transporters, enterohepatic recycling, and deglucuronidation reactions also contribute to the disposition of glucuronides and make the prediction more difficult. PBPK modeling, which can describe more complicated process in vivo, is a promising prediction strategy which may greatly improve the prediction of glucuronidation and potential DDIs involving glucuronidation. Based on previous studies, we proposed a transport-glucuronidation classification system, which was built based on the kinetics of both glucuronidation and transport of the glucuronide. This system could be a very useful tool to achieve better in vivo predictions. PMID:28966903
Prediction versus aetiology: common pitfalls and how to avoid them.
van Diepen, Merel; Ramspek, Chava L; Jager, Kitty J; Zoccali, Carmine; Dekker, Friedo W
2017-04-01
Prediction research is a distinct field of epidemiologic research, which should be clearly separated from aetiological research. Both prediction and aetiology make use of multivariable modelling, but the underlying research aim and interpretation of results are very different. Aetiology aims at uncovering the causal effect of a specific risk factor on an outcome, adjusting for confounding factors that are selected based on pre-existing knowledge of causal relations. In contrast, prediction aims at accurately predicting the risk of an outcome using multiple predictors collectively, where the final prediction model is usually based on statistically significant, but not necessarily causal, associations in the data at hand.In both scientific and clinical practice, however, the two are often confused, resulting in poor-quality publications with limited interpretability and applicability. A major problem is the frequently encountered aetiological interpretation of prediction results, where individual variables in a prediction model are attributed causal meaning. This article stresses the differences in use and interpretation of aetiological and prediction studies, and gives examples of common pitfalls. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
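The distinction the authors draw between prediction and aetiology can be made concrete with a tiny simulation. The data below are synthetic and purely illustrative: a variable can predict an outcome strongly without causing it, because both share a common cause.

```python
# Synthetic illustration: Z causes both X and Y. X is a strong
# *predictor* of Y yet has no causal effect on it, so interpreting
# X's role in a prediction model aetiologically would mislead.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=10_000)                  # common cause (confounder)
x = z + rng.normal(scale=0.1, size=10_000)   # caused by z; no effect on y
y = z + rng.normal(scale=0.1, size=10_000)   # caused by z only

r = np.corrcoef(x, y)[0, 1]
print(f"corr(x, y) = {r:.2f}")  # near 1: excellent predictor, zero causal effect
```

An aetiological analysis would adjust for Z and find X irrelevant; a prediction model would happily, and legitimately, keep X as a predictor.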
Are power calculations useful? A multicentre neuroimaging study
Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward
2014-01-01
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267
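The kind of calculation the technical note builds on can be sketched in its simplest form: the sample size per group needed to detect a standardized effect with a two-sample t-test, via the normal approximation. The voxel-based calculations in the paper additionally handle spatial structure and multiple comparisons; this sketch shows only the core idea, and the function name is ours.

```python
# Minimal power-calculation sketch: approximate n per group for a
# two-sample t-test (normal approximation), given effect size,
# significance level alpha, and desired power.
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group to detect a standardized effect size."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # power requirement
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# Detecting a medium effect (Cohen's d = 0.5) at 80% power needs
# roughly 63 participants per group under this approximation.
print(round(n_per_group(0.5)))
```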
A First Step towards a Clinical Decision Support System for Post-traumatic Stress Disorders.
Ma, Sisi; Galatzer-Levy, Isaac R; Wang, Xuya; Fenyö, David; Shalev, Arieh Y
2016-01-01
PTSD is distressing and debilitating, following a non-remitting course in about 10% to 20% of trauma survivors. Numerous risk indicators of PTSD have been identified, but individual-level prediction remains elusive. In an effort to bridge the gap between scientific discovery and practical application, we designed and implemented a clinical decision support pipeline to provide clinically relevant recommendations for trauma survivors. To meet the specific challenge of early prediction, this work uses data obtained within ten days of a traumatic event. The pipeline creates a personalized predictive model for each individual and computes quality metrics for each predictive model. Clinical recommendations are made based on both the prediction of the model and its quality, thus avoiding potentially detrimental recommendations based on insufficient information or a suboptimal model. The current pipeline outperforms acute stress disorder, a commonly used clinical risk factor for PTSD development, in terms of both sensitivity and specificity.
Mbeutcha, Aurélie; Mathieu, Romain; Rouprêt, Morgan; Gust, Kilian M; Briganti, Alberto; Karakiewicz, Pierre I; Shariat, Shahrokh F
2016-10-01
In the context of customized patient care for upper tract urothelial carcinoma (UTUC), decision-making could be facilitated by risk assessment and prediction tools. The aim of this study was to provide a critical overview of existing predictive models and to review emerging promising prognostic factors for UTUC. A literature search of articles published in English from January 2000 to June 2016 was performed using PubMed. Studies on risk group stratification models and predictive tools in UTUC were selected, together with studies on predictive factors and biomarkers associated with advanced-stage UTUC and oncological outcomes after surgery. Various predictive tools have been described for advanced-stage UTUC assessment, disease recurrence and cancer-specific survival (CSS). Most of these models are based on well-established prognostic factors such as tumor stage, grade and lymph node (LN) metastasis, but some also integrate newly described prognostic factors and biomarkers. These new prediction tools seem to reach a high level of accuracy, but they lack external validation and decision-making analysis. The combinations of patient-, pathology- and surgery-related factors together with novel biomarkers have led to promising predictive tools for oncological outcomes in UTUC. However, external validation of these predictive models is a prerequisite before their introduction into daily practice. New models predicting response to therapy are urgently needed to allow accurate and safe individualized management in this heterogeneous disease.
Effects of urban microcellular environments on ray-tracing-based coverage predictions.
Liu, Zhongyu; Guo, Lixin; Guan, Xiaowei; Sun, Jiejing
2016-09-01
The ray-tracing (RT) algorithm, which is based on geometrical optics and the uniform theory of diffraction, has become a typical deterministic approach of studying wave-propagation characteristics. Under urban microcellular environments, the RT method highly depends on detailed environmental information. The aim of this paper is to provide help in selecting the appropriate level of accuracy required in building databases to achieve good tradeoffs between database costs and prediction accuracy. After familiarization with the operating procedures of the RT-based prediction model, this study focuses on the effect of errors in environmental information on prediction results. The environmental information consists of two parts, namely, geometric and electrical parameters. The geometric information can be obtained from a digital map of a city. To study the effects of inaccuracies in geometry information (building layout) on RT-based coverage prediction, two different artificial erroneous maps are generated based on the original digital map, and systematic analysis is performed by comparing the predictions with the erroneous maps and measurements or the predictions with the original digital map. To make the conclusion more persuasive, the influence of random errors on RMS delay spread results is investigated. Furthermore, given the electrical parameters' effect on the accuracy of the predicted results of the RT model, the dielectric constant and conductivity of building materials are set with different values. The path loss and RMS delay spread under the same circumstances are simulated by the RT prediction model.
Dallmann, André; Ince, Ibrahim; Coboeken, Katrin; Eissing, Thomas; Hempel, Georg
2017-09-18
Physiologically based pharmacokinetic modeling is considered a valuable tool for predicting pharmacokinetic changes in pregnancy to subsequently guide in-vivo pharmacokinetic trials in pregnant women. The objective of this study was to extend and verify a previously developed physiologically based pharmacokinetic model for pregnant women for the prediction of pharmacokinetics of drugs metabolized via several cytochrome P450 enzymes. Quantitative information on gestation-specific changes in enzyme activity available in the literature was incorporated in a pregnancy physiologically based pharmacokinetic model and the pharmacokinetics of eight drugs metabolized via one or multiple cytochrome P450 enzymes was predicted. The tested drugs were caffeine, midazolam, nifedipine, metoprolol, ondansetron, granisetron, diazepam, and metronidazole. Pharmacokinetic predictions were evaluated by comparison with in-vivo pharmacokinetic data obtained from the literature. The pregnancy physiologically based pharmacokinetic model successfully predicted the pharmacokinetics of all tested drugs. The observed pregnancy-induced pharmacokinetic changes were qualitatively and quantitatively reasonably well predicted for all drugs. Ninety-seven percent of the mean plasma concentrations predicted in pregnant women fell within a twofold error range and 63% within a 1.25-fold error range. For all drugs, the predicted area under the concentration-time curve was within a 1.25-fold error range. The presented pregnancy physiologically based pharmacokinetic model can quantitatively predict the pharmacokinetics of drugs that are metabolized via one or multiple cytochrome P450 enzymes by integrating prior knowledge of the pregnancy-related effect on these enzymes. This pregnancy physiologically based pharmacokinetic model may thus be used to identify potential exposure changes in pregnant women a priori and to eventually support informed decision making when clinical trials are designed in this special population.
Uncertainty quantification in downscaling procedures for effective decisions in energy systems
NASA Astrophysics Data System (ADS)
Constantinescu, E. M.
2010-12-01
Weather is a major driver of both energy supply and demand, and with the massive adoption of renewable energy sources and changing economic and producer-consumer paradigms, the management of next-generation energy systems is becoming ever more challenging. Operational and planning decisions in energy systems are guided by efficiency and reliability, and therefore a central role in these decisions will be played by the ability to obtain weather forecasts with accurate uncertainty estimates. The appropriate temporal and spatial resolutions needed for effective decision-making, be it operational or planning, are not clear. It is clear, however, that temporal scales such as hourly variations of temperature or wind conditions and ramp events are essential in this process. Planning activities involve projections of weather one or more decades ahead. One sensible way to achieve this is to embed regional weather models in a global climate system; this strategy acts as a downscaling procedure. Uncertainty modeling techniques must be developed in order to quantify and minimize forecast errors, as well as to target the variables that most affect the decision-making process. We discuss the challenges of obtaining a realistic uncertainty quantification estimate using mathematical algorithms based on scalable matrix-free computations and physics-based statistical models. Making decisions for energy management systems based on future weather scenarios is a very complex problem. We focus on the challenges in generating wind power predictions based on regional weather predictions, and discuss the implications of making common assumptions about the uncertainty models.
An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai
2012-01-01
Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithm must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, accumulating additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty are misrepresenting the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we are not only able to reduce the computational load but also estimate the bounds of uncertainty in a deterministic manner, which can be useful to consider during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
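The unscented transformation at the heart of the abstract's approach can be shown in its simplest form: propagate a Gaussian through a nonlinear function using a few deterministic sigma points instead of Monte Carlo sampling. This is a one-dimensional sketch of the general technique, not the paper's battery prognostics implementation; the function name and the choice kappa = 2 are ours.

```python
# Minimal 1-D unscented transform: approximate the mean and variance
# of f(x) for x ~ N(mean, var) using three deterministic sigma points.
import numpy as np

def unscented_transform(mean, var, f, kappa=2.0):
    """Approximate E[f(x)] and Var[f(x)] for scalar x ~ N(mean, var)."""
    n = 1  # state dimension
    spread = np.sqrt((n + kappa) * var)
    sigma_points = np.array([mean, mean + spread, mean - spread])
    w0 = kappa / (n + kappa)          # weight of the central point
    wi = 1.0 / (2.0 * (n + kappa))    # weight of each outer point
    weights = np.array([w0, wi, wi])
    y = f(sigma_points)               # propagate points through f
    y_mean = np.dot(weights, y)
    y_var = np.dot(weights, (y - y_mean) ** 2)
    return y_mean, y_var

# For x ~ N(0, 1) and f(x) = x^2, the true mean is 1 and variance is 2;
# the transform recovers both from just three function evaluations.
m, v = unscented_transform(0.0, 1.0, lambda x: x**2)
print(m, v)
```

The deterministic sigma points are what give the bounded, repeatable uncertainty estimates the abstract highlights, in contrast to sampling-based propagation.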
Tailoring Configuration to User’s Tasks under Uncertainty
2008-04-28
CARISMA is the problem being solved. CARISMA applies microeconomics and game theory to make runtime decisions about allocating scarce resources among...scarce resources, these applications are running on behalf of one user. Thus, our problem has no game theoretic aspects. 2.2 Task Oriented...prediction tool [15] is based on the RPS tool and allows prediction of bandwidth online. There is additional evidence (see, for example [49
ERIC Educational Resources Information Center
Barker, Erin T.; Galambos, Nancy L.
2007-01-01
The current study explored how body dissatisfaction and challenges associated with the transition to university predicted symptoms of binge eating. Participants were 101 female full-time first-year university students (M=18.3 years of age; SD=0.50) who completed a background questionnaire and a web-based daily checklist assessing binge eating.…
NASA Astrophysics Data System (ADS)
Park, Jeong-Gyun; Jee, Joon-Bum
2017-04-01
Severe weather such as heavy rain, heavy snow, drought, and heat waves driven by climate change causes greater damage in urban areas, which are densely populated and industrialized. Urban areas, unlike rural areas, have large populations and heavy transportation, dense buildings, and high fuel consumption. Anthropogenic factors such as the road energy balance and the flow of air in the city produce unique meteorological phenomena, and several research efforts on the prediction of urban meteorology are in progress. ASAPS (Advanced Storm-scale Analysis and Prediction System) predicts severe weather at very short range (up to 6 hours ahead) and high resolution (hourly in time and 1 km in space) over the Seoul metropolitan area, based on KLAPS (Korea Local Analysis and Prediction System) from the KMA (Korea Meteorological Administration). The system is configured in three parts: a background field (SUF5), an analysis field (SU01) incorporating observations, and a high-resolution forecast field (SUF1). In this study, we improve the high-resolution ASAPS model and perform a sensitivity test for a rainfall case. The improvements to ASAPS include the model domain configuration, high-resolution topographic data, and data assimilation of WISE observation data.
Müller, Martin; Seidenberg, Ruth; Schuh, Sabine K; Exadaktylos, Aristomenis K; Schechter, Clyde B; Leichtle, Alexander B; Hautz, Wolf E
2018-01-01
Patients presenting with suspected urinary tract infection are common in everyday emergency practice. Urine flow cytometry has replaced microscopic urine evaluation in many emergency departments, but interpretation of the results remains challenging. The aim of this study was to develop and validate tools that predict urine culture growth from urine flow cytometry parameters. This retrospective study included all adult patients who presented to a large emergency department between January and July 2017 with a suspected urinary tract infection and had both a urine flow cytometry and a urine culture obtained. The objective was to identify urine flow cytometry parameters that reliably predict urine culture growth and mixed flora growth. The data set was split into a training set (70%) and a validation set (30%), and different decision-making approaches were developed and validated. Relevant urine culture growth (respectively, mixed flora growth) was found in 40.2% (respectively, 7.2%) of the 613 patients included. The numbers of leukocytes and bacteria in flow cytometry were highly associated with urine culture growth, but mixed flora growth could not be sufficiently predicted from the urine flow cytometry parameters. A decision tree, predictive value figures, a nomogram, and a cut-off table to predict urine culture growth from bacteria and leukocyte counts were developed, validated, and compared. Urine flow cytometry parameters are insufficient to predict mixed flora growth. However, the prediction of urine culture growth based on bacteria and leukocyte counts is highly accurate, and the developed tools should be used as part of the decision-making process of ordering a urine culture or starting antibiotic therapy when a urogenital infection is suspected.
A community resource benchmarking predictions of peptide binding to MHC-I molecules.
Peters, Bjoern; Bui, Huynh-Hoa; Frankild, Sune; Nielsen, Morten; Lundegaard, Claus; Kostem, Emrah; Basch, Derek; Lamberth, Kasper; Harndahl, Mikkel; Fleri, Ward; Wilson, Stephen S; Sidney, John; Lund, Ole; Buus, Soren; Sette, Alessandro
2006-06-09
Recognition of peptides bound to major histocompatibility complex (MHC) class I molecules by T lymphocytes is an essential part of immune surveillance. Each MHC allele has a characteristic peptide binding preference, which can be captured in prediction algorithms, allowing for the rapid scan of entire pathogen proteomes for peptides likely to bind MHC. Here we make public a large set of 48,828 quantitative peptide-binding affinity measurements relating to 48 different mouse, human, macaque, and chimpanzee MHC class I alleles. We use these data to establish a set of benchmark predictions with one neural network method and two matrix-based prediction methods extensively utilized in our groups. In general, the neural network outperforms the matrix-based predictions mainly due to its ability to generalize even on a small amount of data. We also retrieved predictions from tools publicly available on the internet. While differences in the data used to generate these predictions hamper direct comparisons, we do conclude that tools based on combinatorial peptide libraries perform remarkably well. The transparent prediction evaluation on this dataset provides tool developers with a benchmark for comparison of newly developed prediction methods. In addition, to generate and evaluate our own prediction methods, we have established an easily extensible web-based prediction framework that allows automated side-by-side comparisons of prediction methods implemented by experts. This is an advance over the current practice of tool developers having to generate reference predictions themselves, which can lead to underestimating the performance of prediction methods they are not as familiar with as their own. The overall goal of this effort is to provide a transparent prediction evaluation allowing bioinformaticians to identify promising features of prediction methods and providing guidance to immunologists regarding the reliability of prediction tools.
Predicting subcontractor performance using web-based Evolutionary Fuzzy Neural Networks.
Ko, Chien-Ho
2013-01-01
Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by FL and NNs. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space and retain solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.
NASA Astrophysics Data System (ADS)
Tárnok, Attila; Mittag, Anja; Lenz, Dominik
2006-02-01
The goal of predictive medicine is the detection of changes in a patient's state prior to the clinical manifestation of a deterioration of the patient's current status. Therefore, both the diagnosis of diseases like cancer, coronary atherosclerosis, or congestive heart failure and the prognosis of the effect of specific therapeutics on patient outcome are the main fields of predictive medicine. Clinical Cytomics is based on the analysis of specimens from the patient by cytomic technologies, which are mainly imaging-based techniques and their combinations with other assays. Predictive medicine aims at the recognition of the "fate" of each individual patient in order to yield unequivocal indications for decision making (i.e., how does the patient respond to therapy, react to medication, etc.). This individualized prediction is based on the Predictive Medicine by Clinical Cytomics concept. These considerations have recently stimulated the idea of the Human Cytome Project. A major focus of the Human Cytome Project is multiplexed cytomic analysis of individual cells of the patient, extraction of predictive information, and individual prediction that merges into individualized therapy. Although still at the beginning, Clinical Cytomics is a promising new field that may change therapy in the near future for the benefit of patients.
NASA Astrophysics Data System (ADS)
Liu, Zhenchen; Lu, Guihua; He, Hai; Wu, Zhiyong; He, Jian
2018-01-01
Reliable drought prediction is fundamental for water resource managers to develop and implement drought mitigation measures. Considering that drought development is closely related to the spatial-temporal evolution of large-scale circulation patterns, we developed a conceptual prediction model of seasonal drought processes based on atmospheric and oceanic standardized anomalies (SAs). Empirical orthogonal function (EOF) analysis is first applied to drought-related SAs at 200 and 500 hPa geopotential height (HGT) and sea surface temperature (SST). Subsequently, SA-based predictors are built based on the spatial pattern of the first EOF modes. This drought prediction model is essentially the synchronous statistical relationship between 90-day-accumulated atmospheric-oceanic SA-based predictors and SPI3 (3-month standardized precipitation index), calibrated using a simple stepwise regression method. Predictor computation is based on forecast atmospheric-oceanic products retrieved from the NCEP Climate Forecast System Version 2 (CFSv2), indicating the lead time of the model depends on that of CFSv2. The model can make seamless drought predictions for operational use after a year-to-year calibration. Model application to four recent severe regional drought processes in China indicates its good performance in predicting seasonal drought development, despite its weakness in predicting drought severity. Overall, the model can be a worthy reference for seasonal water resource management in China.
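The EOF-plus-regression pipeline described above can be sketched in a few lines. The anomaly field and SPI3 series below are synthetic placeholders, and the real model uses CFSv2 forecast fields and stepwise predictor selection rather than a single EOF and simple linear fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy anomaly field: time x grid (standing in for standardized HGT/SST anomalies)
t, g = 120, 50
field = rng.standard_normal((t, g))

# EOF analysis via SVD of the centered anomaly matrix: rows of vt are
# spatial patterns (EOFs); projecting the field on EOF1 gives the PC series.
u, s, vt = np.linalg.svd(field - field.mean(axis=0), full_matrices=False)
eof1 = vt[0]                 # leading spatial pattern
predictor = field @ eof1     # SA-based predictor: projection onto EOF1

# Synchronous statistical relationship: regress SPI3 on the predictor
spi3 = 0.5 * predictor + 0.1 * rng.standard_normal(t)   # synthetic target
beta, intercept = np.polyfit(predictor, spi3, 1)
predicted = intercept + beta * predictor
corr = np.corrcoef(predicted, spi3)[0, 1]
```

Once calibrated, the same projection-and-regression step can be applied to forecast fields, which is why the lead time of such a model is inherited from the forecast system supplying the predictors.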
Ensemble-based prediction of RNA secondary structures.
Aghaeepour, Nima; Hoos, Holger H
2013-04-24
Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach.
In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.
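AveRNA's exact combination scheme is not detailed in this abstract. As a hypothetical simplification, an ensemble of predicted secondary structures can be combined by voting on base pairs, with the vote threshold playing the role of the false-positive/false-negative control knob mentioned above:

```python
from collections import Counter

def ensemble_structure(predictions, threshold=0.5):
    """Combine predicted secondary structures, each a set of base pairs (i, j),
    by voting: keep a pair if it appears in at least `threshold` of the
    component predictions. Raising the threshold trades false positives
    for false negatives."""
    votes = Counter(pair for structure in predictions for pair in structure)
    n = len(predictions)
    return {pair for pair, count in votes.items() if count / n >= threshold}

# Three hypothetical component predictions for the same sequence
preds = [
    {(1, 20), (2, 19), (3, 18)},
    {(1, 20), (2, 19), (5, 15)},
    {(1, 20), (3, 18), (5, 15)},
]
consensus = ensemble_structure(preds, threshold=0.6)  # pairs in >= 2 of 3
```

With `threshold=1.0` only unanimous pairs survive (fewest false positives); lowering the threshold admits more pairs at the cost of precision.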
Completing the Link between Exposure Science and ...
Driven by major scientific advances in analytical methods, biomonitoring, computation, and a newly articulated vision for a greater impact in public health, the field of exposure science is undergoing a rapid transition from a field of observation to a field of prediction. Deployment of an organizational and predictive framework for exposure science analogous to the “systems approaches” used in the biological sciences is a necessary step in this evolution. Here we propose the aggregate exposure pathway (AEP) concept as the natural and complementary companion in the exposure sciences to the adverse outcome pathway (AOP) concept in the toxicological sciences. Aggregate exposure pathways offer an intuitive framework to organize exposure data within individual units of prediction common to the field, setting the stage for exposure forecasting. Looking farther ahead, we envision direct linkages between aggregate exposure pathways and adverse outcome pathways, completing the source to outcome continuum for more meaningful integration of exposure assessment and hazard identification. Together, the two frameworks form and inform a decision-making framework with the flexibility for risk-based, hazard-based, or exposure-based decision making. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA's mission to protect human health and the environment. The HEASD research program supports G
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teeguarden, Justin G.; Tan, Yu-Mei; Edwards, Stephen W.
Driven by major scientific advances in analytical methods, biomonitoring, and computational exposure assessment, and a newly articulated vision for a greater impact in public health, the field of exposure science is undergoing a rapid transition from a field of observation to a field of prediction. Deployment of an organizational and predictive framework for exposure science analogous to the computationally enabled “systems approaches” used in the biological sciences is a necessary step in this evolution. Here we propose the aggregate exposure pathway (AEP) concept as the natural and complementary companion in the exposure sciences to the adverse outcome pathway (AOP) concept in the toxicological sciences. The AEP framework offers an intuitive approach to successful organization of exposure science data within individual units of prediction common to the field, setting the stage for exposure forecasting. Looking farther ahead, we envision direct linkages between aggregate exposure pathway and adverse outcome pathways, completing the source to outcome continuum and setting the stage for more efficient integration of exposure science and toxicity testing information. Together these frameworks form and inform a decision-making framework with the flexibility for risk-based, hazard-based, or exposure-based decisions.
Distributing flight dynamics products via the World Wide Web
NASA Technical Reports Server (NTRS)
Woodard, Mark; Matusow, David
1996-01-01
The NASA Flight Dynamics Products Center (FDPC), which makes available selected operations products via the World Wide Web, is reported on. The FDPC can be accessed from any host machine connected to the Internet. It is a multi-mission service which provides Internet users with unrestricted access to the following standard products: antenna contact predictions; ground tracks; orbit ephemerides; mean and osculating orbital elements; earth sensor sun and moon interference predictions; space flight tracking data network summaries; and Shuttle transport system predictions. Several scientific databases are available through the service.
Identity appropriateness and the structure of the theory of planned behaviour.
Case, Philippa; Sparks, Paul; Pavey, Louisa
2016-03-01
In contrast to the cost-benefit, utility-based approach to decision-making implicit in models such as the theory of planned behaviour (TPB), the logic of appropriateness (March, 1994. A Primer on Decision Making: How decisions happen. New York, NY: The Free Press) describes decision-making in terms of heuristic decision rules that involve matching identities to situations. This research is the first to apply the logic of appropriateness in conjunction with the theoretical structure of the TPB and to assess whether a measure of identity appropriateness might independently predict adults' intentions to engage in binge drinking. In Study 1, participants (N = 197) completed questionnaires assessing attitudes, subjective norm, perceived behavioural control, past behaviour, and identity appropriateness in relation to binge drinking. Path analysis revealed an independent predictive effect of identity appropriateness on intentions in addition to an indirect effect via attitudes. In Study 2 (N = 179), a prospective measure of behaviour was included in a similar study: Identity appropriateness again predicted intentions independently of the extended TPB predictors. It was again also found to be a strong predictor of attitudes. We suggest that the notion of identity appropriateness may assist in explaining the capacity of measures of self-identity to predict people's behavioural intentions. © 2015 The British Psychological Society.
Models of Affective Decision Making: How Do Feelings Predict Choice?
Charpentier, Caroline J; De Neve, Jan-Emmanuel; Li, Xinyi; Roiser, Jonathan P; Sharot, Tali
2016-06-01
Intuitively, how you feel about potential outcomes will determine your decisions. Indeed, an implicit assumption in one of the most influential theories in psychology, prospect theory, is that feelings govern choice. Surprisingly, however, very little is known about the rules by which feelings are transformed into decisions. Here, we specified a computational model that used feelings to predict choices. We found that this model predicted choice better than existing value-based models, showing a unique contribution of feelings to decisions, over and above value. Similar to the value function in prospect theory, our feeling function showed diminished sensitivity to outcomes as value increased. However, loss aversion in choice was explained by an asymmetry in how feelings about losses and gains were weighted when making a decision, not by an asymmetry in the feelings themselves. The results provide new insights into how feelings are utilized to reach a decision. © The Author(s) 2016.
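The functional form reported above can be illustrated with a toy model; all parameter values and the exact equations are hypothetical, not the authors' fitted model. Feelings show diminished sensitivity via a power function, and feelings about losses are overweighted only at the choice stage:

```python
import math

def feeling(outcome, rho=0.6):
    """Feeling about an outcome: diminished sensitivity as magnitude grows,
    symmetric in sign (no asymmetry in the feelings themselves)."""
    sign = 1.0 if outcome >= 0 else -1.0
    return sign * abs(outcome) ** rho

def choice_prob(gamble, sure, w_loss=2.0, temp=1.0):
    """Probability of choosing a 50/50 gamble over a sure amount, where
    feelings about losses are overweighted when making the decision."""
    def weighted(f):
        return w_loss * f if f < 0 else f
    ev_gamble = 0.5 * weighted(feeling(gamble[0])) + 0.5 * weighted(feeling(gamble[1]))
    ev_sure = weighted(feeling(sure))
    return 1.0 / (1.0 + math.exp(-temp * (ev_gamble - ev_sure)))
```

In this sketch a symmetric mixed gamble is rejected more often than chance purely because of the decision-stage weight on loss feelings, mirroring the paper's claim that loss aversion arises in the weighting, not in the feelings.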
Framework for making better predictions by directly estimating variables' predictivity.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2016-12-13
We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic's predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired.
Application of General Regression Neural Network to the Prediction of LOD Change
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao
2012-01-01
Traditional methods for predicting the change in the length of day (LOD change) are mainly based on linear models, such as the least-squares model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of linear models is often not ideal. Thus, a non-linear neural network, the general regression neural network (GRNN) model, is applied to the prediction of the LOD change, and the result is compared with the predictions obtained with the BP (back propagation) neural network model and other models. The comparison shows that the application of the GRNN to the prediction of the LOD change is highly effective and feasible.
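A GRNN is essentially kernel-weighted regression: each training pattern contributes to the prediction in proportion to a Gaussian kernel on its distance to the query. A minimal sketch on a synthetic stand-in for the LOD series follows; the bandwidth, lag structure, and series are illustrative, not the paper's configuration:

```python
import math

def grnn_predict(train_x, train_y, query, sigma=0.2):
    """General regression neural network (Nadaraya-Watson form): the
    prediction is a Gaussian-kernel weighted average of training targets."""
    weights = []
    for x in train_x:
        d2 = sum((a - b) ** 2 for a, b in zip(x, query))
        weights.append(math.exp(-d2 / (2.0 * sigma ** 2)))
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# Toy non-linear series standing in for the LOD change; predict x[t]
# from the two previous values.
series = [math.sin(0.3 * t) for t in range(50)]
train_x = [(series[t - 2], series[t - 1]) for t in range(2, 40)]
train_y = [series[t] for t in range(2, 40)]
pred = grnn_predict(train_x, train_y, (series[38], series[39]))
```

Unlike a BP network, a GRNN has no iterative training; the only free parameter is the kernel bandwidth `sigma`, which is one reason it is attractive for small geodetic data sets.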
The role of predictive uncertainty in the operational management of reservoirs
NASA Astrophysics Data System (ADS)
Todini, E.
2014-09-01
The present work deals with the operational management of multi-purpose reservoirs, whose optimisation-based rules are derived, in the planning phase, via deterministic (linear and nonlinear programming, dynamic programming, etc.) or via stochastic (generally stochastic dynamic programming) approaches. In operation, the resulting deterministic or stochastic optimised operating rules are then triggered based on inflow predictions. In order to fully benefit from predictions, one must avoid using them as direct inputs to the reservoirs, but rather assess the "predictive knowledge" in terms of a predictive probability density to be operationally used in the decision making process for the estimation of expected benefits and/or expected losses. Using a theoretical and extremely simplified case, it will be shown why directly using model forecasts instead of the full predictive density leads to less robust reservoir management decisions. Moreover, the effectiveness and the tangible benefits for using the entire predictive probability density instead of the model predicted values will be demonstrated on the basis of the Lake Como management system, operational since 1997, as well as on the basis of a case study on the lake of Aswan.
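The benefit of using the full predictive density rather than the model-predicted value can be illustrated with a toy reservoir; all numbers and the loss function are hypothetical, not the Lake Como rules. Under an asymmetric flood/shortage loss, the release that minimises expected loss over the predictive distribution differs from the one chosen by plugging in the point forecast:

```python
import random

random.seed(1)

def loss(release, inflow, capacity=100.0, storage=80.0):
    """Asymmetric loss: spilling (flood) is far more costly than
    releasing water that turns out to be needed."""
    final = storage + inflow - release
    if final > capacity:
        return 10.0 * (final - capacity)          # flood damage
    return max(0.0, -final) + 0.1 * release       # shortage + lost value

def best_release(inflow_samples, candidates):
    """Pick the release that minimises expected loss over the samples."""
    def expected_loss(r):
        return sum(loss(r, q) for q in inflow_samples) / len(inflow_samples)
    return min(candidates, key=expected_loss)

# Predictive density of inflow approximated by Monte Carlo samples
samples = [random.gauss(30.0, 15.0) for _ in range(5000)]
candidates = [r * 5.0 for r in range(0, 21)]      # releases 0 .. 100

r_density = best_release(samples, candidates)     # uses the full density
r_point = best_release([30.0], candidates)        # point forecast only
```

Because the loss is asymmetric, the spread of the predictive density matters: the density-based decision hedges against the costly flood tail and releases more than the point-forecast decision, which is exactly the robustness argument made above.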
Bekelis, Kimon; Missios, Symeon; MacKenzie, Todd A; Desai, Atman; Fischer, Adina; Labropoulos, Nicos; Roberts, David W
2014-03-01
Precise delineation of individualized risks of morbidity and mortality is crucial in decision making in cerebrovascular neurosurgery. The authors attempted to create a predictive model of complications in patients undergoing cerebral aneurysm clipping (CAC). The authors performed a retrospective cohort study of patients who had undergone CAC in the period from 2005 to 2009 and were registered in the Nationwide Inpatient Sample (NIS) database. A model for outcome prediction based on preoperative individual patient characteristics was developed. Of the 7651 patients in the NIS who underwent CAC, 3682 (48.1%) had presented with unruptured aneurysms and 3969 (51.9%) with subarachnoid hemorrhage. The respective inpatient postoperative risks for death, unfavorable discharge, stroke, treated hydrocephalus, cardiac complications, deep vein thrombosis, pulmonary embolism, and acute renal failure were 0.7%, 15.3%, 5.3%, 1.5%, 1.3%, 0.6%, 2.0%, and 0.1% for those with unruptured aneurysms and 11.5%, 52.8%, 5.5%, 39.2%, 1.7%, 2.8%, 2.7%, and 0.8% for those with ruptured aneurysms. Multivariate analysis identified risk factors independently associated with the above outcomes. A validated model for outcome prediction based on individual patient characteristics was developed. The accuracy of the model was estimated using the area under the receiver operating characteristic curve, and it was found to have good discrimination. The featured model can provide individualized estimates of the risks of postoperative complications based on preoperative conditions and can potentially be used as an adjunct in decision making in cerebrovascular neurosurgery.
Physics-of-Failure Approach to Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.
2017-01-01
As more and more electric vehicles enter daily operation, a very critical challenge lies in accurate prediction of the behavior of the electrical components present in the system. In the case of electric vehicles, computing the remaining battery charge is safety-critical. In order to tackle and solve the prediction problem, it is essential to be aware of the current state and health of the system, especially since it is necessary to perform condition-based predictions. To be able to predict the future state of the system, knowledge of the current and future operations of the vehicle is also required. In this presentation, our approach to developing a system-level health-monitoring safety indicator for different electronic components is presented, which runs estimation and prediction algorithms to determine state of charge and estimate the remaining useful life of the respective components. Given models of the current and future system behavior, the general approach of model-based prognostics can be employed as a solution to the prediction problem and, further, for decision making.
Spiegelhalter, D J; Freedman, L S
1986-01-01
The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus
2011-01-01
In the present research, we argue for the robustness of illusory correlations (ICs, Hamilton & Gifford, 1976) regarding two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs, which can be explained as a result of unbiased but noisy learning.
3-D Brownian motion simulator for high-sensitivity nanobiotechnological applications.
Toth, Arpád; Banky, Dániel; Grolmusz, Vince
2011-12-01
A wide variety of nanobiotechnological applications are being developed for nanoparticle-based in vitro diagnostic and imaging systems. Some of these systems make possible highly sensitive detection of molecular biomarkers. Frequently, the very low concentration of the biomarkers renders the classical, partial-differential-equation-based mathematical simulation of the motion of the nanoparticles infeasible. We present a three-dimensional Brownian motion simulation tool for predicting the movement of nanoparticles under various thermal, viscosity, and geometric settings in a rectangular cuvette. For nonprofit users the server is freely available at http://brownian.pitgroup.org.
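The core of such a simulator can be sketched in a few lines: the Stokes-Einstein relation gives the diffusion coefficient from temperature, viscosity, and particle radius; each Cartesian coordinate then takes Gaussian steps of standard deviation sqrt(2*D*dt); and the cuvette walls reflect the particles. This is an illustrative sketch, not the server's implementation, and all parameter names are assumptions.

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def diffusion_coefficient(temp_k, viscosity, radius):
    """Stokes-Einstein diffusion coefficient of a spherical particle (m^2/s)."""
    return K_B * temp_k / (6.0 * math.pi * viscosity * radius)

def simulate(n_particles, n_steps, dt, temp_k, viscosity, radius, box, rng):
    """Random-walk positions of n_particles in a rectangular cuvette `box`
    (side lengths in metres), with reflecting walls; particles start at
    the cuvette centre."""
    d = diffusion_coefficient(temp_k, viscosity, radius)
    sigma = math.sqrt(2.0 * d * dt)  # per-axis step standard deviation
    particles = [[side / 2.0 for side in box] for _ in range(n_particles)]
    for _ in range(n_steps):
        for p in particles:
            for axis in range(3):
                p[axis] += rng.gauss(0.0, sigma)
                # reflect at the cuvette walls
                if p[axis] < 0.0:
                    p[axis] = -p[axis]
                elif p[axis] > box[axis]:
                    p[axis] = 2.0 * box[axis] - p[axis]
    return particles
```

For a 50 nm particle in water at room temperature, the ensemble mean squared displacement should track the theoretical 6*D*t when the cuvette is large relative to the diffusion length.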
Leake, Devin
2015-01-01
As scientists make strides toward the goal of developing a form of biological engineering that is as predictive and reliable as chemical engineering is for chemistry, one technology component has become absolutely critical: gene synthesis. Gene synthesis is the process of building stretches of deoxyribonucleic acid (DNA) to order: some stretches based on DNA that already exists in nature, others based on novel designs intended to accomplish new functions. This process is the foundation of synthetic biology, which is rapidly becoming the engineering counterpart to biology.
Exploring Cognitive Relations Between Prediction in Language and Music.
Patel, Aniruddh D; Morgan, Emily
2017-03-01
The online processing of both music and language involves making predictions about upcoming material, but the relationship between prediction in these two domains is not well understood. Electrophysiological methods for studying individual differences in prediction in language processing have opened the door to new questions. Specifically, we ask whether individuals with musical training predict upcoming linguistic material more strongly and/or more accurately than non-musicians. We propose two reasons why prediction in these two domains might be linked: (a) Musicians may have greater verbal short-term/working memory; (b) music may specifically reward predictions based on hierarchical structure. We provide suggestions as to how to expand upon recent work on individual differences in language processing to test these hypotheses. Copyright © 2016 Cognitive Science Society, Inc.
EFICAz2: enzyme function inference by a combined approach enhanced by machine learning.
Arakaki, Adrian K; Huang, Ying; Skolnick, Jeffrey
2009-04-13
We previously developed EFICAz, an enzyme function inference approach that combines predictions from non-completely overlapping component methods. Two of the four components in the original EFICAz are based on the detection of functionally discriminating residues (FDRs). FDRs distinguish between members of an enzyme family that are homofunctional (classified under the EC number of interest) or heterofunctional (annotated with another EC number or lacking enzymatic activity). Each of the two FDR-based components is associated with one of two specific kinds of enzyme families. EFICAz exhibits high precision performance, except when the maximal test to training sequence identity (MTTSI) is lower than 30%. To improve EFICAz's performance in this regime, we: i) increased the number of predictive components and ii) took advantage of consensual information from the different components to make the final EC number assignment. We have developed two new EFICAz components, analogous to the two FDR-based components, where the discrimination between homo- and heterofunctional members is based on the evaluation, via Support Vector Machine models, of all the aligned positions between the query sequence and the multiple sequence alignments associated with the enzyme families. Benchmark results indicate that: i) the new SVM-based components outperform their FDR-based counterparts, and ii) both SVM-based and FDR-based components generate unique predictions. We developed classification tree models to optimally combine the results from the six EFICAz components into a final EC number prediction. The new implementation of our approach, EFICAz2, exhibits highly improved prediction precision at MTTSI < 30% compared to the original EFICAz, with only a slight decrease in prediction recall.
A comparative analysis of enzyme function annotation of the human proteome by EFICAz2 and KEGG shows that: i) when both sources make EC number assignments for the same protein sequence, the assignments tend to be consistent and ii) EFICAz2 generates considerably more unique assignments than KEGG. Performance benchmarks and the comparison with KEGG demonstrate that EFICAz2 is a powerful and precise tool for enzyme function annotation, with multiple applications in genome analysis and metabolic pathway reconstruction. The EFICAz2 web service is available at: http://cssb.biology.gatech.edu/skolnick/webservice/EFICAz2/index.html.
Automated Protocol for Large-Scale Modeling of Gene Expression Data.
Hall, Michelle Lynn; Calkins, David; Sherman, Woody
2016-11-28
With the continued rise of phenotypic- and genotypic-based screening projects, computational methods to analyze, process, and ultimately make predictions in this field take on growing importance. Here we show how automated machine learning workflows can produce models that are predictive of differential gene expression as a function of a compound structure using data from A673 cells as a proof of principle. In particular, we present predictive models with an average accuracy of greater than 70% across a highly diverse ∼1000 gene expression profile. In contrast to the usual in silico design paradigm, where one interrogates a particular target-based response, this work opens the opportunity for virtual screening and lead optimization for desired multitarget gene expression profiles.
White, Stuart F; Geraci, Marilla; Lewis, Elizabeth; Leshin, Joseph; Teng, Cindy; Averbeck, Bruno; Meffert, Harma; Ernst, Monique; Blair, James R; Grillon, Christian; Blair, Karina S
2017-02-01
Deficits in reinforcement-based decision making have been reported in generalized anxiety disorder. However, the pathophysiology of these deficits is largely unknown; published studies have mainly examined adolescents, and the integrity of core functional processes underpinning decision making remains undetermined. In particular, it is unclear whether the representation of reinforcement prediction error (PE) (the difference between received and expected reinforcement) is disrupted in generalized anxiety disorder. This study addresses these issues in adults with the disorder. Forty-six unmedicated individuals with generalized anxiety disorder and 32 healthy comparison subjects group-matched on IQ, gender, and age performed a passive avoidance task while undergoing functional MRI. Data analyses were performed using a computational modeling approach. Behaviorally, individuals with generalized anxiety disorder showed impaired reinforcement-based decision making. Imaging results revealed that during feedback, individuals with generalized anxiety disorder relative to healthy subjects showed a reduced correlation between PE and activity within the ventromedial prefrontal cortex, ventral striatum, and other structures implicated in decision making. In addition, individuals with generalized anxiety disorder relative to healthy participants showed a reduced correlation between punishment PEs, but not reward PEs, and activity within the left and right lentiform nucleus/putamen. This is the first study to identify computational impairments during decision making in generalized anxiety disorder. PE signaling is significantly disrupted in individuals with the disorder and may lead to their decision-making deficits and excessive worry about everyday problems by disrupting the online updating ("reality check") of the current relationship between the expected values of current response options and the actual received rewards and punishments.
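The prediction-error signal at the heart of such computational modeling analyses is conventionally computed with a Rescorla-Wagner-style update: the PE is the received minus the expected reinforcement, and the expectation is nudged toward each outcome by a learning rate. The sketch below is illustrative; the study's actual model and parameter values are not reproduced here.

```python
def rescorla_wagner(rewards, alpha=0.3, v0=0.0):
    """Track the expected value and the prediction error (PE) on each trial.

    alpha: learning rate; v0: initial expected value."""
    v, pes = v0, []
    for r in rewards:
        pe = r - v          # PE: received minus expected reinforcement
        pes.append(pe)
        v += alpha * pe     # update the expectation toward the outcome
    return v, pes
```

With a constant reward, the PEs shrink trial by trial as the expectation converges on the outcome, which is exactly the "online updating" that the abstract describes as disrupted in generalized anxiety disorder.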
Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete
2014-01-01
Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.
Frequency, probability, and prediction: easy solutions to cognitive illusions?
Griffin, D; Buehler, R
1999-02-01
Many errors in probabilistic judgment have been attributed to people's inability to think in statistical terms when faced with information about a single case. Prior theoretical analyses and empirical results imply that the errors associated with case-specific reasoning may be reduced when people make frequentistic predictions about a set of cases. In studies of three previously identified cognitive biases, we find that frequency-based predictions are different from, but no better than, case-specific judgments of probability. First, in studies of the "planning fallacy," we compare the accuracy of aggregate frequency and case-specific probability judgments in predictions of students' real-life projects. When aggregate and single-case predictions are collected from different respondents, there is little difference between the two: both are overly optimistic and show little predictive validity. However, in within-subject comparisons, the aggregate judgments are significantly more conservative than the single-case predictions, though still optimistically biased. Results from studies of overconfidence in general knowledge and base rate neglect in categorical prediction underline a general conclusion. Frequentistic predictions made for sets of events are no more statistically sophisticated, nor more accurate, than predictions made for individual events using subjective probability. Copyright 1999 Academic Press.
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on ... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... which adds specificity to the model and can make nonlinear data more manageable. Early results show that the ...
2012-09-01
... make end-of-life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first ... ... distribution at a given single time point kP, and use this for multi-step predictions to EOL. There are several methods that exist for selecting the sigma ...
Chronic motivational state interacts with task reward structure in dynamic decision-making.
Cooper, Jessica A; Worthy, Darrell A; Maddox, W Todd
2015-12-01
Research distinguishes between a habitual, model-free system motivated toward immediately rewarding actions, and a goal-directed, model-based system motivated toward actions that improve future state. We examined the balance of processing in these two systems during state-based decision-making. We tested a regulatory fit hypothesis (Maddox & Markman, 2010) that predicts that global trait motivation affects the balance of habitual- vs. goal-directed processing but only through its interaction with the task framing as gain-maximization or loss-minimization. We found support for the hypothesis that a match between an individual's chronic motivational state and the task framing enhances goal-directed processing, and thus state-based decision-making. Specifically, chronic promotion-focused individuals under gain-maximization and chronic prevention-focused individuals under loss-minimization both showed enhanced state-based decision-making. Computational modeling indicates that individuals in a match between global chronic motivational state and local task reward structure engaged more goal-directed processing, whereas those in a mismatch engaged more habitual processing. Copyright © 2015 Elsevier Inc. All rights reserved.
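The balance between the two systems described above is commonly formalized as a weighted mixture of model-based and model-free action values, with choice probabilities given by a softmax. The sketch below is a generic illustration of that formalization, not the authors' computational model; the weight w and inverse temperature beta are illustrative names.

```python
import math

def hybrid_values(q_mb, q_mf, w):
    """Mix model-based (q_mb) and model-free (q_mf) action values.
    w = 1 is fully goal-directed; w = 0 is fully habitual."""
    return [w * mb + (1.0 - w) * mf for mb, mf in zip(q_mb, q_mf)]

def softmax(q, beta=2.0):
    """Choice probabilities from action values; beta controls determinism."""
    exps = [math.exp(beta * v) for v in q]
    z = sum(exps)
    return [e / z for e in exps]
```

A regulatory-fit effect of the kind reported here would correspond to a larger fitted w (more goal-directed weighting) when chronic motivational state matches the task framing.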
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miltiadis Alamaniotis; Vivek Agarwal
This paper places itself in the realm of anticipatory systems and envisions monitoring and control methods capable of making predictions about critical system parameters. Anticipatory systems allow intelligent control of complex systems by predicting their future state. In the current work, an intelligent model aimed at implementing anticipatory monitoring and control in the energy industry is presented and tested. More particularly, a set of support vector regressors (SVRs) are trained using both historical and observed data. The trained SVRs are used to predict the future value of the system based on current operational system parameters. The predicted values are then input to a fuzzy-logic-based module where the values are fused to obtain a single value, i.e., the final system output prediction. The methodology is tested on real turbine degradation datasets. The outcome of the approach presented in this paper highlights its superiority over single support vector regressors. In addition, it is shown that appropriate selection of fuzzy sets and fuzzy rules plays an important role in improving system performance.
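The fusion stage can be sketched as follows: each regressor's output is weighted by its membership in a triangular fuzzy set centred on the ensemble median, so outlying predictions contribute little to the defuzzified output. This is an illustrative sketch; the paper's actual fuzzy sets and rules are not specified here, and the membership width is an assumed parameter.

```python
from statistics import median

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuse(predictions, width):
    """Fuse several regressors' outputs into one value by fuzzy weighting
    around the ensemble median (weighted-average defuzzification)."""
    m = median(predictions)
    weights = [triangular(p, m - width, m, m + width) for p in predictions]
    total = sum(weights)
    if total == 0.0:          # degenerate case: fall back to the median
        return m
    return sum(w * p for w, p in zip(weights, predictions)) / total
```

With predictions [1.0, 1.1, 5.0] and width 1.0, the outlier at 5.0 receives zero membership and the fused value stays between the two agreeing regressors.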
How We Choose One over Another: Predicting Trial-by-Trial Preference Decision
Bhushan, Vidya; Saha, Goutam; Lindsen, Job; Shimojo, Shinsuke; Bhattacharya, Joydeep
2012-01-01
Preference formation is a complex problem: it is subjective, involves emotion, is led by implicit processes, and changes with context even within the same individual. Scientific attempts to predict preference are therefore challenging, yet quite important for a basic understanding of human decision-making mechanisms; prediction in a group-average sense has only limited significance. In this study, we predicted preferential decisions on a trial-by-trial basis from brain responses occurring before the individuals made their decisions explicit. Participants made a binary preference decision of approachability based on faces while their electrophysiological responses were recorded. An artificial-neural-network-based pattern classifier was used, with time-frequency-resolved patterns of a functional connectivity measure as features. We were able to predict preference decisions with a mean accuracy of 74.3±2.79% at the participant-independent level and of 91.4±3.8% at the participant-dependent level. Further, we revealed a causal role of the first impression on the final decision and demonstrated the temporal trajectory of preference decision formation. PMID:22912859
NASA Astrophysics Data System (ADS)
Xie, Yan; Li, Mu; Zhou, Jin; Zheng, Chang-zheng
2009-07-01
Agricultural machinery total power is an important index for reflecting and evaluating the level of agricultural mechanization. It is the power source of agricultural production and a main factor in enhancing comprehensive agricultural production capacity, expanding production scale, and increasing farmers' incomes. Its demand is affected by natural, economic, technological, social, and other "grey" factors. Therefore, grey system theory can be used to analyze the development of agricultural machinery total power. A method based on a genetic algorithm optimizing the grey modeling process is introduced in this paper. The method combines the advantages of the grey prediction model with the genetic algorithm's capacity for global optimization, so the prediction model is more accurate. Using data from one province, a GM(1,1) model for predicting agricultural machinery total power was built based on grey system theory and the genetic algorithm. The results indicate that the model can serve as an effective tool for predicting agricultural machinery total power.
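The underlying GM(1,1) model (without the genetic-algorithm refinement the paper describes) can be sketched as follows: the raw series is accumulated, a first-order grey differential equation is fitted by least squares, and forecasts are recovered by differencing the fitted accumulated series. Variable names are illustrative; the paper's genetic algorithm would additionally search over modeling choices (e.g. the 0.5 background-averaging weight) to reduce fitting error.

```python
import math

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to a short, positive time series x0.
    Returns (a, b, x0[0]): development coefficient, grey input, first value."""
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:                       # accumulated generating operation
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    y = x0[1:]
    m = n - 1
    s_z, s_y = sum(z), sum(y)
    s_zz = sum(zi * zi for zi in z)
    s_zy = sum(zi * yi for zi, yi in zip(z, y))
    det = m * s_zz - s_z * s_z
    a = (s_z * s_y - m * s_zy) / det   # least-squares development coefficient
    b = (s_y + a * s_z) / m            # least-squares grey input
    return a, b, x0[0]

def gm11_predict(a, b, x0_first, k):
    """Predicted original-series value at (1-based) position k + 1."""
    c = x0_first - b / a
    x1_k = c * math.exp(-a * k) + b / a
    x1_prev = c * math.exp(-a * (k - 1)) + b / a
    return x1_k - x1_prev              # difference the accumulated forecast
```

On a smoothly growing series (e.g. 10% per period) the fitted model recovers the growth rate and extrapolates the next value closely.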
Applying temporal abstraction and case-based reasoning to predict approaching influenza waves.
Schmidt, Rainer; Gierl, Lothar
2002-01-01
The goal of the TeCoMed project is to send early warnings against forthcoming waves or even epidemics of infectious diseases, especially of influenza, to interested practitioners, pharmacists etc. in the German federal state Mecklenburg-Western Pomerania. The forecast of these waves is based on written confirmations of unfitness for work of the main German health insurance company. Since influenza waves are difficult to predict because of their cyclic but not regular behaviour, statistical methods based on the computation of mean values are not helpful. Instead, we have developed a prognostic model that makes use of similar former courses. Our method combines Case-based Reasoning with Temporal Abstraction to decide whether early warning is appropriate.
Carman, Christián; Díez, José
2015-08-01
The goal of this paper, both historical and philosophical, is to launch a new case into the scientific realism debate: geocentric astronomy. Scientific realism about unobservables claims that the non-observational content of our successful/justified empirical theories is true, or approximately true. The argument that is currently considered the best in favor of scientific realism is the No Miracles Argument: the predictive success of a theory that makes (novel) observational predictions while making use of non-observational content would be inexplicable unless such non-observational content approximately corresponds to the world "out there". Laudan's pessimistic meta-induction challenged this argument, and realists reacted by moving to a "selective" version of realism: the approximately true part of the theory is not its full non-observational content but only the part of it that is responsible for the novel, successful observational predictions. Selective scientific realism has been tested against some of the theories in Laudan's list, but the first member of this list, geocentric astronomy, has been traditionally ignored. Our goal here is to defend that Ptolemy's Geocentrism deserves attention and poses a prima facie strong case against selective realism, since it made several successful, novel predictions based on theoretical hypotheses that do not seem to be retained, not even approximately, by posterior theories. Here, though, we confine our work just to the detailed reconstruction of what we take to be the main novel, successful Ptolemaic predictions, leaving the full analysis and assessment of their significance for the realist thesis to future works. Copyright © 2015. Published by Elsevier Ltd.
Krogh-Jespersen, Sheila; Woodward, Amanda L
2014-01-01
Previous research has shown that young infants perceive others' actions as structured by goals. One open question is whether the recruitment of this understanding when predicting others' actions imposes a cognitive challenge for young infants. The current study explored infants' ability to utilize their knowledge of others' goals to rapidly predict future behavior in complex social environments and distinguish goal-directed actions from other kinds of movements. Fifteen-month-olds (N = 40) viewed videos of an actor engaged in either a goal-directed (grasping) or an ambiguous (brushing the back of her hand) action on a Tobii eye-tracker. At test, critical elements of the scene were changed and infants' predictive fixations were examined to determine whether they relied on goal information to anticipate the actor's future behavior. Results revealed that infants reliably generated goal-based visual predictions for the grasping action, but not for the back-of-hand behavior. Moreover, response latencies were longer for goal-based predictions than for location-based predictions, suggesting that goal-based predictions are cognitively taxing. Analyses of areas of interest indicated that heightened attention to the overall scene, as opposed to specific patterns of attention, was the critical indicator of successful judgments regarding an actor's future goal-directed behavior. These findings shed light on the processes that support "smart" social behavior in infants, as it may be a challenge for young infants to use information about others' intentions to inform rapid predictions.
Miller, W B; Pasta, D J
2001-01-01
In this study we develop and then test a couple model of contraceptive method choice decision-making following a pregnancy scare. The central constructs in our model are satisfaction with one's current method and confidence in the use of it. Downstream in the decision sequence, satisfaction and confidence predict desires and intentions to change methods. Upstream they are predicted by childbearing motivations, contraceptive attitudes, and the residual effects of the couples' previous method decisions. We collected data from 175 mostly unmarried and racially/ethnically diverse couples who were seeking pregnancy tests. We used LISREL and its latent variable capacity to estimate a structural equation model of the couple decision-making sequence leading to a change (or not) in contraceptive method. Results confirm most elements in our model and demonstrate a number of important cross-partner effects. Almost one-half of the sample had positive pregnancy tests and the base model fitted to this subsample indicates less accuracy in partner perception and greater influence of the female partner on method change decision-making. The introduction of some hypothesis-generating exogenous variables to our base couple model, together with some unexpected findings for the contraceptive attitude variables, suggest interesting questions that require further exploration.
McMeekin, Peter; Flynn, Darren; Ford, Gary A; Rodgers, Helen; Gray, Jo; Thomson, Richard G
2015-11-11
Individualised prediction of outcomes can support clinical and shared decision making. This paper describes the building of such a model to predict outcomes with and without intravenous thrombolysis treatment following ischaemic stroke. A decision analytic model (DAM) was constructed to establish the likely balance of benefits and risks of treating acute ischaemic stroke with thrombolysis. Probability of independence (modified Rankin score mRS ≤ 2), dependence (mRS 3 to 5) and death at three months post-stroke was based on a calibrated version of the Stroke-Thrombolytic Predictive Instrument using data from routinely treated stroke patients in the Safe Implementation of Treatments in Stroke (SITS-UK) registry. Predictions in untreated patients were validated using data from the Virtual International Stroke Trials Archive (VISTA). The probability of symptomatic intracerebral haemorrhage in treated patients was incorporated using a scoring model from Safe Implementation of Thrombolysis in Stroke-Monitoring Study (SITS-MOST) data. The model predicts probabilities of haemorrhage, death, independence and dependence at 3 months, with and without thrombolysis, as a function of 13 patient characteristics. Calibration (and inclusion of additional predictors) of the Stroke-Thrombolytic Predictive Instrument (S-TPI) addressed issues of under- and over-prediction. Validation with VISTA data confirmed that assumptions about treatment effect were justified. The C-statistics for independence and death in treated patients in the DAM were 0.793 and 0.771 respectively, and 0.776 for independence in untreated patients from VISTA. We have produced a DAM that provides an estimation of the likely benefits and risks of thrombolysis for individual patients, which has subsequently been embedded in a computerised decision aid to support better decision-making and informed consent.
Testing the Predictive Power of Coulomb Stress on Aftershock Sequences
NASA Astrophysics Data System (ADS)
Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.
2009-12-01
Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
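The likelihood-based scoring used in such CSEP-style comparisons reduces, in its simplest form, to the joint Poisson log-likelihood of the observed earthquake counts per space-time-magnitude bin under each model's forecast rates; the model with the higher log-likelihood is the better forecast. This is a minimal sketch of that scoring idea only; the modified consistency tests used in the paper add further machinery.

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed bin counts under per-bin Poisson
    forecast rates: sum of n*log(lambda) - lambda - log(n!)."""
    ll = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        ll += n * math.log(lam) - lam - math.lgamma(n + 1)
    return ll
```

A forecast whose rates match the observed counts necessarily scores at least as well as one whose rates are permuted across bins, which is the sense in which the metric rewards spatially specific predictions.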
Peters, Ellen; Hess, Thomas M; Västfjäll, Daniel; Auman, Corinne
2007-03-01
Age differences in affective/experiential and deliberative processes have important theoretical implications for judgment and decision theory and important pragmatic implications for older-adult decision making. Age-related declines in the efficiency of deliberative processes predict poorer-quality decisions as we age. However, age-related adaptive processes, including motivated selectivity in the use of deliberative capacity, an increased focus on emotional goals, and greater experience, predict better or worse decisions for older adults depending on the situation. The aim of the current review is to examine adult age differences in affective and deliberative information processes in order to understand their potential impact on judgments and decisions. We review evidence for the role of these dual processes in judgment and decision making and then review two representative life-span perspectives (based on aging-related changes to cognitive or motivational processes) on the interplay between these processes. We present relevant predictions for older-adult decisions and make note of contradictions and gaps that currently exist in the literature. Finally, we review the sparse evidence about age differences in decision making and how theories and findings regarding dual processes could be applied to decision theory and decision aiding. In particular, we focus on prospect theory (Kahneman & Tversky, 1979) and how prospect theory and theories regarding age differences in information processing can inform one another. © 2007 Association for Psychological Science.
Intuitive statistics by 8-month-old infants
Xu, Fei; Garcia, Vashti
2008-01-01
Human learners make inductive inferences based on small amounts of data: we generalize from samples to populations and vice versa. The academic discipline of statistics formalizes these intuitive statistical inferences. What is the origin of this ability? We report six experiments investigating whether 8-month-old infants are “intuitive statisticians.” Our results showed that, given a sample, the infants were able to make inferences about the population from which the sample had been drawn. Conversely, given information about the entire population of relatively small size, the infants were able to make predictions about the sample. Our findings provide evidence that infants possess a powerful mechanism for inductive learning, either using heuristics or basic principles of probability. This ability to make inferences based on samples or information about the population develops early and in the absence of schooling or explicit teaching. Human infants may be rational learners from very early in development. PMID:18378901
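The normative benchmark for such sample-to-population inferences is a simple likelihood comparison: given two candidate populations, the binomial probability of the observed sample indicates which population better explains it. The sketch below illustrates that normative computation only; it is not a model of the infants and all names are illustrative.

```python
from math import comb

def sample_likelihood(k_red, n_draws, p_red):
    """Binomial probability of drawing exactly k_red red items in n_draws
    from a population with red-item proportion p_red."""
    return comb(n_draws, k_red) * p_red**k_red * (1.0 - p_red)**(n_draws - k_red)

def more_likely_population(k_red, n_draws, p_a, p_b):
    """Which of two candidate populations ('A' with p_a, 'B' with p_b)
    better explains the observed sample."""
    lik_a = sample_likelihood(k_red, n_draws, p_a)
    lik_b = sample_likelihood(k_red, n_draws, p_b)
    return 'A' if lik_a >= lik_b else 'B'
```

A mostly red sample is far more probable under a mostly red population, which is the direction of inference the infants' looking behavior tracked.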
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heo, Yeonsook; Augenbroe, Godfried; Graziano, Diane
2015-05-01
The increasing interest in retrofitting of existing buildings is motivated by the need to make a major contribution to enhancing building energy efficiency and reducing energy consumption and CO2 emission by the built environment. This paper examines the relevance of calibration in model-based analysis to support decision-making for energy and carbon efficiency retrofits of individual buildings and portfolios of buildings. The authors formulate a set of real retrofit decision-making situations and evaluate the role of calibration by using a case study that compares predictions and decisions from an uncalibrated model with those of a calibrated model. The case study illustrates both the mechanics and outcomes of a practical alternative to the expert- and time-intensive application of dynamic energy simulation models for large-scale retrofit decision-making under uncertainty.
Bayesian averaging over Decision Tree models for trauma severity scoring.
Schetinin, V; Jakaite, L; Krzanowski, W
2018-01-01
Health care practitioners analyse possible risks of misleading decisions and need to estimate and quantify uncertainty in predictions. We have examined the "gold" standard of screening a patient's conditions for predicting survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology is based on theoretical assumptions about data and uncertainties. Models induced within such an approach have exposed a number of problems, providing unexplained fluctuation of predicted survival and low accuracy of estimating uncertainty intervals within which predictions are made. The Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, has been adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.
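Bayesian averaging over decision trees amounts to weighting each sampled tree's predicted survival probability by its normalized posterior weight; the spread of the weighted per-tree predictions then yields the uncertainty interval. The sketch below is a minimal illustration with invented names, using log-likelihoods as stand-ins for the posterior weights; the paper's actual MCMC sampling over trees is not reproduced.

```python
import math

def bayesian_average(tree_probs, log_weights):
    """Posterior-weighted average of per-tree survival probabilities.
    log_weights are unnormalized log posterior weights (e.g. log-likelihoods)."""
    m = max(log_weights)
    weights = [math.exp(lw - m) for lw in log_weights]  # numerically stable
    z = sum(weights)
    weights = [w / z for w in weights]
    mean = sum(w * p for w, p in zip(weights, tree_probs))
    return mean, weights

def credible_interval(tree_probs, weights, lo=0.025, hi=0.975):
    """Weighted quantile interval over the per-tree predictions."""
    pairs = sorted(zip(tree_probs, weights))
    cum, lower, upper = 0.0, pairs[0][0], pairs[-1][0]
    for p, w in pairs:
        prev = cum
        cum += w
        if prev < lo <= cum:
            lower = p
        if prev < hi <= cum:
            upper = p
    return lower, upper
```

Equal weights reduce the average to the plain mean; concentrating posterior weight on one tree pulls both the mean and the interval toward that tree's prediction.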
NASA Astrophysics Data System (ADS)
Feng, Shou; Fu, Ping; Zheng, Wenbin
2018-03-01
Predicting gene function from biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When local-approach methods are used to solve this problem, a method for processing the preliminary results is usually needed. This paper proposes a novel preliminary-results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. In its first phase, the method exploits label dependency and considers the hierarchical interaction between nodes when making decisions based on a Bayesian network. In the second phase, it further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances HMC performance for gene function prediction based on the Gene Ontology (GO), whose hierarchy is a directed acyclic graph and is therefore more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated by the GO.
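One common way to make raw per-node scores consistent with the hierarchy constraint, sketched on a toy DAG. The class names and scores are invented, and this illustrates only the constraint-enforcement step, not the paper's Bayesian-network interaction phase:

```python
# Toy DAG: child -> list of parents. GO hierarchies are DAGs, so a node may
# have several parents. Names are hypothetical.
parents = {
    "root": [],
    "metabolism": ["root"],
    "transport": ["root"],
    "ion_transport": ["transport", "metabolism"],
}

raw = {"root": 0.99, "metabolism": 0.4, "transport": 0.7, "ion_transport": 0.8}

def enforce_hierarchy(raw, parents):
    """Cap each node's score by the minimum of its corrected parents' scores,
    visiting nodes in topological order via recursion."""
    corrected = {}
    def visit(node):
        if node in corrected:
            return
        for p in parents[node]:
            visit(p)
        cap = min((corrected[p] for p in parents[node]), default=1.0)
        corrected[node] = min(raw[node], cap)
    for node in parents:
        visit(node)
    return corrected

out = enforce_hierarchy(raw, parents)
print(out)  # ion_transport is capped at 0.4 by its "metabolism" ancestor
```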
Reinforcement learning in depression: A review of computational research.
Chen, Chong; Takahashi, Taiki; Nakagawa, Shin; Inoue, Takeshi; Kusumi, Ichiro
2015-08-01
Despite being considered primarily a mood disorder, major depressive disorder (MDD) is characterized by cognitive and decision-making deficits. Recent research has employed computational models of reinforcement learning (RL) to address these deficits. The computational approach has the advantage of making explicit predictions about learning and behavior, specifying the process parameters of RL, differentiating between model-free and model-based RL, and enabling computational model-based analyses of functional magnetic resonance imaging and electroencephalography data. These merits have given rise to the emerging field of computational psychiatry, and here we review specific studies that focused on MDD. Considerable evidence suggests that MDD is associated with impaired brain signals of reward prediction error and expected value ('wanting'), decreased reward sensitivity ('liking') and/or learning (be it model-free or model-based), although the causality remains unclear. These parameters may serve as valuable intermediate phenotypes of MDD, linking general clinical symptoms to underlying molecular dysfunctions. We believe future computational research at the clinical, systems, and cellular/molecular/genetic levels will propel us toward a better understanding of the disease. Copyright © 2015 Elsevier Ltd. All rights reserved.
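The model-free RL machinery referred to above can be sketched with a Rescorla-Wagner-style update on a two-armed bandit. Here `alpha` and `rho` are stand-ins for the learning-rate and reward-sensitivity parameters discussed; the values are illustrative, not estimates from patient data:

```python
import random

random.seed(1)

def simulate_agent(alpha, rho, n_trials=1000, eps=0.1):
    """Model-free agent on a two-armed bandit.
    alpha: learning rate ('learning'); rho: reward sensitivity ('liking')."""
    p_reward = [0.8, 0.2]               # arm reward probabilities
    q = [0.0, 0.0]                      # learned expected values ('wanting')
    total = 0.0
    for _ in range(n_trials):
        if random.random() < eps:       # occasional exploration
            a = random.randrange(2)
        else:
            a = 0 if q[0] >= q[1] else 1
        r = rho * (1.0 if random.random() < p_reward[a] else 0.0)
        delta = r - q[a]                # reward prediction error
        q[a] += alpha * delta           # value update
        total += r
    return q, total / n_trials

q_healthy, rate_h = simulate_agent(alpha=0.3, rho=1.0)
q_blunted, rate_b = simulate_agent(alpha=0.05, rho=0.3)  # reduced parameters
print(q_healthy, q_blunted)
```

Fitting `alpha` and `rho` to observed choices is how such parameters become candidate intermediate phenotypes.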
Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.
Deng, Li; Wang, Guohua; Chen, Bo
2015-01-01
To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA and used to simulate and evaluate operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, so that operating comfort can be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good predictive performance, and can improve design efficiency.
Besser, Avi; Priel, Beatriz
2011-01-01
This study evaluated the intervening role of meaning-making processes in emotional responses to negative life events, based on Blatt's (1974, 2004) formulations concerning the role of personality predispositions in depression. In a pre/post within-subject design, a community sample of 233 participants reacted to imaginary scenarios of interpersonal rejection and achievement failure. Meaning-making processes relating to threats to self-definition and interpersonal relatedness were examined following exposure to the scenarios. The results indicated that the personality predisposition of Dependency, but not Self-Criticism, predicted higher levels of negative affect following the interpersonal rejection event, independent of baseline levels of negative affect. This effect was mediated by higher levels of negative meaning-making processes related to the effect of the interpersonal rejection scenario on Dependent individuals' senses of interpersonal relatedness and self-worth. In addition, both Self-Criticism and Dependency predicted higher levels of negative affect following the achievement failure event, independent of baseline levels of negative affect. Finally, the effect of Self-Criticism was mediated by higher levels of negative meaning-making processes related to the effect of the achievement failure scenario on self-critical individuals' senses of self-definition.
Meteorology--An Interdisciplinary Base for Science Learning.
ERIC Educational Resources Information Center
Howell, David C.
1980-01-01
Described is a freshman science program at Deerfield Academy (Deerfield, Mass.) in meteorology, designed as the first part of a three-year unified science sequence. Merits of the course, in which particular emphasis is placed on observation skills and making predictions, are enumerated. (CS)
Prototype Abstraction by Monkeys ("Macaca Mulatta")
ERIC Educational Resources Information Center
Smith, J. David; Redford, Joshua S.; Haas, Sarah M.
2008-01-01
The authors analyze the shape categorization of rhesus monkeys ("Macaca mulatta") and the role of prototype- and exemplar-based comparison processes in monkeys' category learning. Prototype and exemplar theories make contrasting predictions regarding performance on the Posner-Homa dot-distortion categorization task. Prototype theory--which…
Alamaniotis, Miltiadis; Agarwal, Vivek
2014-04-01
Anticipatory control systems are a class of systems whose decisions are based on predictions of the future state of the system under monitoring. Anticipation denotes intelligence and is an inherent property of humans, who make decisions by projecting into the future. Likewise, artificially intelligent systems equipped with predictive functions may be utilized to anticipate future states of complex systems and thereby facilitate automated control decisions. Anticipatory control of complex energy systems is paramount to their normal and safe operation. In this paper a new intelligent methodology integrating fuzzy inference with support vector regression is introduced. The proposed methodology implements an anticipatory system aimed at controlling energy systems in a robust way. Initially, a set of support vector regressors is adopted for making predictions of critical system parameters. The predicted values are then fed into a two-stage fuzzy inference system that makes decisions regarding the state of the energy system. The inference system integrates the individual predictions into a single one at its first stage, and outputs a decision together with a certainty factor computed at its second stage. The certainty factor is an index of the significance of the decision. The proposed anticipatory control system is tested on a real-world data set obtained from a complex energy system, describing the degradation of a turbine. Results exhibit the robustness of the proposed system in controlling complex energy systems.
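The two-stage structure described above might look roughly like this, with placeholder prediction values standing in for the support vector regressors and invented triangular fuzzy sets; the rule boundaries, state labels, and numbers are all assumptions:

```python
# Stage 0 (stand-in): predicted values of critical parameters, which in the
# paper come from a set of support vector regressors.
predictions = [0.72, 0.65, 0.80]   # hypothetical normalized degradation levels

def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Stage 1: integrate the individual predictions into a single value.
fused = sum(predictions) / len(predictions)

# Stage 2: fuzzy rules map the fused value to a control decision; the winning
# rule's membership degree serves as the certainty factor of that decision.
memberships = {
    "normal":   tri(fused, -0.01, 0.0, 0.5),
    "derate":   tri(fused, 0.3, 0.6, 0.9),
    "shutdown": tri(fused, 0.7, 1.0, 1.01),
}
decision, certainty = max(memberships.items(), key=lambda kv: kv[1])
print(decision, round(certainty, 3))
```

A low certainty factor would flag the decision for human review rather than automated action.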
Krajbich, Ian; Rangel, Antonio
2011-08-16
How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
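A rough simulation sketch in the spirit of the trinary attentional accumulator model described above; the parameter values and the fixed gaze-switching schedule are illustrative assumptions, not the fitted parameters from the eye-tracking experiment:

```python
import random

random.seed(2)

def addm_trial(values, theta=0.3, d=0.002, sigma=0.02, threshold=1.0):
    """One trinary trial: each option's relative decision value drifts toward
    a threshold, with unattended options' values discounted by theta."""
    rdv = [0.0] * len(values)
    t = 0
    while True:
        t += 1
        if t % 400 == 1:                      # switch gaze every ~400 ms
            attended = random.randrange(len(values))
        w = [theta] * len(values)
        w[attended] = 1.0                     # attended item gets full weight
        for i in range(len(values)):
            # drift: discounted value of option i vs. its best competitor
            others = max(w[j] * values[j] for j in range(len(values)) if j != i)
            rdv[i] += d * (w[i] * values[i] - others) + random.gauss(0, sigma)
            if rdv[i] >= threshold:
                return i, t                   # choice and reaction time (ms)

choice, rt = addm_trial([3.0, 2.0, 1.0])
print(choice, rt)
```

Running many such trials while logging fixations would reproduce the kind of choice, reaction-time, and fixation distributions the model is tested against.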
General Formalism of Decision Making Based on Theory of Open Quantum Systems
NASA Astrophysics Data System (ADS)
Asano, M.; Ohya, M.; Basieva, I.; Khrennikov, A.
2013-01-01
We present a general formalism of decision making based on the theory of open quantum systems. A person (decision maker), say Alice, is considered as a quantum-like system, i.e., a system whose information processing follows the laws of quantum information theory. To make a decision, Alice interacts with a huge mental bath. Depending on the context of decision making, this bath can include her social environment, mass media (TV, newspapers, the Internet), and memory. The dynamics of an ensemble of such Alices is described by the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation. We speculate that in the process of evolution, biosystems (especially human beings) designed such "mental Hamiltonians" and GKSL operators that any solution of the corresponding GKSL equation stabilizes to a density operator that is diagonal in the basis of decision making. This limiting density operator describes a population in which all superpositions of possible decisions have already been resolved. In principle, this approach can be used for predicting the distribution of possible decisions in human populations.
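A small numerical illustration of the claimed limiting behaviour: under a dephasing GKSL operator, an initial superposition of two decisions relaxes to a diagonal density operator. The Hamiltonian and bath operator here are the simplest possible choices, not the paper's "mental" operators:

```python
import numpy as np

# Two possible decisions |0>, |1>; Alice starts in an equal superposition.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

H = np.zeros((2, 2))                        # trivial "mental Hamiltonian"
L = np.sqrt(0.5) * np.diag([1.0, -1.0])     # dephasing operator from the bath

def gksl_step(rho, dt=0.01):
    """One Euler step of the GKSL equation
    drho/dt = -i[H, rho] + L rho L+ - (1/2){L+L, rho}."""
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T \
        - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return rho + dt * (comm + diss)

for _ in range(2000):
    rho = gksl_step(rho)

# Off-diagonal coherences have decayed; the diagonal gives the distribution
# of resolved decisions in the population.
print(np.round(rho.real, 3))
```

With this dephasing operator the populations (diagonal) stay at 0.5 each while the coherences decay exponentially, i.e. the superposition resolves into a classical 50/50 mixture.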
How Children Use Examples to Make Conditional Predictions
ERIC Educational Resources Information Center
Kalish, Charles W.
2010-01-01
Two experiments explored children's and adults' use of examples to make conditional predictions. In Experiment 1 adults (N = 20) but not 4-year-olds (N = 21) or 8-year-olds (N = 18) distinguished predictable from unpredictable features when features were partially correlated (e.g., necessary but not sufficient). Children did make reliable…
Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction
Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian; Deane, Charlotte M
2017-01-01
Motivation: Loops are often vital for protein function, however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations, and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. Results: We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Availability and Implementation: Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. Contact: deane@stats.ox.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online.
Predictive analytics and child protection: constraints and opportunities.
Russell, Jesse
2015-08-01
This paper considers how predictive analytics might inform, assist, and improve decision making in child protection. Predictive analytics builds on recent increases in data quantity and diversity, along with advances in computing technology. While the use of data and statistical modeling is not new to child protection decision making, its use in child protection is growing, and efforts to leverage predictive analytics for better decision-making in child protection are increasing. Past experiences, constraints, and opportunities are reviewed. For predictive analytics to make the most impact on child protection practice and outcomes, it must embrace established criteria of validity, equity, reliability, and usefulness. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lee, Jaebeom; Lee, Young-Joo
2018-01-01
Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor in guaranteeing traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various factors influencing vertical deflection, such as concrete creep and shrinkage. However, this is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurements and temperature. To deal with the sources of uncertainty which may cause prediction errors, the Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean for the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.
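The GP prediction-interval mechanics can be sketched in closed form. The data below are a synthetic stand-in for the vision-based deflection measurements, and a single RBF kernel with fixed hyperparameters replaces the paper's multiple tuned kernels:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for deflection measurements over time (arbitrary units),
# with a seasonal temperature-like trend plus measurement noise.
t_train = np.linspace(0, 10, 40)
y_train = 0.5 * np.sin(2 * np.pi * t_train / 10) + 0.05 * rng.normal(size=40)

def kernel(a, b, ell=1.5, sf=0.5):
    """Squared-exponential kernel; in practice several kernels with tuned
    hyperparameters would be summed."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

noise = 0.05**2
K = kernel(t_train, t_train) + noise * np.eye(len(t_train))
t_test = np.array([10.5, 11.0, 11.5])
Ks = kernel(t_train, t_test)
Kss = kernel(t_test, t_test)

# GP posterior mean and covariance at the test times
alpha = np.linalg.solve(K, y_train)
mean = Ks.T @ alpha
cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
sd = np.sqrt(np.clip(np.diag(cov), 0, None) + noise)

lower, upper = mean - 1.96 * sd, mean + 1.96 * sd   # 95% prediction interval
print(np.round(mean, 3), np.round(upper - lower, 3))
```

The interval widens as the forecast moves away from the training data, which is the behaviour maintenance decisions would rely on.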
Changes of mind in an attractor network of decision-making.
Albantakis, Larissa; Deco, Gustavo
2011-06-01
Attractor networks successfully account for psychophysical and neurophysiological data in various decision-making tasks. Especially their ability to model persistent activity, a property of many neurons involved in decision-making, distinguishes them from other approaches. Stable decision attractors are, however, counterintuitive to changes of mind. Here we demonstrate that a biophysically-realistic attractor network with spiking neurons, in its itinerant transients towards the choice attractors, can replicate changes of mind observed recently during a two-alternative random-dot motion (RDM) task. Based on the assumption that the brain continues to evaluate available evidence after the initiation of a decision, the network predicts neural activity during changes of mind and accurately simulates reaction times, performance and percentage of changes dependent on difficulty. Moreover, the model suggests a low decision threshold and high incoming activity that drives the brain region involved in the decision-making process into a dynamical regime close to a bifurcation, which up to now lacked evidence for physiological relevance. Thereby, we further affirmed the general conformance of attractor networks with higher level neural processes and offer experimental predictions to distinguish nonlinear attractor from linear diffusion models.
Report to the National Park Service for Permit LAKE-2014-SCI-002
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnley, Pamela C.
The overall purpose of the study is to determine how to use existing geologic data to predict gamma-ray background levels as measured during aerial radiological surveys. Aerial radiological surveys have typically been made for resource exploration purposes but are now also used for homeland security purposes and nuclear disaster assessment, as well as for determining the depth of snowpack. Foreknowledge of the background measured during an aerial radiological survey will be valuable for all of the above applications. The gamma-ray background comes from the rocks and soil within the first 30 cm of the earth's surface in the area where the survey is being made. The background should therefore be predictable based on an understanding of the distribution and geochemistry of the rocks on the surface. We are using a combination of geologic maps, remote sensing imagery, and geochemical data from existing databases and the scientific literature to develop a method for predicting gamma-ray backgrounds. As part of this project we have an opportunity to ground-truth our technique along a survey calibration line near Lake Mojave that is used by the Remote Sensing Lab (RSL) of National Security Technologies, LLC (NSTec). RSL makes aerial measurements along this line on a regular basis, so the aerial background in the area is well known. By making ground-based measurements of the gamma-ray background and detailed observations of the geology of the ground surface, as well as the local topography, we will have the data we need to make corrections to the models we build based on the remote sensing and geologic data. Our project involves collaborators from the Airborne Geophysics Section of the Geological Survey of Canada as well as from NSTec's RSL.
The prospect of predictive testing for personal risk: attitudes and decision making.
Wroe, A L; Salkovskis, P M; Rimes, K A
1998-06-01
As predictive tests for medical problems such as genetic disorders become more widely available, it becomes increasingly important to understand the processes involved in the decision whether or not to seek testing. This study investigates the decision to pursue the possibility of testing. Individuals (one group who had already contemplated the possibility of predictive testing and one group who had not) were asked to consider predictive testing for several diseases. They rated the likelihood of opting for testing and specified the reasons which they believed had affected their decision. The ratio of the numbers of reasons stated for testing and the numbers of reasons stated against testing was a good predictor of the stated likelihood of testing, particularly when the reasons were weighted by utility (importance). Those who had previously contemplated testing specified more emotional reasons. It is proposed that the decision process is internally logical although it may seem illogical to others due to there being idiosyncratic premises (or reasons) upon which the decision is based. It is concluded that the Utility Theory is a useful basis for describing how people make decisions related to predictive testing; modifications of the theory are proposed.
An Efficient Scheme for Crystal Structure Prediction Based on Structural Motifs
Zhu, Zizhong; Wu, Ping; Wu, Shunqing; ...
2017-05-15
An efficient scheme based on structural motifs is proposed for the crystal structure prediction of materials. The key advantage of the present method is twofold: first, the degrees of freedom of the system are greatly reduced, since each structural motif, regardless of its size, can always be described by a set of parameters (R, θ, φ) with five degrees of freedom; second, the motifs always appear in the predicted structures when the energies of the structures are relatively low. Both features make the present scheme a very efficient method for predicting desired materials. The method has been applied to the case of LiFePO4, an important cathode material for lithium-ion batteries. Numerous new structures of LiFePO4 have been found beyond those currently available, demonstrating the reliability of the present methodology and illustrating the promise of the concept of structural motifs.
Study on SOC wavelet analysis for LiFePO4 battery
NASA Astrophysics Data System (ADS)
Liu, Xuepeng; Zhao, Dongmei
2017-08-01
Improving the accuracy of state-of-charge (SOC) prediction can reduce the conservatism and complexity of control strategies, such as the scheduling, optimization, and planning of a LiFePO4 battery system. Based on an analysis of the relationship between historical SOC data and external stress factors, an estimation-correction SOC prediction model based on wavelet analysis is established. A high-precision wavelet neural network prediction model implements the forecast step, while measured external stress data are used to update the model's parameter estimates in the correction step, so that the prediction model can adapt to the battery's variable operating region under rated charge and discharge conditions. The test results show that the method yields a high-precision prediction model even when the input and output of the LiFePO4 battery change frequently.
On the validity of time-dependent AUC estimators.
Schmid, Matthias; Kestler, Hans A; Potapov, Sergej
2015-01-01
Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
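For concreteness, one widely used discrimination measure, Harrell's concordance index, can be computed directly on simulated censored survival data. This is a generic estimator sketch on invented data, not one of the specific time-dependent AUC estimators studied above:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy biomarker study: higher marker value -> shorter survival, with censoring.
n = 300
marker = rng.normal(size=n)
time = rng.exponential(scale=np.exp(-marker))       # marker drives the hazard
cens = rng.exponential(scale=2.0, size=n)
obs_time = np.minimum(time, cens)
event = (time <= cens).astype(int)                  # 1 = death observed

def harrells_c(marker, obs_time, event):
    """Harrell's concordance: among usable pairs (the shorter observed time
    ends in an event), the fraction where the shorter-lived subject also has
    the higher marker value."""
    conc = ties = usable = 0
    for i in range(len(marker)):
        for j in range(len(marker)):
            if obs_time[i] < obs_time[j] and event[i] == 1:
                usable += 1
                if marker[i] > marker[j]:
                    conc += 1
                elif marker[i] == marker[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

c = harrells_c(marker, obs_time, event)
print(round(c, 3))
```

The over-optimism discussed in the abstract arises when such an estimate is interpreted under regularity assumptions (e.g. proportional hazards) that the data violate.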
Ontology-based prediction of surgical events in laparoscopic surgery
NASA Astrophysics Data System (ADS)
Katić, Darko; Wekerle, Anna-Laura; Gärtner, Fabian; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie
2013-03-01
Context-aware technologies have great potential to help surgeons during laparoscopic interventions. Their underlying idea is to create systems which can adapt their assistance functions automatically to the situation in the OR, thus relieving surgeons from the burden of managing computer-assisted surgery devices manually. To this end, a certain understanding of the current situation in the OR is essential. Beyond that, anticipatory knowledge of incoming events is beneficial, e.g. for early warnings of imminent risk situations. To achieve the goal of predicting surgical events based on previously observed ones, we developed a language to describe surgeries and surgical events using Description Logics and integrated it with methods from computational linguistics. Using n-grams to compute probabilities of follow-up events, we are able to make sensible predictions of upcoming events in real time. The system was evaluated on professionally recorded and labeled surgeries and showed an average prediction rate of 80%.
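The n-gram idea reduces to conditional counts over event sequences. A bigram version on invented event labels (the real system works on Description Logics representations of labeled surgeries) might look like:

```python
from collections import Counter, defaultdict

# Hypothetical surgical event sequences; the labels are invented.
surgeries = [
    ["incision", "dissection", "clipping", "cutting", "suturing"],
    ["incision", "dissection", "coagulation", "clipping", "cutting", "suturing"],
    ["incision", "dissection", "clipping", "cutting", "irrigation", "suturing"],
]

# Train bigram counts: P(next event | previous event)
bigrams = defaultdict(Counter)
for seq in surgeries:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def predict_next(prev):
    """Most probable follow-up event with its conditional probability."""
    counts = bigrams[prev]
    total = sum(counts.values())
    event, c = counts.most_common(1)[0]
    return event, c / total

print(predict_next("dissection"))   # most probable event after "dissection"
```

Longer n-grams condition on more history at the cost of sparser counts, which is the usual smoothing trade-off from computational linguistics.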
Forecasting seasonal outbreaks of influenza.
Shaman, Jeffrey; Karspeck, Alicia
2012-12-11
Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003-2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza.
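The forecast/update cycle of such a data-assimilation framework can be caricatured with a simple SIR model and an ensemble Kalman-style adjustment. The observation series, variances, and parameters below are invented; the paper's actual filter and humidity-driven model are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(5)

def sir_step(S, I, beta, gamma=0.5, N=1e6):
    """One weekly step of a simple SIR model, vectorized over the ensemble."""
    new_inf = beta * S * I / N
    recov = gamma * I
    return S - new_inf, I + new_inf - recov

# Ensemble of model states with uncertain transmission rates.
M = 200
S = np.full(M, 9.9e5)
I = rng.uniform(50, 5000, M)
beta = rng.uniform(0.5, 1.5, M)

# Hypothetical weekly web-based infection estimates (stand-ins for real data).
observations = [1200, 2500, 4800, 8000, 9500, 8700, 6100, 3900]
obs_var = 500.0 ** 2

for obs in observations:
    S, I = sir_step(S, I, beta)                       # forecast step
    K = np.var(I) / (np.var(I) + obs_var)             # Kalman gain
    perturbed = obs + rng.normal(0, np.sqrt(obs_var), M)
    I = np.clip(I + K * (perturbed - I), 0, None)     # analysis (update) step

# The ensemble mean is the estimate; its spread conveys forecast confidence.
print(round(I.mean()), round(I.std(), 1))
```

Running the ensemble forward without further updates would give the retrospective forecasts whose spread, as the abstract notes, indicates how much to trust the predicted peak.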
Yoshizaki, Satoko; Hiraoka, Kyoichi
2015-04-01
The purpose of the present study was to examine the multivariate relations between career exploration and its predictors. University sophomores and seniors completed a questionnaire about career exploration, career decision-making self-efficacy, career decision-making outcome expectations, and career motivation. Canonical correlation analysis showed that combining all predictors, i.e., career decision-making self-efficacy, career decision-making outcome expectations, and career motivation, accounted for a large portion of the career exploration variance. Of the subfactors of career motivation, only "integrated and identified regulation" was significantly related to career exploration. This result suggests that career exploration is predicted by self-efficacy as well as by highly self-determined extrinsic motivation.
Goal-Directed Decision Making with Spiking Neurons.
Friedrich, Johannes; Lengyel, Máté
2016-02-03
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors 0270-6474/16/361529-18$15.00/0.
Learning about individuals' health from aggregate data.
Colbaugh, Rich; Glass, Kristin
2017-07-01
There is growing awareness that user-generated social media content contains valuable health-related information and is more convenient to collect than typical health data. For example, Twitter has been employed to predict aggregate-level outcomes, such as regional rates of diabetes and child poverty, and to identify individual cases of depression and food poisoning. Models which make aggregate-level inferences can be induced from aggregate data, and consequently are straightforward to build. In contrast, learning models that produce individual-level (IL) predictions, which are more informative, usually requires a large number of difficult-to-acquire labeled IL examples. This paper presents a new machine learning method which achieves the best of both worlds, enabling IL models to be learned from aggregate labels. The algorithm makes predictions by combining unsupervised feature extraction, aggregate-based modeling, and optimal integration of aggregate-level and IL information. Two case studies illustrate how to learn health-relevant IL prediction models using only aggregate labels, and show that these models perform as well as state-of-the-art models trained on hundreds or thousands of labeled individuals.
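The paper's central idea — producing individual-level (IL) predictions when only group-level label rates are known — can be illustrated with a generic learning-from-label-proportions sketch: fit a logistic scorer so that each group's mean predicted probability matches its known aggregate rate. The toy groups and features below are invented, and this is a simplified stand-in, not the authors' algorithm:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fit_from_aggregates(groups, rates, lr=1.0, epochs=5000):
    """Fit weights so each group's mean predicted probability matches
    its known aggregate label rate (squared-error loss, full gradient)."""
    dim = len(groups[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        grad = [0.0] * dim
        for xs, r in zip(groups, rates):
            preds = [predict(w, x) for x in xs]
            err = sum(preds) / len(preds) - r
            for x, p in zip(xs, preds):
                g = 2.0 * err * p * (1.0 - p) / len(xs)
                for j in range(dim):
                    grad[j] += g * x[j]
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

# Toy groups: feature vector [signal, bias]; individuals with signal = 1
# are the true positives, but only group-level rates are observed.
groups, rates = [], []
for k in (2, 4, 6, 8):
    xs = [[1.0, 1.0]] * k + [[0.0, 1.0]] * (10 - k)
    groups.append(xs)
    rates.append(k / 10)

w = fit_from_aggregates(groups, rates)
p_pos = predict(w, [1.0, 1.0])   # individual-level prediction
p_neg = predict(w, [0.0, 1.0])
```

Even though no individual was ever labeled, the fitted scorer separates the two kinds of individuals, which is the "best of both worlds" the abstract refers to.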
The role of working memory in inferential sentence comprehension.
Pérez, Ana Isabel; Paolieri, Daniela; Macizo, Pedro; Bajo, Teresa
2014-08-01
Existing literature on inference making is large and varied. Trabasso and Magliano (Discourse Process 21(3):255-287, 1996) proposed the existence of three types of inferences: explicative, associative and predictive. In addition, the authors suggested that these inferences were related to working memory (WM). In the present experiment, we investigated whether WM capacity plays a role in our ability to answer comprehension sentences that require text information based on these types of inferences. Participants with high and low WM span read two narratives with four paragraphs each. After each paragraph was read, they were presented with four true/false comprehension sentences. One required verbatim information and the other three implied explicative, associative and predictive inferential information. Results demonstrated that only the explicative and predictive comprehension sentences required WM: participants with high verbal WM were more accurate in giving explanations and also faster at making predictions relative to participants with low verbal WM span; in contrast, no WM differences were found in the associative comprehension sentences. These results are interpreted in terms of the causal nature underlying these types of inferences.
Individual vision and peak distribution in collective actions
NASA Astrophysics Data System (ADS)
Lu, Peng
2017-06-01
People decide whether to participate in collective actions as participants or to abstain as free riders, and they do so with heterogeneous visions. In addition to utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e., a revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The simulation outcomes indicate that vision heterogeneity reduces the values of peaks, while the relative variance of peaks remains stable. Under normal distributions of vision heterogeneity and other factors, the peaks of participants are normally distributed as well. It is therefore possible to predict the distribution traits of peaks from the distribution traits of related factors such as vision heterogeneity. We predict the distribution of peaks with parameters of both mean and standard deviation, which provides confidence intervals and robust predictions of peaks. We also validate the peak model via the Yuyuan Incident, a real case in China (2014); the model works well in explaining the dynamics and predicting the peak of the real case.
System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot
NASA Technical Reports Server (NTRS)
Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)
2015-01-01
A system and method monitor aircraft state parameters, for example, aircraft movement and flight parameters, apply those inputs to a spatial disorientation model, and predict when a pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.
3P: Personalized Pregnancy Prediction in IVF Treatment Process
NASA Astrophysics Data System (ADS)
Uyar, Asli; Ciray, H. Nadir; Bener, Ayse; Bahceci, Mustafa
We present an intelligent learning system for improving the pregnancy success rate of IVF treatment. Our proposed model uses an SVM-based classification system to train a model from past data and make predictions on the implantation outcome of new embryos. This study employs an embryo-centered approach. Each embryo is represented by a data feature vector of 17 features related to patient characteristics, clinical diagnosis, treatment method, and embryo morphological parameters. Our experimental results demonstrate a prediction accuracy of 82.7%. We obtained the IVF dataset from the Bahceci Women Health Care Centre in Istanbul, Turkey.
Web-based decision support system to predict risk level of long term rice production
NASA Astrophysics Data System (ADS)
Mukhlash, Imam; Maulidiyah, Ratna; Sutikno; Setiyono, Budi
2017-09-01
Appropriate decision making in risk management of rice production is very important in agricultural planning, especially for Indonesia, an agricultural country. Good decisions can be made only if the required supporting data are available and appropriate methods are used. This study aims to develop a decision support system that can be used to predict the risk level of rice production in several districts that are centers of rice production in East Java. The decision support system is web-based so that its information can be easily accessed and understood. The components of the system are data management, model management, and the user interface. This research uses OLS and Copula regression models: the OLS model predicts rainfall, while the Copula model predicts harvested area. Experimental results show that the models successfully predict the harvested area of rice production in these districts at any given time, based on the conditions and climate of a region. Furthermore, the system can predict the amount of rice production together with its level of risk, and it generates long-term predictions of production risk level that can support the decisions of the authorities.
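The OLS component can be illustrated with a minimal closed-form simple-regression sketch; the climate-index and rainfall figures below are hypothetical, not the study's data:

```python
def ols_fit(xs, ys):
    """Closed-form simple OLS: slope and intercept minimizing squared error."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical monthly climate index vs. observed rainfall (mm).
index = [1.0, 2.0, 3.0, 4.0, 5.0]
rain = [110.0, 125.0, 138.0, 152.0, 165.0]
slope, intercept = ols_fit(index, rain)
predicted = slope * 6.0 + intercept   # rainfall forecast for index = 6
```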
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.
Balfer, Jenny; Hu, Ye; Bajorath, Jürgen
2014-08-01
Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
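A profile-based NB classifier of the kind described can be sketched as Bernoulli naive Bayes with Laplace smoothing over binary activity features. The tiny profiles and labels below are invented for illustration; the paper's models were trained on real profiling matrices:

```python
import math

def train_nb(profiles, labels, alpha=1.0):
    """Bernoulli naive Bayes with Laplace smoothing.
    profiles: binary activity profiles; labels: 1 = active on the new target."""
    model = {}
    for c in (0, 1):
        rows = [p for p, y in zip(profiles, labels) if y == c]
        prior = (len(rows) + alpha) / (len(profiles) + 2 * alpha)
        feat = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                for j in range(len(profiles[0]))]
        model[c] = (math.log(prior), feat)
    return model

def predict_nb(model, x):
    """Return the class with the highest log posterior for profile x."""
    scores = {}
    for c, (logprior, feat) in model.items():
        s = logprior
        for xj, pj in zip(x, feat):
            s += math.log(pj if xj else 1.0 - pj)
        scores[c] = s
    return max(scores, key=scores.get)

# Hypothetical activity profiles over three reference targets; the label is
# activity against a fourth, novel target. No structural features are used.
profiles = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 1], [0, 1, 1], [0, 0, 0]]
labels = [1, 1, 1, 0, 0, 0]
model = train_nb(profiles, labels)
```

As in the paper, the classifier sees only the activity profile of a compound, never its structure, and exploits correlations between targets.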
Kreisel, A.; Nelson, R.; Berlijn, T.; ...
2016-12-27
Since the discovery of iron-based superconductors, a number of theories have been put forward to explain the qualitative origin of pairing, but there have been few attempts to make quantitative, material-specific comparisons to experimental results. The spin-fluctuation theory of electronic pairing, based on first-principles electronic structure calculations, makes predictions for the superconducting gap. Within the same framework, the surface wave functions may also be calculated, allowing, e.g., for detailed comparisons between theoretical results and measured scanning tunneling topographs and spectra. We present such a comparison between theory and experiment on the Fe-based superconductor LiFeAs. Our results for the homogeneous surface as well as impurity states are presented as a benchmark test of the theory. For the homogeneous system, we argue that the maxima of topographic image intensity may be located at positions above either the As or Li atoms, depending on tip height and the setpoint current of the measurement. We further report the experimental observation of transitions between As- and Li-registered lattices as functions of both tip height and setpoint bias, in agreement with this prediction. Next, we give a detailed comparison of the simulated scanning tunneling microscopy images of transition-metal defects with experiment. Finally, we discuss possible extensions of the current framework to obtain a theory with true predictive power for scanning tunneling microscopy in Fe-based systems.
NASA Astrophysics Data System (ADS)
Kundu, Pradeep; Nath, Tameshwer; Palani, I. A.; Lad, Bhupesh K.
2018-06-01
The present paper tackles an important but unmapped problem: the reliability estimation of smart materials. First, an experimental setup is developed for accelerated life testing of shape memory alloy (SMA) springs. A novel approach based on the generalized log-linear Weibull (GLL-Weibull) distribution is then developed for SMA spring life estimation. Applied stimulus (voltage), elongation, and cycles of operation are used as inputs to the life prediction model. The parameter coefficients of the model offer better interpretability than artificial-intelligence-based life prediction approaches. In addition, the model considers the effect of operating conditions, making it generic across a range of operating conditions. Moreover, a Bayesian framework is used to continuously update the prediction with the actual degradation values of the springs, thereby reducing the uncertainty in the data and improving prediction accuracy. Finally, the deterioration of the material with the number of cycles is investigated using thermogravimetric analysis and scanning electron microscopy.
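The idea of continuously updating a life estimate as new observations arrive can be sketched with a grid-based Bayesian update over the Weibull scale parameter (shape assumed known). This is a generic illustration, not the authors' GLL-Weibull model; the failure data and grid values are invented:

```python
import math

def weibull_pdf(t, shape, scale):
    """Probability density of a Weibull(shape, scale) at time t."""
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def posterior_over_scale(failures, shape, scales, prior):
    """Grid Bayesian update: posterior over the Weibull scale parameter
    after observing failure cycle counts (shape assumed known)."""
    post = []
    for eta, pr in zip(scales, prior):
        like = 1.0
        for t in failures:
            like *= weibull_pdf(t, shape, eta)
        post.append(pr * like)
    z = sum(post)
    return [p / z for p in post]

shape = 2.0                                   # assumed known Weibull shape
scales = [500 + 25 * k for k in range(41)]    # grid of scales: 500..1500 cycles
prior = [1.0 / len(scales)] * len(scales)     # flat prior

# Update sequentially as each new spring failure (in cycles) is observed.
for t in [900.0, 1100.0, 1000.0]:
    prior = posterior_over_scale([t], shape, scales, prior)

mean_scale = sum(e * p for e, p in zip(scales, prior))  # posterior mean life scale
```

Each new failure observation sharpens the posterior, which is the sense in which the Bayesian framework "reduces the uncertainty in the data" as the test proceeds.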
Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs
NASA Astrophysics Data System (ADS)
Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan
2016-04-01
Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in decision-making processes [DE06]. The ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to get new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", who are meteorologists working in operational weather forecasting, and "researchers", who work on the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations and uncertainty. A dashboard with small-multiples of probabilistic forecasts allows the forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude.
It provides the user with a more accurate measure of forecast uncertainty that could result in better decision-making. It offers different levels of abstraction to help with the recalibration of the RAR method. It also has an inspection tool that displays the selected analogs, their observations and statistical data. It gives users access to inner parts of the method, unveiling hidden information. References: [GR05] GNEITING T., RAFTERY A. E.: Weather forecasting with ensemble methods. Science 310, 5746, 248-249, 2005. [KAL03] KALNAY E.: Atmospheric modeling, data assimilation and predictability. Cambridge University Press, 2003. [PH06] PALMER T., HAGEDORN R.: Predictability of weather and climate. Cambridge University Press, 2006. [HW06] HAMILL T. M., WHITAKER J. S.: Probabilistic quantitative precipitation forecasts based on reforecast analogs: Theory and application. Monthly Weather Review 134, 11, 3209-3229, 2006. [DE06] DEITRICK S., EDSALL R.: The influence of uncertainty visualization on decision making: An empirical evaluation. Springer, 2006. [KMS08] KEIM D. A., MANSMANN F., SCHNEIDEWIND J., THOMAS J., ZIEGLER H.: Visual analytics: Scope and challenges. Springer, 2008.
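The core of the RAR technique — matching the current forecast against a historical archive and reading a probability off the verifying observations of the closest analogs — can be sketched as follows; the archive values and threshold are invented for illustration:

```python
def analog_probability(current, past_forecasts, past_observations, k=5, threshold=1.0):
    """Reforecast-analog sketch: find the k past forecasts closest to the
    current forecast and estimate P(obs > threshold) from their matched
    observations."""
    order = sorted(range(len(past_forecasts)),
                   key=lambda i: abs(past_forecasts[i] - current))
    analogs = order[:k]
    exceed = sum(1 for i in analogs if past_observations[i] > threshold)
    return exceed / k

# Hypothetical archive of precipitation forecasts (mm) and verifying obs.
past_fc = [0.0, 0.5, 1.2, 2.0, 2.1, 2.3, 3.0, 4.5, 5.0, 0.1]
past_obs = [0.0, 0.0, 1.5, 1.8, 2.5, 0.9, 3.5, 4.0, 6.0, 0.0]
p = analog_probability(2.2, past_fc, past_obs, k=5, threshold=1.0)
```

Because the probability comes from observed outcomes rather than from the raw ensemble spread, the analog approach can correct the underdispersion the abstract attributes to plain ensembles.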
Predicting the Unbeaten Path through Syntactic Priming
ERIC Educational Resources Information Center
Arai, Manabu; Nakamura, Chie; Mazuka, Reiko
2015-01-01
A number of previous studies showed that comprehenders make use of lexically based constraints such as subcategorization frequency in processing structurally ambiguous sentences. One piece of such evidence is lexically specific syntactic priming in comprehension; following the costly processing of a temporarily ambiguous sentence, comprehenders…
Fried, C S; Reppucci, N D
2001-02-01
Theories of judgment in decision making hypothesize that throughout adolescence, judgment is impaired because the development of several psychosocial factors that are presumed to influence decision making lags behind the development of the cognitive capacities that are required to make mature decisions. This study uses an innovative video technique to examine the role of several psychosocial factors--temporal perspective, peer influence, and risk perception--in adolescent criminal decision making. Results based on data collected from 56 adolescents between the ages of 13 and 18 years revealed that detained youth were more likely to think of future-oriented consequences of engaging in the depicted delinquent act and less likely to anticipate pressure from their friends than nondetained youth. Examination of the developmental functions of the psychosocial factors indicates age-based differences on standardized measures of temporal perspective and resistance to peer influence and on measures of the role of risk perception in criminal decision making. Assessments of criminal responsibility and culpability were predicted by age and ethnicity. Implications for punishment in the juvenile justice system are discussed.
NASA Astrophysics Data System (ADS)
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the tasks of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes, and thus makes process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and integrated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
Wang, Zhiqiang; Ji, Mingfei; Deng, Jianming; Milne, Richard I; Ran, Jinzhi; Zhang, Qiang; Fan, Zhexuan; Zhang, Xiaowei; Li, Jiangtao; Huang, Heng; Cheng, Dongliang; Niklas, Karl J
2015-06-01
Simultaneous and accurate measurements of whole-plant instantaneous carbon-use efficiency (ICUE) and annual total carbon-use efficiency (TCUE) are difficult to make, especially for trees. One usually estimates ICUE based on the net photosynthetic rate or the assumed proportional relationship between growth efficiency and ICUE. However, thus far, protocols for easily estimating annual TCUE remain problematic. Here, we present a theoretical framework (based on the metabolic scaling theory) to predict whole-plant annual TCUE by directly measuring instantaneous net photosynthetic and respiratory rates. This framework makes four predictions, which were evaluated empirically using seedlings of nine Picea taxa: (i) the flux rates of CO2 and energy will scale isometrically as a function of plant size, (ii) whole-plant net and gross photosynthetic rates and the net primary productivity will scale isometrically with respect to total leaf mass, (iii) these scaling relationships will be independent of ambient temperature and humidity fluctuations (as measured within an experimental chamber) regardless of the instantaneous net photosynthetic rate, dark respiratory rate, or overall growth rate, and (iv) TCUE will scale isometrically with respect to instantaneous efficiency of carbon use (i.e., the latter can be used to predict the former) across diverse species. These predictions were experimentally verified. We also found that the ranking of the nine taxa based on net photosynthetic rates differed from ranking based on either ICUE or TCUE. In addition, the absolute values of ICUE and TCUE significantly differed among the nine taxa, with both ICUE and temperature-corrected ICUE being highest for Picea abies and lowest for Picea schrenkiana.
Nevertheless, the data are consistent with the predictions of our general theoretical framework, which can be used to assess the annual carbon-use efficiency of different species at the level of an individual plant based on simple, direct measurements. Moreover, we believe that our approach provides a way to cope with the complexities of different ecosystems, provided that sufficient measurements are taken to calibrate our approach to that of the system being studied. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
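Predictions (i), (ii), and (iv) amount to isometric scaling, i.e., a slope of approximately 1 when one quantity is regressed on the other on log-log axes. A minimal check with hypothetical seedling data, generated to be exactly proportional:

```python
import math

def loglog_slope(masses, fluxes):
    """OLS slope of log(flux) on log(mass); isometric scaling => slope ~ 1."""
    lx = [math.log(m) for m in masses]
    ly = [math.log(f) for f in fluxes]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    sxy = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    sxx = sum((x - mx) ** 2 for x in lx)
    return sxy / sxx

# Hypothetical seedlings: total leaf mass (g) and a carbon flux proportional
# to it (isometry); real data would scatter around slope 1.
mass = [1.0, 2.0, 4.0, 8.0, 16.0]
flux = [0.5 * m for m in mass]
slope = loglog_slope(mass, flux)
```

A fitted slope significantly different from 1 would falsify the isometry predictions; allometric theories instead predict slopes such as 3/4.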
Barbieri, Christopher E; Cha, Eugene K; Chromecki, Thomas F; Dunning, Allison; Lotan, Yair; Svatek, Robert S; Scherr, Douglas S; Karakiewicz, Pierre I; Sun, Maxine; Mazumdar, Madhu; Shariat, Shahrokh F
2012-03-01
• To employ decision curve analysis to determine the impact of nuclear matrix protein 22 (NMP22) on clinical decision making in the detection of bladder cancer using data from a prospective trial. • The study included 1303 patients at risk for bladder cancer who underwent cystoscopy, urine cytology and measurement of urinary NMP22 levels. • We constructed several prediction models to estimate risk of bladder cancer. The base model was generated using patient characteristics (age, gender, race, smoking and haematuria); cytology and NMP22 were added to the base model to determine effects on predictive accuracy. • Clinical net benefit was calculated by summing the benefits and subtracting the harms and weighting these by the threshold probability at which a patient or clinician would opt for cystoscopy. • In all, 72 patients were found to have bladder cancer (5.5%). In univariate analyses, NMP22 was the strongest predictor of bladder cancer presence (predictive accuracy 71.3%), followed by age (67.5%) and cytology (64.3%). • In multivariable prediction models, NMP22 improved the predictive accuracy of the base model by 8.2% (area under the curve 70.2-78.4%) and of the base model plus cytology by 4.2% (area under the curve 75.9-80.1%). • Decision curve analysis revealed that adding NMP22 to other models increased clinical benefit, particularly at higher threshold probabilities. • NMP22 is a strong, independent predictor of bladder cancer. • Addition of NMP22 improves the accuracy of standard predictors by a statistically and clinically significant margin. • Decision curve analysis suggests that integration of NMP22 into clinical decision making helps avoid unnecessary cystoscopies, with minimal increased risk of missing a cancer. © 2011 THE AUTHORS. BJU INTERNATIONAL © 2011 BJU INTERNATIONAL.
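The net-benefit computation behind decision curve analysis follows the standard formula NB(pt) = TP/N − FP/N · pt/(1 − pt), where pt is the threshold probability at which one would opt for cystoscopy. A minimal sketch with a small hypothetical cohort, comparing a marker-based model against performing cystoscopy on everyone:

```python
def net_benefit(y_true, y_prob, pt):
    """Decision-curve net benefit at threshold probability pt:
    NB = TP/N - FP/N * pt / (1 - pt)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= pt and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= pt and y == 0)
    return tp / n - fp / n * pt / (1.0 - pt)

# Hypothetical cohort: 1 = bladder cancer found on cystoscopy.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_model = [0.8, 0.6, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1, 0.05, 0.05]
treat_all = [1.0] * 10                   # cystoscopy for everyone

nb_model = net_benefit(y_true, y_model, 0.3)
nb_all = net_benefit(y_true, treat_all, 0.3)
```

When the model's curve sits above the treat-all curve at a given pt, using the model spares unnecessary cystoscopies at that threshold, which is exactly the comparison the abstract reports for NMP22.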
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y; McShan, D; Schipper, M
2014-06-01
Purpose: To develop a decision support tool to predict a patient's potential overall survival (OS) and radiation induced toxicity (RIT) based on clinical factors and responses during the course of radiotherapy, and suggest appropriate radiation dose adjustments to improve therapeutic effect. Methods: Important relationships between a patient's basic information and their clinical features before and during the radiation treatment are identified from historical clinical data by using statistical learning and data mining approaches. During each treatment period, a data analysis (DA) module predicts radiotherapy features such as time to local progression (TTLP), time to distant metastases (TTDM), radiation toxicity to different organs, etc., under possible future treatment plans based on patient specifics or responses. An information fusion (IF) module estimates intervals for a patient's OS and the probabilities of RIT from a treatment plan by integrating the outcomes of module DA. A decision making (DM) module calculates "satisfaction" with the predicted radiation outcome based on trade-offs between OS and RIT, and finds the best treatment plan for the next time period via multi-criteria optimization. Results: Using physical and biological data from 130 lung cancer patients as our test bed, we were able to train and implement the 3 modules of our decision support tool. Examples demonstrate how it can help predict a new patient's potential OS and RIT with different radiation dose plans along with how these combinations change with dose, thus presenting a range of satisfaction/utility for use in individualized decision support. Conclusion: Although the decision support tool is currently developed from a small patient sample size, it shows the potential for the improvement of each patient's satisfaction in personalized radiation therapy.
The radiation treatment outcome prediction and decision making model needs to be evaluated with more patients and demonstrated in radiation treatments for other cancers. P01-CA59827; R01CA142840.
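The DM module's trade-off between OS and RIT can be illustrated with a toy scalar "satisfaction" function. The weights, dose plans and outcome numbers below are invented for illustration, not taken from the study.

```python
def satisfaction(os_months, rit_prob, w_os=0.7, w_rit=0.3, os_cap=60.0):
    """Scalar trade-off between predicted overall survival (capped and
    normalized) and radiation-induced toxicity risk; weights invented."""
    return w_os * min(os_months, os_cap) / os_cap - w_rit * rit_prob

def best_plan(plans):
    """plans: dict name -> (predicted OS in months, RIT probability)."""
    return max(plans, key=lambda name: satisfaction(*plans[name]))

# A dose escalation that buys 6 months of OS but sharply raises toxicity
# risk (numbers invented) can still lose on satisfaction:
choice = best_plan({"60 Gy": (24, 0.10), "74 Gy": (30, 0.45)})
```

In the actual tool this scalarization would be replaced by multi-criteria optimization over the IF module's interval estimates.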
Of mental models, assumptions and heuristics: The case of acids and acid strength
NASA Astrophysics Data System (ADS)
McClary, Lakeisha Michelle
This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength, and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength of twenty undergraduate students were investigated. Data sources for this study included written responses and individual interviews. The data were analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another.
An emergent finding from this project was that the problem representation greatly influenced students' ability to make correct predictions in acid strength.
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
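The consensus point above — agreement between multiple modeling approaches increases prediction confidence — can be sketched as a simple vote over 2-bin classifiers. Labels and the tie-breaking rule are illustrative choices, not from the article.

```python
def consensus(predictions):
    """Majority vote across several 2-bin classifiers; the fraction of
    agreeing models serves as a crude prediction confidence. Ties go to
    'good' here, which is an arbitrary choice."""
    votes = sum(1 for p in predictions if p == "good")
    label = "good" if votes >= len(predictions) / 2 else "bad"
    confidence = max(votes, len(predictions) - votes) / len(predictions)
    return label, confidence
```

A compound flagged "good" by three of three models would then carry higher confidence than one flagged "good" by two of three.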
Barber, Chris; Cayley, Alex; Hanser, Thierry; Harding, Alex; Heghes, Crina; Vessey, Jonathan D; Werner, Stephane; Weiner, Sandy K; Wichard, Joerg; Giddings, Amanda; Glowienke, Susanne; Parenty, Alexis; Brigo, Alessandro; Spirkl, Hans-Peter; Amberg, Alexander; Kemper, Ray; Greene, Nigel
2016-04-01
The relative wealth of bacterial mutagenicity data available in the public literature means that in silico quantitative/qualitative structure activity relationship (QSAR) systems can readily be built for this endpoint. A good means of evaluating the performance of such systems is to use private unpublished data sets, which generally represent a more distinct chemical space than publicly available test sets and, as a result, provide a greater challenge to the model. However, raw performance metrics should not be the only factor considered when judging this type of software since expert interpretation of the results obtained may allow for further improvements in predictivity. Enough information should be provided by a QSAR to allow the user to make general, scientifically-based arguments in order to assess and overrule predictions when necessary. With all this in mind, we sought to validate the performance of the statistics-based in vitro bacterial mutagenicity prediction system Sarah Nexus (version 1.1) against private test data sets supplied by nine different pharmaceutical companies. The results of these evaluations were then analysed in order to identify findings presented by the model which would be useful for the user to take into consideration when interpreting the results and making their final decision about the mutagenic potential of a given compound. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Curseu, Petru Lucian; Schruijer, Sandra G. L.
2012-01-01
This study investigates the relationship between the five decision-making styles evaluated by the General Decision-Making Style Inventory, indecisiveness, and rationality in decision making. Using a sample of 102 middle-level managers, the results show that the rational style positively predicts rationality in decision making and negatively…
NASA Astrophysics Data System (ADS)
Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.
2016-08-01
Drought is among the costliest natural hazards worldwide and extreme drought events in recent years have caused huge losses to various sectors. Drought prediction is therefore critically important for providing early warning information to aid decision making to cope with drought. Due to the complicated nature of drought, it has been recognized that the univariate drought indicator may not be sufficient for drought characterization and hence multivariate drought indices have been developed for drought monitoring. Alongside the substantial effort in drought monitoring with multivariate drought indices, it is of equal importance to develop a drought prediction method with multivariate drought indices to integrate drought information from various sources. This study proposes a general framework for multivariate multi-index drought prediction that is capable of integrating complementary prediction skills from multiple drought indices. The Multivariate Ensemble Streamflow Prediction (MESP) is employed to sample from historical records for obtaining statistical prediction of multiple variables, which is then used as inputs to achieve multivariate prediction. The framework is illustrated with a linearly combined drought index (LDI), which is a commonly used multivariate drought index, based on climate division data in California and New York in the United States with different seasonality of precipitation. The predictive skill of LDI (represented with persistence) is assessed by comparison with the univariate drought index and results show that the LDI prediction skill is less affected by seasonality than the meteorological drought prediction based on SPI. Prediction results from the case study show that the proposed multivariate drought prediction outperforms the persistence prediction, implying a satisfactory performance of multivariate drought prediction. 
The proposed method would be useful for drought prediction to integrate drought information from various sources for early drought warning.
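One simple realization of a linearly combined drought index and the persistence baseline used for comparison is sketched below with fixed, invented weights; the study's actual LDI construction and MESP sampling are more involved.

```python
import statistics

def standardize(series):
    """Convert a raw indicator series to z-scores."""
    mu, sd = statistics.mean(series), statistics.pstdev(series)
    return [(x - mu) / sd for x in series]

def linear_drought_index(indices, weights):
    """Combine standardized drought indicators (e.g. a meteorological
    and a hydrological index) into a single LDI time series."""
    z = {name: standardize(vals) for name, vals in indices.items()}
    n = len(next(iter(z.values())))
    return [sum(weights[name] * z[name][t] for name in z) for t in range(n)]

def persistence_forecast(ldi):
    """Persistence baseline: the next value equals the last observed one."""
    return ldi[-1]
```

The multivariate prediction described in the study would replace `persistence_forecast` with forecasts of each component variable, resampled from the historical record.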
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jing, E-mail: jing.zhang2@duke.edu; Ghate, Sujata V.; Yoon, Sora C.
Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict the likelihood of each mass being missed by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650).
This value was statistically significantly different from 0.5 (p < 0.0001). For the 7 residents only, the AUC of the models was 0.590 (95% CI, 0.537-0.642), also significantly higher than 0.5 (p = 0.0009). The authors’ models were therefore generally able to predict better than chance which masses would be detected and which would be missed. Conclusions: The authors proposed an algorithm that was able to predict which masses will be detected and which will be missed by each individual trainee. This confirms the existence of error-making patterns in the detection of masses among radiology trainees. Furthermore, the proposed methodology will allow for the optimized selection of difficult cases for the trainees in an automatic and efficient manner.
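The reported AUCs have a direct probabilistic reading: the chance that a randomly chosen missed mass receives a higher predicted-miss score than a randomly chosen detected one. A minimal Mann-Whitney implementation of that quantity (not the authors' ROC software):

```python
def auc(scores_missed, scores_detected):
    """Mann-Whitney AUC: probability a missed mass receives a higher
    predicted-miss score than a detected one (ties count half)."""
    wins = ties = 0
    for sm in scores_missed:
        for sd in scores_detected:
            if sm > sd:
                wins += 1
            elif sm == sd:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_missed) * len(scores_detected))
```

An AUC of 0.607, as reported, means the model ranks a missed mass above a detected one about 61% of the time.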
Predictability of Seasonal Rainfall over the Greater Horn of Africa
NASA Astrophysics Data System (ADS)
Ngaina, J. N.
2016-12-01
The El Nino-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, including the coefficient of determination (R2), Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations and contingency tables. A test for field significance, accounting for the finiteness and interdependence of the spatial grid, was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion; the complex criteria (FIA followed by BIC) performed better than the simpler approaches (R2 and AIC). Notably, operational seasonal rainfall prediction over the GHA makes use of simple model selection procedures, e.g. R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Nino typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with ENSO indices derived from the Pacific and Indian Ocean sea surfaces providing significant improvement during the OND season. The predictability based on ENSO for OND rainfall is robust on a decadal scale compared with MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over GHA.
This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase (El Niño) leads to enhanced wet conditions.
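Two of the model selection criteria compared in the study can be computed directly from a least-squares fit's residual sum of squares. A minimal sketch; the RSS values, sample size and model names below are invented for illustration.

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit
    with n observations and k parameters."""
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    """Bayesian information criterion; penalizes extra parameters
    more heavily than AIC once n exceeds about 7."""
    return n * math.log(rss / n) + k * math.log(n)

# Pick the candidate regression with the lowest criterion value
# (RSS values and predictor sets are invented):
n_obs = 40
candidates = {"ENSO-only": (12.4, 2), "ENSO+IOD": (10.1, 3)}
best = min(candidates,
           key=lambda m: bic(candidates[m][0], n_obs, candidates[m][1]))
```

The criterion rewards fit (lower RSS) but charges for each added predictor, which is how such schemes guard against overfitted forecast models.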
Ouari, Kamel; Rekioua, Toufik; Ouhrouche, Mohand
2014-01-01
To make wind power generation truly cost-effective and reliable, advanced control techniques must be used. In this paper, we develop a new control strategy for a DFIG-based wind turbine using a nonlinear generalized predictive control (NGPC) approach. The proposed control law rests on two components: an NGPC-based torque-current control loop that generates the rotor reference voltage, and an NGPC-based speed control loop that provides the torque reference. To enhance the robustness of the controller, a disturbance observer is designed to estimate the aerodynamic torque, which is treated as an unknown perturbation. Finally, a real-time simulation is carried out to illustrate the performance of the proposed controller. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Smart strategies for doctors and doctors-in-training: heuristics in medicine.
Wegwarth, Odette; Gaissmaier, Wolfgang; Gigerenzer, Gerd
2009-08-01
How do doctors make sound decisions when confronted with probabilistic data, time pressures and a heavy workload? One theory that has been embraced by many researchers is based on optimisation, which emphasises the need to integrate all information in order to arrive at sound decisions. This notion makes heuristics, which use less than complete information, appear as second-best strategies. In this article, we challenge this pessimistic view of heuristics. We introduce two medical problems that involve decision making to the reader: one concerns coronary care issues and the other macrolide prescriptions. In both settings, decision-making tools grounded in the principles of optimisation and heuristics, respectively, have been developed to assist doctors in making decisions. We explain the structure of each of these tools and compare their performance in terms of their facilitation of correct predictions. For decisions concerning both the coronary care unit and the prescribing of macrolides, we demonstrate that sacrificing information does not necessarily imply a forfeiting of predictive accuracy, but can sometimes even lead to better decisions. Subsequently, we discuss common misconceptions about heuristics and explain when and why ignoring parts of the available information can lead to the making of more robust predictions. Heuristics are neither good nor bad per se, but, if applied in situations to which they have been adapted, can be helpful companions for doctors and doctors-in-training. This, however, requires that heuristics in medicine be openly discussed, criticised, refined and then taught to doctors-in-training rather than being simply dismissed as harmful or irrelevant. A more uniform use of explicit and accepted heuristics has the potential to reduce variations in diagnoses and to improve medical care for patients.
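A heuristic of the kind contrasted with optimisation here is the well-known fast-and-frugal tree for coronary care unit allocation, in which each cue can trigger an immediate exit. The sketch below is illustrative; the cue names and ordering follow the commonly cited tree, not necessarily the tool evaluated in this article.

```python
def ccu_decision(st_change, chest_pain_chief, other_risk_cue):
    """Fast-and-frugal tree for coronary care unit allocation: each
    question can trigger an immediate exit, so most decisions rest on
    one or two cues rather than on all available information."""
    if st_change:                # 1st cue: ST-segment change on the ECG
        return "coronary care unit"
    if not chest_pain_chief:     # 2nd cue: is chest pain the chief complaint?
        return "regular nursing bed"
    if other_risk_cue:           # 3rd cue: any further risk factor present
        return "coronary care unit"
    return "regular nursing bed"
```

The deliberate information-ignoring is visible in the structure: a positive first cue ends the decision before the remaining cues are ever consulted.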
McDermott, Jason E.; Bruillard, Paul; Overall, Christopher C.; ...
2015-03-09
There are many examples of groups of proteins that have similar function, but the determinants of functional specificity may be hidden by a lack of sequence similarity, or by large groups of similar sequences with different functions. Transporters are one such protein group: the general function, transport, can be easily inferred from the sequence, but the substrate specificity can be impossible to predict from sequence with current methods. In this paper we describe a linguistic-based approach to identify functional patterns from groups of unaligned protein sequences and its application to predicting multi-drug resistance transporters (MDRs) from bacteria. We first show that our method can recreate known patterns from PROSITE for several motifs from unaligned sequences. We then show that the method, MDRpred, can predict MDRs with greater accuracy and positive predictive value than a collection of currently available family-based models from the Pfam database. Finally, we apply MDRpred to a large collection of protein sequences from an environmental microbiome study to make novel predictions about drug resistance in a potential environmental reservoir.
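The PROSITE-recreation step presupposes matching motif patterns against unaligned sequences. Below is a minimal converter from PROSITE-style syntax to regular expressions; it handles only residues, bracketed alternatives, 'x' wildcards and repeat counts, not the full PROSITE grammar, and it is not MDRpred itself.

```python
import re

def prosite_to_regex(pattern):
    """Minimal converter from a PROSITE-style motif (e.g. 'G-[AS]-x(2)-L')
    to a Python regular expression."""
    rx = pattern.replace("-", "").replace("x", ".")  # drop separators, wildcard
    return re.sub(r"\((\d+)\)", r"{\1}", rx)         # x(2) -> .{2}

def matches(pattern, sequence):
    """True if the motif occurs anywhere in the protein sequence."""
    return re.search(prosite_to_regex(pattern), sequence) is not None
```

A linguistic approach like the one described would learn such patterns from the sequence groups themselves rather than take them from a curated database.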
How can we best respect patient autonomy in breast cancer treatment decisions?
Martinez, Kathryn A; Kurian, Allison W
2015-01-01
Helping patients to maximize their autonomy in breast cancer decision-making is an important aspect of patient-centered care. Shared decision-making is a strategy that aims to maximize patient autonomy by integrating the values and preferences of the patient with the biomedical expertise of the physician. Application of this approach in breast cancer decision-making has not been uniform across cancer-specific interventions (e.g., surgery, chemotherapy), and in some circumstances may present challenges to evidence-based care delivery. Increasingly precise estimates of individual patients’ risk of recurrence, and the commensurate predicted benefit from certain therapies, hold significant promise in helping patients exercise autonomous decision-making for their breast cancer care, yet will also likely complicate decision-making for certain subgroups of patients. PMID:25733982
A probabilistic, distributed, recursive mechanism for decision-making in the brain
Gurney, Kevin N.
2018-01-01
Decision formation recruits many brain regions, but the procedure they jointly execute is unknown. Here we characterize its essential composition, using as a framework a novel recursive Bayesian algorithm that makes decisions based on spike-trains with the statistics of those in sensory cortex (MT). Using it to simulate the random-dot-motion task, we demonstrate it quantitatively replicates the choice behaviour of monkeys, whilst predicting losses of otherwise usable information from MT. Its architecture maps to the recurrent cortico-basal-ganglia-thalamo-cortical loops, whose components are all implicated in decision-making. We show that the dynamics of its mapped computations match those of neural activity in the sensorimotor cortex and striatum during decisions, and forecast those of basal ganglia output and thalamus. This also predicts which aspects of neural dynamics are and are not part of inference. Our single-equation algorithm is probabilistic, distributed, recursive, and parallel. Its success at capturing anatomy, behaviour, and electrophysiology suggests that the mechanism implemented by the brain has these same characteristics. PMID:29614077
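The core of such a recursive Bayesian mechanism is a per-time-step posterior update over the competing alternatives. A toy version with Poisson spike-count likelihoods is sketched below; the rates and threshold are illustrative, and this is far simpler than the paper's algorithm.

```python
import math

def recursive_decision(spike_counts, rates, threshold=0.95):
    """Recursively update a posterior over competing alternatives from
    sensory spike counts, committing once one belief crosses threshold.
    rates: hypothesized mean counts per step under each alternative."""
    post = [1.0 / len(rates)] * len(rates)            # flat prior
    for t, c in enumerate(spike_counts, start=1):
        # Poisson kernel; the c! term cancels in the normalization
        like = [r ** c * math.exp(-r) for r in rates]
        post = [p * l for p, l in zip(post, like)]
        z = sum(post)
        post = [p / z for p in post]
        if max(post) >= threshold:
            return post.index(max(post)), t
    return post.index(max(post)), len(spike_counts)
```

Because the posterior is both the running decision variable and the input to the next update, the computation is recursive in exactly the sense the abstract emphasizes.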
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
MED: a new non-supervised gene prediction algorithm for bacterial and archaeal genomes.
Zhu, Huaiqiu; Hu, Gang-Qing; Yang, Yi-Fan; Wang, Jin; She, Zhen-Su
2007-03-16
Despite remarkable success in the computational prediction of genes in Bacteria and Archaea, a lack of comprehensive understanding of prokaryotic gene structures prevents further elucidation of differences among genomes. It therefore remains worthwhile to develop new ab initio algorithms that not only accurately predict genes, but also facilitate comparative studies of prokaryotic genomes. This paper describes a new prokaryotic gene-finding algorithm based on a comprehensive statistical model of protein-coding Open Reading Frames (ORFs) and Translation Initiation Sites (TISs). The former is based on a linguistic "Entropy Density Profile" (EDP) model of coding DNA sequence and the latter comprises several relevant features related to translation initiation. They are combined to form the Multivariate Entropy Distance (MED) algorithm, MED 2.0, which incorporates several strategies in an iterative program. The iterations enable a non-supervised learning process that obtains a set of genome-specific parameters for the gene structure before making the prediction of genes. Results of extensive tests show that MED 2.0 achieves competitively high performance in gene prediction for both 5' and 3' end matches, compared to the current best prokaryotic gene finders. The advantage of MED 2.0 is particularly evident for GC-rich genomes and archaeal genomes. Furthermore, the genome-specific parameters given by MED 2.0 match the current understanding of prokaryotic genomes and may serve as tools for comparative genomic studies. In particular, MED 2.0 is shown to reveal divergent translation initiation mechanisms in archaeal genomes while making a more accurate prediction of TISs compared to the existing gene finders and the current GenBank annotation.
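The EDP idea rests on entropy statistics of candidate coding sequences. Below is a toy per-codon-position Shannon entropy, only loosely inspired by the EDP model, to make the kind of statistic concrete.

```python
import math
from collections import Counter

def entropy_density_profile(orf):
    """Shannon entropy of the nucleotide distribution at each of the
    three codon positions of a putative ORF. Real coding sequences tend
    to show position-dependent asymmetries that random DNA lacks."""
    profile = []
    for frame in range(3):
        letters = orf[frame::3]
        n = len(letters)
        h = -sum((c / n) * math.log2(c / n)
                 for c in Counter(letters).values())
        profile.append(h)
    return profile
```

A classifier like MED would compare such profiles against genome-specific parameters learned in its unsupervised iterations, rather than use raw entropies directly.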
A fuzzy set preference model for market share analysis
NASA Technical Reports Server (NTRS)
Turksen, I. B.; Willson, Ian A.
1992-01-01
Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. 
The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share prediction).
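The linguistic-variable representation described here is typically built from membership functions over an ordinal rating scale. A sketch with triangular memberships follows; the term names and breakpoints are invented for illustration.

```python
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set rising from a,
    peaking at b, and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Linguistic preference terms over a 1-7 rating scale (breakpoints invented):
TERMS = {
    "dislike": (0, 1, 4),
    "neutral": (2, 4, 6),
    "like":    (4, 7, 8),
}

def fuzzify(rating):
    """Degree of membership of an ordinal rating in each linguistic term."""
    return {term: triangular(rating, *abc) for term, abc in TERMS.items()}
```

Because only ordinal ratings are needed, such memberships can feed a linear preference combination of the kind used in conjoint models without forcing interval-scale assumptions.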
Jin, Haomiao; Wu, Shinyi; Di Capua, Paul
2015-09-03
Depression is a common but often undiagnosed comorbid condition of people with diabetes. Mass screening can detect undiagnosed depression but may require significant resources and time. The objectives of this study were 1) to develop a clinical forecasting model that predicts comorbid depression among patients with diabetes and 2) to evaluate a model-based screening policy that saves resources and time by screening only patients considered as depressed by the clinical forecasting model. We trained and validated 4 machine learning models by using data from 2 safety-net clinical trials; we chose the one with the best overall predictive ability as the ultimate model. We compared model-based policy with alternative policies, including mass screening and partial screening, on the basis of depression history or diabetes severity. Logistic regression had the best overall predictive ability of the 4 models evaluated and was chosen as the ultimate forecasting model. Compared with mass screening, the model-based policy can save approximately 50% to 60% of provider resources and time but will miss identifying about 30% of patients with depression. Partial-screening policy based on depression history alone found only a low rate of depression. Two other heuristic-based partial screening policies identified depression at rates similar to those of the model-based policy but cost more in resources and time. The depression prediction model developed in this study has compelling predictive ability. By adopting the model-based depression screening policy, health care providers can use their resources and time better and increase their efficiency in managing their patients with depression.
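A model-based screening policy of the kind evaluated reduces to thresholding a logistic forecast. The weights, features and cutoff below are placeholders, not the study's fitted coefficients.

```python
import math

def predicted_risk(features, weights, bias):
    """Logistic forecast of comorbid depression; weights and bias are
    placeholders, not the published model's coefficients."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def screening_policy(patients, weights, bias, cutoff=0.5):
    """Screen only patients whose predicted risk reaches the cutoff;
    lowering the cutoff trades saved screenings for fewer missed cases."""
    return [p for p in patients
            if predicted_risk(p, weights, bias) >= cutoff]
```

The reported trade-off — roughly 50-60% of screening effort saved at the cost of about 30% missed cases — corresponds to one particular operating point on this cutoff.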
Prognostics for Ground Support Systems: Case Study on Pneumatic Valves
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Goebel, Kai
2011-01-01
Prognostics technologies determine the health (or damage) state of a component or sub-system, and make end of life (EOL) and remaining useful life (RUL) predictions. Such information enables system operators to make informed maintenance decisions and streamline operational and mission-level activities. We develop a model-based prognostics methodology for pneumatic valves used in ground support equipment for cryogenic propellant loading operations. These valves are used to control the flow of propellant, so failures may have a significant impact on launch availability. Therefore, correctly predicting when valves will fail enables timely maintenance that avoids launch delays and aborts. The approach utilizes mathematical models describing the underlying physics of valve degradation, and, employing the particle filtering algorithm for joint state-parameter estimation, determines the health state of the valve and the rate of damage progression, from which EOL and RUL predictions are made. We develop a prototype user interface for valve prognostics, and demonstrate the prognostics approach using historical pneumatic valve data from the Space Shuttle refueling system.
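The joint state-parameter estimation step can be sketched with a bootstrap particle filter on a toy linear damage model. The wear-rate prior, noise level and failure threshold below are invented, and the physics of the actual valve models is far richer.

```python
import math
import random

def particle_filter_rul(observations, n_particles=500, failure_damage=1.0,
                        meas_noise=0.05, seed=1):
    """Bootstrap particle filter on a toy damage model x[t+1] = x[t] + r,
    estimating the wear rate r jointly with the state; RUL is then the
    expected number of steps until damage reaches the failure threshold."""
    rng = random.Random(seed)
    # each particle carries (damage state x, hypothesized wear rate r)
    parts = [(0.0, rng.uniform(0.005, 0.05)) for _ in range(n_particles)]
    for y in observations:
        parts = [(x + r, r) for x, r in parts]                  # predict
        w = [math.exp(-((y - x) ** 2) / (2 * meas_noise ** 2))  # weight
             for x, _ in parts]
        total = sum(w) or 1.0
        parts = rng.choices(parts, weights=[wi / total for wi in w],
                            k=n_particles)                      # resample
    ruls = [(failure_damage - x) / r for x, r in parts]
    return sum(ruls) / len(ruls)
```

As damage observations accumulate, the surviving particles concentrate around the true wear rate, which is what turns an EOL point estimate into a usable RUL distribution.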
The effect of analytic and experiential modes of thought on moral judgment.
Kvaran, Trevor; Nichols, Shaun; Sanfey, Alan
2013-01-01
According to dual-process theories, moral judgments are the result of two competing processes: a fast, automatic, affect-driven process and a slow, deliberative, reason-based process. Accordingly, these models make clear and testable predictions about the influence of each system. Although a small number of studies have attempted to examine each process independently in the context of moral judgment, no study has yet tried to experimentally manipulate both processes within a single study. In this chapter, a well-established "mode-of-thought" priming technique was used to place participants in either an experiential/emotional or analytic mode while completing a task in which participants provide judgments about a series of moral dilemmas. We predicted that individuals primed analytically would make more utilitarian responses than control participants, while emotional priming would lead to less utilitarian responses. Support was found for both of these predictions. Implications of these findings for dual-process theories of moral judgment will be discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo
Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.
Loeffert, Sabine; Ommen, Oliver; Kuch, Christine; Scheibler, Fueloep; Woehrmann, Andrej; Baldamus, Conrad; Pfaff, Holger
2010-09-11
Numerous studies examined factors in promoting a patient preference for active participation in treatment decision making with only modest success. The purpose of this study was to identify types of patients wishing to participate in treatment decisions as well as those wishing to play a completely active or passive role based on a Germany-wide survey of dialysis patients; using a prediction typal analysis method that defines types as configurations of categories belonging to different attributes and takes particularly higher order interactions between variables into account. After randomly splitting the original patient sample into two halves, an exploratory prediction configural frequency analysis (CFA) was performed on one-half of the sample (n = 1969) and the identified types were considered as hypotheses for an inferential prediction CFA for the second half (n = 1914). 144 possible prediction types were tested by using five predictor variables and control preferences as criterion. An α-adjustment (0.05) for multiple testing was performed by the Holm procedure. 21 possible prediction types were identified as hypotheses in the exploratory prediction CFA; four patient types were confirmed in the confirmatory prediction CFA: patients preferring a passive role show low information seeking preference, above average trust in their physician, perceive their physician's participatory decision-making (PDM)-style positive, have a lower educational level, and are 56-75 years old (Type 1; p < 0.001) or > 76 years old (Type 2; p < 0.001). Patients preferring an active role show high information seeking preference, a higher educational level, and are < 55 years old. They have either below average trust, perceive the PDM-style negative (Type 3; p < 0.001) or above average trust and perceive the PDM-style positive (Type 4; p < 0.001). 
Prediction configural frequency analysis was newly introduced to the research field of patient participation, and the study demonstrates how a particular control-preference role is determined by an association of five variables.
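The Holm step-down procedure used above for α-adjustment under multiple testing can be sketched in a few lines; this is a generic illustration, not the authors' implementation.

```python
def holm_adjust(p_values, alpha=0.05):
    """Holm step-down procedure: compare the sorted p-values against
    successively relaxed thresholds alpha / (m - rank) and stop at the
    first failure. Returns a rejection flag per hypothesis."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            rejected[idx] = True
        else:
            break  # step-down: once one fails, all larger p-values fail
    return rejected
```

This controls the family-wise error rate at α while being uniformly more powerful than a plain Bonferroni correction.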
Sebold, Miriam; Nebe, Stephan; Garbusow, Maria; Guggenmos, Matthias; Schad, Daniel J; Beck, Anne; Kuitunen-Paul, Soeren; Sommer, Christian; Frank, Robin; Neu, Peter; Zimmermann, Ulrich S; Rapp, Michael A; Smolka, Michael N; Huys, Quentin J M; Schlagenhauf, Florian; Heinz, Andreas
2017-12-01
Addiction is supposedly characterized by a shift from goal-directed to habitual decision making, thus facilitating automatic drug intake. The two-step task allows distinguishing between these mechanisms by computationally modeling goal-directed and habitual behavior as model-based and model-free control. In addicted patients, decision making may also strongly depend upon drug-associated expectations. Therefore, we investigated model-based versus model-free decision making and its neural correlates as well as alcohol expectancies in alcohol-dependent patients and healthy controls and assessed treatment outcome in patients. Ninety detoxified, medication-free, alcohol-dependent patients and 96 age- and gender-matched control subjects underwent functional magnetic resonance imaging during the two-step task. Alcohol expectancies were measured with the Alcohol Expectancy Questionnaire. Over a follow-up period of 48 weeks, 37 patients remained abstinent and 53 patients relapsed as indicated by the Alcohol Timeline Followback method. Patients who relapsed displayed reduced medial prefrontal cortex activation during model-based decision making. Furthermore, high alcohol expectancies were associated with low model-based control in relapsers, while the opposite was observed in abstainers and healthy control subjects. However, reduced model-based control per se was not associated with subsequent relapse. These findings suggest that poor treatment outcome in alcohol dependence does not simply result from a shift from model-based to model-free control but is instead dependent on the interaction between high drug expectancies and low model-based decision making. Reduced model-based medial prefrontal cortex signatures in those who relapse point to a neural correlate of relapse risk. These observations suggest that therapeutic interventions should target subjective alcohol expectancies. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. 
All rights reserved.
Sakoda, Lori C; Henderson, Louise M; Caverly, Tanner J; Wernli, Karen J; Katki, Hormuzd A
2017-12-01
Risk prediction models may be useful for facilitating effective and high-quality decision-making at critical steps in the lung cancer screening process. This review provides a current overview of published lung cancer risk prediction models and their applications to lung cancer screening and highlights both challenges and strategies for improving their predictive performance and use in clinical practice. Since the 2011 publication of the National Lung Screening Trial results, numerous prediction models have been proposed to estimate the probability of developing or dying from lung cancer or the probability that a pulmonary nodule is malignant. Respective models appear to exhibit high discriminatory accuracy in identifying individuals at highest risk of lung cancer or differentiating malignant from benign pulmonary nodules. However, validation and critical comparison of the performance of these models in independent populations are limited. Little is also known about the extent to which risk prediction models are being applied in clinical practice and influencing decision-making processes and outcomes related to lung cancer screening. Current evidence is insufficient to determine which lung cancer risk prediction models are most clinically useful and how to best implement their use to optimize screening effectiveness and quality. To address these knowledge gaps, future research should be directed toward validating and enhancing existing risk prediction models for lung cancer and evaluating the application of model-based risk calculators and their corresponding impact on screening processes and outcomes.
Predicting neuroblastoma using developmental signals and a logic-based model.
Kasemeier-Kulesa, Jennifer C; Schnell, Santiago; Woolley, Thomas; Spengler, Jennifer A; Morrison, Jason A; McKinney, Mary C; Pushel, Irina; Wolfe, Lauren A; Kulesa, Paul M
2018-07-01
Genomic information from human patient samples of pediatric neuroblastoma cancers and known outcomes have led to specific gene lists put forward as high risk for disease progression. However, the reliance on gene expression correlations rather than mechanistic insight has shown limited potential and suggests a critical need for molecular network models that better predict neuroblastoma progression. In this study, we construct and simulate a molecular network of developmental genes and downstream signals in a 6-gene input logic model that predicts a favorable/unfavorable outcome based on four cell states: differentiation, proliferation, apoptosis, and angiogenesis. We simulate the mis-expression of the tyrosine receptor kinases trkA and trkB, two prognostic indicators of neuroblastoma, and find differences in the number and probability distribution of steady-state outcomes. We validate the mechanistic model assumptions using RNA-seq of the SH-SY5Y human neuroblastoma cell line to define the input states and confirm the predicted outcome with antibody staining. Lastly, we apply input gene signatures from 77 published human patient samples and show that our model makes more accurate disease outcome predictions for early-stage disease than any current neuroblastoma gene list. These findings highlight the predictive strength of a logic-based model built on developmental genes and offer a better understanding of the molecular network interactions during neuroblastoma disease progression. Copyright © 2018. Published by Elsevier B.V.
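The general shape of such a logic-based model can be illustrated with a toy Boolean rule set; the two-input fragment below (trkA/trkB driving cell states and an outcome call) uses simplified placeholder rules, not the authors' 6-gene model.

```python
def predict_outcome(trkA, trkB):
    """Toy Boolean logic model: receptor inputs (True = expressed)
    set four cell states, which determine a favorable/unfavorable
    outcome. Rules are illustrative placeholders only."""
    differentiation = trkA and not trkB
    apoptosis = trkA and not trkB
    proliferation = trkB
    angiogenesis = trkB and not trkA
    favorable = (differentiation or apoptosis) and not (proliferation or angiogenesis)
    return "favorable" if favorable else "unfavorable"
```

In a full model, each gene input would feed several downstream rules, and steady states would be enumerated over all input configurations.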
Mwangi, Benson; Ebmeier, Klaus P; Matthews, Keith; Steele, J Douglas
2012-05-01
Quantitative abnormalities of brain structure in patients with major depressive disorder have been reported at a group level for decades. However, these structural differences appear subtle in comparison with conventional radiologically defined abnormalities, with considerable inter-subject variability. Consequently, it has not been possible to readily identify scans from patients with major depressive disorder at an individual level. Recently, machine learning techniques such as relevance vector machines and support vector machines have been applied to predictive classification of individual scans with variable success. Here we describe a novel hybrid method, which combines machine learning with feature selection and characterization, with the latter aimed at maximizing the accuracy of machine learning prediction. The method was tested using a multi-centre dataset of T1-weighted 'structural' scans. A total of 62 patients with major depressive disorder and matched controls were recruited from referred secondary care clinical populations in Aberdeen and Edinburgh, UK. The generalization ability and predictive accuracy of the classifiers were tested using data left out of the training process. High prediction accuracy was achieved (~90%). While feature selection was important for maximizing high predictive accuracy with machine learning, feature characterization contributed only a modest improvement to relevance vector machine-based prediction (~5%). Notably, while the only information provided for training the classifiers was T1-weighted scans plus a categorical label (major depressive disorder versus controls), both relevance vector machine and support vector machine 'weighting factors' (used for making predictions) correlated strongly with subjective ratings of illness severity.
These results indicate that machine learning techniques have the potential to inform clinical practice and research, as they can make accurate predictions about brain scan data from individual subjects. Furthermore, machine learning weighting factors may reflect an objective biomarker of major depressive disorder illness severity, based on abnormalities of brain structure.
NASA Astrophysics Data System (ADS)
Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel
2016-11-01
This study develops an analytical model for predicting the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D relations for isentropic flow and normal shocks can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments, so both the actual and predicted locations could be calculated. Three methods of finding the shock-train location are examined: one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytical model is less accurate than the pressure-threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement. This makes it potentially useful for unstart control applications.
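The quasi-1D building blocks such a model relies on are standard compressible-flow relations; the sketch below (air, γ = 1.4; values and function names are illustrative, not the paper's code) inverts a measured pressure rise for the pre-shock Mach number and solves the isentropic area-Mach relation on the supersonic branch.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air (assumed)

def normal_shock_pressure_ratio(mach):
    """Static pressure ratio p2/p1 across a normal shock."""
    return 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (mach ** 2 - 1.0)

def mach_from_pressure_ratio(p_ratio):
    """Invert the normal-shock relation for the pre-shock Mach number."""
    return math.sqrt((p_ratio - 1.0) * (GAMMA + 1.0) / (2.0 * GAMMA) + 1.0)

def area_mach(mach):
    """Isentropic area ratio A/A* at a given Mach number."""
    term = (2.0 / (GAMMA + 1.0)) * (1.0 + 0.5 * (GAMMA - 1.0) * mach ** 2)
    return term ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0))) / mach

def supersonic_mach_from_area(a_ratio, lo=1.0001, hi=6.0):
    """Supersonic root of the area-Mach relation, by bisection
    (A/A* grows monotonically with Mach on the supersonic branch)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if area_mach(mid) < a_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Given the effective (boundary-layer-corrected) area distribution, the local Mach number from the area relation can then be matched to the pre-shock Mach number implied by the measured pressure rise to locate the shock.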
Wiswell, Jeffrey; Tsao, Kenyon; Bellolio, M Fernanda; Hess, Erik P; Cabrera, Daniel
2013-10-01
System 1 decision-making is fast, resource economic, and intuitive (e.g., "your gut feeling") and System 2 is slow, resource intensive, and analytic (e.g., "hypothetico-deductive"). We evaluated the performance of disposition and acuity prediction by emergency physicians (EPs) using a System 1 decision-making process. We conducted a prospective observational study of attending EPs and emergency medicine residents. Physicians were provided patient demographics, chief complaint, and vital sign data and made two assessments on initial presentation: (1) likely disposition (discharge vs admission) and (2) "sick" vs "not-sick". A patient was adjudicated as sick if he/she had a disease process that was potentially life- or limb-threatening based on pre-defined operational, financial, or educationally derived criteria. We obtained 266 observations in 178 different patients. Physicians predicted patient disposition with the following performance: sensitivity 87.7% (95% CI 81.4-92.1), specificity 65.0% (95% CI 56.1-72.9), LR+ 2.51 (95% CI 1.95-3.22), LR- 0.19 (95% CI 0.12-0.30). For the sick vs not-sick assessment, providers had the following performance: sensitivity 66.2% (95% CI 55.1-75.8), specificity 88.4% (95% CI 83.0-92.2), LR+ 5.69 (95% CI 3.72-8.69), LR- 0.38 (95% CI 0.28-0.53). EPs are able to accurately predict the disposition of ED patients using System 1 diagnostic reasoning based on minimal available information. However, the prognostic accuracy of acuity prediction was limited. © 2013.
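The reported likelihood ratios follow directly from sensitivity and specificity; a quick check using the disposition figures above:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1.0 - specificity), (1.0 - sensitivity) / specificity

# Disposition prediction: sensitivity 87.7%, specificity 65.0%
lr_pos, lr_neg = likelihood_ratios(0.877, 0.650)  # ≈ 2.51 and 0.19, as reported
```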
Visual prediction and perceptual expertise
Cheung, Olivia S.; Bar, Moshe
2012-01-01
Making accurate predictions about what may happen in the environment requires analogies between perceptual input and associations in memory. These elements of predictions are based on cortical representations, but little is known about how these processes can be enhanced by experience and training. Meanwhile, studies on perceptual expertise have revealed that the acquisition of expertise leads to strengthened associative processing among features or objects, suggesting that prediction and expertise may be tightly connected. Here we review the behavioral and neural findings regarding the mechanisms involving prediction and expert processing, and highlight important possible overlaps between them. Future investigation should examine the relations among perception, memory and prediction skills as a function of expertise. The knowledge gained by this line of research will have implications for visual cognition research, and will advance our understanding of how the human brain can improve its ability to predict by learning from experience. PMID:22123523
Performance predictions affect attentional processes of event-based prospective memory.
Rummel, Jan; Kuhlmann, Beatrice G; Touron, Dayna R
2013-09-01
To investigate whether making performance predictions affects prospective memory (PM) processing, we asked one group of participants to predict their performance in a PM task embedded in an ongoing task and compared their performance with that of a control group that made no predictions. A third group gave not only PM predictions but also ongoing-task predictions. Exclusive PM predictions resulted in slower ongoing-task responding both in a nonfocal (Experiment 1) and in a focal (Experiment 2) PM task. Only in the nonfocal task was the additional slowing accompanied by improved PM performance. Even in the nonfocal task, however, the correlation between ongoing-task speed and PM performance was reduced after predictions, suggesting that the slowing was not completely functional for PM. Prediction-induced changes could be avoided by asking participants to additionally predict their performance in the ongoing task. In sum, the present findings substantiate a role of metamemory in the attention-allocation strategies of PM. Copyright © 2013 Elsevier Inc. All rights reserved.
Nealon, John Oliver; Philomina, Limcy Seby
2017-01-01
The elucidation of protein–protein interactions is vital for determining the function and action of quaternary protein structures. Here, we discuss the difficulty and importance of establishing protein quaternary structure and review in vitro and in silico methods for doing so. Determining the interacting partner proteins of predicted protein structures is very time-consuming when using in vitro methods; this can be somewhat alleviated by the use of predictive methods. However, developing reliably accurate predictive tools has proved difficult. We review the current state of the art in predictive protein interaction software and discuss the problem of scoring, and therefore ranking, predictions. Current community-based predictive exercises are discussed in relation to the growth of protein interaction prediction as an area within these exercises. We suggest that a fusion of experimental and predictive methods, making use of sparse experimental data to determine higher-resolution predicted protein interactions, is necessary to drive forward development. PMID:29206185
A Market-Basket Approach to Predict the Acute Aquatic Toxicity of Munitions and Energetic Materials.
Burgoon, Lyle D
2016-06-01
An ongoing challenge in chemical production, including the production of insensitive munitions and energetics, is the ability to make predictions about potential environmental hazards early in the process. To address this challenge, a quantitative structure activity relationship model was developed to predict acute fathead minnow toxicity of insensitive munitions and energetic materials. Computational predictive toxicology models like this one may be used to identify and prioritize environmentally safer materials early in their development. The developed model is based on the Apriori market-basket/frequent itemset mining approach to identify probabilistic prediction rules using chemical atom-pairs and the lethality data for 57 compounds from a fathead minnow acute toxicity assay. Lethality data were discretized into four categories based on the Globally Harmonized System of Classification and Labelling of Chemicals. Apriori identified toxicophores for categories two and three. The model classified 32 of the 57 compounds correctly, with a fivefold cross-validation classification rate of 74%. A structure-based surrogate approach classified the remaining 25 chemicals correctly at 48%. This result is unsurprising as these 25 chemicals were fairly unique within the larger set.
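The Apriori frequent-itemset step at the core of the market-basket approach can be sketched generically; here each "transaction" would be the set of atom-pairs present in one compound (the toy data below is illustrative, not the paper's 57-compound set).

```python
def apriori(transactions, min_support):
    """Frequent-itemset mining (Apriori). `transactions` is a list of
    sets of items, e.g. chemical atom-pairs per compound. Returns a
    dict mapping each frequent itemset to its support."""
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = sorted({i for t in transactions for i in t})
    frequent = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    result = {s: support(s) for s in frequent}
    k = 2
    while frequent:
        # level-wise candidate generation: join frequent (k-1)-itemsets;
        # the Apriori property guarantees no frequent set is missed
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        frequent = [c for c in candidates if support(c) >= min_support]
        result.update({c: support(c) for c in frequent})
        k += 1
    return result
```

Frequent itemsets co-occurring with a toxicity category then become candidate probabilistic prediction rules (toxicophores).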
HEPEX - achievements and challenges!
NASA Astrophysics Data System (ADS)
Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan
2014-05-01
HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.
NASA Astrophysics Data System (ADS)
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-11-01
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
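The Ogata-Banks solution adopted as the data model is the closed-form 1-D advection-dispersion breakthrough curve for a continuous source; a sketch (parameter values illustrative, units arbitrary):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """1-D advection-dispersion with continuous injection at x = 0:
    C(x,t) = (c0/2) * [erfc((x - v*t)/(2*sqrt(D*t)))
                       + exp(v*x/D) * erfc((x + v*t)/(2*sqrt(D*t)))]
    v: mean flow velocity, D: dispersion coefficient, c0: source concentration."""
    s = 2.0 * math.sqrt(D * t)
    return 0.5 * c0 * (math.erfc((x - v * t) / s)
                       + math.exp(v * x / D) * math.erfc((x + v * t) / s))
```

Fitting v and D to the training portion of the monitoring record gives the physically based reference curve against which future CO2 concentrations are extrapolated.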
Designing Tools for Supporting User Decision-Making in e-Commerce
NASA Astrophysics Data System (ADS)
Sutcliffe, Alistair; Al-Qaed, Faisal
The paper describes a set of tools designed to support a variety of user decision-making strategies. The tools are complemented by an online advisor so they can be adapted to different domains and users can be guided to adopt appropriate tools for different choices in e-commerce, e.g. purchasing high-value products, exploring product fit to users’ needs, or selecting products which satisfy requirements. The tools range from simple recommenders to decision support by interactive querying and comparison matrices. They were evaluated in a scenario-based experiment which varied the users’ task and motivation, with and without an advisor agent. The results show the tools and advisor were effective in supporting users and agreed with the predictions of ADM (adaptive decision making) theory, on which the design of the tools was based.
Neuroeconomics: cross-currents in research on decision-making.
Sanfey, Alan G; Loewenstein, George; McClure, Samuel M; Cohen, Jonathan D
2006-03-01
Despite substantial advances, the question of how we make decisions and judgments continues to pose important challenges for scientific research. Historically, different disciplines have approached this problem using different techniques and assumptions, with few unifying efforts made. However, the field of neuroeconomics has recently emerged as an inter-disciplinary effort to bridge this gap. Research in neuroscience and psychology has begun to investigate neural bases of decision predictability and value, central parameters in the economic theory of expected utility. Economics, in turn, is being increasingly influenced by a multiple-systems approach to decision-making, a perspective strongly rooted in psychology and neuroscience. The integration of these disparate theoretical approaches and methodologies offers exciting potential for the construction of more accurate models of decision-making.
Novel Approach for Prediction of Localized Necking in Case of Nonlinear Strain Paths
NASA Astrophysics Data System (ADS)
Drotleff, K.; Liewald, M.
2017-09-01
Rising customer expectations regarding design complexity and weight reduction of sheet metal components, along with further reduced time to market, increase the demand for process validation using numerical forming simulation. Formability prediction, though, is often still based on the forming limit diagram first presented in the 1960s. Despite many drawbacks in the case of nonlinear strain paths, and despite major advances in research in recent years, the forming limit curve (FLC) is still one of the most commonly used criteria for assessing the formability of sheet metal materials. Especially when forming complex part geometries, nonlinear strain paths may occur, which cannot be predicted using the conventional FLC concept. In this paper a novel approach for calculating FLCs for nonlinear strain paths is presented. Combining an approach for predicting the FLC from tensile test data with the IFU-FLC criterion, a model for predicting localized necking under nonlinear strain paths can be derived. The presented model is based purely on experimental tensile test data, making it easy to calibrate for any given material. The resulting prediction of localized necking is validated using an experimental deep-drawing specimen made of AA6014 material with a sheet thickness of 1.04 mm. The results are compared to the IFU-FLC criterion based on data from pre-stretched Nakajima specimens.
Drugs and Crime: An Empirically Based, Interdisciplinary Model
ERIC Educational Resources Information Center
Quinn, James F.; Sneed, Zach
2008-01-01
This article synthesizes neuroscience findings with long-standing criminological models and data into a comprehensive explanation of the relationship between drug use and crime. The innate factors that make some people vulnerable to drug use are conceptually similar to those that predict criminality, supporting a spurious reciprocal model of the…
Concern surrounding the potential adverse impacts of pesticides to honey bee colonies has led to the need for rapid/cost efficient methods for aiding decision making relative to the protection of this important pollinator species. Neonicotinoids represent a class of pesticides th...
Computer Models of Personality: Implications for Measurement
ERIC Educational Resources Information Center
Cranton, P. A.
1976-01-01
Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…
A Guide to Personal, Business and Social Forecasting and Survival.
ERIC Educational Resources Information Center
Loye, David
1983-01-01
By thinking about and trying to predict the future, we force ourselves to articulate our feelings and thoughts about that future. A technique using intuitive and logical thinking based on right brain-left brain differences is proposed to aid in decision making by both groups and individuals. (Author/IS)
Confidence Testing for Knowledge-Based Global Communities
ERIC Educational Resources Information Center
Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Shymansky, James A.
2009-01-01
This proposal advocates the position that the use of confidence wagering (CW) during testing can predict the accuracy of a student's test answer selection during between-subject assessments. Data revealed female students were more favorable to taking risks when making CW and less inclined toward risk aversion than their male counterparts. Student…
Adolescents' Contribution to Household Production: Male and Female Differences.
ERIC Educational Resources Information Center
Sanik, Margaret Mietus; Stafford, Kathryn
1985-01-01
Develops a model to predict the contribution adolescent males and females make to household work, based upon family characteristics, human capital of the adolescent, geographic location, and societal expectations. Adolescent females worked longer than males, regardless of birth order. Time use for household work was largely unaffected by family…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Argibay, Nicolas; Cheng, Shengfeng; Sawyer, W. G.
2015-09-01
The prediction of macro-scale friction and wear behavior based on first principles and material properties has remained an elusive but highly desirable target for tribologists and material scientists alike. Stochastic processes (e.g. wear), statistically described parameters (e.g. surface topography) and their evolution tend to defeat attempts to establish practical general correlations between fundamental nanoscale processes and macro-scale behaviors. We present a model based on microstructural stability and evolution for the prediction of metal friction regimes, founded on recently established microstructural deformation mechanisms of nanocrystalline metals, that relies exclusively on material properties and contact stress models. We show through complementary experimental and simulation results that this model overcomes longstanding practical challenges and successfully makes accurate and consistent predictions of friction transitions for a wide range of contact conditions. This framework not only challenges the assumptions of conventional causal relationships between hardness and friction, and between friction and wear, but also suggests a pathway for the design of higher performance metal alloys.
Prediction of brittleness based on anisotropic rock physics model for kerogen-rich shale
NASA Astrophysics Data System (ADS)
Qian, Ke-Ran; He, Zhi-Liang; Chen, Ye-Quan; Liu, Xi-Wu; Li, Xiang-Yang
2017-12-01
The construction of a shale rock physics model and the selection of an appropriate brittleness index (BI) are two significant steps that influence the accuracy of brittleness prediction. On the one hand, the existing models of kerogen-rich shale are controversial, so a reasonable rock physics model needs to be built. On the other hand, several types of equations already exist for predicting the BI, whose feasibility needs to be carefully considered. This study constructed a kerogen-rich rock physics model by applying the self-consistent approximation and differential effective medium theory to model intercoupled clay and kerogen mixtures. The feasibility of our model was confirmed by comparison with classical models, showing better accuracy. Templates were constructed based on our model to link physical properties and the BI. Different equations for the BI had different sensitivities, making them suitable for different types of formations. Equations based on Young's modulus were sensitive to variations in lithology, while those using Lamé's coefficients were sensitive to porosity and pore fluids. Physical information must be considered to improve brittleness prediction.
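A common Young's-modulus-based BI (a Rickman-style normalization; the exact form and normalization bounds here are illustrative, not necessarily the equation the authors evaluate) averages normalized Young's modulus with inverted, normalized Poisson's ratio:

```python
def brittleness_index(E, nu, E_min, E_max, nu_min, nu_max):
    """BI in percent: high Young's modulus and low Poisson's ratio
    both push toward brittle behavior. Bounds (E_min..E_max,
    nu_min..nu_max) normalize over the formation of interest."""
    e_norm = (E - E_min) / (E_max - E_min)
    nu_norm = (nu_max - nu) / (nu_max - nu_min)
    return 50.0 * (e_norm + nu_norm)
```

Lamé-based variants substitute λρ and μρ attributes for E and ν, which is why their sensitivity shifts toward porosity and pore fluids.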
Birkley, Erica L.; Smith, Gregory T.
2013-01-01
Impulsivity has been a widely explored construct, particularly as a personality-based risk factor for addictive behaviors. The authors review evidence that (a) there is no single impulsivity trait; rather, there are at least five different personality traits that dispose individuals to rash or impulsive action; (b) the five traits predict different behaviors longitudinally; for example, the emotion-based urgency traits predict problematic involvement in several risky behaviors and sensation seeking instead predicts the frequency of engaging in such behaviors; (c) the traits can be measured in pre-adolescent children; (d) individual differences in the traits among preadolescent children predict the subsequent onset of, and increases in, risky behaviors including alcohol use; (e) the traits may operate by biasing the learning process, such that high-risk traits make high-risk learning more likely, thus leading to maladaptive behavior; (f) the emotion-based urgency traits may contribute to compulsive engagement in addictive behaviors; and (g) there is evidence that different interventions are appropriate for the different trait structures. PMID:22126707
NASA Astrophysics Data System (ADS)
Liu, Z.; LU, G.; He, H.; Wu, Z.; He, J.
2017-12-01
Reliable drought prediction is fundamental for seasonal water management. Considering that drought development is closely related to the spatio-temporal evolution of large-scale circulation patterns, we develop a conceptual prediction model of seasonal drought processes based on atmospheric/oceanic standardized anomalies (SA). The model is essentially a synchronous stepwise regression between 90-day-accumulated atmospheric/oceanic SA-based predictors and the 3-month SPI updated daily (SPI3). It is forced with forecasted atmospheric and oceanic variables retrieved from seasonal climate forecast systems, and it can make seamless drought predictions for operational use after year-to-year calibration. Simulations and predictions of four severe seasonal regional drought processes in China were forced with the NCEP/NCAR reanalysis datasets and the NCEP Climate Forecast System Version 2 (CFSv2) operational forecast datasets, respectively. With the help of real-time correction for operational application, model application during four recent severe regional drought events in China revealed that the model is good at predicting drought development but weak at predicting drought severity. Besides the weakness in predicting the drought peak, drought relief may be mispredicted as drought recession. This weak performance may be associated with the precipitation-causing weather patterns during drought relief. Initial visual analysis of the predicted prospective 90-day SPI3 curves shows that the 2009/2010 drought in Southwest China and the 2014 drought in North China can be predicted and simulated well even at lead times of 1-75 days. In comparison, lead times of 1-45 days may be feasible and acceptable for simulation and prediction of the 2011 droughts in Southwest China and East China, after which the simulated and predicted developments clearly change.
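The standardized-anomaly predictors can be illustrated with plain z-score standardization against a climatology window; note that operational SPI typically fits a gamma distribution to accumulated precipitation first, a step this sketch skips.

```python
def standardized_anomaly(climatology, value):
    """z-score of `value` against a climatological sample
    (population standard deviation)."""
    n = len(climatology)
    mean = sum(climatology) / n
    std = (sum((x - mean) ** 2 for x in climatology) / n) ** 0.5
    return (value - mean) / std
```

The regression model then relates such 90-day-accumulated SA predictors to the SPI3 time series.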
NASA Astrophysics Data System (ADS)
Thompson, S. E.; Sivapalan, M.; Harman, C. J.; Srinivasan, V.; Hipsey, M. R.; Reed, P.; Montanari, A.; Blöschl, G.
2013-06-01
Globally, many different kinds of water resources management issues call for policy and infrastructure based responses. Yet responsible decision making about water resources management raises a fundamental challenge for hydrologists: making predictions about water resources on decadal-to-century long timescales. Obtaining insight into hydrologic futures over 100 yr timescales forces researchers to address internal and exogenous changes in the properties of hydrologic systems. To do this, new hydrologic research must identify, describe and model feedbacks between water and other changing, coupled environmental subsystems. These models must be constrained to yield useful insights, despite the many likely sources of uncertainty in their predictions. Chief among these uncertainties are the impacts of the increasing role of human intervention in the global water cycle - a defining challenge for hydrology in the Anthropocene. Here we present a research agenda that proposes a suite of strategies to address these challenges. The research agenda focuses on the development of co-evolutionary hydrologic modeling to explore coupling across systems, and to address the implications of this coupling on the long-time behavior of the coupled systems. Three research directions support the development of these models: hydrologic reconstruction, comparative hydrology and model-data learning. These strategies focus on understanding hydrologic processes and feedbacks over long timescales, across many locations, and through strategic coupling of observational and model data in specific systems. We highlight the value of use-inspired and team-based science that is motivated by real-world hydrologic problems but targets improvements in fundamental understanding to support decision-making and management.
Preferences of acutely ill patients for participation in medical decision-making.
Wilkinson, C; Khanji, M; Cotter, P E; Dunne, O; O'Keeffe, S T
2008-04-01
To determine patient preferences for information and for participation in decision-making, and the determinants of these preferences, in patients recently admitted to an acute hospital. Prospective questionnaire-based study. Medical wards of an acute teaching hospital. One hundred and fifty-two consecutive acute medical inpatients, median age 74 years. Standardised assessment included an abbreviated mental test and a subjective measure of severity of illness. Patients' desire for information was assessed using a 5-point Likert scale, and their desire for a role in medical decision-making using the Degner Control of Preferences Scale. Of the 152 patients, 93 (61%) favoured a passive approach to decision-making (either "leave all decisions to the doctor" or "doctor makes final decision but seriously considers my opinion"). In contrast, 101 (66%) patients sought "very extensive" or "a lot" of information about their condition. No significant effects of age, sex, socio-economic group or severity of acute illness on desire for information or the Degner scale result were found. There was no agreement between patients' preferences on the Degner scale and their doctors' predictions of those preferences. Acute medical inpatients want to receive a lot of information about their illness, but most prefer a relatively passive role in decision-making. The only way to determine individual patient preferences is to ask them; preferences cannot be predicted from clinical or sociodemographic data.
Ads' click-through rates predicting based on gated recurrent unit neural networks
NASA Astrophysics Data System (ADS)
Chen, Qiaohong; Guo, Zixuan; Dong, Wen; Jin, Lingzi
2018-05-01
To improve the effectiveness of online advertising and increase advertising revenue, a gated recurrent unit (GRU) neural network model is used to predict ads' click-through rates (CTR). The model exploits the gated unit structure and the temporal ordering in the data, and is trained with the backpropagation-through-time (BPTT) algorithm. Furthermore, optimizing the step-length algorithm of the gated recurrent network lets the model approach its optimum faster and in fewer training iterations. The experimental results show that the GRU-based model with the optimized step-length algorithm predicts ads' CTR more accurately, helping advertisers, media, and audiences achieve a win-win, mutually beneficial outcome in their three-sided game.
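The abstract does not give the network's dimensions or trained weights, so as a minimal illustration of the GRU forward pass it describes, here is a toy GRU step and CTR readout in pure Python. All parameter values below are arbitrary placeholders, and the paper's BPTT training and step-length optimization are not reproduced.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

def vadd(*vs):
    return [sum(t) for t in zip(*vs)]

def gru_step(x, h, W, U, b):
    """One GRU update: z = update gate, r = reset gate, c = candidate state."""
    z = [sigmoid(a) for a in vadd(matvec(W["z"], x), matvec(U["z"], h), b["z"])]
    r = [sigmoid(a) for a in vadd(matvec(W["r"], x), matvec(U["r"], h), b["r"])]
    rh = [ri * hi for ri, hi in zip(r, h)]  # reset gate masks the old state
    c = [math.tanh(a) for a in vadd(matvec(W["c"], x), matvec(U["c"], rh), b["c"])]
    return [(1 - zi) * hi + zi * ci for zi, hi, ci in zip(z, h, c)]

def predict_ctr(sequence, W, U, b, w_out):
    """Run the GRU over a sequence of feature vectors and squash the
    final hidden state to a click probability."""
    h = [0.0] * len(w_out)
    for x in sequence:
        h = gru_step(x, h, W, U, b)
    return sigmoid(sum(wi * hi for wi, hi in zip(w_out, h)))

# Arbitrary toy parameters (2-d input, 2-d hidden), not trained values.
W = {g: [[0.1, -0.2], [0.3, 0.1]] for g in "zrc"}
U = {g: [[0.2, 0.0], [-0.1, 0.2]] for g in "zrc"}
b = {g: [0.0, 0.1] for g in "zrc"}
```

In a real CTR system the input vectors would encode ad and user features, and the parameters would be fit by BPTT.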
Research on the Wire Network Signal Prediction Based on the Improved NNARX Model
NASA Astrophysics Data System (ADS)
Zhang, Zipeng; Fan, Tao; Wang, Shuqing
It is difficult to measure accurately the grid signal of a power system's high-voltage transmission lines during monitoring and repair. To address this, signals measured at a remote substation or in the laboratory are used for multi-point prediction of the needed data; however, the power-grid frequency signal obtained this way is delayed. This paper therefore describes an improved NNARX network that predicts the frequency signal from multi-point data collected by a remote substation's PMU. Because the error surface of the NNARX network is complicated, the network is trained with the Levenberg-Marquardt (L-M) algorithm. Simulation results show that the NNARX network has good prediction performance and provides accurate real-time data for field testing and maintenance.
NASA Astrophysics Data System (ADS)
Rajapakse, G.; Jayasinghe, S. G.; Fleming, A.; Shahnia, F.
2017-07-01
Australia’s extended coastline offers an abundance of wave and tidal power. The predictability of these energy sources and their proximity to cities and towns make them all the more desirable. Several tidal-current turbine and ocean wave energy conversion projects have already been planned along the coastline of southern Australia. Some of these projects use air-driven turbines to harvest energy from an oscillating water column. This study focuses on the power take-off control of a single-stage unidirectional oscillating-water-column air turbine generator system, and proposes a model-predictive-control-based speed controller for the turbine-generator assembly. The proposed method is verified with simulation results that show the efficacy of the controller in extracting power from the turbine while maintaining the speed at the desired level.
Synchrophasor-Assisted Prediction of Stability/Instability of a Power System
NASA Astrophysics Data System (ADS)
Saha Roy, Biman Kumar; Sinha, Avinash Kumar; Pradhan, Ashok Kumar
2013-05-01
This paper presents a technique for real-time prediction of the stability/instability of a power system based on synchrophasor measurements obtained from phasor measurement units (PMUs) at generator buses. For stability assessment, the technique makes use of system severity indices developed from the bus voltage magnitudes obtained from PMUs and the generator electrical power; generator power is computed from system information and PMU measurements such as voltage and current phasors. Instability is predicted when the indices exceed a threshold value. A case study on the New England 10-generator, 39-bus system validates the performance of the technique.
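The abstract does not define the severity indices, so the threshold idea can only be sketched with a hypothetical index; here, the mean absolute per-unit deviation of PMU bus-voltage magnitudes from nominal, flagged against an arbitrary threshold. Both the index and the threshold value are assumptions, not the paper's formulas.

```python
def severity_index(voltages_pu, nominal=1.0):
    """Hypothetical severity index: mean absolute per-unit deviation
    of PMU-measured bus voltage magnitudes from nominal."""
    return sum(abs(v - nominal) for v in voltages_pu) / len(voltages_pu)

def predict_unstable(voltages_pu, threshold=0.15):
    """Predict instability when the index exceeds the threshold."""
    return severity_index(voltages_pu) > threshold
```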
NASA Astrophysics Data System (ADS)
Boslough, M.
2011-12-01
Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C."
Settlement is based on global temperature anomaly data published by NASA GISS. Typical climate contracts predict the probability of a specified future temperature, but not the probability density or best estimate. One way to generate a probability distribution would be to create a family of contracts over a range of specified temperatures and interpret the price of each contract as its exceedance probability. The resulting plot of probability vs. anomaly is the market-based cumulative distribution function. The best estimate can be determined by interpolation, and the market-based uncertainty estimate can be based on the spread. One requirement for an effective prediction market is liquidity. Climate contracts are currently considered somewhat of a novelty and often lack sufficient liquidity, but climate change has the potential to generate both tremendous losses for some (e.g. agricultural collapse and extreme weather events) and wealth for others (access to natural resources and trading routes). Use of climate markets by large stakeholders has the potential to generate the liquidity necessary to make them viable. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's NNSA under contract DE-AC04-94AL85000.
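The contract-family idea can be made concrete: treating each contract's price as the exceedance probability of its temperature threshold, a best estimate follows by interpolating where that probability crosses 50%. A minimal sketch, with hypothetical thresholds and prices:

```python
def market_quantile(thresholds, prices, q=0.5):
    """Interpolate the anomaly at which the exceedance probability
    (the contract price for 'anomaly > T') crosses q.
    thresholds ascending; prices should be non-increasing.
    q=0.5 gives a market-based median 'best estimate'."""
    pairs = list(zip(thresholds, prices))
    for (t0, p0), (t1, p1) in zip(pairs, pairs[1:]):
        if p0 >= q >= p1 and p0 != p1:
            # linear interpolation between the bracketing contracts
            return t0 + (p0 - q) * (t1 - t0) / (p0 - p1)
    raise ValueError("quantile not bracketed by the contract prices")

# Hypothetical family of contracts: 'anomaly > T' for T in degrees C.
best_estimate = market_quantile([0.4, 0.5, 0.6, 0.7], [0.9, 0.7, 0.3, 0.1])
```

The spread between, say, the 10% and 90% crossings of the same price curve would give the market-based uncertainty estimate.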
Using multi-species occupancy models in structured decision making on managed lands
Sauer, John R.; Blank, Peter J.; Zipkin, Elise F.; Fallon, Jane E.; Fallon, Frederick W.
2013-01-01
Land managers must balance the needs of a variety of species when manipulating habitats. Structured decision making provides a systematic means of defining choices and choosing among alternative management options; implementation of a structured decision requires quantitative approaches to predicting consequences of management on the relevant species. Multi-species occupancy models provide a convenient framework for making structured decisions when the management objective is focused on a collection of species. These models use replicate survey data that are often collected on managed lands. Occupancy can be modeled for each species as a function of habitat and other environmental features, and Bayesian methods allow for estimation and prediction of collective responses of groups of species to alternative scenarios of habitat management. We provide an example of this approach using data from breeding bird surveys conducted in 2008 at the Patuxent Research Refuge in Laurel, Maryland, evaluating the effects of eliminating meadow and wetland habitats on scrub-successional and woodland-breeding bird species using summed total occupancy of species as an objective function. Removal of meadows and wetlands decreased value of an objective function based on scrub-successional species by 23.3% (95% CI: 20.3–26.5), but caused only a 2% (0.5, 3.5) increase in value of an objective function based on woodland species, documenting differential effects of elimination of meadows and wetlands on these groups of breeding birds. This approach provides a useful quantitative tool for managers interested in structured decision making.
Ding, Ziyun; Nolte, Daniel; Kit Tsang, Chui; Cleather, Daniel J; Kedgley, Angela E; Bull, Anthony M J
2016-02-01
Segment-based musculoskeletal models allow the prediction of muscle, ligament, and joint forces without making assumptions regarding joint degrees-of-freedom (DOF). The dataset published for the "Grand Challenge Competition to Predict in vivo Knee Loads" provides directly measured tibiofemoral contact forces for activities of daily living (ADL). For the Sixth Grand Challenge Competition to Predict in vivo Knee Loads, blinded results for "smooth" and "bouncy" gait trials were predicted using a customized patient-specific musculoskeletal model. For an unblinded comparison, the following modifications were made to improve the predictions: further customizations, including modifications to the knee center of rotation; reductions to the maximum allowable muscle forces to represent known loss of strength in knee arthroplasty patients; and a kinematic constraint to the hip joint to address the sensitivity of the segment-based approach to motion tracking artifact. For validation, the improved model was applied to normal gait, squat, and sit-to-stand for three subjects. Comparisons of the predictions with measured contact forces showed that segment-based musculoskeletal models using patient-specific input data can estimate tibiofemoral contact forces with root mean square errors (RMSEs) of 0.48-0.65 times body weight (BW) for normal gait trials. Comparisons between measured and predicted tibiofemoral contact forces yielded an average coefficient of determination of 0.81 and RMSEs of 0.46-1.01 times BW for squatting and 0.70-0.99 times BW for sit-to-stand tasks. This is comparable to the best validations in the literature using alternative models.
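The RMSE-in-body-weight metric used for the validations above is straightforward to compute; a small sketch with made-up contact forces (the subject mass and forces are illustrative, not the study's data):

```python
import math

def rmse_in_bw(measured_n, predicted_n, body_mass_kg, g=9.81):
    """Root mean square error between measured and predicted contact
    forces (newtons), normalized by body weight (BW)."""
    bw = body_mass_kg * g
    n = len(measured_n)
    mse = sum((m - p) ** 2 for m, p in zip(measured_n, predicted_n)) / n
    return math.sqrt(mse) / bw
```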
Veksler, Vladislav D; Buchler, Norbou; Hoffman, Blaine E; Cassenti, Daniel N; Sample, Char; Sugrim, Shridat
2018-01-01
Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences, such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human-system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may initially be constructed at the group level based on the mean tendencies of each subject's subgroup, using known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. 
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
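For the two-predictor case, the partial standard deviation that Cade uses to detect changing coefficient scales can be sketched directly, since the variance inflation factor reduces to 1/(1 - r^2) for the predictor pair. The VIF-based form with a degrees-of-freedom factor follows Bring's proposal; the exact convention is an assumption here, and the data are made up.

```python
import math

def sd(xs):
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (sd(xs) * sd(ys))

def partial_sd(xs, other, n_params):
    """Bring-style partial sd for a predictor in a 2-predictor model:
    s_x / sqrt(VIF) * sqrt((n-1)/(n-p)), with VIF = 1/(1 - r^2)."""
    n = len(xs)
    r = corr(xs, other)
    vif = 1.0 / (1.0 - r * r)
    return sd(xs) / math.sqrt(vif) * math.sqrt((n - 1) / (n - n_params))

def standardize(b_j, xs, other, n_params):
    """Scale a regression estimate by its predictor's partial sd,
    making estimates commensurate across models before any averaging."""
    return b_j * partial_sd(xs, other, n_params)
```

With multicollinearity, comparing `partial_sd` to the ordinary `sd` across candidate models reveals the changing scales that make raw coefficient averaging unsound.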
Model averaging and muddled multimodel inferences
Cade, Brian S.
2015-01-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. 
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.
Soleimani, Hossein; Hensman, James; Saria, Suchi
2017-08-21
Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades-off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
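The abstaining decision policy can be illustrated with a Chow-style minimum-expected-cost rule. The paper derives its policy from the joint model's estimated event-probability distribution, so the scalar probability and the specific costs below are simplifying assumptions for illustration only.

```python
def decide(p_event, c_false_alarm=1.0, c_miss=5.0, c_abstain=0.4):
    """Choose the action with the lowest expected cost given the
    estimated event probability; abstain when the probability is
    too uncertain for either commitment to be cheapest."""
    costs = {
        "alarm": (1 - p_event) * c_false_alarm,   # cost incurred if no event
        "no_alarm": p_event * c_miss,             # cost of a missed/delayed detection
        "abstain": c_abstain,                     # flat cost of deferring
    }
    return min(costs, key=costs.get)
```

Raising `c_abstain` shrinks the abstention region; raising `c_miss` shifts the alarm threshold toward lower probabilities.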
Learning predictive statistics from temporal sequences: Dynamics and strategies
Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E.; Kourtzi, Zoe
2017-01-01
Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics—that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments. PMID:28973111
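The contrast between maximizing and matching strategies can be sketched against a first-order Markov transition table. The table and the uniform weighting over contexts are hypothetical, not the study's actual sequence statistics.

```python
import random

TRANSITIONS = {  # hypothetical context-based statistics: P(next | current)
    "A": {"A": 0.1, "B": 0.8, "C": 0.1},
    "B": {"A": 0.7, "B": 0.1, "C": 0.2},
    "C": {"A": 0.3, "B": 0.3, "C": 0.4},
}

def generate(n, start="A", seed=0):
    """Sample a length-n symbol sequence from the Markov process."""
    rng = random.Random(seed)
    seq, state = [start], start
    for _ in range(n - 1):
        r, acc = rng.random(), 0.0
        for sym, p in TRANSITIONS[state].items():
            acc += p
            if r < acc:
                state = sym
                break
        seq.append(state)
    return seq

def expected_accuracy(strategy):
    """Expected one-step prediction accuracy under the true conditionals:
    'maximizing' always predicts argmax P(s|c); 'matching' samples a
    prediction from P(s|c). Contexts weighted uniformly for simplicity."""
    accs = []
    for dist in TRANSITIONS.values():
        if strategy == "maximizing":
            accs.append(max(dist.values()))
        else:  # matching: P(correct) = sum_s P(s|c)^2
            accs.append(sum(p * p for p in dist.values()))
    return sum(accs) / len(accs)
```

As in the study's account, maximizing yields higher expected accuracy than matching whenever any conditional distribution is non-uniform.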
Testing Bayesian and heuristic predictions of mass judgments of colliding objects
Sanborn, Adam N.
2014-01-01
Mass judgments of colliding objects have been used to explore people's understanding of the physical world because they are ecologically relevant, yet people display biases that are most easily explained by a small set of heuristics. Recent work has challenged the heuristic explanation, by producing the same biases from a model that copes with perceptual uncertainty by using Bayesian inference with a prior based on the correct combination rules from Newtonian mechanics (noisy Newton). Here I test the predictions of the leading heuristic model (Gilden and Proffitt, 1989) against the noisy Newton model using a novel manipulation of the standard mass judgment task: making one of the objects invisible post-collision. The noisy Newton model uses the remaining information to predict above-chance performance, while the leading heuristic model predicts chance performance when one or the other final velocity is occluded. An experiment using two different types of occlusion showed better-than-chance performance and response patterns that followed the predictions of the noisy Newton model. The results demonstrate that people can make sensible physical judgments even when information critical for the judgment is missing, and that a Bayesian model can serve as a guide in these situations. Possible algorithmic-level accounts of this task that more closely correspond to the noisy Newton model are explored. PMID:25206345
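A Monte Carlo sketch of the noisy-Newton idea for the full-information case: resample noisy percepts of the four velocities, invert momentum conservation m1*(v1i - v1f) = m2*(v2f - v2i) for the mass ratio, and report how often object 1 comes out heavier. The occlusion conditions in the experiment would additionally require marginalizing over the unseen final velocity, which is not reproduced here; the noise level and velocities are illustrative.

```python
import random

def p_obj1_heavier(v1i, v1f, v2i, v2f, noise=0.2, n=20000, seed=1):
    """Fraction of noisy perceptual samples in which the recovered
    mass ratio m1/m2 = (v2f - v2i) / (v1i - v1f) exceeds 1."""
    rng = random.Random(seed)
    heavier = 0
    for _ in range(n):
        dv1 = (v1i + rng.gauss(0, noise)) - (v1f + rng.gauss(0, noise))
        dv2 = (v2f + rng.gauss(0, noise)) - (v2i + rng.gauss(0, noise))
        if dv1 != 0 and dv2 / dv1 > 1:
            heavier += 1
    return heavier / n
```

For an elastic collision with m1 = 2, m2 = 1 (v1i = 1, v1f = 1/3, v2i = 0, v2f = 4/3), the judgment strongly favors object 1; for equal masses it hovers near chance, mirroring graded confidence under perceptual noise.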
Learning predictive statistics from temporal sequences: Dynamics and strategies.
Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe
2017-10-01
Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics-that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments.
Quantitative prediction of drug side effects based on drug-related features.
Niu, Yanqing; Zhang, Wen
2017-09-01
Unexpected side effects of drugs are a great concern in drug development, and identifying side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform drugs' side-effect profiles into quantitative scores by summing their side effects with weights. The quantitative scores may measure the dangers of drugs and thus help to compare the risk of different drugs. Here, we attempt to predict drugs' quantitative scores, namely quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative power for quantitative prediction. Then, we consider several feature-combination strategies (direct combination and average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since the weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make quantitative predictions directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects, and it may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
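The two core operations, the weighted side-effect score and the average scoring ensemble, reduce to simple arithmetic; a minimal sketch with made-up weights and per-feature predictions (the paper's actual weights are empirical and its per-feature models are learned):

```python
def quantitative_score(profile, weights):
    """Weighted sum of a drug's binary side-effect profile:
    1 marks a side effect the drug has, 0 one it lacks."""
    return sum(w * s for w, s in zip(weights, profile))

def average_scoring_ensemble(feature_predictions):
    """Average the quantitative scores predicted from each feature
    type (e.g. substructures, targets, treatment indications)."""
    return sum(feature_predictions) / len(feature_predictions)
```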
Fast Demand Forecast of Electric Vehicle Charging Stations for Cell Phone Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majidpour, Mostafa; Qiu, Charlie; Chung, Ching-Yen
This paper describes the core cellphone-application algorithm implemented for predicting energy consumption at Electric Vehicle (EV) charging stations at UCLA. For this interactive user application, the total time for accessing the database, processing the data, and making the prediction needs to be within a few seconds. We analyze four relatively fast machine-learning-based time-series prediction algorithms for our prediction engine: Historical Average, k-Nearest Neighbor, Weighted k-Nearest Neighbor, and Lazy Learning. The Nearest Neighbor algorithm (k-Nearest Neighbor with k = 1) shows the best performance and is selected as the prediction algorithm implemented for the cellphone application. Two applications have been designed on top of the prediction algorithm: one predicts the expected available energy at the station and the other predicts the expected charging finishing time. The total time, including database access, data processing, and prediction, is about one second for both applications.
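A minimal sketch of the selected predictor, 1-nearest-neighbor forecasting on a univariate history: find the past window most similar to the most recent observations and return the values that followed it. The window length, distance measure, and data below are illustrative; the paper's feature set is not specified here.

```python
def one_nn_forecast(history, recent, horizon):
    """Find the past window most similar to `recent` (squared Euclidean
    distance) and return the `horizon` values that followed it."""
    w = len(recent)
    best_i, best_d = None, float("inf")
    for i in range(len(history) - w - horizon + 1):
        window = history[i:i + w]
        d = sum((a - b) ** 2 for a, b in zip(window, recent))
        if d < best_d:
            best_d, best_i = d, i
    return history[best_i + w: best_i + w + horizon]
```

The scan is linear in the history length, which keeps the end-to-end prediction within the interactive time budget described above.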
A Method for Formulizing Disaster Evacuation Demand Curves Based on SI Model
Song, Yulei; Yan, Xuedong
2016-01-01
The prediction of evacuation demand curves is a crucial step in disaster evacuation planning, and it directly affects the performance of the evacuation. In this paper, we discuss the factors influencing individual evacuation decision making (whether and when to leave) and group them into four categories: individual characteristics, social influence, geographic location, and warning degree. Viewing decision making as socially contagious, we propose a method based on the Susceptible-Infective (SI) model to formulate disaster evacuation demand curves that captures both social influence and the other factors' effects. The "Tianjin Explosions" disaster is used as a case study to illustrate how the four factors shape the modeled curves and to perform sensitivity analyses of the model's key parameters. Some interesting phenomena are found and discussed that can help authorities make specific evacuation plans; for example, because social influence is lower in isolated communities, extra actions might be needed to accelerate the evacuation process in those communities. PMID:27735875
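The SI dynamics underlying the demand curve can be sketched with a discrete-time logistic update, where a single contagion rate beta stands in for the combined effect of the four factors (a deliberate simplification of the paper's model):

```python
def si_evacuation_curve(beta, i0=0.01, steps=100, dt=1.0):
    """Discrete-time SI dynamics di/dt = beta * s * i with s + i = 1.
    Returns the cumulative fraction of the population that has decided
    to evacuate at each step: an S-shaped demand curve."""
    i = i0
    curve = [i]
    for _ in range(steps):
        i += dt * beta * i * (1 - i)  # new deciders via social contagion
        curve.append(i)
    return curve
```

Lowering `beta` (e.g. for an isolated community with weak social influence) stretches the curve, illustrating why such communities may need extra measures to accelerate evacuation.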
Artificial neural networks in mammography interpretation and diagnostic decision making.
Ayer, Turgay; Chen, Qiushi; Burnside, Elizabeth S
2013-01-01
Screening mammography is the most effective means for early detection of breast cancer. Although general rules for discriminating malignant and benign lesions exist, radiologists are unable to perfectly detect and classify all lesions as malignant or benign, for many reasons that include, but are not limited to, overlap of features that distinguish malignancy, difficulty in estimating disease risk, and variability in recommended management. When predictive variables are numerous and interact, ad hoc decision-making strategies based on experience and memory may lead to systematic errors and variability in practice. The integration of computer models to help radiologists increase the accuracy of mammography examinations in diagnostic decision making has gained increasing attention in the last two decades. In this study, we provide an overview of one of the most commonly used models, artificial neural networks (ANNs), in mammography interpretation and diagnostic decision making and discuss important features in mammography interpretation. We conclude by discussing several common limitations of existing research on ANN-based detection and diagnostic models and provide possible future research directions.
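As a toy illustration of the ANN idea (not a model from any of the reviewed studies), a single logistic neuron can be trained by gradient descent to map made-up lesion features to a malignancy probability; the feature names, data, and hyperparameters below are all invented for illustration:

```python
import math
import random

def train_ann(samples, labels, lr=0.5, epochs=2000, seed=0):
    """Train a single logistic unit (the simplest 'ANN') by
    stochastic gradient descent on cross-entropy loss."""
    rng = random.Random(seed)
    n = len(samples[0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of cross-entropy w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Predicted probability that a lesion is malignant."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Real diagnostic ANNs combine many more features (and often hidden layers); the point of the sketch is only the mechanism by which interacting predictive variables are weighted systematically rather than ad hoc.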
NASA Astrophysics Data System (ADS)
Indi Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.
Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model has been extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, and the ability to predict light ion spectra with the coalescence model added. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutrons and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron spectra and light ion energy for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions from the recently-developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.
McCambridge, Sarah A; Consedine, Nathan S
2014-04-01
This study was designed to experimentally determine whether disgust and embarrassment predict anticipated delay and avoidance in sexual healthcare decision-making and for whom. In the study, 90 participants, aged 18-30, completed web-based questionnaires assessing demographics, current health, and past health behaviors before being gender block randomized to conditions in which disgust, embarrassment, or control (no emotion) were induced. Participants completed health decision-making vignettes covering the disgusting and embarrassing aspects of sexual healthcare. Factorial ANOVAs showed that although there were some complexities in the manipulation, disgust and embarrassment predicted greater anticipated delay and avoidance of elicitors, but only among specific groups. Embarrassment predicted anticipated help-seeking delays for embarrassment elicitors (i.e. sexual history assessment and physical examination), while disgust predicted anticipated help-seeking delays involving disgust elicitors (i.e. collecting genital discharge). However, these effects were moderated, with embarrassment only predicting anticipated delays among individuals reporting multiple sexual partners and disgust predicting anticipated delays and avoidance among persons reporting poorer subjective health. In sum, the current report provides among the first empirical demonstrations that emotions such as embarrassment and disgust may be causally implicated in anticipated delays and avoidance in sexual healthcare. Emotion frameworks may be usefully incorporated into clinical and public health efforts to reduce sexual healthcare delays and avoidance.
Genomic prediction of reproduction traits for Merino sheep.
Bolormaa, S; Brown, D J; Swan, A A; van der Werf, J H J; Hayes, B J; Daetwyler, H D
2017-06-01
Economically important reproduction traits in sheep, such as number of lambs weaned and litter size, are expressed only in females and later in life after most selection decisions are made, which makes them ideal candidates for genomic selection. Accurate genomic predictions would lead to greater genetic gain for these traits by enabling accurate selection of young rams with high genetic merit. The aim of this study was to design and evaluate the accuracy of a genomic prediction method for female reproduction in sheep using daughter trait deviations (DTD) for sires and ewe phenotypes (when individual ewes were genotyped) for three reproduction traits: number of lambs born (NLB), litter size (LSIZE) and number of lambs weaned. Genomic best linear unbiased prediction (GBLUP), BayesR and pedigree BLUP analyses of the three reproduction traits measured on 5340 sheep (4503 ewes and 837 sires) with real and imputed genotypes for 510 174 SNPs were performed. The prediction of breeding values using both sire and ewe trait records was validated in Merino sheep. Prediction accuracy was evaluated by across sire family and random cross-validations. Accuracies of genomic estimated breeding values (GEBVs) were assessed as the mean Pearson correlation adjusted by the accuracy of the input phenotypes. The addition of sire DTD into the prediction analysis resulted in higher accuracies compared with using only ewe records in genomic predictions or pedigree BLUP. Using GBLUP, the average accuracy based on the combined records (ewes and sire DTD) was 0.43 across traits, but the accuracies varied by trait and type of cross-validations. The accuracies of GEBVs from random cross-validations (range 0.17-0.61) were higher than were those from sire family cross-validations (range 0.00-0.51). The GEBV accuracies of 0.41-0.54 for NLB and LSIZE based on the combined records were amongst the highest in the study. 
Although BayesR was not significantly different from GBLUP in prediction accuracy, it identified several candidate genes which are known to be associated with NLB and LSIZE. The approach provides a way to make use of all data available in genomic prediction for traits that have limited recording. © 2017 Stichting International Foundation for Animal Genetics.
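GBLUP is statistically equivalent to ridge regression on SNP genotypes (SNP-BLUP); a minimal numpy sketch under that equivalence is shown below. The toy genotype matrix, the shrinkage parameter `lam`, and the function names are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def snp_blup(genotypes, phenotypes, lam=1.0):
    """SNP-BLUP sketch: ridge regression shrinking all marker effects
    equally, equivalent to GBLUP with a genomic relationship matrix."""
    X = np.asarray(genotypes, dtype=float)
    y = np.asarray(phenotypes, dtype=float)
    Xc = X - X.mean(axis=0)   # centre genotype codes (0/1/2)
    yc = y - y.mean()         # centre phenotypes
    p = Xc.shape[1]
    # Solve (Xc'Xc + lam*I) beta = Xc'y for marker effects.
    return np.linalg.solve(Xc.T @ Xc + lam * np.eye(p), Xc.T @ yc)

def gebv(genotypes, effects):
    """Genomic estimated breeding values for (possibly new) animals."""
    return np.asarray(genotypes, dtype=float) @ effects
```

This is exactly what makes sex-limited, late-in-life traits attractive targets: once effects are estimated from phenotyped ewes and sire daughter trait deviations, `gebv` can rank young, unphenotyped rams.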
ERIC Educational Resources Information Center
Marini, Jessica P.; Shaw, Emily J.; Young, Linda
2016-01-01
During the transition period between the use of exclusively old SAT® scores and the use of exclusively new SAT scores, college admission offices will be receiving both types of scores from students. Making an admission decision based on new SAT scores can be challenging at first because institutions have methods, procedures, and models based on…
ERIC Educational Resources Information Center
Codding, Robin S.; Petscher, Yaacov; Truckenmiller, Adrea
2015-01-01
A paucity of research has examined the utility of curriculum-based measurement (CBM) for data-based decision making at the secondary level. As schools move to multitiered systems of service delivery, it is conceivable that multiple screening measures will be used that address various academic subject areas. The value of including different CBM…
NASA Astrophysics Data System (ADS)
Blum, David Arthur
Algae biodiesel is the sole sustainable and abundant transportation fuel source that can replace petrol diesel use; however, high competition and economic uncertainties exist, influencing independent venture capital (IVC) decision making. Technology, market, management, and government action uncertainties influence competition and economic uncertainties in the venture capital industry. The purpose of this qualitative case study was to identify the best practice skills at IVC firms to predict uncertainty between early and late funding stages. The basis of the study was real options theory, a framework used to evaluate and understand the economic and competition uncertainties inherent in natural resource investment and energy derived from plant-based oils. Data were collected from interviews of 24 venture capital partners based in the United States who invest in algae and other renewable energy solutions. Data were analyzed by coding and theme development interwoven with the conceptual framework. Eight themes emerged: (a) expected returns model, (b) due diligence, (c) invest in specific sectors, (d) reduced uncertainty-late stage, (e) coopetition, (f) portfolio firm relationships, (g) differentiation strategy, and (h) modeling uncertainty and best practice. The most noteworthy finding was that predicting uncertainty at the early stage was impractical; at the expansion and late funding stages, however, predicting uncertainty was possible. The implications of these findings will affect social change by providing independent venture capitalists with best practice skills to increase successful exits, lessen uncertainty, and encourage increased funding of renewable energy firms, contributing to cleaner and healthier communities throughout the United States.
A Turn-Projected State-Based Conflict Resolution Algorithm
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Lewis, Timothy A.
2013-01-01
State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information without the use of additional intent information from aircraft flight plans. Therefore, the prediction of the trajectory of aircraft is based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, the past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
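The state-based baseline that the paper extends — linear projection of current position and velocity to the closest point of approach — can be sketched as follows in 2-D. Names and the separation threshold are hypothetical, and the paper's turn-projected, non-linear extension is not shown:

```python
def time_of_closest_approach(p1, v1, p2, v2):
    """Time at which two aircraft, each holding constant velocity,
    are closest (clamped to the future, t >= 0)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    speed2 = dvx * dvx + dvy * dvy
    if speed2 == 0:
        return 0.0  # no relative motion: distance never changes
    t = -(dx * dvx + dy * dvy) / speed2
    return max(t, 0.0)

def predicted_conflict(p1, v1, p2, v2, sep):
    """True if the linearly projected miss distance falls below the
    required separation `sep`."""
    t = time_of_closest_approach(p1, v1, p2, v2)
    mx = (p2[0] + v2[0] * t) - (p1[0] + v1[0] * t)
    my = (p2[1] + v2[1] * t) - (p1[1] + v1[1] * t)
    return (mx * mx + my * my) ** 0.5 < sep
```

A turning aircraft violates the constant-velocity assumption baked into this projection, which is precisely the failure mode the paper's turn detection from past state vectors is meant to fix.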
Bornstein, Aaron M.; Daw, Nathaniel D.
2013-01-01
How do we use our memories of the past to guide decisions we've never had to make before? Although extensive work describes how the brain learns to repeat rewarded actions, decisions can also be influenced by associations between stimuli or events not directly involving reward — such as when planning routes using a cognitive map or chess moves using predicted countermoves — and these sorts of associations are critical when deciding among novel options. This process is known as model-based decision making. While the learning of environmental relations that might support model-based decisions is well studied, and separately this sort of information has been inferred to impact decisions, there is little evidence concerning the full cycle by which such associations are acquired and drive choices. Of particular interest is whether decisions are directly supported by the same mnemonic systems characterized for relational learning more generally, or instead rely on other, specialized representations. Here, building on our previous work, which isolated dual representations underlying sequential predictive learning, we directly demonstrate that one such representation, encoded by the hippocampal memory system and adjacent cortical structures, supports goal-directed decisions. Using interleaved learning and decision tasks, we monitor predictive learning directly and also trace its influence on decisions for reward. We quantitatively compare the learning processes underlying multiple behavioral and fMRI observables using computational model fits. Across both tasks, a quantitatively consistent learning process explains reaction times, choices, and both expectation- and surprise-related neural activity. The same hippocampal and ventral stream regions engaged in anticipating stimuli during learning are also engaged in proportion to the difficulty of decisions. 
These results support a role for predictive associations learned by the hippocampal memory system to be recalled during choice formation. PMID:24339770
Measures and limits of models of fixation selection.
Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter
2011-01-01
Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allow a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
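For discrete samples, the ROC measure reduces to the probability that the model scores an actually fixated location above a control (non-fixated) location. A small illustrative implementation of that rank-based AUC (the naming is ours, not the authors' released code):

```python
def auc(fixation_scores, control_scores):
    """Area under the ROC curve via the rank-statistic identity:
    the probability that a fixated location outscores a control
    location, counting ties as half a win."""
    wins = ties = 0
    for f in fixation_scores:
        for c in control_scores:
            if f > c:
                wins += 1
            elif f == c:
                ties += 1
    total = len(fixation_scores) * len(control_scores)
    return (wins + 0.5 * ties) / total
```

An uninformative model scores 0.5; the paper's point is that a model's AUC should be judged against data-driven lower and upper bounds rather than against these absolute anchors alone.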
Land use planning and wildfire: development policies influence future probability of housing loss
Syphard, Alexandra D.; Massada, Avi Bar; Butsic, Van; Keeley, Jon E.
2013-01-01
Increasing numbers of homes are being destroyed by wildfire in the wildland-urban interface. With projections of climate change and housing growth potentially exacerbating the threat of wildfire to homes and property, effective fire-risk reduction alternatives are needed as part of a comprehensive fire management plan. Land use planning represents a shift in traditional thinking from trying to eliminate wildfires, or even increasing resilience to them, toward avoiding exposure to them through the informed placement of new residential structures. For land use planning to be effective, it needs to be based on solid understanding of where and how to locate and arrange new homes. We simulated three scenarios of future residential development and projected landscape-level wildfire risk to residential structures in a rapidly urbanizing, fire-prone region in southern California. We based all future development on an econometric subdivision model, but we varied the emphasis of subdivision decision-making based on three broad and common growth types: infill, expansion, and leapfrog. Simulation results showed that decision-making based on these growth types, when applied locally for subdivision of individual parcels, produced substantial landscape-level differences in pattern, location, and extent of development. These differences in development, in turn, affected the area and proportion of structures at risk from burning in wildfires. Scenarios with lower housing density and larger numbers of small, isolated clusters of development, i.e., resulting from leapfrog development, were generally predicted to have the highest predicted fire risk to the largest proportion of structures in the study area, and infill development was predicted to have the lowest risk. 
These results suggest that land use planning should be considered an important component to fire risk management and that consistently applied policies based on residential pattern may provide substantial benefits for future risk reduction.
Vrshek-Schallhorn, Suzanne; Wahlstrom, Dustin; White, Tonya; Luciana, Monica
2013-04-01
Despite interest in dopamine's role in emotion-based decision-making, few reports of the effects of dopamine manipulations are available in this area in humans. This study investigates dopamine's role in emotion-based decision-making through a common measure of this construct, the Iowa Gambling Task (IGT), using Acute Tyrosine Phenylalanine Depletion (ATPD). In a between-subjects design, 40 healthy adults were randomized to receive either an ATPD beverage or a balanced amino acid beverage (a control) prior to completing the IGT, as well as pre- and post-manipulation blood draws for the neurohormone prolactin. Together with conventional IGT performance metrics, choice selections and response latencies were examined separately for good and bad choices before and after several key punishment events. Changes in response latencies were also used to predict total task performance. Prolactin levels increased significantly in the ATPD group but not in the control group. However, no significant group differences in performance metrics were detected, nor were there sex differences in outcome measures. However, the balanced group's bad deck latencies speeded up across the task, while the ATPD group's latencies remained adaptively hesitant. Additionally, modulation of latencies to the bad decks predicted total score for the ATPD group only. One interpretation is that ATPD subtly attenuated reward salience and altered the approach by which individuals achieved successful performance, without resulting in frank group differences in task performance. Copyright © 2013 Elsevier Inc. All rights reserved.
Predicting the thermal conductivity of aluminium alloys in the cryogenic to room temperature range
NASA Astrophysics Data System (ADS)
Woodcraft, Adam L.
2005-06-01
Aluminium alloys are being used increasingly in cryogenic systems. However, cryogenic thermal conductivity measurements have been made on only a few of the many types in general use. This paper describes a method of predicting the thermal conductivity of any aluminium alloy between the superconducting transition temperature (approximately 1 K) and room temperature, based on a measurement of the thermal conductivity or electrical resistivity at a single temperature. Where predictions are based on low temperature measurements (approximately 4 K and below), the accuracy is generally better than 10%. Useful predictions can also be made from room temperature measurements for most alloys, but with reduced accuracy. This method permits aluminium alloys to be used in situations where the thermal conductivity is important without having to make (or find) direct measurements over the entire temperature range of interest. There is therefore greater scope to choose alloys based on mechanical properties and availability, rather than on whether cryogenic thermal conductivity measurements have been made. Recommended thermal conductivity values are presented for aluminium 6082 (based on a new measurement), and for 1000 series, and types 2014, 2024, 2219, 3003, 5052, 5083, 5086, 5154, 6061, 6063, 6082, 7039 and 7075 (based on low temperature measurements in the literature).
Soil-Bacterium Compatibility Model as a Decision-Making Tool for Soil Bioremediation.
Horemans, Benjamin; Breugelmans, Philip; Saeys, Wouter; Springael, Dirk
2017-02-07
Bioremediation of organic pollutant contaminated soil involving bioaugmentation with dedicated bacteria specialized in degrading the pollutant is suggested as a green and economically sound alternative to physico-chemical treatment. However, intrinsic soil characteristics impact the success of bioaugmentation. The feasibility of using partial least-squares regression (PLSR) to predict the success of bioaugmentation in contaminated soil based on the intrinsic physico-chemical soil characteristics and, hence, to improve the success of bioaugmentation, was examined. As a proof of principle, PLSR was used to build soil-bacterium compatibility models to predict the bioaugmentation success of the phenanthrene-degrading Novosphingobium sp. LH128. The survival and biodegradation activity of strain LH128 were measured in 20 soils and correlated with the soil characteristics. PLSR was able to predict the strain's survival using 12 variables or less while the PAH-degrading activity of strain LH128 in soils that show survival was predicted using 9 variables. A three-step approach using the developed soil-bacterium compatibility models is proposed as a decision making tool and first estimation to select compatible soils and organisms and increase the chance of success of bioaugmentation.
AMOEBA 2.0: A physics-first approach to biomolecular simulations
NASA Astrophysics Data System (ADS)
Rackers, Joshua; Ponder, Jay
The goal of the AMOEBA force field project is to use classical physics to understand and predict the nature of interactions between biological molecules. While making significant advances over the past decade, the ultimate goal of predicting binding energies with "chemical accuracy" remains elusive. The primary source of this inaccuracy comes from the physics of how molecules interact at short range. For example, despite AMOEBA's advanced treatment of electrostatics, the force field dramatically overpredicts the electrostatic energy of DNA stacking interactions. AMOEBA 2.0 works to correct these errors by including simple, first principles physics-based terms to account for the quantum mechanical nature of these short-range molecular interactions. We have added a charge penetration term that considerably improves the description of electrostatic interactions at short range. We are reformulating the polarization term of AMOEBA in terms of basic physics assertions. And we are reevaluating the van der Waals term to match ab initio energy decompositions. These additions and changes promise to make AMOEBA more predictive. By including more physical detail of the important short-range interactions of biological molecules, we hope to move closer to the ultimate goal of true predictive power.
Advancing Atmospheric River Forecasts into Subseasonal-to-Seasonal Timescales
NASA Astrophysics Data System (ADS)
Barnes, E. A.; Baggett, C.; Mundhenk, B. D.; Nardi, K.; Maloney, E. D.
2017-12-01
Atmospheric rivers can cause considerable mayhem along the west coast of North America - delivering flooding rains during periods of heightened activity and desiccating droughts during periods of reduced activity. The intrinsic chaos of the atmosphere makes the prediction of atmospheric rivers at subseasonal-to-seasonal (S2S) timescales (approximately 2 to 6 weeks) an inherently difficult task. We demonstrate here that the potential exists to advance forecast lead times of atmospheric rivers into S2S timescales through knowledge of two of the atmosphere's most prominent oscillations; the Madden-Julian oscillation (MJO) and the Quasi-biennial oscillation (QBO). The dynamical relationship between atmospheric rivers, the MJO and the QBO is hypothesized to occur through modulation of North Pacific blocking. We present an empirical prediction scheme for anomalous atmospheric river activity based solely on the MJO and QBO and demonstrate skillful subseasonal "forecasts of opportunity" 5+ weeks ahead. We conclude with a discussion of the ability of state-of-the-art NWP models to predict atmospheric river characteristics on S2S timescales. With the wide-ranging impacts associated with landfalling atmospheric rivers, even modest gains in the subseasonal prediction of anomalous atmospheric river activity may support early action decision making and benefit numerous sectors of society.
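An empirical scheme of the general kind described — forecasting the atmospheric-river activity anomaly from historical means conditioned on MJO phase and QBO state — might look like the sketch below. The record layout, key structure, and function names are our assumptions, not the authors' actual scheme:

```python
from collections import defaultdict

def fit_conditional_climatology(records):
    """records: (mjo_phase, qbo_state, ar_anomaly_weeks_later) tuples
    from a historical archive. Returns the mean lagged AR-activity
    anomaly for each (MJO phase, QBO state) combination."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for phase, state, anomaly in records:
        sums[(phase, state)] += anomaly
        counts[(phase, state)] += 1
    return {key: sums[key] / counts[key] for key in sums}

def forecast(table, mjo_phase, qbo_state, default=0.0):
    """Look up today's oscillation state; fall back to climatology
    (zero anomaly) for unseen combinations."""
    return table.get((mjo_phase, qbo_state), default)
```

"Forecasts of opportunity" arise naturally in this framing: only the (phase, state) cells whose historical mean anomaly is large and consistent are worth issuing as subseasonal forecasts.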
Probabilistic modeling of discourse-aware sentence processing.
Dubey, Amit; Keller, Frank; Sturt, Patrick
2013-07-01
Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.
Ecological investigations: vegetation studies, preliminary findings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olgeirson, E.R.; Martin, R.B.
1978-09-01
The objective of the vegetation studies conducted on the research site is to produce a descriptive data base that can be applied to determinations of carrying capacity of the site and surrounding area. Additional information obtained about parameters that influence vegetation growth and maintenance of soil nutrients, and moisture and temperature regimes help define dynamic relationships that must be understood to effect successful revegetation and habitat rehabilitation. The descriptive vegetation baseline also provides a point of departure for design of future monitoring programs, and predictive models and strategies to be used in dealing with impact mitigation; in turn, monitoring programs and predictive modeling form the bases for making distinctions between natural trends and man-induced perturbations.
Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation
NASA Astrophysics Data System (ADS)
Ogawa, Masatoshi; Ogai, Harutoshi
Recently, attention has been drawn to a class of local modeling techniques known as “Just-In-Time (JIT) modeling”. To apply JIT modeling online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
Solway, A.; Botvinick, M.
2013-01-01
Recent work has given rise to the view that reward-based decision making is governed by two key controllers: a habit system, which stores stimulus-response associations shaped by past reward, and a goal-oriented system that selects actions based on their anticipated outcomes. The current literature provides a rich body of computational theory addressing habit formation, centering on temporal-difference learning mechanisms. Less progress has been made toward formalizing the processes involved in goal-directed decision making. We draw on recent work in cognitive neuroscience, animal conditioning, cognitive and developmental psychology and machine learning, to outline a new theory of goal-directed decision making. Our basic proposal is that the brain, within an identifiable network of cortical and subcortical structures, implements a probabilistic generative model of reward, and that goal-directed decision making is effected through Bayesian inversion of this model. We present a set of simulations implementing the account, which address benchmark behavioral and neuroscientific findings, and which give rise to a set of testable predictions. We also discuss the relationship between the proposed framework and other models of decision making, including recent models of perceptual choice, to which our theory bears a direct connection. PMID:22229491
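The proposed Bayesian inversion can be illustrated in miniature: given a generative model of outcomes under each candidate action, invert it with Bayes' rule to obtain a posterior over actions conditioned on the desired (rewarding) outcome. The toy distributions and naming below are ours, not the authors' simulations:

```python
def invert_for_action(prior, likelihood, goal):
    """Goal-directed choice as Bayesian inversion:
    p(action | outcome = goal) is proportional to
    p(outcome = goal | action) * p(action)."""
    unnorm = {a: prior[a] * likelihood[a].get(goal, 0.0) for a in prior}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}
```

With a uniform action prior and a generative model in which pressing "left" yields reward 80% of the time versus 20% for "right", inversion concentrates the posterior on "left" — selecting the action most likely to have generated the desired outcome.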
PatchSurfers: Two methods for local molecular property-based binding ligand prediction.
Shin, Woong-Hee; Bures, Mark Gregory; Kihara, Daisuke
2016-01-15
Protein function prediction is an active area of research in computational biology. Function prediction can help biologists make hypotheses for characterization of genes and help interpret biological assays, and thus is a productive area for collaboration between experimental and computational biologists. Among various function prediction methods, predicting binding ligand molecules for a target protein is an important class because ligand binding events for a protein are usually closely intertwined with the protein's biological function, and also because predicted binding ligands can often be directly tested by biochemical assays. Binding ligand prediction methods can be classified into two types: those which are based on protein-protein (or pocket-pocket) comparison, and those that compare a target pocket directly to ligands. Recently, our group proposed two computational binding ligand prediction methods, Patch-Surfer, which is a pocket-pocket comparison method, and PL-PatchSurfer, which compares a pocket to ligand molecules. The two programs apply surface patch-based descriptions to calculate similarity or complementarity between molecules. A surface patch is characterized by physicochemical properties such as shape, hydrophobicity, and electrostatic potentials. These properties on the surface are represented using three-dimensional Zernike descriptors (3DZD), which are based on a series expansion of a three-dimensional function. Utilizing 3DZD for describing the physicochemical properties has two main advantages: (1) rotational invariance and (2) fast comparison. Here, we introduce Patch-Surfer and PL-PatchSurfer with an emphasis on PL-PatchSurfer, which is more recently developed. Illustrative examples of PL-PatchSurfer performance on binding ligand prediction as well as virtual drug screening are also provided. Copyright © 2015 Elsevier Inc. All rights reserved.
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes
Zhang, Hong; Pei, Yun
2016-01-01
Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
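The equivalent continuous level that the simulation ultimately reports follows the standard energy-average formula L_eq = 10·log10((1/T)·Σ tᵢ·10^(Lᵢ/10)). A direct sketch of that final aggregation step (not the paper's discrete-event simulation framework, whose event names and inputs are not specified here):

```python
import math

def equivalent_continuous_level(levels_db, durations):
    """Energy-average a sequence of noise levels (dB) held over the
    given durations into a single equivalent continuous level L_eq."""
    total_time = sum(durations)
    # Convert each level to relative sound energy, weight by duration.
    energy = sum(t * 10 ** (level / 10.0)
                 for level, t in zip(levels_db, durations))
    return 10.0 * math.log10(energy / total_time)
```

Because the average is taken over energies rather than decibels, short loud events dominate: one hour at 90 dB plus one quiet hour at 60 dB yields an L_eq of about 87 dB, not 75 dB, which is why event-level simulation of the loud activities matters for the prediction.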
Rowlinson, Steve; Jia, Yunyan Andrea
2014-04-01
Existing heat stress risk management guidelines recommended by international standards are not practical for the construction industry, which needs site supervision staff to make instant managerial decisions to mitigate heat risks. The ability of the predicted heat strain (PHS) model [ISO 7933 (2004). Ergonomics of the thermal environment: analytical determination and interpretation of heat stress using calculation of the predicted heat strain. Geneva: International Standard Organisation] to predict maximum allowable exposure time (D_lim) has now enabled development of localized, action-triggering, threshold-based guidelines for implementation by lay frontline staff on construction sites. This article presents a protocol for developing two heat stress management tools by applying the PHS model to its full potential. One tool facilitates managerial decisions on an optimized work-rest regimen for paced work; the other enables workers' self-regulation during self-paced work.
Forecasting PM10 in metropolitan areas: Efficacy of neural networks.
Fernando, H J S; Mammarella, M C; Grandoni, G; Fedele, P; Di Marco, R; Dimitrova, R; Hyde, P
2012-04-01
Deterministic photochemical air quality models are commonly used for regulatory management and planning of urban airsheds. These models are complex, computer intensive, and hence are prohibitively expensive for routine air quality predictions. Stochastic methods are becoming increasingly popular as an alternative, which relegate decision making to artificial intelligence based on Neural Networks that are made of artificial neurons or 'nodes' capable of 'learning through training' via historic data. A Neural Network was used to predict particulate matter concentration at a regulatory monitoring site in Phoenix, Arizona; its development, efficacy as a predictive tool and performance vis-à-vis a commonly used regulatory photochemical model are described in this paper. It is concluded that Neural Networks are much easier, quicker and economical to implement without compromising the accuracy of predictions. Neural Networks can be used to develop rapid air quality warning systems based on a network of automated monitoring stations. Copyright © 2011 Elsevier Ltd. All rights reserved.
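The abstract does not specify the network used in Phoenix; as a hedged illustration of "learning through training via historic data", a minimal single-hidden-layer network can be trained on lagged values of a pollutant series to predict the next value (the series, architecture, and all parameters below are synthetic, not the study's):

```python
# Minimal sketch (not the study's model): a tiny feedforward network trained by
# stochastic gradient descent to predict the next normalized PM10 value from
# the three most recent values of a synthetic series.
import math, random

random.seed(0)

def train_mlp(series, lags=3, hidden=4, epochs=200, lr=0.05):
    # Build (lagged inputs -> next value) training pairs from the series.
    data = [(series[i:i + lags], series[i + lags]) for i in range(len(series) - lags)]
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(lags)] for _ in range(hidden)]
    w2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
        return h, sum(w * hi for w, hi in zip(w2, h))

    for _ in range(epochs):
        for x, y in data:
            h, yhat = forward(x)
            err = yhat - y
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                for k in range(lags):
                    w1[j][k] -= lr * grad_h * x[k]
    return lambda x: forward(x)[1]

# Toy normalized PM10-like series with a periodic (diurnal-style) pattern.
series = [0.5 + 0.3 * math.sin(2 * math.pi * t / 8) for t in range(40)]
model = train_mlp(series)
pred = model(series[-3:])  # one-step-ahead forecast
```

A deployed warning system would train on monitored concentrations and meteorological covariates rather than a synthetic sine series.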
Predicting Human Preferences Using the Block Structure of Complex Social Networks
Guimerà, Roger; Llorente, Alejandro; Moro, Esteban; Sales-Pardo, Marta
2012-01-01
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a “new” computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Besides, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and enabling the analysis of the characteristics of those groups. PMID:22984533
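The block-model intuition can be made concrete with a hedged toy sketch (the users, items, groups, and ratings below are invented for illustration; the paper samples over all partitions rather than fixing one): a rating is predicted from the ratings observed between the user's group and the item's group.

```python
# Minimal sketch (hypothetical data, single fixed partition): block-model-style
# preference prediction. A user's rating of an item is estimated as the mean
# rating observed between the user's group and the item's group, reflecting
# the idea that members of a group have consistently similar preferences.
user_group = {"ana": 0, "bob": 0, "carl": 1}
item_group = {"film1": 0, "film2": 1}
ratings = {("ana", "film1"): 5, ("bob", "film1"): 4,
           ("ana", "film2"): 1, ("carl", "film2"): 5}

def predict(user, item):
    ug, ig = user_group[user], item_group[item]
    linked = [r for (u, i), r in ratings.items()
              if user_group[u] == ug and item_group[i] == ig and (u, i) != (user, item)]
    return sum(linked) / len(linked) if linked else None

# bob shares ana's group, so his predicted rating of film2 follows ana's.
```

The Bayesian approach in the paper averages such predictions over the ensemble of possible partitions instead of committing to one.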
Water Habitat Study: Prediction Makes It More Meaningful.
ERIC Educational Resources Information Center
Glasgow, Dennis R.
1982-01-01
Suggests a teaching strategy for water habitat studies to help students make a meaningful connection between physicochemical data (dissolved oxygen content, pH, and water temperature) and the biological specimens they collect. Involves constructing a poster and using it to make predictions. Provides sample poster. (DC)
Energy-Efficient Integration of Continuous Context Sensing and Prediction into Smartwatches.
Rawassizadeh, Reza; Tomitsch, Martin; Nourizadeh, Manouchehr; Momeni, Elaheh; Peery, Aaron; Ulanova, Liudmila; Pazzani, Michael
2015-09-08
As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which makes it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access and demonstrate an increase in accuracy of prediction through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras.
Accurate prediction of energy expenditure using a shoe-based activity monitor.
Sazonova, Nadezhda; Browning, Raymond C; Sazonov, Edward
2011-07-01
The aim of this study was to develop and validate a method for predicting energy expenditure (EE) using a footwear-based system with integrated accelerometer and pressure sensors. We developed a footwear-based device with an embedded accelerometer and insole pressure sensors for the prediction of EE. The data from the device can be used to perform accurate recognition of major postures and activities and to estimate EE using the acceleration, pressure, and posture/activity classification information in a branched algorithm without the need for individual calibration. We measured EE via indirect calorimetry as 16 adults (body mass index = 19-39 kg·m^-2) performed various low- to moderate-intensity activities and compared measured versus predicted EE using several models based on the acceleration and pressure signals. Inclusion of pressure data resulted in better accuracy of EE prediction during static postures such as sitting and standing. The activity-based branched model that included predictors from accelerometer and pressure sensors (BACC-PS) achieved the lowest error (e.g., root mean squared error (RMSE) = 0.69 METs) compared with the accelerometer-only-based branched model BACC (RMSE = 0.77 METs) and nonbranched model (RMSE = 0.94-0.99 METs). Comparison of EE prediction models using data from both legs versus models using data from a single leg indicates that only one shoe needs to be equipped with sensors. These results suggest that foot acceleration combined with insole pressure measurement, when used in an activity-specific branched model, can accurately estimate the EE associated with common daily postures and activities. The accuracy and unobtrusiveness of a footwear-based device may make it an effective physical activity monitoring tool.
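The branched idea can be sketched in a few lines. All thresholds and coefficients below are hypothetical placeholders, not the study's fitted values: sensor features first select a posture/activity branch, and each branch then applies its own linear EE model.

```python
# Minimal sketch (hypothetical coefficients): branched energy-expenditure
# estimation. Pressure and acceleration features pick a posture/activity
# branch; each branch has its own linear model, as in the paper's
# activity-specific branching idea.
def classify(pressure_var, accel_var):
    if accel_var < 0.05:                  # low movement -> static posture
        return "sit" if pressure_var < 0.5 else "stand"
    return "walk"

# Per-branch linear models: EE (METs) ~ intercept + slope * accel counts.
BRANCH_MODELS = {"sit": (1.0, 0.0), "stand": (1.3, 0.0), "walk": (2.0, 3.0)}

def predict_mets(pressure_var, accel_var, accel_counts):
    intercept, slope = BRANCH_MODELS[classify(pressure_var, accel_var)]
    return intercept + slope * accel_counts
```

Branching lets static postures (where pressure is informative) and dynamic activities (where acceleration dominates) use different predictors, which is why the BACC-PS model outperformed the nonbranched one.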
Computer predictions on Rh-based double perovskites with unusual electronic and magnetic properties
NASA Astrophysics Data System (ADS)
Halder, Anita; Nafday, Dhani; Sanyal, Prabuddha; Saha-Dasgupta, Tanusri
2018-03-01
In search of new magnetic materials, we make computer predictions of the structural, electronic, and magnetic properties of yet-to-be-synthesized Rh-based double perovskite compounds, Sr(Ca)2BRhO6 (B = Cr, Mn, Fe). We use a combination of an evolutionary algorithm, density functional theory, and statistical-mechanical tools for this purpose. We find that the unusual valence of Rh5+ may be stabilized in these compounds through formation of an oxygen ligand hole. Interestingly, while the Cr-Rh and Mn-Rh compounds are predicted to be ferromagnetic half-metals, the Fe-Rh compounds are found to be rare examples of an antiferromagnetic and metallic transition-metal oxide with a three-dimensional electronic structure. The computed magnetic transition temperatures of the predicted compounds, obtained from a finite-temperature Monte Carlo study of the first-principles-derived model Hamiltonian, are found to be reasonably high. The prediction of favorable growth conditions for the compounds, obtained through extensive thermodynamic analysis and reported in our study, should be useful for the future synthesis of this interesting class of materials with intriguing properties.
NASA Astrophysics Data System (ADS)
Kaur, Jagreet; Singh Mann, Kulwinder, Dr.
2018-01-01
AI in healthcare is needed to bring real, actionable, individualized insights in real time to patients and doctors to support treatment decisions. We need a patient-centred platform integrating EHR data, patient data, prescriptions, monitoring, and clinical research data. This paper proposes a generic architecture for an AI-based healthcare analytics platform built on open-source technologies: Apache Beam, Apache Flink, Apache Spark, Apache NiFi, Kafka, Tachyon, GlusterFS, and the NoSQL stores Elasticsearch and Cassandra. The paper shows the importance of applying AI-based predictive and prescriptive analytics techniques in the health sector. The system will be able to extract useful knowledge that helps in decision making and medical monitoring in real time through intelligent process analysis and big data processing.
A predictive model for assistive technology adoption for people with dementia.
Zhang, Shuai; McClean, Sally I; Nugent, Chris D; Donnelly, Mark P; Galway, Leo; Scotney, Bryan W; Cleland, Ian
2014-01-01
Assistive technology has the potential to enhance the level of independence of people with dementia, thereby increasing the possibility of supporting home-based care. In general, people with dementia are reluctant to change; therefore, it is important that suitable assistive technologies are selected for them. Consequently, the development of predictive models that are able to determine a person's potential to adopt a particular technology is desirable. In this paper, a predictive adoption model for a mobile phone-based video streaming system, developed for people with dementia, is presented. Taking into consideration characteristics related to a person's ability, living arrangements, and preferences, this paper discusses the development of predictive models, which were based on a number of carefully selected data mining algorithms for classification. For each, the learning on different relevant features for technology adoption has been tested, in conjunction with handling the imbalance of available data for output classes. Given our focus on providing predictive tools that could be used and interpreted by healthcare professionals, models with ease-of-use, intuitive understanding, and clear decision making processes are preferred. Predictive models have, therefore, been evaluated on a multi-criterion basis: in terms of their prediction performance, robustness, bias with regard to two types of errors and usability. Overall, the model derived from incorporating a k-Nearest-Neighbour algorithm using seven features was found to be the optimal classifier of assistive technology adoption for people with dementia (prediction accuracy 0.84 ± 0.0242).
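The optimal classifier reported above is a k-nearest-neighbour model over seven features. As a hedged sketch (the feature vectors and labels below are invented; the study's seven selected features and data are not reproduced here), the decision process a healthcare professional could inspect looks like this:

```python
# Minimal sketch (hypothetical data): k-nearest-neighbour classification of
# assistive-technology adoption from seven binary person-level features.
# The k closest known cases vote on the predicted class.
import math

def knn_predict(train, query, k=3):
    nearest = sorted(train, key=lambda row: math.dist(row[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Each row: (seven illustrative features, adoption outcome).
train = [
    ((1, 0, 1, 0, 1, 1, 0), "adopt"), ((1, 1, 1, 0, 1, 0, 0), "adopt"),
    ((1, 0, 1, 1, 1, 1, 1), "adopt"), ((0, 1, 0, 1, 0, 0, 1), "reject"),
    ((0, 0, 0, 1, 0, 1, 1), "reject"), ((0, 1, 0, 0, 0, 0, 1), "reject"),
]
```

Part of the model's appeal for clinical use is exactly this transparency: the prediction can be explained by pointing to the k most similar known cases.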
2012-01-01
Background: The first draft assembly and gene prediction of the grapevine genome (8X base coverage) was made available to the scientific community in 2007, and functional annotation was developed on this gene prediction. Since then, additional Sanger sequences were added to the 8X sequence pool and a new version of the genomic sequence with superior base coverage (12X) was produced. Results: In order to more efficiently annotate the function of the genes predicted in the new assembly, it is important to build on as much of the previous work as possible by transferring the 8X annotation of the genome to the 12X version. The 8X and 12X assemblies and gene predictions of the grapevine genome were compared to answer the question, "Can we uniquely map 8X predicted genes to 12X predicted genes?" The results show that while the assemblies and gene structure predictions are too different to make a complete mapping between them, most genes (18,725) showed a one-to-one relationship between 8X predicted genes and the last version of 12X predicted genes. In addition, reshuffled genomic sequence structures appeared; these highlight regions of the genome where the gene predictions need to be taken with caution. Based on the new grapevine gene functional annotation and in-depth functional categorization, twenty-eight new molecular networks have been created for VitisNet while the existing networks were updated. Conclusions: The outcomes of this study provide a functional annotation of the 12X genes, an update of VitisNet, the system of grapevine molecular networks, and a new functional categorization of genes. Data are available at the VitisNet website (http://www.sdstate.edu/ps/research/vitis/pathways.cfm). PMID:22554261
Zendehrouh, Sareh
2015-11-01
Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experience, whereas goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive-control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost conflict. The simulation results address existing theories about two event-related potentials, namely the error-related negativity (ERN) and the feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
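The dual-controller idea can be illustrated with a hedged sketch (the arbitration weight and all parameters below are arbitrary choices, not the paper's fitted model): a model-free Q-learning update runs alongside a one-step model-based value computed from a learned transition/reward model, and the two are combined by a weight.

```python
# Minimal sketch (illustrative parameters): a dual controller combining
# model-free and model-based value estimates, as in dual-system RL accounts.
def model_free_update(q, state, action, reward, alpha=0.1):
    # Standard delta-rule (Q-learning-style) update from direct experience.
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward - old)

def model_based_value(transitions, rewards, state, action):
    # One-step lookahead: expected reward under the learned world model.
    return sum(p * rewards[s2] for s2, p in transitions[(state, action)].items())

def combined_value(q, transitions, rewards, state, action, w=0.5):
    # w arbitrates between the goal-directed (model-based) and habitual
    # (model-free) controllers.
    return (w * model_based_value(transitions, rewards, state, action)
            + (1 - w) * q.get((state, action), 0.0))

q = {}
transitions = {("s0", "a"): {"s1": 0.7, "s2": 0.3}}
rewards = {"s1": 1.0, "s2": 0.0}
model_free_update(q, "s0", "a", reward=1.0)
```

A monitoring system of the kind the paper proposes would, on this sketch, adjust w when it detects conflict between the two controllers' valuations.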
The string prediction models as invariants of time series in the forex market
NASA Astrophysics Data System (ADS)
Pincak, R.
2013-12-01
In this paper we apply a new approach from string theory to real financial markets. The models are constructed with the idea of prediction models based on string invariants (PMBSI). The performance of PMBSI is compared to support vector machines (SVM) and artificial neural networks (ANN) on an artificial and a financial time series. A brief overview of the results and analysis is given. The first model is based on the correlation function as the invariant; the second is an application based on deviations from the closed string/pattern form (PMBCS). We found a clear difference between the two approaches: the first model cannot predict the behavior of the forex market with good efficiency, whereas the second can and, in addition, is able to make a relevant profit per year. The presented string models could be useful for portfolio creation and financial risk management in the banking sector, as well as for a nonlinear statistical approach to data optimization.
Induced optimism as mental rehearsal to decrease depressive predictive certainty.
Miranda, Regina; Weierich, Mariann; Khait, Valerie; Jurska, Justyna; Andersen, Susan M
2017-03-01
The present study examined whether practice in making optimistic future-event predictions would result in change in the hopelessness-related cognitions that characterize depression. Individuals (N = 170) with low, mild, and moderate-to-severe depressive symptoms were randomly assigned to a condition in which they practiced making optimistic future-event predictions or to a control condition in which they viewed the same stimuli but practiced determining whether a given phrase contained an adjective. Overall, individuals in the induced optimism condition showed increases in optimistic predictions, relative to the control condition, as a result of practice, but only individuals with moderate-to-severe symptoms of depression who practiced making optimistic future-event predictions showed decreases in depressive predictive certainty, relative to the control condition. In addition, they showed gains in efficiency in making optimistic predictions over the practice blocks, as assessed by response time. There was no difference in depressed mood by practice condition. Mental rehearsal might be one way of changing the hopelessness-related cognitions that characterize depression. Copyright © 2016 Elsevier Ltd. All rights reserved.
Degradation Prediction Model Based on a Neural Network with Dynamic Windows
Zhang, Xinghui; Xiao, Lei; Kang, Jianshe
2015-01-01
Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods that can draw on ample run-to-failure condition monitoring data have been thoroughly researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data spanning normal operation through failure; only a limited number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To address these dilemmas, this paper proposes a RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that it does not need to assume the degradation trajectory follows a particular distribution; the other is that it can adapt to variation in the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873
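The three steps can be sketched in a hedged toy form, with simple linear extrapolation standing in for the paper's neural network and all thresholds invented for illustration: detect the change point where degradation begins, restrict the window to the data after it, and extrapolate until a failure threshold is crossed.

```python
# Minimal sketch (linear extrapolation replaces the paper's neural network):
# dynamic-window rolling RUL prediction on a degradation indicator.
def change_point(signal, jump=0.5):
    # Detect the first abrupt increase, marking fault initiation.
    for i in range(1, len(signal)):
        if signal[i] - signal[i - 1] > jump:
            return i
    return 0

def predict_rul(signal, threshold):
    start = change_point(signal)          # dynamic window: post-change data only
    window = signal[start:]
    slope = (window[-1] - window[0]) / max(len(window) - 1, 1)
    if slope <= 0:
        return None                       # no degradation trend to extrapolate
    return (threshold - signal[-1]) / slope  # steps until threshold crossing

# Healthy plateau, then a fault initiates and the indicator grows linearly.
signal = [0.1, 0.1, 0.1, 1.0, 1.2, 1.4, 1.6]
rul = predict_rul(signal, threshold=2.0)
```

Windowing past the change point is what keeps the healthy-plateau data from flattening the extrapolated trend, which is the dilemma the dynamic-window design addresses.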
A Practical Engineering Approach to Predicting Fatigue Crack Growth in Riveted Lap Joints
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Piascik, Robert S.; Newman, James C., Jr.
1999-01-01
An extensive experimental database has been assembled from very detailed teardown examinations of fatigue cracks found in rivet holes of fuselage structural components. Based on this experimental database, a comprehensive analysis methodology was developed to predict the onset of widespread fatigue damage in lap joints of fuselage structure. Several computer codes were developed with specialized capabilities to conduct the various analyses that make up the comprehensive methodology. Over the past several years, the authors have interrogated various aspects of the analysis methods to determine the degree of computational rigor required to produce numerical predictions with acceptable engineering accuracy. This study led to the formulation of a practical engineering approach to predicting fatigue crack growth in riveted lap joints. This paper describes the practical engineering approach and compares predictions with the results from several experimental studies.
Contrasting cue-density effects in causal and prediction judgments.
Vadillo, Miguel A; Musca, Serban C; Blanco, Fernando; Matute, Helena
2011-02-01
Many theories of contingency learning assume (either explicitly or implicitly) that predicting whether an outcome will occur should be easier than making a causal judgment. Previous research suggests that outcome predictions would depart from normative standards less often than causal judgments, which is consistent with the idea that the latter are based on more numerous and complex processes. However, only indirect evidence exists for this view. The experiment presented here specifically addresses this issue by allowing for a fair comparison of causal judgments and outcome predictions, both collected at the same stage with identical rating scales. Cue density, a parameter known to affect judgments, is manipulated in a contingency learning paradigm. The results show that, if anything, the cue-density bias is stronger in outcome predictions than in causal judgments. These results contradict key assumptions of many influential theories of contingency learning.
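The quantities at issue can be made concrete. As a hedged sketch (the trial counts below are synthetic, not the experiment's materials), the normative contingency index ΔP and the manipulated cue density are:

```python
# Minimal sketch: ΔP = P(outcome|cue) - P(outcome|no cue), the normative
# contingency, and cue density, the proportion of trials presenting the cue.
# Cue-density-biased judgments rise with P(cue) even when ΔP stays at zero.
def delta_p(trials):
    with_cue = [o for c, o in trials if c]
    without_cue = [o for c, o in trials if not c]
    return sum(with_cue) / len(with_cue) - sum(without_cue) / len(without_cue)

def cue_density(trials):
    return sum(1 for c, _ in trials if c) / len(trials)

# Null contingency (ΔP = 0) under high cue density: 6 of 8 trials show the cue.
trials = [(1, 1), (1, 1), (1, 1), (1, 0), (1, 0), (1, 0), (0, 1), (0, 0)]
```

On this design, a normative judge should report zero contingency; judgments that instead track cue density exhibit the bias the experiment measured in both causal and predictive ratings.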
NASA Technical Reports Server (NTRS)
Witt, Kenneth J.; Stanley, Jason; Shendock, Robert; Mandl, Daniel
2005-01-01
Space Technology 5 (ST-5) is a three-satellite constellation, technology-validation mission under the New Millennium Program at NASA, to be launched in March 2006. One of the key technologies to be validated is a lights-out, model-based operations approach to be used for one week to control the ST-5 constellation with no manual intervention. The ground architecture features the GSFC Mission Services Evolution Center (GMSEC) middleware, which allows easy plug-in of software components and a standardized messaging protocol over a software bus. A predictive modeling tool built on MatLab's Simulink software package uses the GMSEC standard messaging protocol to interface to the Advanced Mission Planning System (AMPS) Scenario Scheduler, which controls all activities, resource allocation, and real-time re-profiling of constellation resources when non-nominal events occur. The key features of this system, which we refer to as the ST-5 Simulink system, are as follows. The original daily plan is checked to make sure that the predicted resources needed are available by comparing the plan against the model. As the plan is run in real time, the system re-profiles future activities if planned activities do not occur in the predicted timeframe or fashion. Alert messages are sent out on the GMSEC bus if future predicted problems are detected, allowing the Scenario Scheduler to correct the situation before the problem happens. The predictive model is evolved automatically over time via telemetry updates, reducing the cost of implementing and maintaining the models by an order of magnitude relative to previous efforts at GSFC, such as the model-based system built for MAP in the mid-1990s. This paper describes the key features, lessons learned, and implications for future missions once this system is successfully validated on-orbit in 2006.
The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eden, H.F.; Mooers, C.N.K.
1990-06-01
The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean, including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.
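Data assimilation in its simplest form can be sketched as follows; this is a generic "nudging" update with an illustrative gain, not the assimilation scheme COPS itself specifies:

```python
# Minimal sketch (illustrative gain, not COPS's method): assimilating an
# observation into a model forecast with a scalar nudging update.
def assimilate(forecast, observation, gain=0.3):
    # Analysis = forecast corrected toward the observation by the gain factor;
    # gain near 0 trusts the model, gain near 1 trusts the observation.
    return forecast + gain * (observation - forecast)

# A model forecasts 18.0 degC sea-surface temperature; a buoy reports 20.0 degC.
analysis = assimilate(18.0, 20.0)
```

Operational systems generalize this scalar update to full state vectors, with gains derived from the error statistics of the model and the observing network.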
Kiviniemi, Marc T; Brown-Kramer, Carolyn R
2015-05-01
Most health decision-making models posit that deciding to engage in a health behavior involves forming a behavioral intention which then leads to actual behavior. However, behavioral intentions and actual behavior may not be functionally equivalent. Two studies examined whether decision-making factors predicting dietary behaviors were the same as or distinct from those predicting intentions. Actual dietary behavior was proximally predicted by affective associations with the behavior. By contrast, behavioral intentions were predicted by cognitive beliefs about behaviors, with no contribution of affective associations. This dissociation has implications for understanding individual regulation of health behaviors and for behavior change interventions. © The Author(s) 2015.
ERIC Educational Resources Information Center
Scherer, Aaron M.; Windschitl, Paul D.; O'Rourke, Jillian; Smith, Andrew R.
2012-01-01
People must often engage in sequential sampling in order to make predictions about the relative quantities of two options. We investigated how directional motives influence sampling selections and resulting predictions in such cases. We used a paradigm in which participants had limited time to sample items and make predictions about which side of…
A simplified model for tritium permeation transient predictions when trapping is active
NASA Astrophysics Data System (ADS)
Longhurst, G. R.
1994-09-01
This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.
Contingency Management and Deliberative Decision-Making Processes.
Regier, Paul S; Redish, A David
2015-01-01
Contingency management is an effective treatment for drug addiction. The current explanation for its success is rooted in alternative reinforcement theory. We suggest that alternative reinforcement theory is inadequate to explain the success of contingency management and produce a model based on demand curves that show how little the monetary rewards offered in this treatment would affect drug use. Instead, we offer an explanation of its success based on the concept that it accesses deliberative decision-making processes. We suggest that contingency management is effective because it offers a concrete and immediate alternative to using drugs, which engages deliberative processes, improves the ability of those deliberative processes to attend to non-drug options, and offsets more automatic action-selection systems. This theory makes explicit predictions that can be tested, suggests which users will be most helped by contingency management, and suggests improvements in its implementation.
Cue-based decision making. A new framework for understanding the uninvolved food consumer.
Hamlin, Robert P
2010-08-01
This article examines the processes that occur within the consumer's head as they make a choice between alternative market offers at a low level of involvement. It discusses recent research that indicates that the Theory of Planned Behaviour and its derivatives have restricted validity as a predictor of food consumers' evaluations and purchase patterns. This has significant implications as Planned Behaviour is the dominant paradigm within food industry research. The article demonstrates that Planned Behaviour has acquired this status more by default than by proven merit. The specific reasons for the failure of Planned Behaviour are discussed. An alternative paradigm, Cue-Based Decision Making is developed from an existing literature, and is proposed as a basis for increasing our understanding of the uninvolved food consumer in order to predict and influence their behaviour. 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Boudala, Faisal; Wu, Di; Gultepe, Ismail; Anderson, Martha; Turcotte, Marie-France
2017-04-01
In-flight aircraft icing is one of the major weather hazards to aviation. It occurs when an aircraft passes through a cloud layer containing supercooled drops (SD). SD in contact with the airframe freeze on its surface, which degrades the performance of the aircraft. Prediction of in-flight icing requires accurate prediction of SD sizes, liquid water content (LWC), and temperature. Current numerical weather prediction (NWP) models are not capable of making accurate predictions of SD sizes and the associated LWC. The aircraft icing environment is normally studied by flying research aircraft, which is quite expensive. Thus, developing a ground-based remote sensing system for detecting supercooled liquid clouds and characterizing their impact on the severity of aircraft icing is one of the important tasks for improving NWP-based predictions and validations. In this respect, Environment and Climate Change Canada (ECCC), in cooperation with the Department of National Defence (DND), installed a number of specialized ground-based remote sensing platforms and present-weather sensors at Cold Lake, Alberta, including a multi-channel microwave radiometer (MWR), a K-band Micro Rain Radar (MRR), a ceilometer, a Parsivel disdrometer, and a Vaisala PWD22 present-weather sensor. In this study, a number of pilot reports (PIREPs) confirming icing events and freezing precipitation that occurred at Cold Lake during the 2014-2016 winter periods, together with the associated observation data for the same period, are examined. The icing events are also examined using aircraft icing intensity estimated with an ice accumulation model, which is based on a cylindrical approximation of the airfoil and on the LWC, median volume diameter, and temperature predicted by the Canadian High Resolution Regional Deterministic Prediction System (HRDPS).
The results related to vertical atmospheric profiling conditions, surface observations, and the HRDPS model predictions are given. Preliminary results suggest that observations of cloud SD regions from remote sensing and present-weather sensors can be used to describe the micro- and macrophysical characteristics of icing conditions. The model-based icing intensity predictions agreed reasonably well with the PIREPs and MWR observations.
What Makes for a Good Teacher and Who Can Tell? Working Paper 30
ERIC Educational Resources Information Center
Harris, Douglas N.; Sass, Tim R.
2009-01-01
Mounting pressure in the policy arena to improve teacher productivity either by improving signals that predict teacher performance or through creating incentive contracts based on performance--has spurred two related questions: Are there important determinants of teacher productivity that are not captured by teacher credentials but that can be…
Examining Secondary Writing: Curriculum-Based Measures and Six Traits
ERIC Educational Resources Information Center
Havlin, Patricia J.
2013-01-01
Writing assessments have taken two primary forms in the past two decades: direct and indirect. Irrespective of type, either form needs to be anchored to making decisions in the classroom and predicting performance on high-stakes tests, particularly in a high-stakes environment with serious consequences. In this study, 11th-grade students were…
Using Web-Based Collaborative Forecasting to Enhance Information Literacy and Disciplinary Knowledge
ERIC Educational Resources Information Center
Buckley, Patrick; Doyle, Elaine
2016-01-01
This paper outlines how an existing collaborative forecasting tool called a prediction market (PM) can be integrated into an educational context to enhance information literacy skills and cognitive disciplinary knowledge. The paper makes a number of original contributions. First, it describes how this tool can be packaged as a pedagogical…
USDA-ARS?s Scientific Manuscript database
Objective: To examine the risk factors of developing functional decline and make probabilistic predictions by using a tree-based method that allows higher order polynomials and interactions of the risk factors. Methods: The conditional inference tree analysis, a data mining approach, was used to con...
Working Memory Involved in Predicting Future Outcomes Based on Past Experiences
ERIC Educational Resources Information Center
Dretsch, Michael N.; Tipples, Jason
2008-01-01
Deficits in working memory have been shown to contribute to poor performance on the Iowa Gambling Task [IGT: Bechara, A., & Martin, E.M. (2004). "Impaired decision making related to working memory deficits in individuals with substance addictions." "Neuropsychology," 18, 152-162]. Similarly, a secondary memory load task has been shown to impair…
Predicting Effective Course Conduction Strategy Using Datamining Techniques
ERIC Educational Resources Information Center
Parkavi, A.; Lakshmi, K.; Srinivasa, K. G.
2017-01-01
Data analysis techniques can be used to analyze the pattern of data in different fields. Based on the analysis' results, it is recommended that suggestions be provided to decision making authorities. The data mining techniques can be used in educational domain to improve the outcome of the educational sectors. The authors carried out this research…
Read-across is a technique used to fill data gaps within chemical safety assessments. It is based on the premise that chemicals with similar structures are likely to have similar biological activities. Known information on the property of a chemical (source) is used to make a pre...
Comparability of Essay Question Variants
ERIC Educational Resources Information Center
Bridgeman, Brent; Trapani, Catherine; Bivens-Tatum, Jennifer
2011-01-01
Writing task variants can increase test security in high-stakes essay assessments by substantially increasing the pool of available writing stimuli and by making the specific writing task less predictable. A given prompt (parent) may be used as the basis for one or more different variants. Six variant types based on argument essay prompts from a…
Quality Quandaries: Predicting a Population of Curves
Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip
2017-12-19
We present a random effects spline regression model based on splines that provides an integrated approach for analyzing functional data, i.e., curves, when the shape of the curves is not parametrically specified. An analysis using this model is presented that makes inferences about a population of curves as well as features of the curves.
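As a toy illustration of inference about a population of curves, the sketch below computes a pointwise mean curve and pointwise spread for curves sampled on a common grid. The paper's model additionally smooths each curve with splines and treats curve-specific deviations as random effects; both refinements are omitted here.

```python
import statistics

def pointwise_population_summary(curves):
    # Curves sampled on a common grid (equal-length lists of y-values):
    # return the pointwise mean curve and pointwise standard deviation
    # across the population of curves.
    n = len(curves[0])
    mean_curve = [statistics.fmean(c[i] for c in curves) for i in range(n)]
    sd_curve = [statistics.stdev([c[i] for c in curves]) for i in range(n)]
    return mean_curve, sd_curve
```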
Evaluation of Communication about Groups: The Hydra Phenomenon.
ERIC Educational Resources Information Center
Desmond, Roger Jon; Bezzini, John
A study investigated how the attribution of a problem solution to an individual or group affects the consumer's perception of the solution's quality. Based on the tendency to support group decision-making (Hydra phenomenon) it was predicted that decisions attributed to groups would be perceived as higher in quality than those made by individuals,…
Regional-scale air quality models are used to estimate the response of air pollutants to potential emission control strategies as part of the decision-making process. Traditionally, the model predicted pollutant concentrations are evaluated for the “base case” to assess a model’s...
How To Make an Impact with Planetary Science. Part II.
ERIC Educational Resources Information Center
Scott, Robert
2002-01-01
Explains how the moon provides information about the evolution of the solar system and offers scope for physics-based investigations. Uses statistical analysis of real scientific data with which students can predict the diameter and depth of impact craters then compare them with data gathered in institutions or laboratories. (Author/YDS)
This research makes use of in vitro and in vivo approaches to understand and discriminate the compensatory and toxicological responses of the highly regulated HPT system. Development of an initial systems model will be based on the current understanding of the HPT axis and the co...
Seasonal fire danger forecasts for the USA
J. Roads; F. Fujioka; S. Chen; R. Burgan
2005-01-01
The Scripps Experimental Climate Prediction Center has been making experimental, near-real-time, weekly to seasonal fire danger forecasts for the past 5 years. US fire danger forecasts and validations are based on standard indices from the National Fire Danger Rating System (NFDRS), which include the ignition component (IC), energy release component (ER), burning...
Knowledge-Based Inferences across the Hemispheres: Domain Makes a Difference
ERIC Educational Resources Information Center
Shears, Connie; Hawkins, Amanda; Varner, Andria; Lewis, Lindsey; Heatley, Jennifer; Twachtmann, Lisa
2008-01-01
Language comprehension occurs when the left-hemisphere (LH) and the right-hemisphere (RH) share information derived from discourse [Beeman, M. J., Bowden, E. M., & Gernsbacher, M. A. (2000). Right and left hemisphere cooperation for drawing predictive and coherence inferences during normal story comprehension. "Brain and Language, 71", 310-336].…
A universal deep learning approach for modeling the flow of patients under different severities.
Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L
2018-02-01
The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, A&ED resources have failed to keep up with continuously increasing demand in recent years, which leads to overcrowding in the A&ED. Knowing the fluctuation of patient arrival volume in advance is a significant premise for relieving this pressure. Based on this motivation, the objective of this study is to explore an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, by combining a novel feature selection process with deep neural networks. Administrative data were collected from an actual A&ED and categorized into five groups based on triage level. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to explore the key features affecting patient flow. In our improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during the iterative process, instead of the traditional point-based crossover. Deep neural networks (DNNs) are employed as the prediction model for their universal adaptability and high flexibility. In the model-training process, the learning algorithm is configured around a parallel stochastic gradient descent algorithm. Two effective regularization strategies are integrated in one DNN framework to avoid overfitting. All introduced hyper-parameters are optimized efficiently by grid search in one pass. As for feature selection, our improved GA-based feature selection algorithm outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR).
As for the prediction accuracy of the proposed integrated framework, compared with other frequently used statistical models (GLM, seasonal ARIMA, ARIMAX, and ANN) and modern machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher prediction accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of our study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, and the joint information of multiple features is maintained by the fitness-based crossover operator. The universal property of DNNs is further enhanced by merging different regularization strategies. Practically, features selected by our improved GA can be used to uncover the underlying relationship between patient flows and input features. Predicted values are significant indicators of patient demand and can be used by A&ED managers for resource planning and allocation. The high accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.
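A hedged sketch of a GA feature selector using a fitness-biased uniform crossover in place of point-based crossover, which is one plausible reading of the operator described in the abstract; the function names, parameters, and scoring interface are illustrative, not the authors' implementation.

```python
import random

def fitness_based_crossover(parent_a, parent_b, fit_a, fit_b):
    # Uniform crossover biased toward the fitter parent: each child gene is
    # drawn from parent_a with probability proportional to parent_a's fitness,
    # so jointly useful features tend to be inherited together.
    total = fit_a + fit_b
    p = 0.5 if total <= 0 else min(1.0, max(0.0, fit_a / total))
    return [a if random.random() < p else b for a, b in zip(parent_a, parent_b)]

def ga_select(n_features, score_fn, pop_size=20, gens=30, mut=0.05):
    # Evolve 0/1 feature masks: keep the top half (elitism), breed the rest
    # with fitness-based crossover, and flip each gene with probability `mut`.
    pop = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=score_fn, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = fitness_based_crossover(a, b, score_fn(a), score_fn(b))
            children.append([g ^ (random.random() < mut) for g in child])
        pop = elite + children
    return max(pop, key=score_fn)
```

In practice `score_fn` would wrap a cross-validated prediction model; any callable on a 0/1 mask works for experimentation.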
A Limitation with Least Squares Predictions
ERIC Educational Resources Information Center
Bittner, Teresa L.
2013-01-01
Although researchers have documented that some data make larger contributions than others to predictions made with least squares models, it is relatively unknown that some data actually make no contribution to the predictions produced by these models. This article explores such noncontributory data. (Contains 1 table and 2 figures.)
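The claim that some data contribute nothing can be checked directly for simple regression: a point lying exactly on the fitted least-squares line leaves the fit, and hence every prediction, unchanged, because the original line still satisfies the normal equations for the enlarged data. A small sketch:

```python
def ols(xs, ys):
    # Simple-regression least squares: returns (slope, intercept).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.1, 0.9, 2.2, 2.8]
slope, intercept = ols(xs, ys)

# Add a fifth point lying exactly on the fitted line: its residual is zero,
# so the normal equations are still satisfied by the old line and the fit
# (and every prediction) is unchanged -- the new point contributes nothing.
xs2 = xs + [10.0]
ys2 = ys + [slope * 10.0 + intercept]
slope2, intercept2 = ols(xs2, ys2)
```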
Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai
2015-01-01
Objectives For successful adoption of legislation controlling the registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, methods for predicting the toxicities of chemical substances during the acquisition of required data ultimately become an economical way of dealing with new substances. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured 10 representative QSAR-based prediction models, and the associated information, that can make predictions about substances that are expected to be regulated. We used the models to predict and confirm the usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and the relevant data, we prepared methods for quantifying scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared the adequacies of the models using the Alternative non-testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared the inclusion rates of substances expected to be regulated, to confirm applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database.
Based on this data, we aimed to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aimed to provide a foundation that could be used in actual assessments and regulations. PMID:26206368
Prognostics and Health Monitoring: Application to Electric Vehicles
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.
2017-01-01
As more and more autonomous electric vehicles progressively enter daily operation, a very critical challenge lies in accurate prediction of the remaining useful life of their systems and subsystems, specifically the electrical powertrain. In the case of electric aircraft, computing the remaining flying time is safety-critical, since an aircraft that runs out of power (battery charge) while in the air will eventually lose control, leading to catastrophe. In order to solve the prediction problem, it is essential to have awareness of the current state and health of the system, especially since it is necessary to perform condition-based predictions. To be able to predict the future state of the system, it is also required to possess knowledge of the current and future operations of the vehicle. Our research approach is to develop a system-level health monitoring safety indicator for the pilot/autopilot of electric vehicles, which runs estimation and prediction algorithms to estimate the remaining useful life of the vehicle, e.g., to determine the state of charge of the batteries. Given models of the current and future system behavior, a general approach of model-based prognostics can be employed as a solution to the prediction problem and, further, for decision making.
Proactive Supply Chain Performance Management with Predictive Analytics
Stefanovic, Nenad
2014-01-01
Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different level of details. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to future business environment. PMID:25386605
A Predictive Model for Medical Events Based on Contextual Embedding of Temporal Sequences
Wang, Zhimu; Huang, Yingxiang; Wang, Shuang; Wang, Fei; Jiang, Xiaoqian
2016-01-01
Background Medical concepts are inherently ambiguous and error-prone due to human fallibility, which makes it hard for them to be fully used by classical machine learning methods (eg, for tasks like early stage disease prediction). Objective Our work was to create a new machine-friendly representation that resembles the semantics of medical concepts. We then developed a sequential predictive model for medical events based on this new representation. Methods We developed novel contextual embedding techniques to combine different medical events (eg, diagnoses, prescriptions, and lab tests). Each medical event is converted into a numerical vector that resembles its “semantics,” via which the similarity between medical events can be easily measured. We developed simple and effective predictive models based on these vectors to predict novel diagnoses. Results We evaluated our sequential prediction model (and standard learning methods) in estimating the risk of potential diseases based on our contextual embedding representation. Our model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.79 on chronic systolic heart failure and an average AUC of 0.67 (over the 80 most common diagnoses) using the Medical Information Mart for Intensive Care III (MIMIC-III) dataset. Conclusions We propose a general early prognosis predictor for 80 different diagnoses. Our method computes a numeric representation for each medical event to uncover the potential meaning of those events. Our results demonstrate the efficiency of the proposed method, which will benefit patients and physicians by offering more accurate diagnoses. PMID:27888170
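A toy sketch of the idea behind contextual embeddings: represent each medical event by its co-occurrence counts with other events, then compare events by cosine similarity. The event names and the simple counting scheme are illustrative assumptions; the paper's embedding technique is more sophisticated.

```python
import math
from collections import Counter
from itertools import combinations

def cosine(u, v):
    # Cosine similarity between two count vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def cooccurrence_embeddings(records, vocab):
    # Embed each event as its co-occurrence counts with every vocabulary event.
    counts = {e: Counter() for e in vocab}
    for visit in records:
        for a, b in combinations(sorted(set(visit)), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return {e: [counts[e][f] for f in vocab] for e in vocab}

# Hypothetical toy records: two "visits" pair heart failure with a diuretic,
# two pair flu with an antiviral.
records = [
    ["dx:chf", "rx:diuretic", "lab:bnp"],
    ["dx:chf", "rx:diuretic"],
    ["dx:flu", "rx:antiviral"],
    ["dx:flu", "rx:antiviral", "lab:swab"],
]
vocab = sorted({e for visit in records for e in visit})
emb = cooccurrence_embeddings(records, vocab)
```

Events that share contexts end up with similar vectors, which is the property the downstream predictive model exploits.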
NASA Astrophysics Data System (ADS)
King, Steven Gray
Geographic information systems (GIS) reveal relationships and patterns from large quantities of diverse data in the form of maps and reports. The United States spends billions of dollars to use GIS to improve decisions made during responses to natural disasters and terrorist attacks, but precisely how GIS improves or impairs decision making is not known. This research examined how GIS affect decision making during natural disasters, and how GIS can be more effectively used to improve decision making for emergency management. Using a qualitative case study methodology, this research examined decision making at the U.S. Department of Homeland Security (DHS) during a large full-scale disaster exercise. This study indicates that GIS provided decision makers at DHS with an outstanding context for information that would otherwise be challenging to understand, especially through the integration of multiple data sources and dynamic three-dimensional interactive maps. Decision making was hampered by outdated information, a reliance on predictive models based on hypothetical data rather than actual event data, and a lack of understanding of the capabilities of GIS beyond cartography. Geospatial analysts, emergency managers, and other decision makers who use GIS should take specific steps to improve decision making based on GIS for disaster response and emergency management.
Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions
NASA Astrophysics Data System (ADS)
Wrench, Alan A.
Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking, however, its stability is shown to be unacceptably poorer than existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed. 
In addition, a new method of power spectrum estimation is developed based on the Minimum Variance approach. This new algorithm is shown to be closely related to Linear Prediction but produces slightly broader spectral peaks. Spectrum squaring is applied to both the new algorithm and standard Linear Prediction and their relative performance is assessed. (Abstract shortened by UMI.).
Modeling Interdependent and Periodic Real-World Action Sequences
Kurashima, Takeshi; Althoff, Tim; Leskovec, Jure
2018-01-01
Mobile health applications, including those that track activities such as exercise, sleep, and diet, are becoming widely used. Accurately predicting human actions in the real world is essential for targeted recommendations that could improve our health and for personalization of these applications. However, making such predictions is extremely difficult due to the complexities of human behavior, which consists of a large number of potential actions that vary over time, depend on each other, and are periodic. Previous work has not jointly modeled these dynamics and has largely focused on item consumption patterns instead of broader types of behaviors such as eating, commuting or exercising. In this work, we develop a novel statistical model, called TIPAS, for Time-varying, Interdependent, and Periodic Action Sequences. Our approach is based on personalized, multivariate temporal point processes that model time-varying action propensities through a mixture of Gaussian intensities. Our model captures short-term and long-term periodic interdependencies between actions through Hawkes process-based self-excitations. We evaluate our approach on two activity logging datasets comprising 12 million real-world actions (e.g., eating, sleep, and exercise) taken by 20 thousand users over 17 months. We demonstrate that our approach allows us to make successful predictions of future user actions and their timing. Specifically, TIPAS improves predictions of actions, and their timing, over existing methods across multiple datasets by up to 156%, and up to 37%, respectively. Performance improvements are particularly large for relatively rare and periodic actions such as walking and biking, improving over baselines by up to 256%. This demonstrates that explicit modeling of dependencies and periodicities in real-world behavior enables successful predictions of future actions, with implications for modeling human behavior, app personalization, and targeting of health interventions. 
PMID:29780977
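The Hawkes-process self-excitation used in the TIPAS model above can be sketched directly: each past action transiently raises the conditional intensity of future actions. The parameter values here are illustrative defaults, not fitted model parameters.

```python
import math

def hawkes_intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
    # Conditional intensity of a univariate Hawkes process:
    #   lambda(t) = mu + sum over past events t_i < t of alpha*exp(-beta*(t - t_i))
    # Each past event transiently raises the rate of the next one, and the
    # excitation decays exponentially at rate beta.
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)
```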
Nonparametric method for failures diagnosis in the actuating subsystem of aircraft control system
NASA Astrophysics Data System (ADS)
Terentev, M. N.; Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.
2018-02-01
In this paper we design a nonparametric method for failure diagnosis in the aircraft control system that uses only measurements of the control signals and the aircraft states. It does not require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on an analytical nonparametric one-step-ahead state prediction approach. This makes it possible to predict the behavior of unidentified and failed dynamic systems, to weaken the requirements on control signals, and to reduce the diagnostic time and problem complexity.
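A scalar toy version of one-step-ahead prediction for fault detection: fit a locally linear state-transition model over a sliding window of recent measurements, predict the next state, and flag a failure when the prediction error jumps. The paper's method is analytical and model-free; the linear model, window length, and threshold below are illustrative assumptions.

```python
def fit_ab(xs, us, xnext):
    # Least-squares fit of x[k+1] ~ a*x[k] + b*u[k] (2x2 normal equations).
    sxx = sum(x * x for x in xs)
    suu = sum(u * u for u in us)
    sxu = sum(x * u for x, u in zip(xs, us))
    sxy = sum(x * y for x, y in zip(xs, xnext))
    suy = sum(u * y for u, y in zip(us, xnext))
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - suy * sxu) / det
    b = (suy * sxx - sxy * sxu) / det
    return a, b

def detect_failures(x, u, window=10, threshold=0.5):
    # Refit on the most recent `window` transitions only, predict one step
    # ahead, and raise an alarm when the prediction error exceeds `threshold`.
    alarms = []
    for k in range(window, len(x) - 1):
        a, b = fit_ab(x[k - window:k], u[k - window:k], x[k - window + 1:k + 1])
        alarms.append(abs(x[k + 1] - (a * x[k] + b * u[k])) > threshold)
    return alarms
```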
Chen, J D; Sun, H L
1999-04-01
Objective. To assess and predict the reliability of equipment dynamically by making full use of the various test information generated during product development. Method. A new reliability growth assessment method based on the Army Material System Analysis Activity (AMSAA) model was developed. The method combines the AMSAA model with test-data conversion technology. Result. The assessment and prediction results for a piece of space-borne equipment conform to expectations. Conclusion. It is suggested that this method should be further researched and popularized.
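For reference, the AMSAA (Crow) power-law model that underlies such reliability growth assessment has standard maximum-likelihood estimators for time-terminated test data. The sketch below implements only those textbook estimators, not the paper's combined method with test-data conversion.

```python
import math

def amsaa_fit(failure_times, T):
    # Crow/AMSAA power-law NHPP, time-terminated test: expected cumulative
    # failures N(t) = lam * t**beta.  Standard maximum-likelihood estimators:
    #   beta = n / sum(ln(T / t_i)),   lam = n / T**beta
    # Instantaneous MTBF at T is 1 / (lam * beta * T**(beta - 1));
    # beta < 1 indicates reliability growth (a decreasing failure rate).
    n = len(failure_times)
    beta = n / sum(math.log(T / ti) for ti in failure_times)
    lam = n / T ** beta
    mtbf_inst = 1.0 / (lam * beta * T ** (beta - 1))
    return lam, beta, mtbf_inst
```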
Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer
NASA Astrophysics Data System (ADS)
Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana
2017-03-01
Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would best be treated with optimized treatment and therefore minimize the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessment of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis from radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available and they were sequestered in a test set. The CT-HN lesions were automatically segmented using our level sets based method. Morphological, texture and molecular based features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual domains of biomarkers are useful and the integrated multi-domain approach is most promising for tumor progression prediction.
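The AUC used above to quantify classification accuracy can be computed directly from classifier scores via the rank (Mann-Whitney) statistic, without constructing the ROC curve:

```python
def auc(pos_scores, neg_scores):
    # AUC as the Mann-Whitney rank statistic: the probability that a randomly
    # chosen positive case scores above a randomly chosen negative case
    # (ties count one half).
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```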
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-11-01
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of the earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
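The Ogata-Banks data model mentioned above is the classical closed-form solution of the one-dimensional advection-dispersion equation for steady flow. A minimal sketch of that solution, with arbitrary illustrative parameter values rather than the EIT site's:

```python
import math

# Sketch of the Ogata-Banks solution for 1-D advective-dispersive
# transport under steady flow, the physically based data model adopted
# in the study. Parameter values here are arbitrary illustrations.

def ogata_banks(x, t, v, D, c0):
    """Concentration at distance x and time t, for seepage velocity v,
    dispersion coefficient D, and constant source concentration c0."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# Concentration rises toward c0 as the plume passes the observation point.
print(ogata_banks(x=10.0, t=1.0, v=1.0, D=5.0, c0=1.0) <
      ogata_banks(x=10.0, t=50.0, v=1.0, D=5.0, c0=1.0))   # -> True
```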
Episodic memories predict adaptive value-based decision-making
Murty, Vishnu; FeldmanHall, Oriel; Hunter, Lindsay E.; Phelps, Elizabeth A; Davachi, Lila
2016-01-01
Prior research illustrates that memory can guide value-based decision-making. For example, previous work has implicated both working memory and procedural memory (i.e., reinforcement learning) in guiding choice. However, other types of memories, such as episodic memory, may also influence decision-making. Here we test the role of episodic memory—specifically item versus associative memory—in supporting value-based choice. Participants completed a task where they first learned the value associated with trial-unique lotteries. After a short delay, they completed a decision-making task where they could choose to re-engage with previously encountered lotteries or with new, never-before-seen lotteries. Finally, participants completed a surprise memory test for the lotteries and their associated values. Results indicate that participants chose to re-engage more often with lotteries that had resulted in high versus low rewards. Critically, participants not only formed detailed, associative memories for the reward values coupled with individual lotteries, but also exhibited adaptive decision-making only when they had intact associative memory. We further found that the relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making. However, individuals more strongly encoded experiences of social violations, such as being treated unfairly, suggesting a bias in how individuals form associative memories within social contexts. Together, these findings provide an important integration of the episodic memory and decision-making literatures to better understand key mechanisms supporting adaptive behavior. PMID:26999046
Competence and Quality in Real-Life Decision Making.
Geisler, Martin; Allwood, Carl Martin
2015-01-01
What distinguishes a competent decision maker, and how should the issue of decision quality be approached in a real-life context? These questions were explored in three studies. In Study 1, using a web-based questionnaire and targeting a community sample, we investigated the relationships between objective and subjective indicators of real-life decision-making success. In Studies 2 and 3, targeting two different samples of professionals, we explored whether the prevalent cognitively oriented definition of decision-making competence could be beneficially expanded by adding aspects of competence in terms of social skills and time-approach. The predictive power of each of these three aspects of decision-making competence was explored for different indicators of real-life decision-making success. Overall, our results suggest that research on decision-making competence would benefit from expanding the definition of competence to include decision-related abilities in terms of social skills and time-approach. Finally, the results also indicate that individual differences in real-life decision-making success can profitably be approached and measured by different criteria.
Connecting clinical and actuarial prediction with rule-based methods.
Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H
2015-06-01
Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main-effect models usually employed in prediction studies, from data-analytic, decision-analytic, and practical perspectives. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main-effect models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
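A 2-rule model used as a fast and frugal tree checks cues sequentially and stops as soon as a rule fires. A hypothetical sketch of this decision logic, with invented cue names and thresholds rather than the rules RuleFit actually derived from the Penninx et al. dataset:

```python
# Hypothetical sketch of a 2-rule model acting as a fast and frugal tree:
# cues are evaluated one at a time and a prediction is returned as soon
# as a rule fires. Cue names and thresholds are invented for illustration,
# not the rules derived in the study.

def predict_course(patient):
    # Rule 1: high baseline symptom severity -> predict chronic course.
    if patient["severity"] >= 20:
        return "chronic"
    # Rule 2: long symptom duration -> predict chronic course.
    if patient["duration_months"] >= 24:
        return "chronic"
    # Neither rule fired: predict remission, no further cues needed.
    return "remission"

print(predict_course({"severity": 12, "duration_months": 30}))  # -> chronic
```

The sequential structure is what reduces the number of cues evaluated per case: here at most two comparisons, versus twenty predictors in the logistic regression model.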
NASA Astrophysics Data System (ADS)
Peng, L.; Pan, H.; Ma, H.; Zhao, P.; Qin, R.; Deng, C.
2017-12-01
The irreducible water saturation (Swir) is a vital parameter for permeability prediction and original oil and gas estimation. However, the complex pore structure of the rocks makes this parameter difficult to calculate from both laboratory and conventional well-logging methods. In this study, an effective statistical method to predict Swir is derived directly from nuclear magnetic resonance (NMR) data based on fractal theory. The transversal relaxation time (T2) spectrum is normally considered an indicator of pore size distribution, and the fractal dimensions of the micro- and meso-pores are calculated over two specific ranges of the T2 spectrum distribution. Based on the analysis of the fractal characteristics of 22 core samples, drilled from four boreholes of tight lithologic oil reservoirs of the Ordos Basin in China, a positive correlation between Swir and porosity is derived. A predictive model for Swir based on linear regressions of the fractal dimensions is then proposed. It reveals that Swir is controlled by the pore size and the roughness of the pores. The reliability of this model is tested, and an ideal consistency between predicted results and experimental data is found. This model is a reliable supplement for predicting the irreducible water saturation in cases where the T2 cutoff value cannot be accurately determined.
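A fractal dimension of this kind is typically estimated as the slope of a log-log regression over a range of the T2 spectrum. A toy sketch under the common assumption D = 3 - slope, using synthetic power-law data rather than the core-sample measurements:

```python
import math

# Illustrative sketch: estimating a fractal dimension from a T2 spectrum
# as the slope of a log-log regression, in the spirit of the paper's
# approach. The relation D = 3 - slope and the synthetic data below are
# assumptions for demonstration only.

def lsq_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic cumulative saturation S(T2) following a power law T2**0.5.
t2 = [0.1, 0.3, 1.0, 3.0, 10.0]
sat = [t ** 0.5 for t in t2]
slope = lsq_slope([math.log(t) for t in t2], [math.log(s) for s in sat])
print(round(3 - slope, 2))  # fractal dimension estimate -> 2.5
```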
Malinowski, Douglas P
2007-05-01
In recent years, the application of genomic and proteomic technologies to the problem of breast cancer prognosis and the prediction of therapy response have begun to yield encouraging results. Independent studies employing transcriptional profiling of primary breast cancer specimens using DNA microarrays have identified gene expression profiles that correlate with clinical outcome in primary breast biopsy specimens. Recent advances in microarray technology have demonstrated reproducibility, making clinical applications more achievable. In this regard, one such DNA microarray device based upon a 70-gene expression signature was recently cleared by the US FDA for application to breast cancer prognosis. These DNA microarrays often employ at least 70 gene targets for transcriptional profiling and prognostic assessment in breast cancer. The use of PCR-based methods utilizing a small subset of genes has recently demonstrated the ability to predict the clinical outcome in early-stage breast cancer. Furthermore, protein-based immunohistochemistry methods have progressed from using gene clusters and gene expression profiling to smaller subsets of expressed proteins to predict prognosis in early-stage breast cancer. Beyond prognostic applications, DNA microarray-based transcriptional profiling has demonstrated the ability to predict response to chemotherapy in early-stage breast cancer patients. In this review, recent advances in the use of multiple markers for prognosis of disease recurrence in early-stage breast cancer and the prediction of therapy response will be discussed.
Valencia-Palomo, G; Rossiter, J A
2011-01-01
This paper makes two key contributions. First, it tackles the issue of the availability of constrained predictive control for low-level control loops. Hence, it describes how the constrained control algorithm is embedded in an industrial programmable logic controller (PLC) using the IEC 61131-3 programming standard. Second, there is a definition and implementation of a novel auto-tuned predictive controller; the key novelty is that the modelling is based on relatively crude but pragmatic plant information. Laboratory tests were carried out on two bench-scale systems to prove the effectiveness of the combined algorithm and hardware solution. For completeness, the results are compared with a commercial proportional-integral-derivative (PID) controller (also embedded in the PLC) using the most up-to-date auto-tuning rules. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Social Sentiment Sensor in Twitter for Predicting Cyber-Attacks Using ℓ1 Regularization
Sanchez-Perez, Gabriel; Toscano-Medina, Karina; Martinez-Hernandez, Victor; Olivares-Mercado, Jesus; Sanchez, Victor
2018-01-01
In recent years, online social media information has been the subject of study in several data science fields due to its impact on users as a communication and expression channel. Data gathered from online platforms such as Twitter has the potential to facilitate research over social phenomena based on sentiment analysis, which usually employs Natural Language Processing and Machine Learning techniques to interpret sentimental tendencies related to users’ opinions and make predictions about real events. Cyber-attacks are not isolated from opinion subjectivity on online social networks. Various security attacks are performed by hacker activists motivated by reactions from polemic social events. In this paper, a methodology for tracking social data that can trigger cyber-attacks is developed. Our main contribution lies in the monthly prediction of tweets with content related to security attacks and the incidents detected based on ℓ1 regularization. PMID:29710833
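ℓ1 regularization shrinks uninformative feature weights exactly to zero, which is what makes it attractive for sparse text-derived features. A minimal sketch of ℓ1-regularized least squares (the lasso) solved by proximal gradient descent with soft-thresholding; the tiny design matrix is illustrative, not tweet data:

```python
# Sketch of l1-regularized least squares (lasso) solved by proximal
# gradient descent (ISTA). The tiny design matrix is illustrative; the
# paper's actual features come from processed tweet content.

def soft_threshold(z, t):
    """Proximal operator of the l1 norm."""
    return max(z - t, 0.0) - max(-z - t, 0.0)

def lasso(X, y, lam, step=0.01, iters=5000):
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        # Gradient of 0.5 * ||Xw - y||^2
        r = [sum(X[i][j] * w[j] for j in range(p)) - y[i] for i in range(n)]
        g = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]
        w = [soft_threshold(w[j] - step * g[j], step * lam) for j in range(p)]
    return w

# Second feature is pure noise; the l1 penalty drives its weight to zero.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso(X, y, lam=0.5)
print(round(w[0], 1), abs(w[1]) < 0.1)   # -> 2.0 True
```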
Predicting employees' well-being using work-family conflict and job strain models.
Karimi, Leila; Karimi, Hamidreza; Nouri, Aboulghassem
2011-04-01
The present study examined the effects of two models of work–family conflict (WFC) and job-strain on the job-related and context-free well-being of employees. The participants of the study consisted of Iranian employees from a variety of organizations. The effects of three dimensions of the job-strain model and six forms of WFC on affective well-being were assessed. The results of hierarchical multiple regression analysis revealed that the number of working hours, strain-based work interfering with family life (WIF) along with job characteristic variables (i.e. supervisory support, job demands and job control) all make a significant contribution to the prediction of job-related well-being. On the other hand, strain-based WIF and family interfering with work (FIW) significantly predicted context-free well-being. Implications are drawn and recommendations made regarding future research and interventions in the workplace.
Enhancing user experience by using multi-sensor data fusion to predict phone's luminance
NASA Astrophysics Data System (ADS)
Marhoubi, Asmaa H.
2017-09-01
The movement of a phone through environments with different brightness levels makes luminance prediction challenging. The ambient light sensor (ALS) takes time to modify the screen brightness based on the environment the phone is placed in, causing delays in brightness adjustment and an unsatisfactory user experience. In this research, a method is proposed for enhancing the prediction of luminance using accelerometer, gyroscope and speed measurements. The speed of the phone is identified using Sum-of-Sine parameters. The lux values are then fused with the accelerometer and gyroscope data to provide more accurate luminance values for the ALS based on the movement of the phone. An investigation is made during the movement of the user in a standard lighting environment. This enhances the user experience and improves the screen brightness precision. The accuracy has given an R-Square value of up to 0.97.
A New Stress-Based Model of Political Extremism
Canetti-Nisim, Daphna; Halperin, Eran; Sharvit, Keren; Hobfoll, Stevan E.
2011-01-01
Does exposure to terrorism lead to hostility toward minorities? Drawing on theories from clinical and social psychology, we propose a stress-based model of political extremism in which psychological distress—which is largely overlooked in political scholarship—and threat perceptions mediate the relationship between exposure to terrorism and attitudes toward minorities. To test the model, a representative sample of 469 Israeli Jewish respondents was interviewed on three occasions at six-month intervals. Structural Equation Modeling indicated that exposure to terrorism predicted psychological distress (t1), which predicted perceived threat from Palestinian citizens of Israel (t2), which, in turn, predicted exclusionist attitudes toward Palestinian citizens of Israel (t3). These findings provide solid evidence and a mechanism for the hypothesis that terrorism introduces nondemocratic attitudes threatening minority rights. They suggest that psychological distress plays an important role in political decision making and should be incorporated in models drawing upon political psychology. PMID:22140275
NASA Astrophysics Data System (ADS)
Jeong, Chang Yeol; Nam, Soo Woo; Lim, Jong Dae
2003-04-01
A new life prediction function based on a model formulated in terms of stress relaxation during hold time under creep-fatigue conditions is proposed. From the idea that the reduction in fatigue life with hold time is due to the creep effect of stress relaxation, which results in additional energy dissipation in the hysteresis loop, it is suggested that the relaxed stress range may serve as a creep-fatigue damage function. Creep-fatigue data from the present and other investigators are used to check the validity of the proposed life prediction equation. It is shown that the data satisfy the applicability of the life relation model. Accordingly, using this life prediction model, all the Coffin-Manson plots at various levels of hold time in strain-controlled creep-fatigue tests can be normalized onto a single straight line.
Wind power prediction based on genetic neural network
NASA Astrophysics Data System (ADS)
Zhang, Suhan
2017-04-01
The scale of grid-connected wind farms keeps increasing. To ensure stable power system operation, make reasonable scheduling schemes and improve the competitiveness of wind farms in the electricity generation market, it is important to forecast short-term wind power accurately. To reduce the influence of the nonlinear relationship between disturbance factors and wind power, an improved prediction model based on a genetic algorithm and a neural network is established. To overcome the BP neural network's shortcomings of long training time and a tendency to fall into local minima, and to improve its accuracy, a genetic algorithm is adopted to optimize the parameters and topology of the neural network. Historical data are used as input to predict short-term wind power. The effectiveness and feasibility of the method are verified using actual data from a wind farm as an example.
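The idea of using a genetic algorithm to search a neural network's weight space, rather than relying on BP training alone, can be sketched on a toy problem. Network size, GA settings, and the target function below are illustrative choices, not the wind-farm configuration:

```python
import random
from math import tanh

# Toy sketch of the paper's idea: a genetic algorithm searches the weight
# space of a small neural network instead of relying only on BP training,
# reducing the risk of poor local minima. Network size, GA settings, and
# the target function are illustrative, not the wind-power model.

random.seed(0)

def predict(w, x):
    # 1-2-1 network: w = [w1, w2, b1, b2, v1, v2, c]
    h1 = tanh(w[0] * x + w[2])
    h2 = tanh(w[1] * x + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

xs = [i / 10.0 for i in range(11)]
ys = [x * x for x in xs]                      # illustrative target: y = x^2

def fitness(w):
    return -sum((predict(w, x) - y) ** 2 for x, y in zip(xs, ys))

pop = [[random.uniform(-2, 2) for _ in range(7)] for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)       # best individuals first
    parents = pop[:10]                        # elitist selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        cut = random.randrange(7)             # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:             # Gaussian mutation
            i = random.randrange(7)
            child[i] += random.gauss(0, 0.2)
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
print(-fitness(best) < 0.5)                   # small total squared error
```

Because the ten best individuals survive each generation unchanged, the best fitness never degrades; the GA output could then seed a BP fine-tuning stage, as the hybrid scheme suggests.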
Individual Differences in Base Rate Neglect: A Fuzzy Processing Preference Index
Wolfe, Christopher R.; Fisher, Christopher R.
2013-01-01
Little is known about individual differences in integrating numeric base rates and qualitative text in making probability judgments. Fuzzy-Trace Theory predicts a preference for fuzzy processing. We conducted six studies to develop the Fuzzy Processing Preference Index (FPPI), a reliable and valid instrument assessing individual differences in this fuzzy processing preference. It consists of 19 probability estimation items plus 4 "M-Scale" items that distinguish simple pattern matching from “base rate respect.” Cronbach's alpha was consistently above 0.90. Validity is suggested by significant correlations between FPPI scores and three other measures: "Rule Based" Process Dissociation Procedure scores; the number of conjunction fallacies in joint probability estimation; and logic index scores on syllogistic reasoning. Replicating norms collected in a university study with a web-based study produced negligible differences in FPPI scores, indicating robustness. The predicted relationships between individual differences in base rate respect and both conjunction fallacies and syllogistic reasoning were partially replicated in two web-based studies. PMID:23935255
Walia, Rasna R; Xue, Li C; Wilkins, Katherine; El-Manzalawy, Yasser; Dobbs, Drena; Honavar, Vasant
2014-01-01
Protein-RNA interactions are central to essential cellular processes such as protein synthesis and regulation of gene expression and play roles in human infectious and genetic diseases. Reliable identification of protein-RNA interfaces is critical for understanding the structural bases and functional implications of such interactions and for developing effective approaches to rational drug design. Sequence-based computational methods offer a viable, cost-effective way to identify putative RNA-binding residues in RNA-binding proteins. Here we report two novel approaches: (i) HomPRIP, a sequence homology-based method for predicting RNA-binding sites in proteins; (ii) RNABindRPlus, a new method that combines predictions from HomPRIP with those from an optimized Support Vector Machine (SVM) classifier trained on a benchmark dataset of 198 RNA-binding proteins. Although highly reliable, HomPRIP cannot make predictions for the unaligned parts of query proteins and its coverage is limited by the availability of close sequence homologs of the query protein with experimentally determined RNA-binding sites. RNABindRPlus overcomes these limitations. We compared the performance of HomPRIP and RNABindRPlus with that of several state-of-the-art predictors on two test sets, RB44 and RB111. On a subset of proteins for which homologs with experimentally determined interfaces could be reliably identified, HomPRIP outperformed all other methods achieving an MCC of 0.63 on RB44 and 0.83 on RB111. RNABindRPlus was able to predict RNA-binding residues of all proteins in both test sets, achieving an MCC of 0.55 and 0.37, respectively, and outperforming all other methods, including those that make use of structure-derived features of proteins. More importantly, RNABindRPlus outperforms all other methods for any choice of tradeoff between precision and recall. 
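The Matthews correlation coefficient (MCC) used above for residue-level evaluation is computed from the binary confusion counts. A minimal sketch with illustrative counts, not the RB44/RB111 results:

```python
import math

# Sketch: the Matthews correlation coefficient (MCC) used to compare
# HomPRIP and RNABindRPlus, computed from binary confusion counts.

def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Illustrative counts for a residue-level prediction (not from the paper).
print(round(mcc(tp=40, tn=120, fp=10, fn=30), 2))   # -> 0.54
```

Unlike plain accuracy, the MCC stays informative when binding residues are a small minority of all residues, which is the usual situation in interface prediction.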
An important advantage of both HomPRIP and RNABindRPlus is that they rely on readily available sequence and sequence-derived features of RNA-binding proteins. A webserver implementation of both methods is freely available at http://einstein.cs.iastate.edu/RNABindRPlus/.
Predicting Individual Fuel Economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhenhong; Greene, David L
2011-01-01
To make informed decisions about travel and vehicle purchase, consumers need unbiased and accurate information about the fuel economy they will actually obtain. In the past, the EPA fuel economy estimates based on its 1984 rules have been widely criticized for overestimating on-road fuel economy. In 2008, EPA adopted a new estimation rule. This study compares the usefulness of the EPA's 1984 and 2008 estimates based on their prediction bias and accuracy and attempts to improve the prediction of on-road fuel economies based on consumer and vehicle attributes. We examine the usefulness of the EPA fuel economy estimates using a large sample of self-reported on-road fuel economy data and develop an Individualized Model for more accurately predicting an individual driver's on-road fuel economy based on easily determined vehicle and driver attributes. Accuracy rather than bias appears to have limited the usefulness of the EPA 1984 estimates in predicting on-road MPG. The EPA 2008 estimates appear to be equally inaccurate and substantially more biased relative to the self-reported data. Furthermore, the 2008 estimates exhibit an underestimation bias that increases with increasing fuel economy, suggesting that the new numbers will tend to underestimate the real-world benefits of fuel economy and emissions standards. By including several simple driver and vehicle attributes, the Individualized Model reduces the unexplained variance by over 55% and the standard error by 33% on an independent test sample. The additional explanatory variables can be easily provided by the individuals.
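An "Individualized Model" of this kind can be sketched as an ordinary least squares regression of self-reported on-road MPG on the label estimate plus simple driver and vehicle attributes. The attributes and the synthetic data below are invented for illustration, not the study's variables:

```python
# Hypothetical sketch of an "Individualized Model": regress self-reported
# on-road MPG on the label estimate plus a simple driver attribute.
# The attribute choice and synthetic data are illustrative, not the study's.

def ols(X, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    for i in range(p):                        # forward elimination
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            A[k] = [a - f * c for a, c in zip(A[k], A[i])]
            b[k] -= f * b[i]
    coef = [0.0] * p
    for i in reversed(range(p)):              # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, p))) / A[i][i]
    return coef

# Columns: intercept, EPA label MPG, highway share of driving.
X = [[1, 30, 0.8], [1, 30, 0.2], [1, 25, 0.5], [1, 40, 0.9], [1, 40, 0.1]]
y = [31.0, 27.0, 25.5, 43.0, 35.0]
b = ols(X, y)
pred = b[0] + b[1] * 35 + b[2] * 0.6          # one driver's individualized MPG
print(len(b) == 3 and 25 < pred < 45)         # -> True
```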
van Engelen, S; Bovenhuis, H; Dijkstra, J; van Arendonk, J A M; Visker, M H P W
2015-11-01
Dairy cows produce enteric methane, a greenhouse gas with 25 times the global warming potential of CO2. Breeding could make a permanent, cumulative, and long-term contribution to methane reduction. Due to a lack of the accurate, repeatable, individual methane measurements needed for breeding, indicators of methane production based on milk fatty acids have been proposed. The aim of the present study was to quantify the genetic variation for predicted methane yields. The milk fat composition of 1,905 first-lactation Dutch Holstein-Friesian cows was used to investigate 3 different predicted methane yields (g/kg of DMI): Methane1, Methane2, and Methane3. Methane1 was based on the milk fat proportions of C17:0 anteiso, C18:1 trans-10+11, C18:1 cis-11, and C18:1 cis-13 (R² = 0.73). Methane2 was based on C4:0, C18:0, C18:1 trans-10+11, and C18:1 cis-11 (R² = 0.70). Methane3 was based on C4:0, C6:0, and C18:1 trans-10+11 (R² = 0.63). Predicted methane yields were demonstrated to be heritable traits, with heritabilities between 0.12 and 0.44. Breeding can, thus, be used to decrease methane production predicted based on milk fatty acids. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Consistent prediction of GO protein localization.
Spetale, Flavio E; Arce, Debora; Krsticevic, Flavia; Bulacio, Pilar; Tapia, Elizabeth
2018-05-17
The GO-Cellular Component (GO-CC) ontology provides a controlled vocabulary for the consistent description of the subcellular compartments or macromolecular complexes where proteins may act. Current machine learning-based methods used for the automated GO-CC annotation of proteins suffer from the inconsistency of individual GO-CC term predictions. Here, we present FGGA-CC+, a class of hierarchical graph-based classifiers for the consistent GO-CC annotation of protein coding genes at the subcellular compartment or macromolecular complex levels. Aiming to boost the accuracy of GO-CC predictions, we make use of the protein localization knowledge in the GO-Biological Process (GO-BP) annotations. As a result, FGGA-CC+ classifiers are built from annotation data in both the GO-CC and GO-BP ontologies. Due to their graph-based design, FGGA-CC+ classifiers are fully interpretable and their predictions amenable to expert analysis. Promising results on protein annotation data from five model organisms were obtained. Additionally, successful validation results were accomplished in the annotation of a challenging subset of tandem duplicated genes in the tomato, a non-model organism. Overall, these results suggest that FGGA-CC+ classifiers can indeed be useful for satisfying the huge demand for GO-CC annotation arising from ubiquitous high-throughput sequencing and proteomic projects.
Out-of-Home Placement Decision-Making and Outcomes in Child Welfare: A Longitudinal Study
McClelland, Gary M.; Weiner, Dana A.; Jordan, Neil; Lyons, John S.
2015-01-01
After children enter the child welfare system, subsequent out-of-home placement decisions and their impact on children’s well-being are complex and under-researched. This study examined two placement decision-making models: a multidisciplinary team approach, and a decision support algorithm using a standardized assessment. Based on 3,911 placement records in the Illinois child welfare system over 4 years, concordant (agreement) and discordant (disagreement) decisions between the two models were compared. Concordant decisions consistently predicted improvement in children’s well-being regardless of placement type. Discordant decisions showed greater variability. In general, placing children in settings less restrictive than the algorithm suggested (“under-placing”) was associated with less severe baseline functioning but also less improvement over time than placing children according to the algorithm. “Over-placing” children in settings more restrictive than the algorithm recommended was associated with more severe baseline functioning but fewer significant results in rate of improvement than predicted by concordant decisions. The importance of placement decision-making on policy, restrictiveness of placement, and delivery of treatments and services in child welfare are discussed. PMID:24677172
Ardoino, Ilaria; Lanzoni, Monica; Marano, Giuseppe; Boracchi, Patrizia; Sagrini, Elisabetta; Gianstefani, Alice; Piscaglia, Fabio; Biganzoli, Elia M
2017-04-01
The interpretation of regression model results can often benefit from the generation of nomograms, 'user-friendly' graphical devices especially useful for assisting decision-making processes. However, in the case of multinomial regression models, whenever categorical responses with more than two classes are involved, nomograms cannot be drawn in the conventional way. Such difficulty in managing and interpreting the outcome often limits the use of multinomial regression in decision-making support. In the present paper, we illustrate the derivation of a non-conventional nomogram for multinomial regression models, intended to overcome this issue. Although it may appear less straightforward at first sight, the proposed methodology allows an easy interpretation of the results of multinomial regression models and makes them more accessible for clinicians and general practitioners too. The development of a prediction model based on multinomial logistic regression and of the pertinent graphical tool is illustrated by means of an example involving the prediction of the extent of liver fibrosis in hepatitis C patients from routinely available markers.
Prediction of individual brain maturity using fMRI.
Dosenbach, Nico U F; Nardos, Binyam; Cohen, Alexander L; Fair, Damien A; Power, Jonathan D; Church, Jessica A; Nelson, Steven M; Wig, Gagan S; Vogel, Alecia C; Lessov-Schlaggar, Christina N; Barnes, Kelly Anne; Dubis, Joseph W; Feczko, Eric; Coalson, Rebecca S; Pruett, John R; Barch, Deanna M; Petersen, Steven E; Schlaggar, Bradley L
2010-09-10
Group functional connectivity magnetic resonance imaging (fcMRI) studies have documented reliable changes in human functional brain maturity over development. Here we show that support vector machine-based multivariate pattern analysis extracts sufficient information from fcMRI data to make accurate predictions about individuals' brain maturity across development. The use of only 5 minutes of resting-state fcMRI data from 238 scans of typically developing volunteers (ages 7 to 30 years) allowed prediction of individual brain maturity as a functional connectivity maturation index. The resultant functional maturation curve accounted for 55% of the sample variance and followed a nonlinear asymptotic growth curve shape. The greatest relative contribution to predicting individual brain maturity was made by the weakening of short-range functional connections between the adult brain's major functional networks.
Calibration and prediction of removal function in magnetorheological finishing.
Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng
2010-01-20
A calibrated and predictive model of the removal function has been established based on analysis of the magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in an MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material differs from that of the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately, making the MRF figuring process deterministic and controllable. All the results indicate that the calibrated and predictive model of the removal function can improve finishing determinacy and increase the model's applicability in an MRF process.
Improved nonlinear prediction method
NASA Astrophysics Data System (ADS)
Adenan, Nur Hamiza; Md Noorani, Mohd Salmi
2014-06-01
The analysis and prediction of time series data have been addressed by many researchers, and numerous techniques have been developed for areas such as weather forecasting, financial markets and hydrological phenomena, where the data are contaminated by noise. Various refinements have therefore been introduced to analyze and predict such time series. Given the importance of prediction accuracy, a study was undertaken to test the effectiveness of an improved nonlinear prediction method on data that contain noise. The improved method forms a composite serial dataset from the successive differences of the time series. Phase space reconstruction is then performed on this one-dimensional composite series to reconstruct a number of space dimensions, and finally the local linear approximation method is employed to make a prediction in the reconstructed phase space. The improved method was tested on logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that predictions made with the improved method agree closely with the observed values, with a correlation coefficient close to one for data with up to 10% noise. The method thus allows noisy time series data to be analyzed and predicted without any separate noise reduction step.
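The three steps above (difference series, delay-embedding phase space reconstruction, local approximation) can be sketched as follows on a noise-free logistic map series. The embedding dimension, neighbour count and map parameter are illustrative choices, and the neighbour average is a zeroth-order stand-in for the local linear approximation:

```python
import numpy as np

def logistic_map(n, r=3.9, x0=0.4):
    """Generate a chaotic logistic-map series (used here as test data)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

def delay_embed(series, dim, tau=1):
    """Phase space reconstruction by time-delay embedding."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

def local_prediction(series, dim=3, k=3):
    """Predict the next value from the k nearest phase-space neighbours."""
    X = delay_embed(series, dim)
    query = X[-1]
    dists = np.linalg.norm(X[:-1] - query, axis=1)  # neighbours with known successors
    idx = np.argsort(dists)[:k]
    return series[idx + dim].mean()                 # average of the successors

series = logistic_map(2000)
diff = np.diff(series)                            # composite series of differences
prediction = series[-1] + local_prediction(diff)  # map predicted difference back
actual = 3.9 * series[-1] * (1 - series[-1])
print(prediction, actual)
```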
Development of an accident duration prediction model on the Korean Freeway Systems.
Chung, Younshik
2010-01-01
Since duration prediction is one of the most important steps in the accident management process, several approaches have been developed for modeling accident duration. This paper presents an accident duration prediction model based on a large, accurately recorded accident dataset from the Korean Freeway Systems. To develop the model, this study utilizes the log-logistic accelerated failure time (AFT) metric model and a two-year accident duration dataset covering 2006 and 2007. Specifically, the 2006 dataset was used to develop the prediction model, and the 2007 dataset was then employed to test the temporal transferability of the 2006 model. Although the model has limitations, such as large prediction errors arising from differences between accident treatment teams in clearing similar accidents, the 2006 model yielded reasonable predictions on the mean absolute percentage error (MAPE) scale. Additionally, statistical tests of temporal transferability indicated that the estimated parameters of the duration prediction model are stable over time. This temporal stability suggests that the model could serve as a basis for making rational diversion and dispatching decisions in the event of an accident; ultimately, such information will help mitigate traffic congestion due to accidents.
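For reference, the MAPE scale used to judge the model is simply the mean absolute prediction error expressed as a percentage of the observed value. A minimal illustration on hypothetical clearance durations (not data from the paper):

```python
import numpy as np

def mape(observed, predicted):
    """Mean absolute percentage error."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((observed - predicted) / observed))

observed = [35, 60, 90, 120]      # hypothetical clearance durations (minutes)
predicted = [40, 55, 100, 110]    # hypothetical model predictions (minutes)
score = mape(observed, predicted)
print(f"MAPE = {score:.1f}%")
```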
NASA Astrophysics Data System (ADS)
Werth, Alexandra; Liakat, Sabbir; Dong, Anqi; Woods, Callie M.; Gmachl, Claire F.
2018-05-01
An integrating sphere is used to enhance the collection of backscattered light in a noninvasive glucose sensor based on quantum cascade laser spectroscopy. The sphere enhances signal stability by roughly an order of magnitude, allowing us to use a thermoelectrically (TE) cooled detector while maintaining comparable glucose prediction accuracy. Using a smaller TE-cooled detector reduces the form factor, yielding a mobile sensor. Principal component analysis of spectra taken from human subjects yields principal components that closely match the absorption peaks of glucose. These principal components are used as regressors in a linear regression algorithm to make glucose concentration predictions, over 75% of which are clinically accurate.
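The regression pipeline described above (principal components of the spectra used as regressors for glucose concentration) can be sketched as follows. The spectra here are synthetic, with an arbitrary stand-in absorption template rather than real QCL measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 120, 300
glucose = rng.uniform(80, 250, n_samples)            # mg/dL, hypothetical range
template = np.sin(np.linspace(0, 6, n_wavelengths))  # stand-in absorption profile
spectra = glucose[:, None] * template + rng.normal(0, 5, (n_samples, n_wavelengths))

scores = PCA(n_components=3).fit_transform(spectra)  # principal components as regressors
model = LinearRegression().fit(scores, glucose)
r2 = model.score(scores, glucose)
print(f"R^2 of PC regression: {r2:.2f}")
```

Because the glucose signature dominates the synthetic spectra, the first principal component tracks concentration almost perfectly; on real skin spectra the useful components must be picked out among confounding sources of variance.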
Predictive Array Design. A method for sampling combinatorial chemistry library space.
Lipkin, M J; Rose, V S; Wood, J
2002-01-01
A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a sub-array for synthesis based on the experimental design method of Latin Squares. The method is appropriate for libraries with three sites of variation; libraries with four sites of variation can be designed using the Graeco-Latin Square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to predict the activity of compounds in the all-combinations array, if we assume that each monomer makes a relatively constant contribution to activity and that a compound's activity is the sum of the contributions of its constituent monomers.
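A cyclic Latin square gives a concrete picture of the sampling: for a library with three sites of variation and n monomers per site, the n^2 triples (a, b, (a+b) mod n) cover every monomer pair across any two sites exactly once. The per-monomer activity contributions below are hypothetical numbers used only to illustrate the additive-activity assumption:

```python
def latin_square_subarray(n):
    """Triples (a, b, c) with c = (a + b) mod n: a cyclic Latin square."""
    return [(a, b, (a + b) % n) for a in range(n) for b in range(n)]

# 16 of the 4^3 = 64 combinations for a hypothetical 4 x 4 x 4 library
sub = latin_square_subarray(4)

# additive-activity assumption: a compound's activity is the sum of
# hypothetical per-monomer contributions at each site
contrib = [
    [0.1, 0.4, 0.2, 0.3],   # site 1 monomers
    [0.0, 0.5, 0.1, 0.2],   # site 2 monomers
    [0.3, 0.1, 0.4, 0.0],   # site 3 monomers
]
predicted_activity = {t: sum(contrib[s][t[s]] for s in range(3)) for t in sub}
print(len(sub), predicted_activity[(0, 0, 0)])
```

Under the additive assumption, fitting the per-monomer contributions to assay results from the sub-array lets activities be predicted for all 64 combinations.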
Dark matter candidate with well-defined mass and couplings
NASA Astrophysics Data System (ADS)
Allen, Roland
2017-01-01
There is as yet no confirmed and statistically significant evidence for direct, indirect, or collider-based detection of dark matter. However, several indirect searches, including AMS-02, Fermi-LAT, and PAMELA, have shown an intriguing excess of positrons compared to expectations. Here we predict a Higgs-related but spin-1/2 dark matter candidate with a mass of 125 GeV. Since an initially reported 130 GeV gamma-ray excess has been abandoned by the full Fermi-LAT collaboration, this is a genuine prediction rather than a postdiction. It would be consistent with a prediction of 125 GeV freshly created positrons and antiprotons, but the complicated propagation of charged particles makes a comparison problematic.
NASA Astrophysics Data System (ADS)
Thompson, S. E.; Sivapalan, M.; Harman, C. J.; Srinivasan, V.; Hipsey, M. R.; Reed, P.; Montanari, A.; Blöschl, G.
2013-12-01
Globally, many different kinds of water resources management issues call for policy- and infrastructure-based responses. Yet responsible decision-making about water resources management raises a fundamental challenge for hydrologists: making predictions about water resources on decadal- to century-long timescales. Obtaining insight into hydrologic futures over 100 yr timescales forces researchers to address internal and exogenous changes in the properties of hydrologic systems. To do this, new hydrologic research must identify, describe and model feedbacks between water and other changing, coupled environmental subsystems. These models must be constrained to yield useful insights, despite the many likely sources of uncertainty in their predictions. Chief among these uncertainties are the impacts of the increasing role of human intervention in the global water cycle - a defining challenge for hydrology in the Anthropocene. Here we present a research agenda that proposes a suite of strategies to address these challenges from the perspectives of hydrologic science research. The research agenda focuses on the development of co-evolutionary hydrologic modeling to explore coupling across systems, and to address the implications of this coupling on the long-time behavior of the coupled systems. Three research directions support the development of these models: hydrologic reconstruction, comparative hydrology and model-data learning. These strategies focus on understanding hydrologic processes and feedbacks over long timescales, across many locations, and through strategic coupling of observational and model data in specific systems. We highlight the value of use-inspired and team-based science that is motivated by real-world hydrologic problems but targets improvements in fundamental understanding to support decision-making and management. Fully realizing the potential of this approach will ultimately require detailed integration of social science and physical science understanding of water systems, and is a priority for the developing field of sociohydrology.
Delayed discounting and hedonic hunger in the prediction of lab-based eating behavior.
Ely, Alice V; Howard, Janna; Lowe, Michael R
2015-12-01
Research suggests that characteristics identified in obese individuals, such as impulsive decision-making and hedonic hunger, may also exist in nonobese populations. This study examined the independent and interactive effects of impulsive decision-making (measured via delay discounting, DD) and hedonic hunger (assessed with the Power of Food Scale, PFS) on food intake. Female participants (N=78) ate a self-determined amount of plain oatmeal, completed self-report measures and the delay discounting task, and participated in a sham taste test of palatable sweet and salty foods. Unexpectedly, PFS and DD scores interacted to predict total food intake and intake of oatmeal alone, but not of snack food alone. High-PFS participants consumed more when also high in DD, while low-PFS participants showed the opposite pattern of consumption. The findings identify variables that may increase the propensity toward overconsumption and potential weight gain; future research is needed to evaluate the utility of these constructs for predicting increases in BMI over time. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bitter or not? BitterPredict, a tool for predicting taste from chemical structure.
Dagan-Wiener, Ayana; Nissim, Ido; Ben Abu, Natalie; Borgonovo, Gigliola; Bassoli, Angela; Niv, Masha Y
2017-09-21
Bitter taste is an innately aversive taste modality that is considered to protect animals from consuming toxic compounds. Yet bitterness is not always noxious, and some bitter compounds have beneficial effects on health. Hundreds of bitter compounds have been reported (and are accessible via the BitterDB http://bitterdb.agri.huji.ac.il/dbbitter.php ), but numerous additional bitter molecules are still unknown. The dramatic chemical diversity of bitterants makes bitterness prediction a difficult task. Here we present a machine learning classifier, BitterPredict, which predicts whether a compound is bitter or not based on its chemical structure. BitterDB was used as the positive set, and non-bitter molecules were gathered from the literature to create the negative set. Adaptive Boosting (AdaBoost), a decision-tree-based machine learning algorithm, was applied to molecules represented using physicochemical and ADME/Tox descriptors. BitterPredict correctly classifies over 80% of the compounds in the hold-out test set, and 70-90% of the compounds in three independent external sets and in sensory test validation, providing a quick and reliable tool for classifying large sets of compounds into bitter and non-bitter groups. BitterPredict suggests that about 40% of random molecules are bitter, as are a large portion of clinical and experimental drugs (66%) and of natural products (77%).
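The model family described above can be sketched with scikit-learn's AdaBoost over shallow decision trees (the library's default base estimator is a decision stump). The descriptors and bitterness labels below are synthetic stand-ins, not BitterDB data:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 10))              # stand-in molecular descriptors
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # toy "bitter / non-bitter" rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"hold-out accuracy: {acc:.2f}")
```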
NASA Astrophysics Data System (ADS)
Reder, Alfredo; Rianna, Guido; Pagano, Luca
2018-02-01
In the field of rainfall-induced landslides on sloping covers, models for early warning predictions require an adequate trade-off between two aspects: prediction accuracy and timeliness. When a cover's initial hydrological state is a determining factor in triggering landslides, taking evaporative losses into account (or not) could significantly affect both aspects. This study evaluates the performance of three physically based predictive models, converting precipitation and evaporative fluxes into hydrological variables useful in assessing slope safety conditions. Two of the models incorporate evaporation, with one representing evaporation as both a boundary and internal phenomenon, and the other only a boundary phenomenon. The third model totally disregards evaporation. Model performances are assessed by analysing a well-documented case study involving a 2 m thick sloping volcanic cover. The large amount of monitoring data collected for the soil involved in the case study, reconstituted in a suitably equipped lysimeter, makes it possible to propose procedures for calibrating and validating the parameters of the models. All predictions indicate a hydrological singularity at the landslide time (alarm). A comparison of the models' predictions also indicates that the greater the complexity and completeness of the model, the lower the number of predicted hydrological singularities when no landslides occur (false alarms).
Consumer preference models: fuzzy theory approach
NASA Astrophysics Data System (ADS)
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
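The core fuzzy-set ingredient of such a model, a membership function for a linguistic variable, can be shown in a few lines. The term "inexpensive" and its breakpoints below are hypothetical, not taken from the authors' model:

```python
def inexpensive(price, full=10.0, zero=30.0):
    """Membership of `price` in the fuzzy set 'inexpensive':
    1 at or below `full`, 0 at or above `zero`, linear in between."""
    if price <= full:
        return 1.0
    if price >= zero:
        return 0.0
    return (zero - price) / (zero - full)

print(inexpensive(5.0), inexpensive(20.0), inexpensive(30.0))
```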
Do Proxies for the Neurotransmitter Cortisol Predict Adaptation to Life with Chronic Pain?
NASA Astrophysics Data System (ADS)
Deamond, Wade
Among the numerous difficulties encountered by chronic pain patients, impulsive and dysfunctional decision-making complicates their already difficult life situations, yet it remains relatively understudied. This study examined a recently published neurobiological decision-making model that identifies eight specific neurotransmitters and hormones (Dopamine, Testosterone, Endogenous Opioids, Glutamate, Serotonin, Norepinephrine, Cortisol, and GABA) linked to unsound decision making related to cognitive, motivational and emotional dysregulation (Nussbaum et al., 2011) (see Appendix 2). The Perceived Stress Scale (PSS), a proxy for the cortisol element in the pharmacological decision-making model, was analyzed for its relationship to functionality and quality of life in a group of 37 chronic pain patients. Participants were males and females ranging from 23 to 52 years of age and were classified with respect to levels of adjustment to living with chronic pain based on the Quality of Life Scale (QLS), the Dartmouth WONCA COOP Charts and the Global Assessment of Functioning (GAF). The Iowa Gambling Task (IGT) and the Frontal System Behavioral Scale (FSBS) measured decision making related to immediate gratification and daily living, respectively. Results suggest that emotional dysregulation, as measured by the PSS, is a significant predictor of adaptation to life with chronic pain, and that the PSS predicts such adaptation better than reported levels of pain as measured by the McGill Pain Questionnaire.
Boyle, Patricia A.; Yu, Lei; Wilson, Robert S.; Gamble, Keith; Buchman, Aron S.; Bennett, David A.
2012-01-01
Objective: Decision making is an important determinant of health and well-being across the lifespan but is critical in aging, when many influential decisions are made just as cognitive function declines. Increasing evidence suggests that older adults, even those without dementia, often make poor decisions and are selectively vulnerable to scams. To date, however, the factors associated with poor decision making in old age are unknown. The objective of this study was to test the hypothesis that poor decision making is a consequence of cognitive decline among older persons without Alzheimer’s disease or mild cognitive impairment. Methods: Participants were 420 non-demented persons from the Memory and Aging Project, a longitudinal, clinical-pathologic cohort study of aging in the Chicago metropolitan area. All underwent repeated cognitive evaluations and subsequently completed assessments of decision making and susceptibility to scams. Decision making was measured using 12 items from a previously established performance-based measure and a self-report measure of susceptibility to scams. Results: Cognitive function data were collected over an average of 5.5 years prior to the decision making assessment. Regression analyses were used to examine whether the prior rate of cognitive decline predicted the level of decision making and susceptibility to scams; analyses controlled for age, sex, education, and starting level of cognition. Among 420 persons without dementia, more rapid cognitive decline predicted poorer decision making and increased susceptibility to scams (p’s<0.001). Further, the relations between cognitive decline, decision making and scams persisted in analyses restricted to persons without any cognitive impairment (i.e., no dementia or even mild cognitive impairment). Conclusions: Poor decision making is a consequence of cognitive decline among older persons without Alzheimer’s disease or mild cognitive impairment, those widely considered “cognitively healthy.” These findings suggest that even very subtle age-related changes in cognition have detrimental effects on judgment. PMID:22916287
Connors, Brenda L; Rende, Richard; Colton, Timothy J
2013-01-01
There has been a surge of interest in examining the utility of methods for capturing individual differences in decision-making style. We illustrate the potential offered by Movement Pattern Analysis (MPA), an observational methodology that has been used in business and by the US Department of Defense to record body movements that provide predictive insight into individual differences in decision-making motivations and actions. Twelve military officers participated in an intensive 2-h interview that permitted detailed and fine-grained observation and coding of signature movements by trained practitioners using MPA. Three months later, these subjects completed four hypothetical decision-making tasks in which the amount of information sought out before coming to a decision, as well as the time spent on the tasks, were under the partial control of the subject. A composite MPA indicator of how a person allocates decision-making actions and motivations to balance both Assertion (exertion of tangible movement effort on the environment to make something occur) and Perspective (through movements that support shaping in the body to perceive and create a suitable viewpoint for action) was highly correlated with the total number of information draws and total response time: individuals high on Assertion reached for less information and had faster response times than those high on Perspective. Discussion focuses on the utility of using movement-based observational measures to capture individual differences in decision-making style and the implications for application in applied settings geared toward investigations of experienced leaders and world statesmen where individuality rules the day.