Sample records for probabilistic selection task

  1. The analysis of probability task completion; Taxonomy of probabilistic thinking-based across gender in elementary school students

    NASA Astrophysics Data System (ADS)

    Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi

    2017-08-01

    Formulation of mathematical learning goals is now oriented not only toward cognitive products but also toward cognitive processes, one of which is probabilistic thinking. Students need probabilistic thinking to make decisions, and elementary school students are expected to develop it as a foundation for learning probability at higher levels. A framework of students' probabilistic thinking had previously been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on that taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected on the basis of a mathematical ability test as having high mathematical ability. Subjects were given probability tasks covering sample space, probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion, and credibility of the data was established through time triangulation. The boy's probabilistic thinking in completing the probability tasks indicated the multistructural level, while the girl's indicated the unistructural level; that is, the boy's level of probabilistic thinking was higher than the girl's. These results could help curriculum developers formulate probability learning goals for elementary school students, and teachers could teach probability with regard to gender differences.

  2. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.
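Hattori's fitted model is not reproduced here, but the general shape of a selection tendency function — mapping each card's scaled expected information gain onto a predicted selection frequency — can be sketched. The logistic form and the gain values below are illustrative assumptions, not the paper's STF or estimates:

```python
import math

def stf(gain, slope=4.0, bias=0.5):
    """Selection tendency function: map a card's scaled expected
    information gain to a predicted selection frequency.
    (The logistic form here is an illustrative assumption.)"""
    return 1.0 / (1.0 + math.exp(-slope * (gain - bias)))

# Illustrative (made-up) scaled gains for the four cards in "if p then q":
gains = {"p": 0.9, "not-p": 0.2, "q": 0.55, "not-q": 0.35}
predicted = {card: stf(g) for card, g in gains.items()}
for card, freq in sorted(predicted.items(), key=lambda kv: -kv[1]):
    print(f"{card:>5}: predicted selection frequency {freq:.2f}")
```

The useful property of such a function is that it can be inverted: observed selection frequencies constrain the underlying subjective probabilities, which is the estimation direction the abstract mentions.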

  3. Supervised Extraction of Diagnosis Codes from EMRs: Role of Feature Selection, Data Selection, and Probabilistic Thresholding.

    PubMed

    Rios, Anthony; Kavuluru, Ramakanth

    2013-09-01

    Extracting diagnosis codes from medical records is a complex task carried out by trained coders who read all the documents associated with a patient's visit. With the popularity of electronic medical records (EMRs), computational approaches to code extraction have been proposed in recent years. Machine learning approaches to multi-label text classification provide an important methodology in this task, given that each EMR can be associated with multiple codes. In this paper, we study the role of feature selection, training data selection, and probabilistic threshold optimization in improving different multi-label classification approaches. We conduct experiments based on two different datasets: a recent gold standard dataset used for this task and a second larger and more complex EMR dataset we curated from the University of Kentucky Medical Center. While conventional approaches achieve results comparable to the state of the art on the gold standard dataset, on our complex in-house dataset we show that feature selection, training data selection, and probabilistic thresholding provide significant gains in performance.
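Probabilistic thresholding of the kind described is often implemented by tuning a separate decision threshold per label on held-out classifier scores; a minimal sketch with synthetic scores for one code (this is the general technique, not the authors' implementation):

```python
def f1(y_true, y_pred):
    """F1 score over parallel lists of truth bits and predicted bits."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

def best_threshold(scores, labels, grid=None):
    """Pick the probability threshold that maximizes F1 on validation data."""
    grid = grid or [i / 20 for i in range(1, 20)]
    return max(grid, key=lambda t: f1(labels, [s >= t for s in scores]))

# Synthetic validation scores for one rare diagnosis code:
scores = [0.9, 0.8, 0.45, 0.4, 0.3, 0.2, 0.15, 0.1]
labels = [1,   1,   1,    0,   0,   0,   0,    0]
t = best_threshold(scores, labels)
print(f"tuned threshold {t:.2f}, F1 {f1(labels, [s >= t for s in scores]):.2f}")
```

Because code frequencies are skewed, a single global cutoff (e.g. 0.5) is rarely optimal for every label, which is why per-label tuning tends to help on complex datasets.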

  4. Sequence Learning and Selection Difficulty

    ERIC Educational Resources Information Center

    Rowland, Lee A.; Shanks, David R.

    2006-01-01

    The authors studied the role of attention as a selection mechanism in implicit learning by examining the effect on primary sequence learning of performing a demanding target-selection task. Participants were trained on probabilistic sequences in a novel version of the serial reaction time (SRT) task, with dual- and triple-stimulus participants…

  5. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    PubMed

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

    Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.

  6. Processing of probabilistic information in weight perception and motor prediction.

    PubMed

    Trampenau, Leif; van Eimeren, Thilo; Kuhtz-Buschbeck, Johann

    2017-02-01

    We studied the effects of probabilistic cues, i.e., of information of limited certainty, in the context of an action task (GL: grip-lift) and of a perceptual task (WP: weight perception). Normal subjects (n = 22) saw four different probabilistic visual cues, each of which announced the likely weight of an object. In the GL task, the object was grasped and lifted with a pinch grip, and the peak force rates indicated that the grip and load forces were scaled predictively according to the probabilistic information. The WP task probed the expected heaviness associated with each probabilistic cue; the participants gradually adjusted the object's weight until its heaviness matched the expected weight for a given cue. Subjects were randomly assigned to two groups: one started with the GL task and the other with the WP task. The four different probabilistic cues influenced weight adjustments in the WP task and peak force rates in the GL task in a similar manner. The interpretation and utilization of the probabilistic information were critically influenced by the initial task. Participants who started with the WP task classified the four probabilistic cues into four distinct categories and applied these categories to the subsequent GL task. On the other hand, participants who started with the GL task applied three distinct categories to the four cues and retained this classification in the following WP task. The initial strategy, once established, determined how the probabilistic information was interpreted and implemented.

  7. Rats bred for high alcohol drinking are more sensitive to delayed and probabilistic outcomes.

    PubMed

    Wilhelm, C J; Mitchell, S H

    2008-10-01

    Alcoholics and heavy drinkers score higher on measures of impulsivity than nonalcoholics and light drinkers. This may be because of factors that predate drug exposure (e.g. genetics). This study examined the role of genetics by comparing impulsivity measures in ethanol-naive rats selectively bred based on their high [high alcohol drinking (HAD)] or low [low alcohol drinking (LAD)] consumption of ethanol. Replicates 1 and 2 of the HAD and LAD rats, developed by the University of Indiana Alcohol Research Center, completed two different discounting tasks. Delay discounting examines sensitivity to rewards that are delayed in time and is commonly used to assess 'choice' impulsivity. Probability discounting examines sensitivity to the uncertain delivery of rewards and has been used to assess risk taking and risk assessment. High alcohol drinking rats discounted delayed and probabilistic rewards more steeply than LAD rats. Discount rates associated with probabilistic and delayed rewards were weakly correlated, while bias was strongly correlated with discount rate in both delay and probability discounting. The results suggest that selective breeding for high alcohol consumption selects for animals that are more sensitive to delayed and probabilistic outcomes. Sensitivity to delayed or probabilistic outcomes may be predictive of future drinking in genetically predisposed individuals.
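The discounting tasks mentioned are standardly modeled with hyperbolic discounting functions, with probability re-expressed as odds against reward; a sketch with illustrative parameter values (not the paper's estimates):

```python
def delay_discounted(amount, delay, k):
    """Hyperbolic delay discounting: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

def probability_discounted(amount, p, h):
    """Hyperbolic probability discounting over odds against: theta = (1-p)/p."""
    theta = (1.0 - p) / p
    return amount / (1.0 + h * theta)

# A steeper discounter (larger k) devalues a delayed reward more -- the
# pattern reported here for HAD relative to LAD rats (k values invented).
had = delay_discounted(100, delay=30, k=0.10)   # steep discounting
lad = delay_discounted(100, delay=30, k=0.02)   # shallow discounting
print(f"value of 100 units at delay 30: HAD-like {had:.1f}, LAD-like {lad:.1f}")
```

Fitting k (or h) per animal yields the discount rates whose correlations across tasks the abstract discusses.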

  8. Feedback-based probabilistic category learning is selectively impaired in attention/hyperactivity deficit disorder.

    PubMed

    Gabay, Yafit; Goldfarb, Liat

    2017-07-01

    Although Attention-Deficit Hyperactivity Disorder (ADHD) is closely linked to executive function deficits, it has recently been attributed to procedural learning impairments that are quite distinct from the former. These observations challenge the ability of the executive function framework alone to account for the diverse range of symptoms observed in ADHD. A recent neurocomputational model emphasizes the role of striatal dopamine (DA) in explaining ADHD's broad range of deficits, but the link between this model and procedural learning impairments remains unclear. Significantly, feedback-based procedural learning is hypothesized to be disrupted in ADHD because of the involvement of striatal DA in this type of learning. In order to test this assumption, we employed two variants of a probabilistic category learning task known from the neuropsychological literature. Feedback-based (FB) and paired associate-based (PA) probabilistic category learning were employed in a non-medicated sample of ADHD participants and neurotypical participants. In the FB task, participants learned associations between cues and outcomes initially by guessing and subsequently through feedback indicating the correctness of the response. In the PA learning task, participants viewed the cue and its associated outcome simultaneously without making an overt response or receiving corrective feedback. In both tasks, participants were trained across 150 trials. Learning was assessed in a subsequent test without presentation of the outcome or corrective feedback. Results revealed an interesting dissociation in which ADHD participants performed as well as control participants in the PA task but were impaired compared with the controls in the FB task. The learning curve during FB training differed between the two groups. Taken together, these results suggest that the ability to learn incrementally from feedback is selectively disrupted in ADHD participants. These results are discussed in relation to both the ADHD dopaminergic dysfunction model and recent findings implicating procedural learning impairments in those with ADHD. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Using Deep Learning for Compound Selectivity Prediction.

    PubMed

    Zhang, Ruisheng; Li, Juan; Lu, Jingjing; Hu, Rongjing; Yuan, Yongna; Zhao, Zhili

    2016-01-01

    Compound selectivity prediction plays an important role in identifying potential compounds that bind to the target of interest with high affinity. However, there is still a shortage of efficient and accurate computational approaches to analyze and predict compound selectivity. In this paper, we propose two methods to improve compound selectivity prediction. We employ an improved multitask learning method in Neural Networks (NNs), which not only incorporates both activity and selectivity for other targets but also uses a probabilistic classifier based on logistic regression. We further improve compound selectivity prediction by using the multitask learning method in Deep Belief Networks (DBNs), which can build a distributed representation model and improve the generalization of the shared tasks. In addition, we assign different weights to the auxiliary tasks that are related to the primary selectivity prediction task. In contrast to other related work, our methods greatly improve the accuracy of compound selectivity prediction; in particular, using multitask learning in DBNs with modified weights obtains the best performance.

  10. Deductive and inductive reasoning in obsessive-compulsive disorder.

    PubMed

    Pélissier, Marie-Claude; O'Connor, Kieron P

    2002-03-01

    This study tested the hypothesis that people with obsessive-compulsive disorder (OCD) show an inductive reasoning style distinct from people with generalized anxiety disorder (GAD) and from participants in a non-anxious (NA) control group. The experimental procedure consisted of administering a range of six deductive and inductive tasks and a probabilistic task in order to compare reasoning processes between groups. Recruitment was in the Montreal area within a French-speaking population. The participants were 12 people with OCD, 12 NA controls and 10 people with GAD. Participants completed a series of written and oral reasoning tasks including the Wason Selection Task, a Bayesian probability task and other inductive tasks, designed by the authors. There were no differences between groups in deductive reasoning. On an inductive "bridging task", the participants with OCD always took longer than the NA control and GAD groups to infer a link between two statements and to elaborate on this possible link. The OCD group alone showed a significant decrease in their degree of conviction about an arbitrary statement after inductively generating reasons to support this statement. Differences in probabilistic reasoning replicated those of previous authors. The results pinpoint the importance of examining inference processes in people with OCD in order to further refine the clinical applications of behavioural-cognitive therapy for this disorder.

  11. Probabilistic Category Learning in Developmental Dyslexia: Evidence from Feedback and Paired-Associate Weather Prediction Tasks

    PubMed Central

    Gabay, Yafit; Vakil, Eli; Schiff, Rachel; Holt, Lori L.

    2015-01-01

    Objective: Developmental dyslexia is presumed to arise from specific phonological impairments. However, an emerging theoretical framework suggests that phonological impairments may be symptoms stemming from an underlying dysfunction of procedural learning. Method: We tested procedural learning in adults with dyslexia (n=15) and matched controls (n=15) using two versions of the Weather Prediction Task: Feedback (FB) and Paired-associate (PA). In the FB-based task, participants learned associations between cues and outcomes initially by guessing and subsequently through feedback indicating the correctness of the response. In the PA-based learning task, participants viewed the cue and its associated outcome simultaneously without overt response or feedback. In both versions, participants trained across 150 trials. Learning was assessed in a subsequent test without presentation of the outcome or corrective feedback. Results: The Dyslexia group exhibited impaired learning compared with the Control group on both the FB and PA versions of the Weather Prediction Task. Conclusions: The results indicate that the ability to learn by feedback is not selectively impaired in dyslexia. Rather, it seems that the probabilistic nature of the task, shared by the FB and PA versions, hampers learning in those with dyslexia. Results are discussed in light of procedural learning impairments among participants with dyslexia. PMID:25730732
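The probabilistic structure shared by the FB and PA versions can be illustrated by generating Weather Prediction Task-style trials in which the shown cues predict the outcome only probabilistically; the cue validities and outcome rule below are illustrative assumptions, not the study's materials:

```python
import random

# Illustrative cue validities, P(rain | cue shown); the classic task uses
# four cards whose combinations predict "rain" vs. "sun" probabilistically.
CUE_VALIDITY = [0.75, 0.60, 0.40, 0.25]

def make_trial(rng):
    """One trial: a random nonempty subset of cues, plus an outcome drawn
    from the mean validity of the cues actually shown."""
    shown = [rng.random() < 0.5 for _ in CUE_VALIDITY]
    if not any(shown):
        shown[rng.randrange(len(shown))] = True  # always show at least one cue
    validities = [v for v, s in zip(CUE_VALIDITY, shown) if s]
    p_rain = sum(validities) / len(validities)
    outcome = "rain" if rng.random() < p_rain else "sun"
    return shown, outcome

rng = random.Random(0)
trials = [make_trial(rng) for _ in range(150)]  # 150 training trials, as in the study
print(trials[0])
```

Because the same cue pattern can precede either outcome, no trial is fully diagnostic; learning the cue-outcome statistics incrementally is what both the FB and PA versions demand.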

  12. Reasoning about Probabilistic Security Using Task-PIOAs

    NASA Astrophysics Data System (ADS)

    Jaggard, Aaron D.; Meadows, Catherine; Mislove, Michael; Segala, Roberto

    Task-structured probabilistic input/output automata (Task-PIOAs) are concurrent probabilistic automata that, among other things, have been used to provide a formal framework for the universal composability paradigms of protocol security. One of their advantages is that they allow one to distinguish high-level nondeterminism that can affect the outcome of the protocol from low-level choices, which cannot. We present an alternative approach to analyzing the structure of Task-PIOAs that relies on ordered sets. We focus on two of the components that are required to define and apply Task-PIOAs: discrete probability theory and automata theory. We believe our development gives insight into the structure of Task-PIOAs and how they can be utilized to model crypto-protocols. We illustrate our approach with an example from anonymity, an area that has not previously been addressed using Task-PIOAs. We model Chaum's Dining Cryptographers Protocol at a level that does not require cryptographic primitives in the analysis. We show via this example how our approach can leverage a proof of security in the case where a principal behaves deterministically to prove security when that principal behaves probabilistically.
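Chaum's Dining Cryptographers Protocol itself is compact enough to sketch: adjacent principals share coin flips, each announces the XOR of the two coins they see (flipped if they paid), and the XOR of all announcements reveals whether some cryptographer paid without identifying who. A minimal sketch of the protocol round (not the Task-PIOA formalization):

```python
import random
from functools import reduce

def dining_cryptographers(n, payer=None, rng=random):
    """Run one round of Chaum's protocol with n cryptographers.
    payer is the index of the paying cryptographer, or None if the
    outside party paid. Returns True iff the protocol concludes that
    a cryptographer paid."""
    coins = [rng.randrange(2) for _ in range(n)]  # coin i shared by i and i+1
    announcements = [
        coins[i] ^ coins[(i - 1) % n] ^ (1 if i == payer else 0)
        for i in range(n)
    ]
    # Each coin appears in exactly two announcements, so it cancels in
    # the global XOR, leaving only the payer's flip (if any).
    return reduce(lambda a, b: a ^ b, announcements) == 1

rng = random.Random(42)
print(dining_cryptographers(3, payer=1, rng=rng))    # a cryptographer paid
print(dining_cryptographers(3, payer=None, rng=rng))  # the outside party paid
```

The anonymity claim is exactly the property the abstract analyzes: the announcements' distribution is the same whichever cryptographer paid, so observers learn the disjunction but not the identity.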

  13. Frontal and Parietal Contributions to Probabilistic Association Learning

    PubMed Central

    Rushby, Jacqueline A.; Vercammen, Ans; Loo, Colleen; Short, Brooke

    2011-01-01

    Neuroimaging studies have shown both dorsolateral prefrontal (DLPFC) and inferior parietal cortex (iPARC) activation during probabilistic association learning. Whether these cortical brain regions are necessary for probabilistic association learning is presently unknown. Participants' ability to acquire probabilistic associations was assessed during disruptive 1 Hz repetitive transcranial magnetic stimulation (rTMS) of the left DLPFC, left iPARC, and sham using a crossover single-blind design. On subsequent sessions, performance improved relative to baseline except during DLPFC rTMS that disrupted the early acquisition beneficial effect of prior exposure. A second experiment examining rTMS effects on task-naive participants showed that neither DLPFC rTMS nor sham influenced naive acquisition of probabilistic associations. A third experiment examining consecutive administration of the probabilistic association learning test revealed early trial interference from previous exposure to different probability schedules. These experiments, showing disrupted acquisition of probabilistic associations by rTMS only during subsequent sessions with an intervening night's sleep, suggest that the DLPFC may facilitate early access to learned strategies or prior task-related memories via consolidation. Although neuroimaging studies implicate DLPFC and iPARC in probabilistic association learning, the present findings suggest that early acquisition of the probabilistic cue-outcome associations in task-naive participants is not dependent on either region. PMID:21216842

  14. Memory Indexing: A Novel Method for Tracing Memory Processes in Complex Cognitive Tasks

    ERIC Educational Resources Information Center

    Renkewitz, Frank; Jahn, Georg

    2012-01-01

    We validate an eye-tracking method applicable for studying memory processes in complex cognitive tasks. The method is tested with a task on probabilistic inferences from memory. It provides valuable data on the time course of processing, thus clarifying previous results on heuristic probabilistic inference. Participants learned cue values of…

  15. Exploration of Advanced Probabilistic and Stochastic Design Methods

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.

    2003-01-01

    The primary objective of the three-year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in the areas of design decision-making, probabilistic modeling, and optimization. The specific focus of the three-year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on comparing and contrasting the three methods researched: the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, as contained in Task B, was focused upon exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regard to identification of research needs in the baseline method through implementation exercises. The end result of Task B was the documentation of the evolution of the method over time and a technology transfer to the sponsor regarding the method, such that an initial capability for execution could be obtained by the sponsor. Specifically, the result of year 3 efforts was the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step.
For both research tasks, sample files and tutorials are attached in electronic form with the enclosed CD.
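Of the three methods compared in Task A, Dempster-Shafer theory is the most self-contained to illustrate: independent bodies of evidence over the same frame of discernment are fused with Dempster's rule of combination. A sketch with invented mass assignments over a two-technology frame (not material from the grant report):

```python
def combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets; conflicting mass K is normalized out."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset("A"), frozenset("B")
theta = A | B  # the whole frame: technology A or technology B
m1 = {A: 0.6, theta: 0.4}  # expert 1: evidence favoring technology A
m2 = {B: 0.3, theta: 0.7}  # expert 2: weaker evidence favoring B
fused = combine(m1, m2)
print({"".join(sorted(k)): round(v, 3) for k, v in fused.items()})
```

The mass left on the full frame `theta` is what distinguishes D-S theory from a plain probabilistic treatment: it represents uncertainty that is committed to neither option.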

  16. Improved probabilistic inference as a general learning mechanism with action video games.

    PubMed

    Green, C Shawn; Pouget, Alexandre; Bavelier, Daphne

    2010-09-14

    Action video game play benefits performance in an array of sensory, perceptual, and attentional tasks that go well beyond the specifics of game play [1-9]. That a training regimen may induce improvements in so many different skills is notable because the majority of studies on training-induced learning report improvements on the trained task but limited transfer to other, even closely related, tasks ([10], but see also [11-13]). Here we ask whether improved probabilistic inference may explain such broad transfer. By using a visual perceptual decision-making task [14, 15], the present study shows for the first time that action video game experience does indeed improve probabilistic inference. A neural model of this task [16] establishes how changing a single parameter, namely the strength of the connections between the neural layer providing the momentary evidence and the layer integrating the evidence over time, captures improvements in action gamers' behavior. These results were established in a visual task, but also in a novel auditory task, indicating generalization across modalities. Thus, improved probabilistic inference provides a general mechanism for why action video game playing enhances performance in a wide variety of tasks. In addition, this mechanism may serve as a signature of training regimens that are likely to produce transfer of learning. Copyright © 2010 Elsevier Ltd. All rights reserved.
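The single-parameter account — stronger connections from the evidence layer to the integration layer — can be caricatured as a higher gain on the momentary signal feeding a bounded accumulator. The sketch below assumes fixed integrator noise and made-up parameters; it is not the neural model of [16]:

```python
import random

def run_trial(gain, drift=0.1, noise=1.0, bound=5.0, rng=random):
    """Accumulate gain-amplified signal plus fixed integrator noise until
    a decision bound is hit; True means the correct bound was reached."""
    x = 0.0
    while abs(x) < bound:
        x += gain * drift + rng.gauss(0.0, noise)
    return x > 0

rng = random.Random(1)
accs = {}
for gain in (1.0, 2.0):  # weaker vs. stronger evidence-to-integrator connections
    accs[gain] = sum(run_trial(gain, rng=rng) for _ in range(500)) / 500
    print(f"gain {gain}: accuracy {accs[gain]:.2f}")
```

Under these assumptions the higher gain raises the signal-to-noise ratio of each accumulated sample, so one parameter change improves accuracy across any task with this accumulate-to-bound structure, which is the intuition behind the claimed broad transfer.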

  17. A Flexible Mechanism of Rule Selection Enables Rapid Feature-Based Reinforcement Learning

    PubMed Central

    Balcarras, Matthew; Womelsdorf, Thilo

    2016-01-01

    Learning in a new environment is influenced by prior learning and experience. Correctly applying a rule that maps a context to stimuli, actions, and outcomes enables faster learning and better outcomes compared to relying on strategies for learning that are ignorant of task structure. However, it is often difficult to know when and how to apply learned rules in new contexts. In our study we explored how subjects employ different strategies for learning the relationship between stimulus features and positive outcomes in a probabilistic task context. We test the hypothesis that task-naive subjects will show enhanced learning of feature-specific reward associations by switching to the use of an abstract rule that associates stimuli by feature type and restricts selections to that dimension. To test this hypothesis we designed a decision-making task where subjects receive probabilistic feedback following choices between pairs of stimuli. In the task, trials are grouped in two contexts by blocks: in one type of block there is no unique relationship between a specific feature dimension (stimulus shape or color) and positive outcomes, and following an uncued transition, alternating blocks have outcomes that are linked to either stimulus shape or color. Two-thirds of subjects (n = 22/32) exhibited behavior that was best fit by a hierarchical feature-rule model. Supporting the prediction of the model mechanism, these subjects showed significantly enhanced performance in feature-reward blocks, and rapidly switched their choice strategy to using abstract feature rules when reward contingencies changed. Choice behavior of other subjects (n = 10/32) was fit by a range of alternative reinforcement learning models representing strategies that do not benefit from applying previously learned rules.
In summary, these results show that untrained subjects are capable of flexibly shifting between behavioral rules by leveraging simple model-free reinforcement learning and context-specific selections to drive responses. PMID:27064794
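Feature-based reinforcement learning of the kind modeled here can be sketched as Q-learning over feature values, where a stimulus's value is the sum of its features' learned values; all parameters and reward contingencies below are invented for illustration:

```python
import random

def feature_rl(trials, alpha=0.3, rng=random):
    """Q-learning over feature values: each stimulus is a (shape, color)
    tuple valued as the sum of its features' Q-values; returns the mean
    reward earned across trials."""
    q = {}
    rewards = 0.0
    for left, right, better_color in trials:
        value = lambda stim: sum(q.get(f, 0.0) for f in stim)
        choice = left if value(left) >= value(right) else right
        # Probabilistic feedback: the better color pays off 80% of the time.
        p = 0.8 if better_color in choice else 0.2
        reward = 1.0 if rng.random() < p else 0.0
        rewards += reward
        for f in choice:  # credit every feature of the chosen stimulus
            q[f] = q.get(f, 0.0) + alpha * (reward - q.get(f, 0.0))
    return rewards / len(trials)

rng = random.Random(7)
shapes = ["circle", "square"]
trials = [((rng.choice(shapes), "red"), (rng.choice(shapes), "blue"), "red")
          for _ in range(200)]
rate = feature_rl(trials, rng=rng)
print(f"mean reward with feature-based learning: {rate:.2f}")
```

Because credit accrues to the shared color feature rather than to whole stimuli, learning generalizes across stimuli within a block, which is the advantage the hierarchical feature-rule model captures over object-based model-free learners.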

  18. Probabilistic brains: knowns and unknowns

    PubMed Central

    Pouget, Alexandre; Beck, Jeffrey M; Ma, Wei Ji; Latham, Peter E

    2015-01-01

    There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference. PMID:23955561

  19. The composite load spectra project

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H.; Kurth, R. E.

    1990-01-01

    Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.

  20. Modeling the Evolution of Beliefs Using an Attentional Focus Mechanism

    PubMed Central

    Marković, Dimitrije; Gläscher, Jan; Bossaerts, Peter; O’Doherty, John; Kiebel, Stefan J.

    2015-01-01

    For making decisions in everyday life we often have first to infer the set of environmental features that are relevant for the current task. Here we investigated the computational mechanisms underlying the evolution of beliefs about the relevance of environmental features in a dynamical and noisy environment. For this purpose we designed a probabilistic Wisconsin card sorting task (WCST) with belief solicitation, in which subjects were presented with stimuli composed of multiple visual features. At each moment in time a particular feature was relevant for obtaining reward, and participants had to infer which feature was relevant and report their beliefs accordingly. To test the hypothesis that attentional focus modulates the belief update process, we derived and fitted several probabilistic and non-probabilistic behavioral models, which either incorporate a dynamical model of attentional focus, in the form of a hierarchical winner-take-all neuronal network, or a diffusive model, without attention-like features. We used Bayesian model selection to identify the most likely generative model of subjects’ behavior and found that attention-like features in the behavioral model are essential for explaining subjects’ responses. Furthermore, we demonstrate a method for integrating both connectionist and Bayesian models of decision making within a single framework that allowed us to infer hidden belief processes of human subjects. PMID:26495984
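Bayesian model selection at the single-subject level is commonly approximated by comparing information criteria such as BIC across fitted behavioral models; a sketch with invented log-likelihoods and parameter counts (the study's actual procedure may differ):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion; lower values indicate a better
    trade-off between fit and model complexity."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Invented fits for one subject over 300 trials:
models = {
    "attention (winner-take-all)": bic(-180.0, n_params=4, n_obs=300),
    "diffusive (no attention)":    bic(-205.0, n_params=3, n_obs=300),
}
best = min(models, key=models.get)
print(f"selected model: {best}")
```

Here the attention model's better likelihood outweighs its extra parameter, mirroring the paper's conclusion that attention-like features are essential to explain the responses.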

  21. Perceptual-motor skill learning in Gilles de la Tourette syndrome. Evidence for multiple procedural learning and memory systems.

    PubMed

    Marsh, Rachel; Alexander, Gerianne M; Packard, Mark G; Zhu, Hongtu; Peterson, Bradley S

    2005-01-01

    Procedural learning and memory systems likely comprise several skills that are differentially affected by various illnesses of the central nervous system, suggesting their relative functional independence and reliance on differing neural circuits. Gilles de la Tourette syndrome (GTS) is a movement disorder that involves disturbances in the structure and function of the striatum and related circuitry. Recent studies suggest that patients with GTS are impaired in performance of a probabilistic classification task that putatively involves the acquisition of stimulus-response (S-R)-based habits. Assessing the learning of perceptual-motor skills and probabilistic classification in the same samples of GTS and healthy control subjects may help to determine whether these various forms of procedural (habit) learning rely on the same or differing neuroanatomical substrates and whether those substrates are differentially affected in persons with GTS. Therefore, we assessed perceptual-motor skill learning using the pursuit-rotor and mirror tracing tasks in 50 patients with GTS and 55 control subjects who had previously been compared at learning a task of probabilistic classifications. The GTS subjects did not differ from the control subjects in performance of either the pursuit rotor or mirror-tracing tasks, although they were significantly impaired in the acquisition of a probabilistic classification task. In addition, learning on the perceptual-motor tasks was not correlated with habit learning on the classification task in either the GTS or healthy control subjects. These findings suggest that the differing forms of procedural learning are dissociable both functionally and neuroanatomically. The specific deficits in the probabilistic classification form of habit learning in persons with GTS are likely to be a consequence of disturbances in specific corticostriatal circuits, but not the same circuits that subserve the perceptual-motor form of habit learning.

  2. Magnetic Tunnel Junction Mimics Stochastic Cortical Spiking Neurons

    NASA Astrophysics Data System (ADS)

    Sengupta, Abhronil; Panda, Priyadarshini; Wijesinghe, Parami; Kim, Yusung; Roy, Kaushik

    2016-07-01

    Brain-inspired computing architectures attempt to mimic the computations performed in the neurons and the synapses in the human brain in order to achieve its efficiency in learning and cognitive tasks. In this work, we demonstrate the mapping of the probabilistic spiking nature of pyramidal neurons in the cortex to the stochastic switching behavior of a Magnetic Tunnel Junction in the presence of thermal noise. We present results to illustrate the efficiency of neuromorphic systems based on such probabilistic neurons for pattern recognition tasks in the presence of lateral inhibition and homeostasis. Such stochastic MTJ neurons can also potentially provide a direct mapping to the probabilistic computing elements in Belief Networks for performing regenerative tasks.
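
    The mapping can be illustrated with a minimal sketch: the input drive sets a sigmoidal switching probability, and each "spike" is a Bernoulli draw, analogous to thermally assisted MTJ switching. Function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def mtj_neuron_spike(inputs, weights, beta=2.0, rng=None):
    """Stochastic neuron sketch: the weighted input drive sets a
    sigmoidal switching probability, mimicking thermally driven MTJ
    switching. All names and parameters are illustrative."""
    rng = rng or np.random.default_rng(0)
    drive = float(np.dot(weights, inputs))
    p_switch = 1.0 / (1.0 + np.exp(-beta * drive))  # sigmoidal switching probability
    return rng.random() < p_switch, p_switch
```

    With zero net drive the switching probability is 0.5; a strong positive drive pushes it toward deterministic firing.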

  3. Adolescents' Heightened Risk-Seeking in a Probabilistic Gambling Task

    ERIC Educational Resources Information Center

    Burnett, Stephanie; Bault, Nadege; Coricelli, Giorgio; Blakemore, Sarah-Jayne

    2010-01-01

    This study investigated adolescent males' decision-making under risk, and the emotional response to decision outcomes, using a probabilistic gambling task designed to evoke counterfactually mediated emotions (relief and regret). Participants were 20 adolescents (aged 9-11), 26 young adolescents (aged 12-15), 20 mid-adolescents (aged 15-18) and 17…

  4. Young children do not succeed in choice tasks that imply evaluating chances.

    PubMed

    Girotto, Vittorio; Fontanari, Laura; Gonzalez, Michel; Vallortigara, Giorgio; Blaye, Agnès

    2016-07-01

    Preverbal infants manifest probabilistic intuitions in their reactions to the outcomes of simple physical processes and in their choices. Their ability conflicts with the evidence that, before the age of about 5 years, children's verbal judgments do not reveal probability understanding. To assess these conflicting results, three studies tested 3-5-year-olds on choice tasks on which infants perform successfully. The results showed that children of all age groups made optimal choices in tasks that did not require forming probabilistic expectations. In probabilistic tasks, however, only 5-year-olds made optimal choices. Younger children performed at random and/or were guided by superficial heuristics. These results suggest caution in interpreting infants' ability to evaluate chance, and indicate that the development of this ability may not follow a linear trajectory. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. I Plan Therefore I Choose: Free-Choice Bias Due to Prior Action-Probability but Not Action-Value

    PubMed Central

    Suriya-Arunroj, Lalitta; Gail, Alexander

    2015-01-01

    According to an emerging view, decision-making and motor planning are tightly entangled at the level of neural processing. Choice is influenced not only by the values associated with different options, but also biased by other factors. Here we test the hypothesis that preliminary action planning can induce choice biases gradually and independently of objective value when planning overlaps with one of the potential action alternatives. Subjects performed center-out reaches obeying either a clockwise or counterclockwise cue-response rule in two tasks. In the probabilistic task, a pre-cue indicated the probability of each of the two potential rules to become valid. When the subsequent rule-cue unambiguously indicated which of the pre-cued rules was actually valid (instructed trials), subjects responded faster to rules pre-cued with higher probability. When subjects were allowed to choose freely between two equally rewarded rules (choice trials) they chose the originally more likely rule more often and faster, despite the lack of an objective advantage in selecting this target. In the amount task, the pre-cue indicated the amount of potential reward associated with each rule. Subjects responded faster to rules pre-cued with higher reward amount in instructed trials of the amount task, equivalent to the more likely rule in the probabilistic task. Yet, in contrast, subjects showed hardly any choice bias and no increase in response speed in favor of the original high-reward target in the choice trials of the amount task. We conclude that free-choice behavior is robustly biased when predictability encourages the planning of one of the potential responses, while prior reward expectations without action planning do not induce such strong bias. 
Our results provide behavioral evidence for distinct contributions of expected value and action planning in decision-making and a tight interdependence of motor planning and action selection, supporting the idea that the underlying neural mechanisms overlap. PMID:26635565

  6. A Joint Gaussian Process Model for Active Visual Recognition with Expertise Estimation in Crowdsourcing

    PubMed Central

    Long, Chengjiang; Hua, Gang; Kapoor, Ashish

    2015-01-01

    We present a noise resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with two levels of flip models. Expectation propagation is adopted for efficient approximate Bayesian inference of our probabilistic model for classification, based on which, a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each individual labeler. The probabilistic nature of our model immediately allows the adoption of the prediction entropy for active selection of data samples to be labeled, and active selection of high quality labelers based on their estimated expertise to label the data. We apply the proposed model for four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from the Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the proposed model with the Predictive Active Set Selection Method to speed up the active learning system, whose efficacy is verified by conducting experiments on the first three datasets. The results show our extended model can not only preserve a higher accuracy, but also achieve a higher efficiency. PMID:26924892
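
    The entropy-based active selection described above can be sketched independently of the full Gaussian process model: given the classifier's predictive probabilities on unlabeled samples, select the ones with the highest predictive entropy. This is a generic sketch, not the authors' code.

```python
import numpy as np

def predictive_entropy(p):
    # binary predictive entropy; p = P(y = 1 | x)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def select_most_uncertain(probs, k=1):
    """Return indices of the k unlabeled samples with the highest
    predictive entropy (most uncertain, hence most informative)."""
    h = predictive_entropy(np.asarray(probs, dtype=float))
    return np.argsort(-h)[:k]
```

    A sample predicted at 0.5 has maximal entropy and is queried first; confidently classified samples (near 0 or 1) are deferred.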

  7. Probabilistic Motor Sequence Yields Greater Offline and Less Online Learning than Fixed Sequence

    PubMed Central

    Du, Yue; Prashad, Shikha; Schoenbrun, Ilana; Clark, Jane E.

    2016-01-01

    It is well acknowledged that motor sequences can be learned quickly through online learning. Subsequently, the initial acquisition of a motor sequence is boosted or consolidated by offline learning. However, little is known about whether offline learning can drive the fast learning of motor sequences (i.e., initial sequence learning in the first training session). To examine offline learning in the fast learning stage, we asked four groups of young adults to perform the serial reaction time (SRT) task with either a fixed or probabilistic sequence and with or without preliminary knowledge (PK) of the presence of a sequence. The sequence and PK were manipulated to emphasize either procedural (probabilistic sequence; no preliminary knowledge (NPK)) or declarative (fixed sequence; with PK) memory that were found to either facilitate or inhibit offline learning. In the SRT task, there were six learning blocks with a 2 min break between each consecutive block. Throughout the session, stimuli followed the same fixed or probabilistic pattern except in Block 5, in which stimuli appeared in a random order. We found that PK facilitated the learning of a fixed sequence, but not a probabilistic sequence. In addition to overall learning measured by the mean reaction time (RT), we examined the progressive changes in RT within and between blocks (i.e., online and offline learning, respectively). It was found that the two groups who performed the fixed sequence, regardless of PK, showed greater online learning than the other two groups who performed the probabilistic sequence. The groups who performed the probabilistic sequence, regardless of PK, did not display online learning, as indicated by a decline in performance within the learning blocks. However, they did demonstrate remarkably greater offline improvement in RT, which suggests that they learned the probabilistic sequence offline. 
These results suggest that in the SRT task, the fast acquisition of a motor sequence is driven by concurrent online and offline learning. In addition, as the acquisition of a probabilistic sequence requires greater procedural memory compared to the acquisition of a fixed sequence, our results suggest that offline learning is more likely to take place in a procedural sequence learning task. PMID:26973502
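
    The within-block (online) and between-block (offline) RT changes described above can be computed as in this sketch; the paper's exact trial binning may differ, so treat this as illustrative.

```python
def online_offline_learning(block_rts):
    """block_rts: per-block reaction-time sequences (e.g. ms), in block
    order. Online learning = RT drop within a block (first minus last
    value); offline learning = RT drop across the break (last value of
    block n minus first value of block n+1). Positive = improvement."""
    online = [rt[0] - rt[-1] for rt in block_rts]
    offline = [block_rts[i][-1] - block_rts[i + 1][0]
               for i in range(len(block_rts) - 1)]
    return online, offline
```

    A group that slows within blocks but starts each new block faster than it ended the last shows negative online but positive offline learning, the pattern reported for the probabilistic-sequence groups.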

  8. Probabilistic Motor Sequence Yields Greater Offline and Less Online Learning than Fixed Sequence.

    PubMed

    Du, Yue; Prashad, Shikha; Schoenbrun, Ilana; Clark, Jane E

    2016-01-01

    It is well acknowledged that motor sequences can be learned quickly through online learning. Subsequently, the initial acquisition of a motor sequence is boosted or consolidated by offline learning. However, little is known about whether offline learning can drive the fast learning of motor sequences (i.e., initial sequence learning in the first training session). To examine offline learning in the fast learning stage, we asked four groups of young adults to perform the serial reaction time (SRT) task with either a fixed or probabilistic sequence and with or without preliminary knowledge (PK) of the presence of a sequence. The sequence and PK were manipulated to emphasize either procedural (probabilistic sequence; no preliminary knowledge (NPK)) or declarative (fixed sequence; with PK) memory that were found to either facilitate or inhibit offline learning. In the SRT task, there were six learning blocks with a 2 min break between each consecutive block. Throughout the session, stimuli followed the same fixed or probabilistic pattern except in Block 5, in which stimuli appeared in a random order. We found that PK facilitated the learning of a fixed sequence, but not a probabilistic sequence. In addition to overall learning measured by the mean reaction time (RT), we examined the progressive changes in RT within and between blocks (i.e., online and offline learning, respectively). It was found that the two groups who performed the fixed sequence, regardless of PK, showed greater online learning than the other two groups who performed the probabilistic sequence. The groups who performed the probabilistic sequence, regardless of PK, did not display online learning, as indicated by a decline in performance within the learning blocks. However, they did demonstrate remarkably greater offline improvement in RT, which suggests that they learned the probabilistic sequence offline. 
These results suggest that in the SRT task, the fast acquisition of a motor sequence is driven by concurrent online and offline learning. In addition, as the acquisition of a probabilistic sequence requires greater procedural memory compared to the acquisition of a fixed sequence, our results suggest that offline learning is more likely to take place in a procedural sequence learning task.

  9. Learning to choose: Cognitive aging and strategy selection learning in decision making.

    PubMed

    Mata, Rui; von Helversen, Bettina; Rieskamp, Jörg

    2010-06-01

    Decision makers often have to learn from experience. In these situations, people must use the available feedback to select the appropriate decision strategy. How does the ability to select decision strategies on the basis of experience change with age? We examined younger and older adults' strategy selection learning in a probabilistic inference task using a computational model of strategy selection learning. Older adults showed poorer decision performance compared with younger adults. In particular, older adults performed poorly in an environment favoring the use of a more cognitively demanding strategy. The results suggest that the impact of cognitive aging on strategy selection learning depends on the structure of the decision environment. (c) 2010 APA, all rights reserved
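
    A reinforcement-based strategy selection process of the kind modeled here can be sketched as follows: each decision strategy carries an expectancy that is reinforced by the payoffs it produces, and a strategy is chosen with probability proportional to its expectancy. Names and parameter values are illustrative, not the study's exact model.

```python
import random

def choose_strategy(expectancies, rng):
    """Select a strategy with probability proportional to its
    (positive) expectancy."""
    total = sum(expectancies.values())
    r = rng.random() * total
    for name, e in expectancies.items():
        r -= e
        if r <= 0:
            return name
    return name  # fallback for floating-point edge cases

def update_expectancy(expectancies, strategy, payoff, phi=0.2):
    """Reinforce the strategy that was just used by its payoff
    (phi is an illustrative learning-rate parameter)."""
    expectancies[strategy] += phi * payoff
    return expectancies
```

    Over trials, strategies that pay off in a given environment accumulate expectancy and are selected more often, which is the learning process whose age differences the study examines.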

  10. Selective effects of 5-HT2C receptor modulation on performance of a novel valence-probe visual discrimination task and probabilistic reversal learning in mice.

    PubMed

    Phillips, Benjamin U; Dewan, Sigma; Nilsson, Simon R O; Robbins, Trevor W; Heath, Christopher J; Saksida, Lisa M; Bussey, Timothy J; Alsiö, Johan

    2018-04-22

    Dysregulation of the serotonin (5-HT) system is a pathophysiological component in major depressive disorder (MDD), a condition closely associated with abnormal emotional responsivity to positive and negative feedback. However, the precise mechanism through which 5-HT tone biases feedback responsivity remains unclear. 5-HT2C receptors (5-HT2CRs) are closely linked with aspects of depressive symptomatology, including abnormalities in reinforcement processes and response to stress. Thus, we aimed to determine the impact of 5-HT2CR function on response to feedback in biased reinforcement learning. We used two touchscreen assays designed to assess the impact of positive and negative feedback on probabilistic reinforcement in mice, including a novel valence-probe visual discrimination (VPVD) and a probabilistic reversal learning procedure (PRL). Systemic administration of a 5-HT2CR agonist and antagonist resulted in selective changes in the balance of feedback sensitivity bias on these tasks. Specifically, on VPVD, SB 242084, the 5-HT2CR antagonist, impaired acquisition of a discrimination dependent on appropriate integration of positive and negative feedback. On PRL, SB 242084 at 1 mg/kg resulted in changes in behaviour consistent with reduced sensitivity to positive feedback. In contrast, WAY 163909, the 5-HT2CR agonist, resulted in changes associated with increased sensitivity to positive feedback and decreased sensitivity to negative feedback. These results suggest that 5-HT2CRs tightly regulate feedback sensitivity bias in mice with consequent effects on learning and cognitive flexibility and specify a framework for the influence of 5-HT2CRs on sensitivity to reinforcement.

  11. The cerebellum and decision making under uncertainty.

    PubMed

    Blackwood, Nigel; Ffytche, Dominic; Simmons, Andrew; Bentall, Richard; Murray, Robin; Howard, Robert

    2004-06-01

    This study aimed to identify the neural basis of probabilistic reasoning, a type of inductive inference that aids decision making under conditions of uncertainty. Eight normal subjects performed two separate two-alternative-choice tasks (the balls in a bottle and personality survey tasks) while undergoing functional magnetic resonance imaging (fMRI). The experimental conditions within each task were chosen so that they differed only in their requirement to make a decision under conditions of uncertainty (probabilistic reasoning and frequency determination required) or under conditions of certainty (frequency determination required). The same visual stimuli and motor responses were used in the experimental conditions. We provide evidence that the neo-cerebellum, in conjunction with the premotor cortex, inferior parietal lobule and medial occipital cortex, mediates the probabilistic inferences that guide decision making under uncertainty. We hypothesise that the neo-cerebellum constructs internal working models of uncertain events in the external world, and that such probabilistic models subserve the predictive capacity central to induction. Copyright 2004 Elsevier B.V.

  12. Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model.

    PubMed

    Stocco, Andrea; Murray, Nicole L; Yamasaki, Brianna L; Renno, Taylor J; Nguyen, Jimmy; Prat, Chantel S

    2017-07-01

    Cognitive control is thought to be made possible by the activity of the prefrontal cortex, which selectively uses task-specific representations to bias the selection of task-appropriate responses over more automated, but inappropriate, ones. Recent models have suggested, however, that prefrontal representations are in turn controlled by the basal ganglia. In particular, neurophysiological considerations suggest that the basal ganglia's indirect pathway plays a pivotal role in preventing irrelevant information from being incorporated into a task, thus reducing response interference due to the processing of inappropriate stimulus dimensions. Here, we test this hypothesis by showing that individual differences in a non-verbal cognitive control task (the Simon task) are correlated with performance on a decision-making task (the Probabilistic Stimulus Selection task) that tracks the contribution of the indirect pathway. Specifically, the higher the effect of the indirect pathway, the smaller the behavioral cost associated with suppressing interference in incongruent trials. Additionally, it was found that this correlation was driven by individual differences in incongruent trials only (with little effect on congruent ones) and was specific to the indirect pathway (with almost no correlation with the effect of the direct pathway). Finally, it is shown that this pattern of results is precisely what is predicted when the competitive dynamics of the basal ganglia are added to the selective attention component of a simple model of the Simon task, thus showing that our experimental results can be fully explained by our initial hypothesis. Published by Elsevier B.V.

  13. Stimulus discriminability may bias value-based probabilistic learning.

    PubMed

    Schutte, Iris; Slagter, Heleen A; Collins, Anne G E; Frank, Michael J; Kenemans, J Leon

    2017-01-01

    Reinforcement learning tasks are often used to assess participants' tendency to learn more from the positive or the negative consequences of their actions. However, this assessment often requires comparing learning performance across different task conditions, which may differ in the relative salience or discriminability of the stimuli associated with more and less rewarding outcomes, respectively. To address this issue, in a first set of studies, participants were subjected to two versions of a common probabilistic learning task. The two versions differed with respect to the stimulus (Hiragana) characters associated with reward probability. The assignment of character to reward probability was fixed within version but reversed between versions. We found that performance was highly influenced by task version, which could be explained by the relative perceptual discriminability of characters assigned to high or low reward probabilities, as assessed in a separate discrimination experiment. Participants were more reliable in selecting rewarding characters that were more discriminable, leading to differences in learning curves and their sensitivity to reward probability. This difference in experienced reinforcement history was accompanied by performance biases in a test phase assessing the ability to learn from positive vs. negative outcomes. In a subsequent large-scale web-based experiment, this impact of task version on learning and test measures was replicated and extended. Collectively, these findings imply a key role for perceptual factors in guiding reward learning and underscore the need to control stimulus discriminability when making inferences about individual differences in reinforcement learning.
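
    Learning asymmetries of this kind are commonly formalized with separate learning rates for positive and negative prediction errors; a minimal sketch of that idea (not the specific model used in the paper):

```python
def asymmetric_q_update(q, choice, reward, alpha_gain=0.3, alpha_loss=0.1):
    """One Q-learning update with separate learning rates for gains
    and losses; learning more from positive than negative outcomes
    corresponds to alpha_gain > alpha_loss (values illustrative)."""
    delta = reward - q[choice]                      # reward prediction error
    alpha = alpha_gain if delta > 0 else alpha_loss
    q[choice] += alpha * delta
    return q
```

    Fitting the two rates per participant is how a positive- or negative-learning bias is usually quantified; the studies above show such estimates can be confounded by stimulus discriminability.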

  14. Précis of bayesian rationality: The probabilistic approach to human reasoning.

    PubMed

    Oaksford, Mike; Chater, Nick

    2009-02-01

    According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic--the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards. Bayesian Rationality argues that rationality is defined instead by the ability to reason about uncertainty. Although people are typically poor at numerical reasoning about probability, human thought is sensitive to subtle patterns of qualitative Bayesian, probabilistic reasoning. In Chapters 1-4 of Bayesian Rationality (Oaksford & Chater 2007), the case is made that cognition in general, and human everyday reasoning in particular, is best viewed as solving probabilistic, rather than logical, inference problems. In Chapters 5-7 the psychology of "deductive" reasoning is tackled head-on: It is argued that purportedly "logical" reasoning problems, revealing apparently irrational behaviour, are better understood from a probabilistic point of view. Data from conditional reasoning, Wason's selection task, and syllogistic inference are captured by recasting these problems probabilistically. The probabilistic approach makes a variety of novel predictions which have been experimentally confirmed. The book considers the implications of this work, and the wider "probabilistic turn" in cognitive science and artificial intelligence, for understanding human rationality.

  15. Probabilistic Cue Combination: Less Is More

    ERIC Educational Resources Information Center

    Yurovsky, Daniel; Boyer, Ty W.; Smith, Linda B.; Yu, Chen

    2013-01-01

    Learning about the structure of the world requires learning probabilistic relationships: rules in which cues do not predict outcomes with certainty. However, in some cases, the ability to track probabilistic relationships is a handicap, leading adults to perform non-normatively in prediction tasks. For example, in the "dilution effect,"…

  16. Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
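
    The iterative perturbation scheme can be sketched in a few lines: factorize the unperturbed stiffness K0 once and reuse it to solve the perturbed system (K0 + dK) u = f by fixed-point iteration. This toy version uses an explicit inverse as a stand-in for the stored factorization and assumes dK is a small perturbation.

```python
import numpy as np

def perturbation_solve(K0, dK, f, tol=1e-10, max_iter=100):
    """Solve (K0 + dK) u = f by iterative perturbation: iterate
    u <- K0^{-1} (f - dK u), reusing the 'factorized' unperturbed
    stiffness as the preconditioner. Converges when dK is small
    relative to K0 (sketch of the idea, not the NASA code)."""
    K0_inv = np.linalg.inv(K0)  # stand-in for a stored factorization
    u = K0_inv @ f              # unperturbed solution as starting guess
    for _ in range(max_iter):
        u_new = K0_inv @ (f - dK @ u)
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u
```

    The payoff is that each random realization of dK reuses the same factorization, avoiding explicit derivatives of element matrices.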

  17. Beta-adrenoreceptor blockade abolishes atomoxetine-induced risk taking.

    PubMed

    Yang, Fan Nils; Pan, Jing Samantha; Li, Xinwang

    2016-01-01

    Clinical studies have shown that patients with exaggerated risk-taking tendencies have high baseline levels of norepinephrine. In this work, we systemically manipulated norepinephrine levels in rats and studied their behavioral changes in a probabilistic discounting task, which is a paradigm for gauging risk taking. This study aims to explore the effects of the selective norepinephrine reuptake inhibitor (atomoxetine at doses of 0.6, 1.0 and 1.8 mg/kg), and receptor-selective antagonists (propranolol at a single dose of 1.0 mg/kg, and prazosin at a single dose of 0.1 mg/kg), on risk taking using a probabilistic discounting task. In this task, there were two levers available to rats: pressing the 'small/certain' lever guaranteed a single food pellet, and pressing the 'large/risky' lever yielded either four pellets or none. The probability of receiving four food pellets decreased across the four experimental blocks from 100% to 12.5%. Atomoxetine increased the tendency to choose the large/risky lever. It significantly reduced the lose-shift effect (i.e. pressing a different lever after losing a trial), but did not affect the win-stay effect (i.e. pressing the same lever after winning a trial). Furthermore, co-administration of the beta-adrenoreceptor antagonist, propranolol, eliminated the effects of atomoxetine on risk taking and the lose-shift effect; but co-administration of the alpha1-adrenoreceptor antagonist, prazosin, did not. Atomoxetine boosted NE levels and increased risk taking. This was because atomoxetine decreased rats' sensitivity to losses. These effects were likely mediated by beta-adrenoreceptors. Copyright © 2015 Elsevier Inc. All rights reserved.
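
    The win-stay and lose-shift effects reported here are simple trial-sequence statistics; a sketch of how they are typically computed from choice and outcome records (illustrative, not the study's analysis code):

```python
def win_stay_lose_shift(choices, wins):
    """choices: lever chosen on each trial; wins: whether each trial
    was rewarded. Returns (P(stay | previous win), P(shift | previous loss))."""
    stay_after_win = shift_after_loss = n_win = n_loss = 0
    for t in range(1, len(choices)):
        if wins[t - 1]:
            n_win += 1
            stay_after_win += choices[t] == choices[t - 1]
        else:
            n_loss += 1
            shift_after_loss += choices[t] != choices[t - 1]
    return (stay_after_win / n_win if n_win else float('nan'),
            shift_after_loss / n_loss if n_loss else float('nan'))
```

    A drug that reduces sensitivity to losses, as atomoxetine did here, shows up as a lower lose-shift proportion with an unchanged win-stay proportion.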

  18. Probabilistic Low-Rank Multitask Learning.

    PubMed

    Kong, Yu; Shao, Ming; Li, Kang; Fu, Yun

    2018-03-01

    In this paper, we consider the problem of learning multiple related tasks simultaneously with the goal of improving the generalization performance of individual tasks. The key challenge is to effectively exploit the shared information across multiple tasks as well as preserve the discriminative information for each individual task. To address this, we propose a novel probabilistic model for multitask learning (MTL) that can automatically balance between low-rank and sparsity constraints. The former assumes a low-rank structure of the underlying predictive hypothesis space to explicitly capture the relationship of different tasks and the latter learns the incoherent sparse patterns private to each task. We derive and perform inference via variational Bayesian methods. Experimental results on both regression and classification tasks on real-world applications demonstrate the effectiveness of the proposed method in dealing with the MTL problems.
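
    The low-rank plus sparse decomposition at the heart of the model can be illustrated with a point-estimate sketch using alternating proximal steps: singular-value shrinkage for the shared low-rank part, elementwise soft-thresholding for the task-private sparse part. The paper itself performs variational Bayesian inference rather than this simple scheme.

```python
import numpy as np

def svd_soft_threshold(M, tau):
    # proximal step for the nuclear norm: shrink singular values
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, lam):
    # proximal step for the l1 penalty: elementwise shrinkage
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

def decompose_tasks(W, tau=1.0, lam=0.5, n_iter=50):
    """Split a task-weight matrix W (tasks x features) into a low-rank
    part L (shared structure) plus a sparse part S (task-private
    patterns) by alternating proximal updates. Parameters illustrative."""
    L = np.zeros_like(W)
    S = np.zeros_like(W)
    for _ in range(n_iter):
        L = svd_soft_threshold(W - S, tau)
        S = soft_threshold(W - L, lam)
    return L, S
```

    The regularization weights tau and lam control the balance between the two constraints, the quantity the paper's probabilistic model learns automatically.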

  19. Judging Words by Their Covers and the Company They Keep: Probabilistic Cues Support Word Learning

    ERIC Educational Resources Information Center

    Lany, Jill

    2014-01-01

    Statistical learning may be central to lexical and grammatical development. The phonological and distributional properties of words provide probabilistic cues to their grammatical and semantic properties. Infants can capitalize on such probabilistic cues to learn grammatical patterns in listening tasks. However, infants often struggle to learn…

  20. Human performance across decision making, selective attention, and working memory tasks: Experimental data and computer simulations.

    PubMed

    Stocco, Andrea; Yamasaki, Brianna L; Prat, Chantel S

    2018-04-01

    This article describes the data analyzed in the paper "Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model" (Stocco et al., 2017) [1]. The data includes behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompasses individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data includes the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.

  1. Relevance feedback for CBIR: a new approach based on probabilistic feature weighting with positive and negative examples.

    PubMed

    Kherfi, Mohammed Lamine; Ziou, Djemel

    2006-04-01

    In content-based image retrieval, understanding the user's needs is a challenging task that requires integrating him in the process of retrieval. Relevance feedback (RF) has proven to be an effective tool for taking the user's judgement into account. In this paper, we present a new RF framework based on a feature selection algorithm that nicely combines the advantages of a probabilistic formulation with those of using both the positive example (PE) and the negative example (NE). Through interaction with the user, our algorithm learns the importance he assigns to image features, and then applies the results obtained to define similarity measures that correspond better to his judgement. The use of the NE allows images undesired by the user to be discarded, thereby improving retrieval accuracy. As for the probabilistic formulation of the problem, it presents a multitude of advantages and opens the door to more modeling possibilities that achieve a good feature selection. It makes it possible to cluster the query data into classes, choose the probability law that best models each class, model missing data, and support queries with multiple PE and/or NE classes. The basic principle of our algorithm is to assign more importance to features with a high likelihood and those which distinguish well between PE classes and NE classes. The proposed algorithm was validated separately and in image retrieval context, and the experiments show that it performs a good feature selection and contributes to improving retrieval effectiveness.
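
    The intuition of weighting features by how well they distinguish positive from negative examples can be captured in a toy scheme: a feature earns more weight when the two example sets are well separated along it relative to their spread. This is illustrative only; the paper's probabilistic formulation (class clustering, missing data, multiple PE/NE classes) is far richer.

```python
import numpy as np

def feature_weights(pos, neg, eps=1e-9):
    """Toy likelihood-style feature weighting from positive-example
    (pos) and negative-example (neg) feature matrices."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    sep = np.abs(pos.mean(0) - neg.mean(0))   # between-class separation
    spread = pos.std(0) + neg.std(0) + eps    # within-class spread
    w = sep / spread
    return w / w.sum()

def weighted_distance(q, x, w):
    # similarity measure adapted to the learned feature importances
    return np.sqrt(np.sum(w * (np.asarray(q) - np.asarray(x)) ** 2))
```

    Images are then ranked by the weighted distance, so features the user's feedback marks as discriminative dominate retrieval.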

  2. How Attention Can Create Synaptic Tags for the Learning of Working Memories in Sequential Tasks

    PubMed Central

    Rombouts, Jaldert O.; Bohte, Sander M.; Roelfsema, Pieter R.

    2015-01-01

    Intelligence is our ability to learn appropriate responses to new stimuli and situations. Neurons in association cortex are thought to be essential for this ability. During learning these neurons become tuned to relevant features and start to represent them with persistent activity during memory delays. This learning process is not well understood. Here we develop a biologically plausible learning scheme that explains how trial-and-error learning induces neuronal selectivity and working memory representations for task-relevant information. We propose that the response selection stage sends attentional feedback signals to earlier processing levels, forming synaptic tags at those connections responsible for the stimulus-response mapping. Globally released neuromodulators then interact with tagged synapses to determine their plasticity. The resulting learning rule endows neural networks with the capacity to create new working memory representations of task relevant information as persistent activity. It is remarkably generic: it explains how association neurons learn to store task-relevant information for linear as well as non-linear stimulus-response mappings, how they become tuned to category boundaries or analog variables, depending on the task demands, and how they learn to integrate probabilistic evidence for perceptual decisions. PMID:25742003
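
    The tag-then-modulate idea can be sketched as a two-factor update: attentional feedback combined with presynaptic activity sets synaptic tags on the connections responsible for the response, and a globally broadcast neuromodulatory signal later converts those tags into weight changes. This is a schematic rendering under assumed names, not the paper's full model.

```python
import numpy as np

def tagged_update(w, tags, pre, fb, delta, decay=0.9, lr=0.1):
    """Sketch of tag-then-modulate plasticity. pre: presynaptic
    activity; fb: attentional feedback to the postsynaptic units;
    delta: global neuromodulatory signal (e.g. a reward-prediction
    error). Names and form are illustrative."""
    tags = decay * tags + np.outer(fb, pre)  # tag synapses on the used pathway
    w = w + lr * delta * tags                # global signal gates plasticity
    return w, tags
```

    Because the tags persist across time steps, a later reward signal can still credit the synapses that produced an earlier stimulus-response mapping.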

  3. Relationship between Adolescent Risk Preferences on a Laboratory Task and Behavioral Measures of Risk-taking

    PubMed Central

    Rao, Uma; Sidhartha, Tanuj; Harker, Karen R.; Bidesi, Anup S.; Chen, Li-Ann; Ernst, Monique

    2010-01-01

    Purpose The goal of the study was to assess individual differences in risk-taking behavior among adolescents in the laboratory. A second aim was to evaluate whether the laboratory-based risk-taking behavior is associated with other behavioral and psychological measures of risk-taking. Methods Eighty-two adolescents with no personal history of psychiatric disorder completed a computerized decision-making task, the Wheel of Fortune (WOF). By offering choices between clearly defined probabilities and real monetary outcomes, this task assesses risk preferences when participants are confronted with potential rewards and losses. The participants also completed a variety of behavioral and psychological measures associated with risk-taking behavior. Results Performance on the task varied with the probability and anticipated outcomes. In the winning sub-task, participants selected the low probability-high magnitude reward (high-risk choice) less frequently than the high probability-low magnitude reward (low-risk choice). In the losing sub-task, participants selected the low probability-high magnitude loss more often than the high probability-low magnitude loss. On average, the selection of probabilistic rewards was optimal and similar to performance in adults. There were, however, individual differences in performance, and one-third of the adolescents made the high-risk choice more frequently than the low-risk choice when selecting a reward. After controlling for sociodemographic and psychological variables, high-risk choice on the winning task predicted “real-world” risk-taking behavior and substance-related problems. Conclusions These findings highlight individual differences in risk-taking behavior. Preliminary data on face validity of the WOF task suggest that it might be a valuable laboratory tool for studying behavioral and neurobiological processes associated with risk-taking behavior in adolescents. PMID:21257113
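
    The trade-off the WOF task poses, low-probability/high-magnitude versus high-probability/low-magnitude options, can be made concrete with expected values. The payoff amounts below are invented for illustration; the abstract does not give the actual task payoffs.

```python
def expected_value(p, magnitude):
    """Expected value of a single probabilistic outcome."""
    return p * magnitude

# Hypothetical winning sub-task pair (amounts are illustrative only):
high_risk = expected_value(0.25, 4.00)  # low probability, high magnitude
low_risk = expected_value(0.75, 1.00)   # high probability, low magnitude
# The two EVs (1.00 vs 0.75) quantify the trade-off a participant faces
# when the task pits probability against magnitude.
```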

  4. Spatially aggregated multiclass pattern classification in functional MRI using optimally selected functional brain areas.

    PubMed

    Zheng, Weili; Ackley, Elena S; Martínez-Ramón, Manel; Posse, Stefan

    2013-02-01

    In previous works, boosting aggregation of classifier outputs from discrete brain areas has been demonstrated to reduce dimensionality and improve the robustness and accuracy of functional magnetic resonance imaging (fMRI) classification. However, dimensionality reduction and classification of mixed activation patterns of multiple classes remain challenging. In the present study, the goals were (a) to reduce dimensionality by combining feature reduction at the voxel level and backward elimination of optimally aggregated classifiers at the region level, (b) to compare region selection for spatially aggregated classification using boosting and partial least squares regression methods and (c) to resolve mixed activation patterns using probabilistic prediction of individual tasks. Brain activation maps from interleaved visual, motor, auditory and cognitive tasks were segmented into 144 functional regions. Feature selection reduced the number of feature voxels by more than 50%, leaving 95 regions. The two aggregation approaches further reduced the number of regions to 30, resulting in more than 75% reduction of classification time and misclassification rates of less than 3%. Boosting and partial least squares (PLS) were compared to select the most discriminative and the most task correlated regions, respectively. Successful task prediction in mixed activation patterns was feasible within the first block of task activation in real-time fMRI experiments. This methodology is suitable for sparsifying activation patterns in real-time fMRI and for neurofeedback from distributed networks of brain activation. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Overview of the SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division Activities and Technical Projects

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division activities include identification and fulfillment of joint industry, government, and academia needs for development and implementation of RMSL technologies. Four projects in the Probabilistic Methods area and two in the area of RMSL have been identified. These are: (1) Evaluation of Probabilistic Technology - progress has been made toward the selection of probabilistic application cases. Future effort will focus on assessment of multiple probabilistic software packages in solving selected engineering problems using probabilistic methods. Relevance to Industry & Government - Case studies of typical problems encountering uncertainties, results of solutions to these problems run by different codes, and recommendations on which code is applicable for what problems; (2) Probabilistic Input Preparation - progress has been made in identifying problem cases such as those with no data, little data and sufficient data. Future effort will focus on developing guidelines for preparing input for probabilistic analysis, especially with no or little data. Relevance to Industry & Government - Too often, we get bogged down thinking we need a lot of data before we can quantify uncertainties. Not true. There are ways to do credible probabilistic analysis with little data; (3) Probabilistic Reliability - probabilistic reliability literature search has been completed along with what differentiates it from statistical reliability. Work on computation of reliability based on quantification of uncertainties in primitive variables is in progress. Relevance to Industry & Government - Correct reliability computations both at the component and system level are needed so one can design an item based on its expected usage and life span; (4) Real World Applications of Probabilistic Methods (PM) - A draft of volume 1 comprising aerospace applications has been released.
Volume 2, a compilation of real world applications of probabilistic methods with essential information demonstrating application type and time/cost savings by the use of probabilistic methods for generic applications is in progress. Relevance to Industry & Government - Too often, we say, 'The Proof is in the Pudding'. With help from many contributors, we hope to produce such a document. Problem is - not too many people are coming forward due to proprietary nature. So, we are asking to document only minimum information including problem description, what method used, did it result in any savings, and how much?; (5) Software Reliability - software reliability concept, program, implementation, guidelines, and standards are being documented. Relevance to Industry & Government - software reliability is a complex issue that must be understood & addressed in all facets of business in industry, government, and other institutions. We address issues, concepts, ways to implement solutions, and guidelines for maximizing software reliability; (6) Maintainability Standards - maintainability/serviceability industry standard/guidelines and industry best practices and methodologies used in performing maintainability/serviceability tasks are being documented. Relevance to Industry & Government - Any industry or government process, project, and/or tool must be maintained and serviced to realize the life and performance it was designed for. We address issues and develop guidelines for optimum performance & life.

  6. Distinct roles of dopamine and subthalamic nucleus in learning and probabilistic decision making.

    PubMed

    Coulthard, Elizabeth J; Bogacz, Rafal; Javed, Shazia; Mooney, Lucy K; Murphy, Gillian; Keeley, Sophie; Whone, Alan L

    2012-12-01

    Even simple behaviour requires us to make decisions based on combining multiple pieces of learned and new information. Making such decisions requires both learning the optimal response to each given stimulus as well as combining probabilistic information from multiple stimuli before selecting a response. Computational theories of decision making predict that learning individual stimulus-response associations and rapid combination of information from multiple stimuli are dependent on different components of basal ganglia circuitry. In particular, learning and retention of memory, required for optimal response choice, are significantly reliant on dopamine, whereas integrating information probabilistically is critically dependent upon functioning of the glutamatergic subthalamic nucleus (computing the 'normalization term' in Bayes' theorem). Here, we test these theories by investigating 22 patients with Parkinson's disease either treated with deep brain stimulation to the subthalamic nucleus and dopaminergic therapy or managed with dopaminergic therapy alone. We use computerized tasks that probe three cognitive functions-information acquisition (learning), memory over a delay and information integration when multiple pieces of sequentially presented information have to be combined. Patients performed the tasks ON or OFF deep brain stimulation and/or ON or OFF dopaminergic therapy. Consistent with the computational theories, we show that stopping dopaminergic therapy impairs memory for probabilistic information over a delay, whereas deep brain stimulation to the region of the subthalamic nucleus disrupts decision making when multiple pieces of acquired information must be combined. 
Furthermore, we found that when participants needed to update their decision on the basis of the last piece of information presented in the decision-making task, patients with deep brain stimulation of the subthalamic nucleus region did not slow down appropriately to revise their plan, a pattern of behaviour that mirrors the impulsivity described clinically in some patients with subthalamic nucleus deep brain stimulation. Thus, we demonstrate distinct mechanisms for two important facets of human decision making: first, a role for dopamine in memory consolidation, and second, the critical importance of the subthalamic nucleus in successful decision making when multiple pieces of information must be combined.
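
    The computational claim above, that the subthalamic nucleus computes the normalization term of Bayes' theorem when probabilistic evidence from multiple stimuli is combined, can be illustrated with a minimal sequential Bayesian update. The two-hypothesis setup and the likelihood values are invented for illustration.

```python
def bayes_update(prior, likelihoods):
    """One step of Bayes' theorem over discrete hypotheses.
    The denominator z is the 'normalization term' referred to above."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)  # normalization term: P(evidence)
    return [u / z for u in unnorm]

# Combining three sequentially presented pieces of evidence.
posterior = [0.5, 0.5]  # equal priors over two hypotheses
for lik in ([0.8, 0.2], [0.6, 0.4], [0.3, 0.7]):
    posterior = bayes_update(posterior, lik)
# The last, contrary piece of evidence pulls the posterior back toward
# the second hypothesis, which is exactly the kind of late revision the
# stimulated patients in the study failed to slow down for.
```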

  7. Performance on a probabilistic inference task in healthy subjects receiving ketamine compared with patients with schizophrenia

    PubMed Central

    Almahdi, Basil; Sultan, Pervez; Sohanpal, Imrat; Brandner, Brigitta; Collier, Tracey; Shergill, Sukhi S; Cregg, Roman; Averbeck, Bruno B

    2012-01-01

    Evidence suggests that some aspects of schizophrenia can be induced in healthy volunteers through acute administration of the non-competitive NMDA-receptor antagonist, ketamine. In probabilistic inference tasks, patients with schizophrenia have been shown to ‘jump to conclusions’ (JTC) when asked to make a decision. We aimed to test whether healthy participants receiving ketamine would adopt a JTC response pattern resembling that of patients. The paradigmatic task used to investigate JTC has been the ‘urn’ task, where participants are shown a sequence of beads drawn from one of two ‘urns’, each containing coloured beads in different proportions. Participants make a decision when they think they know the urn from which beads are being drawn. We compared performance on the urn task between controls receiving acute ketamine or placebo with that of patients with schizophrenia and another group of controls matched to the patient group. Patients were shown to exhibit a JTC response pattern relative to their matched controls, whereas JTC was not evident in controls receiving ketamine relative to placebo. Ketamine does not appear to promote JTC in healthy controls, suggesting that ketamine does not affect probabilistic inferences. PMID:22389244
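
    A standard way to formalize the urn task is a sequential Bayesian update over the two urns. The 85/15 bead proportion below is a common choice in this literature but is an assumption here, as is the `urn_posterior` helper itself.

```python
def urn_posterior(draws, p_a=0.85):
    """Posterior probability that beads come from urn A, which holds a
    fraction p_a of the majority colour (urn B mirrors it with 1 - p_a).
    draws: sequence of 1 (A's majority colour) or 0 (the other colour)."""
    odds = 1.0  # equal prior odds on the two urns
    for d in draws:
        odds *= (p_a / (1 - p_a)) if d else ((1 - p_a) / p_a)
    return odds / (1.0 + odds)

# Deciding after a single bead (a 'jumping to conclusions' pattern)
# already corresponds to a posterior of 0.85; each further consistent
# bead multiplies the odds in favour of that urn.
```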

  8. Probabilistic learning and inference in schizophrenia

    PubMed Central

    Averbeck, Bruno B.; Evans, Simon; Chouhan, Viraj; Bristow, Eleanor; Shergill, Sukhwinder S.

    2010-01-01

    Patients with schizophrenia make decisions on the basis of less evidence when required to collect information to make an inference, a behaviour often called jumping to conclusions. The underlying basis for this behaviour remains controversial. We examined the cognitive processes underpinning this finding by testing subjects on the beads task, which has been used previously to elicit jumping-to-conclusions behaviour, and a stochastic sequence learning task with a similar decision-theoretic structure. During the sequence learning task, subjects had to learn a sequence of button presses, while receiving noisy feedback on their choices. We fit a Bayesian decision-making model to the sequence task and compared model parameters to the choice behaviour in the beads task in both patients and healthy subjects. We found that patients did show a jumping-to-conclusions style, and those who picked early in the beads task tended to learn less from positive feedback in the sequence task. This favours the interpretation that patients select early because they have a low threshold for making decisions and make choices on the basis of relatively little evidence. PMID:20810252

  9. Probabilistic learning and inference in schizophrenia.

    PubMed

    Averbeck, Bruno B; Evans, Simon; Chouhan, Viraj; Bristow, Eleanor; Shergill, Sukhwinder S

    2011-04-01

    Patients with schizophrenia make decisions on the basis of less evidence when required to collect information to make an inference, a behavior often called jumping to conclusions. The underlying basis for this behavior remains controversial. We examined the cognitive processes underpinning this finding by testing subjects on the beads task, which has been used previously to elicit jumping-to-conclusions behavior, and a stochastic sequence learning task with a similar decision-theoretic structure. During the sequence learning task, subjects had to learn a sequence of button presses, while receiving noisy feedback on their choices. We fit a Bayesian decision-making model to the sequence task and compared model parameters to the choice behavior in the beads task in both patients and healthy subjects. We found that patients did show a jumping-to-conclusions style, and those who picked early in the beads task tended to learn less from positive feedback in the sequence task. This favours the interpretation that patients select early because they have a low threshold for making decisions and make choices on the basis of relatively little evidence. Published by Elsevier B.V.

  10. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    PubMed

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experience, whereas goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost conflict. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
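
    The model-free (habitual) half of the dual-system account can be sketched as a minimal trial-and-error value learner. The reward probabilities and learning parameters below are illustrative, not those of the modeled task.

```python
import random

def run_model_free(action_probs, n_trials=2000, alpha=0.1, epsilon=0.1,
                   seed=0):
    """Model-free learning: action values are updated from experienced
    rewards alone, with no model of the environment (the 'habitual'
    system in the dual-system account)."""
    rng = random.Random(seed)
    q = [0.0] * len(action_probs)
    for _ in range(n_trials):
        if rng.random() < epsilon:          # occasional exploration
            a = rng.randrange(len(q))
        else:                               # otherwise exploit
            a = max(range(len(q)), key=q.__getitem__)
        reward = 1.0 if rng.random() < action_probs[a] else 0.0
        q[a] += alpha * (reward - q[a])     # trial-and-error update
    return q

q = run_model_free([0.8, 0.2])
```

    A model-based controller would instead select actions by evaluating a learned model of the reward contingencies; the contrast between the two is what the dual-system account turns on.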

  11. How do People Solve the “Weather Prediction” Task?: Individual Variability in Strategies for Probabilistic Category Learning

    PubMed Central

    Gluck, Mark A.; Shohamy, Daphna; Myers, Catherine

    2002-01-01

    Probabilistic category learning is often assumed to be an incrementally learned cognitive skill, dependent on nondeclarative memory systems. One paradigm in particular, the weather prediction task, has been used in over half a dozen neuropsychological and neuroimaging studies to date. Because of the growing interest in using this task and others like it as behavioral tools for studying the cognitive neuroscience of cognitive skill learning, it becomes especially important to understand how subjects solve this kind of task and whether all subjects learn it in the same way. We present here new experimental and theoretical analyses of the weather prediction task that indicate that there are at least three different strategies that describe how subjects learn this task: (1) an optimal multi-cue strategy, in which they respond to each pattern on the basis of associations of all four cues with each outcome; (2) a one-cue strategy, in which they respond on the basis of the presence or absence of a single cue, disregarding all other cues; or (3) a singleton strategy, in which they learn only about the four patterns that have only one cue present and all others absent. This variability in how subjects approach this task may have important implications for interpreting how different brain regions are involved in probabilistic category learning. PMID:12464701
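
    The three strategies can be written down directly as response rules. The cue weights below are placeholders; the actual cue-outcome probabilities of the weather prediction task are not given in this abstract, and the functions are illustrative sketches.

```python
# Assumed P(outcome "rain" | cue i present); illustrative values only.
CUE_WEIGHTS = [0.8, 0.6, 0.4, 0.2]

def multi_cue(pattern):
    """Optimal-style strategy: combine evidence from all present cues."""
    present = [w for w, c in zip(CUE_WEIGHTS, pattern) if c]
    score = sum(present) / len(present) if present else 0.5
    return "rain" if score > 0.5 else "sun"

def one_cue(pattern, cue=0):
    """Respond on the presence or absence of one cue, ignoring the rest."""
    return "rain" if pattern[cue] else "sun"

def singleton(pattern):
    """Learn only the four patterns with exactly one cue present."""
    if sum(pattern) != 1:
        return None  # unlearned pattern: the subject must guess
    return "rain" if CUE_WEIGHTS[pattern.index(1)] > 0.5 else "sun"
```

    The rules agree on single-cue patterns but diverge on multi-cue patterns, which is what lets the authors' analyses tell the strategies apart from response profiles.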

  12. Specifying the brain anatomy underlying temporo-parietal junction activations for theory of mind: A review using probabilistic atlases from different imaging modalities.

    PubMed

    Schurz, Matthias; Tholen, Matthias G; Perner, Josef; Mars, Rogier B; Sallet, Jerome

    2017-09-01

    In this quantitative review, we specified the anatomical basis of brain activity reported in the Temporo-Parietal Junction (TPJ) in Theory of Mind (ToM) research. Using probabilistic brain atlases, we labeled TPJ peak coordinates reported in the literature. This was carried out for four different atlas modalities: (i) gyral parcellation, (ii) sulco-gyral parcellation, (iii) cytoarchitectonic parcellation and (iv) connectivity-based parcellation. In addition, our review distinguished between two ToM task types (false belief and social animations) and a nonsocial task (attention reorienting). We estimated the mean probabilities of activation for each atlas label, and found that for all three task types part of TPJ activations fell into the same areas: (i) Angular Gyrus (AG) and Lateral Occipital Cortex (LOC) in terms of a gyral atlas, (ii) AG and Superior Temporal Sulcus (STS) in terms of a sulco-gyral atlas, (iii) areas PGa and PGp in terms of cytoarchitecture and (iv) area TPJp in terms of a connectivity-based parcellation atlas. Besides these commonalities, we also found that individual task types showed preferential activation for particular labels. Main findings for the right hemisphere were preferential activation for false belief tasks in AG/PGa, and in Supramarginal Gyrus (SMG)/PFm for attention reorienting. Social animations showed strongest selective activation in the left hemisphere, specifically in left Middle Temporal Gyrus (MTG). We discuss how our results (i.e., identified atlas structures) can provide a new reference for describing future findings, with the aim to integrate different labels and terminologies used for studying brain activity around the TPJ. Hum Brain Mapp 38:4788-4805, 2017. © 2017 Wiley Periodicals, Inc.

  13. A probabilistic framework to infer brain functional connectivity from anatomical connections.

    PubMed

    Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel

    2011-01-01

    We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.

  14. Role of ionotropic glutamate receptors in delay and probability discounting in the rat.

    PubMed

    Yates, Justin R; Batten, Seth R; Bardo, Michael T; Beckmann, Joshua S

    2015-04-01

    Discounting of delayed and probabilistic reinforcement is linked to increased drug use and pathological gambling. Understanding the neurobiology of discounting is important for designing treatments for these disorders. Glutamate is considered to be involved in addiction-like behaviors; however, the role of ionotropic glutamate receptors (iGluRs) in discounting remains unclear. The current study examined the effects of N-methyl-D-aspartate (NMDA) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) glutamate receptor blockade on performance in delay and probability discounting tasks. Following training in either delay or probability discounting, rats (n = 12, each task) received pretreatments of the NMDA receptor antagonists MK-801 (0, 0.01, 0.03, 0.1, or 0.3 mg/kg, s.c.) or ketamine (0, 1.0, 5.0, or 10.0 mg/kg, i.p.), as well as the AMPA receptor antagonist CNQX (0, 1.0, 3.0, or 5.6 mg/kg, i.p.). Hyperbolic discounting functions were used to estimate sensitivity to delayed/probabilistic reinforcement and sensitivity to reinforcer amount. An intermediate dose of MK-801 (0.03 mg/kg) decreased sensitivity to both delayed and probabilistic reinforcement. In contrast, ketamine did not affect the rate of discounting in either task but decreased sensitivity to reinforcer amount. CNQX did not alter sensitivity to reinforcer amount or delayed/probabilistic reinforcement. These results show that blockade of NMDA receptors, but not AMPA receptors, decreases sensitivity to delayed/probabilistic reinforcement (MK-801) and sensitivity to reinforcer amount (ketamine). The differential effects of MK-801 and ketamine demonstrate that sensitivities to delayed/probabilistic reinforcement and reinforcer amount are pharmacologically dissociable.
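
    The hyperbolic discounting functions referred to above have the standard form V = A / (1 + kD), where D is the delay or, for probability discounting, the odds against reward. A quick sketch (the parameter values are illustrative, not fitted values from the study):

```python
def hyperbolic_value(amount, d, k):
    """Hyperbolic discounting: V = A / (1 + k*D). For delay discounting
    D is the delay; for probability discounting D is the odds against
    the reward, D = (1 - p) / p. Larger k means steeper discounting."""
    return amount / (1.0 + k * d)

def odds_against(p):
    """Convert a reward probability into odds against the reward."""
    return (1.0 - p) / p

# A reward of 100 delayed by 30 units, for two hypothetical rats:
v_shallow = hyperbolic_value(100, 30, k=0.01)  # 100 / 1.3
v_steep = hyperbolic_value(100, 30, k=0.10)    # 100 / 4.0
```

    Fitting k to choice data is how the study separates sensitivity to delay/odds from sensitivity to reinforcer amount: a drug that changes k changes the discounting rate, while one that changes responsiveness to A leaves k alone.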

  15. Feedback-Driven Trial-by-Trial Learning in Autism Spectrum Disorders

    PubMed Central

    Solomon, Marjorie; Frank, Michael J.; Ragland, J. Daniel; Smith, Anne C.; Niendam, Tara A.; Lesh, Tyler A.; Grayson, David S.; Beck, Jonathan S.; Matter, John C.; Carter, Cameron S.

    2017-01-01

    Objective Impairments in learning are central to autism spectrum disorders. The authors investigated the cognitive and neural basis of these deficits in young adults with autism spectrum disorders using a well-characterized probabilistic reinforcement learning paradigm. Method The probabilistic selection task was implemented among matched participants with autism spectrum disorders (N=22) and with typical development (N=25), aged 18–40 years, using rapid event-related functional MRI. Participants were trained to choose the correct stimulus in high-probability (AB), medium-probability (CD), and low-probability (EF) pairs, presented with valid feedback 80%, 70%, and 60% of the time, respectively. Whole-brain voxel-wise and parametric modulator analyses examined early and late learning during the stimulus and feedback epochs of the task. Results The groups exhibited comparable performance on medium- and low-probability pairs. Typically developing persons showed higher accuracy on the high-probability pair, better win-stay performance (selection of the previously rewarded stimulus on the next trial of that type), and more robust recruitment of the anterior and medial prefrontal cortex during the stimulus epoch, suggesting development of an intact reward-based working memory for recent stimulus values. Throughout the feedback epoch, individuals with autism spectrum disorders exhibited greater recruitment of the anterior cingulate and orbito-frontal cortices compared with individuals with typical development, indicating continuing trial-by-trial activity related to feedback processing. Conclusions Individuals with autism spectrum disorders exhibit learning deficits reflecting impaired ability to develop an effective reward-based working memory to guide stimulus selection. Instead, they continue to rely on trial-by-trial feedback processing to support learning dependent upon engagement of the anterior cingulate and orbito-frontal cortices. PMID:25158242
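
    The training structure described in the Method, pairs AB, CD and EF with 80%, 70% and 60% valid feedback, can be sketched as a feedback generator. The helper below is a hypothetical illustration of that structure, not the authors' task code.

```python
import random

# Feedback validities for the three training pairs, as in the abstract.
PAIR_VALIDITY = {"AB": 0.80, "CD": 0.70, "EF": 0.60}

def give_feedback(pair, chose_better, rng):
    """Return True ('correct') with the pair's validity when the
    nominally better stimulus was chosen, else with its complement."""
    validity = PAIR_VALIDITY[pair]
    p_positive = validity if chose_better else 1.0 - validity
    return rng.random() < p_positive

rng = random.Random(0)
hits = sum(give_feedback("AB", True, rng) for _ in range(10000))
# hits / 10000 approaches the 0.80 validity of the AB pair.
```

    The win-stay measure in the Results is then simply the probability of repeating a choice on the next trial of the same pair after such positive feedback.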

  16. Adult Age Differences in Learning from Positive and Negative Probabilistic Feedback

    PubMed Central

    Simon, Jessica R.; Howard, James H.; Howard, Darlene V.

    2010-01-01

    Objective Past research has investigated age differences in frontal-based decision making, but few studies have focused on the behavioral effects of striatal-based changes in healthy aging. Feedback learning has been found to vary with dopamine levels; increases in dopamine facilitate learning from positive feedback, whereas decreases facilitate learning from negative feedback. Given previous evidence of striatal dopamine depletion in healthy aging, we investigated behavioral differences between college-aged and healthy older adults using a feedback learning task that is sensitive to both frontal and striatal processes. Method Seventeen college-aged (M = 18.9 years) and 24 healthy older adults (M = 70.3 years) completed the probabilistic selection task, in which participants are trained on probabilistic stimulus-outcome information and then tested to determine whether they learned more from positive or negative feedback. Results As a group, the older adults learned equally well from positive and negative feedback, whereas the college-aged group learned more from positive than negative feedback, F(1, 39) = 4.10, p < .05, r_effect = .3. However, these group differences were not due to the older individuals being more balanced learners. Most individuals of both ages were balanced learners, but while all of the remaining young learners had a positive bias, the remaining older learners were split between those with positive and negative learning biases (χ²(2) = 6.12, p < .047). Conclusions These behavioral results are consistent with the dopamine theory of striatal aging, and suggest there might be adult age differences in the kinds of information people use when faced with a current choice. PMID:20604627

  17. Probabilistic Reversal Learning in Schizophrenia: Stability of Deficits and Potential Causal Mechanisms.

    PubMed

    Reddy, Lena Felice; Waltz, James A; Green, Michael F; Wynn, Jonathan K; Horan, William P

    2016-07-01

    Although individuals with schizophrenia show impaired feedback-driven learning on probabilistic reversal learning (PRL) tasks, the specific factors that contribute to these deficits remain unknown. Recent work has suggested several potential causes including neurocognitive impairments, clinical symptoms, and specific types of feedback-related errors. To examine this issue, we administered a PRL task to 126 stable schizophrenia outpatients and 72 matched controls, and patients were retested 4 weeks later. The task involved an initial probabilistic discrimination learning phase and subsequent reversal phases in which subjects had to adjust their responses to sudden shifts in the reinforcement contingencies. Patients showed poorer performance than controls for both the initial discrimination and reversal learning phases of the task, and performance overall showed good test-retest reliability among patients. A subgroup analysis of patients (n = 64) and controls (n = 49) with good initial discrimination learning revealed no between-group differences in reversal learning, indicating that the patients who were able to achieve all of the initial probabilistic discriminations were not impaired in reversal learning. Regarding potential contributors to impaired discrimination learning, several factors were associated with poor PRL, including higher levels of neurocognitive impairment, poor learning from both positive and negative feedback, and higher levels of indiscriminate response shifting. The results suggest that poor PRL performance in schizophrenia can be the product of multiple mechanisms. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  18. Feedback-related negativity is enhanced in adolescence during a gambling task with and without probabilistic reinforcement learning.

    PubMed

    Martínez-Velázquez, Eduardo S; Ramos-Loyo, Julieta; González-Garrido, Andrés A; Sequeira, Henrique

    2015-01-21

    Feedback-related negativity (FRN) is a negative deflection over frontocentral regions that appears around 250 ms after feedback signalling gain or loss on chosen alternatives in a gambling task. A few studies have reported FRN enhancement in adolescents compared with adults in gambling tasks without probabilistic reinforcement learning, despite the fact that learning from positive or negative consequences is crucial for decision-making during adolescence. Therefore, the aim of the present research was to identify differences in FRN amplitude and latency between adolescents and adults on a gambling task with favorable and unfavorable probabilistic reinforcement learning conditions, in addition to a nonlearning condition with monetary gains and losses. Higher rate scores of high-magnitude choices during the final 30 trials compared with the first 30 trials were observed during the favorable condition, whereas lower rates were observed during the unfavorable condition in both groups. Higher FRN amplitude in all conditions and longer latency in the nonlearning condition were observed in adolescents compared with adults, in relation to losses. Results indicate that both the adolescents and the adults improved their performance in relation to positive and negative feedback. However, the FRN findings suggest an increased sensitivity to external feedback about losses in adolescents compared with adults, irrespective of the presence or absence of probabilistic reinforcement learning. These results reflect processing differences in the neural monitoring system and provide new perspectives on the dynamic development of the adolescent brain.

  19. The role of effective connectivity between the task-positive and task-negative network for evidence gathering.

    PubMed

    Andreou, Christina; Steinmann, Saskia; Kolbeck, Katharina; Rauh, Jonas; Leicht, Gregor; Moritz, Steffen; Mulert, Christoph

    2018-06-01

    Reports linking a 'jumping-to-conclusions' bias to delusions have led to growing interest in the neurobiological correlates of probabilistic reasoning. Several brain areas have been implicated in probabilistic reasoning; however, findings are difficult to integrate into a coherent account. The present study aimed to provide additional evidence by investigating, for the first time, effective connectivity among brain areas involved in different stages of evidence gathering. We investigated evidence gathering in 25 healthy individuals using fMRI and a new paradigm (Box Task) designed to minimize the effects of cognitive effort and reward processing. Decisions to collect more evidence ('draws') were contrasted to decisions to reach a final choice ('conclusions') with respect to BOLD activity. Psychophysiological interaction analysis was used to investigate effective connectivity. Conclusion events were associated with extensive brain activations in widely distributed brain areas associated with the task-positive network. In contrast, draw events were characterized by higher activation in areas assumed to be part of the task-negative network. Effective connectivity between the two networks decreased during draws and increased during conclusion events. Our findings indicate that probabilistic reasoning may depend on the balance between the task-positive and task-negative network, and that shifts in connectivity between the two may be crucial for evidence gathering. Thus, abnormal connectivity between the two systems may significantly contribute to the jumping-to-conclusions bias. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Real-time value-driven diagnosis

    NASA Technical Reports Server (NTRS)

    Dambrosio, Bruce

    1995-01-01

    Diagnosis is often thought of as an isolated task in theoretical reasoning (reasoning with the goal of updating our beliefs about the world). We present a decision-theoretic interpretation of diagnosis as a task in practical reasoning (reasoning with the goal of acting in the world), and sketch the components of our approach to this task. These components include an abstract problem description, a decision-theoretic model of the basic task, a set of inference methods suitable for evaluating the decision representation in real time, and a control architecture to provide the needed continuing coordination between the agent and its environment. A principal contribution of this work is the representation and inference methods we have developed, which extend previously available probabilistic inference methods and somewhat narrow the gap between probabilistic and logical models of diagnosis.

  1. Probability in reasoning: a developmental test on conditionals.

    PubMed

    Barrouillet, Pierre; Gauffroy, Caroline

    2015-04-01

    Probabilistic theories have been claimed to constitute a new paradigm for the psychology of reasoning. A key assumption of these theories is captured by what they call the Equation: the hypothesis that the meaning of the conditional is probabilistic in nature and that the probability of If p then q is the conditional probability, such that P(if p then q) = P(q|p). Using the probabilistic truth-table task, in which participants are required to evaluate the probability of If p then q sentences, the present study explored the pervasiveness of the Equation across ages (from early adolescence to adulthood), types of conditionals (basic, causal, and inducements), and contents. The results reveal that the Equation is a late developmental achievement endorsed only by a narrow majority of educated adults for certain types of conditionals, depending on the content they involve. Age-related changes in evaluating the probability of all the conditionals studied closely mirror the development of truth-value judgements observed in previous studies with traditional truth-table tasks. We argue that our modified mental model theory can account for this development, and hence for the findings related to the probability task, which consequently do not support the probabilistic approach to human reasoning over alternative theories. Copyright © 2014 Elsevier B.V. All rights reserved.
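The Equation can be made concrete with a small sketch (the frequencies below are hypothetical, not data from the study): given counts of the four truth-table cases, P(q|p) is evaluated on the p-cases only, whereas the material conditional counts every case except p-and-not-q.

```python
# Illustrative sketch of the "Equation" P(if p then q) = P(q | p).
# Hypothetical frequencies of the four truth-table cases for
# "If the card is red (p), then it is round (q)".
counts = {
    ("p", "q"): 30,       # red and round
    ("p", "not-q"): 10,   # red and not round
    ("not-p", "q"): 20,   # not red, round
    ("not-p", "not-q"): 40,
}

total = sum(counts.values())
p_cases = counts[("p", "q")] + counts[("p", "not-q")]

# Conditional probability P(q | p): only the p-cases matter;
# the not-p cases are irrelevant under the Equation.
p_q_given_p = counts[("p", "q")] / p_cases

# Material-conditional probability, for contrast: the conditional is
# counted as true in every case except p-and-not-q.
p_material = (total - counts[("p", "not-q")]) / total

print(p_q_given_p)  # 0.75
print(p_material)   # 0.9
```

The gap between the two values (0.75 vs. 0.9) is exactly what the probabilistic truth-table task exploits to diagnose which reading participants adopt.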

  2. REM Sleep Enhancement of Probabilistic Classification Learning is Sensitive to Subsequent Interference

    PubMed Central

    Barsky, Murray M.; Tucker, Matthew A.; Stickgold, Robert

    2015-01-01

    During wakefulness the brain creates meaningful relationships between disparate stimuli in ways that escape conscious awareness. Processes active during sleep can strengthen these relationships, leading to more adaptive use of those stimuli when encountered during subsequent wake. Performance on the weather prediction task (WPT), a well-studied measure of implicit probabilistic learning, has been shown to improve significantly following a night of sleep, with stronger initial learning predicting more nocturnal REM sleep. We investigated this relationship further, studying the effect on WPT performance of a daytime nap containing REM sleep. We also added an interference condition after the nap/wake period as an additional probe of memory strength. Our results show that a nap significantly boosts WPT performance, and that this improvement is correlated with the amount of REM sleep obtained during the nap. When interference training is introduced following the nap, however, this REM-sleep benefit vanishes. In contrast, following an equal period of wake, performance is both unchanged from training and unaffected by interference training. Thus, while the true probabilistic relationships between WPT stimuli are strengthened by sleep, these changes are selectively susceptible to the destructive effects of retroactive interference, at least in the short term. PMID:25769506

  3. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Kurth, R. E.; Ho, H.

    1986-01-01

    A multiyear program is being performed with the objective of developing generic load models, with multiple levels of progressive sophistication, to simulate the composite (combined) load spectra that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. Progress in the first year's effort includes completion of a sufficient portion of each task -- probabilistic models, code development, validation, and an initial operational code. From its inception, this code has had an expert-system philosophy that can be extended throughout the program and in the future. The initial operational code is applicable only to turbine-blade-type loadings. The probabilistic model included in the operational code has fitting routines for loads that utilize a modified Discrete Probabilistic Distribution termed RASCAL, a barrier-crossing method, and a Monte Carlo method. An initial load model developed by Battelle is currently used for the slowly varying duty-cycle-type loading. The intent is to use the model and related codes essentially in their current form for all loads that are based on measured or calculated data following a slowly varying profile.

  4. A mediation model to explain decision making under conditions of risk among adolescents: the role of fluid intelligence and probabilistic reasoning.

    PubMed

    Donati, Maria Anna; Panno, Angelo; Chiesi, Francesca; Primi, Caterina

    2014-01-01

    This study tested the mediating role of probabilistic reasoning ability in the relationship between fluid intelligence and advantageous decision making among adolescents in explicit situations of risk--that is, in contexts in which information on the choice options (gains, losses, and probabilities) was explicitly presented at the beginning of the task. Participants were 282 adolescents attending high school (77% males, mean age = 17.3 years). We first measured fluid intelligence and probabilistic reasoning ability. Then, to measure decision making under explicit conditions of risk, participants performed the Game of Dice Task, in which they have to decide among different alternatives that are explicitly linked to a specific amount of gain or loss and have obvious winning probabilities that are stable over time. Analyses showed a significant positive indirect effect of fluid intelligence on advantageous decision making through probabilistic reasoning ability, which acted as a mediator. Specifically, fluid intelligence may enhance the ability to reason in probabilistic terms, which in turn increases the likelihood of advantageous choices when adolescents are confronted with an explicit decisional context. Findings show that in experimental paradigm settings, adolescents are able to make advantageous decisions using cognitive abilities when faced with decisions under explicit risky conditions. This study suggests that interventions designed to promote probabilistic reasoning, for example by strengthening the mathematical prerequisites necessary to reason in probabilistic terms, may have a positive effect on adolescents' decision-making abilities.

  5. Improving ontology matching with propagation strategy and user feedback

    NASA Astrophysics Data System (ADS)

    Li, Chunhua; Cui, Zhiming; Zhao, Pengpeng; Wu, Jian; Xin, Jie; He, Tianxu

    2015-07-01

    Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. The existing approach requires a threshold to produce matching candidates and uses a small set of constraints as a filter to select the final alignments. We introduce a novel match propagation strategy to model the influences between potential entity mappings across ontologies, which helps to identify correct correspondences and to recover correspondences that would otherwise be missed. Because estimating an appropriate threshold is difficult, we propose an interactive method for threshold selection through which we obtain an additional measurable improvement. Experiments on a public dataset demonstrate the effectiveness of the proposed approach in terms of the quality of the resulting alignment.

  6. Jumping to conclusions and the continuum of delusional beliefs.

    PubMed

    Warman, Debbie M; Lysaker, Paul H; Martin, Joel M; Davis, Louanne; Haudenschield, Samantha L

    2007-06-01

    The present study examined the jumping to conclusions reasoning bias across the continuum of delusional ideation by investigating individuals with active delusions, delusion prone individuals, and non-delusion prone individuals. Neutral and highly self-referent probabilistic reasoning tasks were employed. Results indicated that individuals with delusions gathered significantly less information than delusion prone and non-delusion prone participants on both the neutral and self-referent tasks (p<.001). Individuals with delusions made less accurate decisions than the delusion prone and non-delusion prone participants on both tasks (p<.001), yet were more confident about their decisions than were delusion prone and non-delusion prone participants on the self-referent task (p=.002). Those with delusions and those who were delusion prone reported higher confidence in their performance on the self-referent task than on the neutral task (p=.02), indicating that high self-reference impacted information processing for individuals in both of these groups. The results are discussed in relation to previous research in the area of probabilistic reasoning and delusions.

  7. Perception of Risk and Terrorism-Related Behavior Change: Dual Influences of Probabilistic Reasoning and Reality Testing.

    PubMed

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2017-01-01

    The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. 
This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models.

  8. Perception of Risk and Terrorism-Related Behavior Change: Dual Influences of Probabilistic Reasoning and Reality Testing

    PubMed Central

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2017-01-01

    The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. 
This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models. PMID:29062288

  9. Isolation Rearing Effects on Probabilistic Learning and Cognitive Flexibility in Rats

    PubMed Central

    Amitai, Nurith; Young, Jared W.; Higa, Kerin; Sharp, Richard F.; Geyer, Mark A.; Powell, Susan B.

    2013-01-01

    Isolation rearing is a neurodevelopmental manipulation that produces neurochemical, structural, and behavioral alterations in rodents that are consistent with those observed in schizophrenia. Symptoms induced by isolation rearing that mirror clinically relevant aspects of schizophrenia, such as cognitive deficits, open up the possibility of testing putative therapeutics in isolation-reared animals prior to clinical development. We investigated what effect isolation rearing would have on cognitive flexibility, a cognitive function characteristically disrupted in schizophrenia. For this purpose, we assessed cognitive flexibility using between- and within-session probabilistic reversal-learning tasks based on clinical tests. Isolation-reared rats required more sessions, though not more task trials, to acquire criterion performance in the reversal phase of the task and were slower to adjust their task strategy after reward contingencies were switched. Isolation-reared rats also completed fewer trials and exhibited lower levels of overall activity in the probabilistic reversal-learning task compared to socially reared rats. This finding contrasted with the elevated levels of unconditioned investigatory activity and reduced levels of locomotor habituation that isolation-reared rats displayed in the behavioral pattern monitor. Finally, isolation-reared rats also exhibited sensorimotor gating deficits, reflected by decreased prepulse inhibition of the startle response, consistent with previous studies. We conclude that isolation rearing constitutes a valuable, noninvasive manipulation for modeling schizophrenia-like cognitive deficits and assessing putative therapeutics. PMID:23943516

  10. Displaying uncertainty: investigating the effects of display format and specificity.

    PubMed

    Bisantz, Ann M; Marsiglio, Stephanie Schinzing; Munch, Jessica

    2005-01-01

    We conducted four studies regarding the representation of probabilistic information. Experiments 1 through 3 compared performance on a simulated stock purchase task, in which information regarding stock profitability was probabilistic. Two variables were manipulated: display format for probabilistic information (blurred and colored icons, linguistic phrases, numeric expressions, and combinations) and specificity level (in which the number and size of discrete steps into which the probabilistic information was mapped differed). Results indicated few performance differences attributable to display format; however, performance did improve with greater specificity. Experiment 4, in which participants generated membership functions corresponding to three display formats, found a high degree of similarity in functions across formats and participants and a strong relationship between the shape of the membership function and the intended meaning of the representation. These results indicate that participants can successfully interpret nonnumeric representations of uncertainty and can use such representations in a manner similar to the way numeric expressions are used in a decision-making task. Actual or potential applications of this research include the use of graphical representations of uncertainty in systems such as command and control and situation displays.

  11. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 3: Literature surveys and technical reports

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The technical effort and computer code developed during the first year are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis.

  12. Assessing and Increasing Staff Preference for Job Tasks Using Concurrent-Chains Schedules and Probabilistic Outcomes

    ERIC Educational Resources Information Center

    Reed, Derek D.; DiGennaro Reed, Florence D.; Campisano, Natalie; Lacourse, Kristen; Azulay, Richard L.

    2012-01-01

    The assessment and improvement of staff members' subjective valuation of nonpreferred work tasks may be one way to increase the quality of staff members' work life. The Task Enjoyment Motivation Protocol (Green, Reid, Passante, & Canipe, 2008) provides a process for supervisors to identify the aversive qualities of nonpreferred job tasks.…

  13. A transdiagnostic investigation of 'theory of mind' and 'jumping to conclusions' in patients with persecutory delusions.

    PubMed

    Corcoran, R; Rowse, G; Moore, R; Blackwood, N; Kinderman, P; Howard, R; Cummins, S; Bentall, R P

    2008-11-01

    A tendency to make hasty decisions on probabilistic reasoning tasks and a difficulty attributing mental states to others are key cognitive features of persecutory delusions (PDs) in the context of schizophrenia. This study examines whether these same psychological anomalies characterize PDs when they present in the context of psychotic depression. Performance on measures of probabilistic reasoning and theory of mind (ToM) was examined in five subgroups differing in diagnostic category and current illness status. The tendency to make hasty decisions in probabilistic settings, and poor ToM as tested in story format, feature in PDs irrespective of diagnosis. Furthermore, performance on the ToM story task correlated with the degree of distress caused by, and preoccupation with, the current PDs in the currently deluded groups. By contrast, performance on the non-verbal ToM task appears to be more sensitive to diagnosis, as patients with schizophrenia spectrum disorders perform worse on this task than those with depression, irrespective of the presence of PDs. The psychological anomalies associated with PDs examined here are transdiagnostic, but different measures of ToM may be more or less sensitive to indices of severity of the PDs, diagnosis, and trait- or state-related cognitive effects.

  14. Selective attention increases choice certainty in human decision making.

    PubMed

    Zizlsperger, Leopold; Sauvigny, Thomas; Haarmeier, Thomas

    2012-01-01

    Choice certainty is a probabilistic estimate of past performance and expected outcome. In perceptual decisions the degree of confidence correlates closely with choice accuracy and reaction times, suggesting an intimate relationship to objective performance. Here we show that spatial and feature-based attention increase human subjects' certainty more than accuracy in visual motion discrimination tasks. Our findings demonstrate for the first time a dissociation of choice accuracy and certainty with a significantly stronger influence of voluntary top-down attention on subjective performance measures than on objective performance. These results reveal a so far unknown mechanism of the selection process implemented by attention and suggest a unique biological valence of choice certainty beyond a faithful reflection of the decision process.

  15. Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models

    NASA Astrophysics Data System (ADS)

    Thon, Ingo

    One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximate methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution, which is normally prohibitively slow.
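A minimal bootstrap particle filter illustrates the role the proposal distribution plays in filtering. This is a toy two-state activity model using the transition model as the proposal, not the paper's probabilistic-logic representation or its BDD-compiled optimal proposal:

```python
import random

random.seed(0)

# Toy activity-filtering sketch: hidden state is what a person is doing;
# observations are detected objects. All numbers are made up.
states = ["cooking", "cleaning"]
trans = {"cooking": {"cooking": 0.8, "cleaning": 0.2},
         "cleaning": {"cooking": 0.3, "cleaning": 0.7}}
obs_model = {"cooking": {"pan": 0.7, "broom": 0.3},   # P(observation | state)
             "cleaning": {"pan": 0.2, "broom": 0.8}}

def step(particles, obs):
    # 1. Propose: sample each particle's next state. Here the proposal is
    #    just the transition model (a "bootstrap" proposal); the optimal
    #    proposal would also condition on obs, which is what makes it
    #    expensive in richer representations.
    proposed = [random.choices(states,
                               weights=[trans[p][s] for s in states])[0]
                for p in particles]
    # 2. Weight each proposed particle by the observation likelihood.
    weights = [obs_model[s][obs] for s in proposed]
    # 3. Resample proportionally to the weights.
    return random.choices(proposed, weights=weights, k=len(particles))

particles = ["cooking"] * 200
for obs in ["broom", "broom", "broom"]:
    particles = step(particles, obs)

belief_cleaning = particles.count("cleaning") / len(particles)
print(round(belief_cleaning, 2))  # most of the mass shifts to "cleaning"
```

The closer the proposal is to the true posterior over next states, the fewer particles are wasted in low-weight regions, which is why the paper's optimal proposal is worth the compilation effort.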

  16. Spatiotemporal movement planning and rapid adaptation for manual interaction.

    PubMed

    Huber, Markus; Kupferberg, Aleksandra; Lenz, Claus; Knoll, Alois; Brandt, Thomas; Glasauer, Stefan

    2013-01-01

    Many everyday tasks require the ability of two or more individuals to coordinate their actions with others to increase efficiency. Such an increase in efficiency can often be observed even after only very few trials. Previous work suggests that such behavioral adaptation can be explained within a probabilistic framework that integrates sensory input and prior experience. Even though higher cognitive abilities such as intention recognition have been described as probabilistic estimation depending on an internal model of the other agent, it is not clear whether much simpler daily interaction is consistent with a probabilistic framework. Here, we investigate whether the mechanisms underlying efficient coordination during manual interactions can be understood as probabilistic optimization. For this purpose, we studied a simple manual handover task in several experiments, concentrating on the action of the receiver. We found that the duration until the receiver reacts to the handover decreases over trials, but strongly depends on the position of the handover. We then replaced the human deliverer with different types of robots to further investigate the influence of the delivering movement on the reaction of the receiver. Durations were found to depend on movement kinematics and the robot's joint configuration. Modeling the task was based on the assumption that the receiver's decision to act is based on the accumulated evidence for a specific handover position. The evidence for this handover position is collected by observing the hand movement of the deliverer over time and, if appropriate, by integrating this sensory likelihood with a prior expectation that is updated over trials. The close match of model simulations and experimental results shows that the efficiency of handover coordination can be explained by an adaptive probabilistic fusion of a priori expectation and online estimation.

  17. Development of probabilistic multimedia multipathway computer codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, C.; LePoire, D.; Gnanapragasam, E.

    2002-01-01

    The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils), and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of the RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.

  18. Dominating Scale-Free Networks Using Generalized Probabilistic Methods

    PubMed Central

    Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.

    2014-01-01

    We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
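A generic version of a degree-dependent probabilistic selection heuristic can be sketched as follows. This is illustrative only: the graph is hand-made, and the paper's exact selection probabilities and analytical bounds are not reproduced.

```python
import random

random.seed(1)

# Sketch of a degree-dependent probabilistic dominating-set heuristic.
# Phase 1 favors high-degree nodes probabilistically; phase 2 repairs
# coverage so the result is guaranteed to be a dominating set.
graph = {  # small hand-made graph as symmetric adjacency lists
    0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0, 5], 4: [0], 5: [3, 6], 6: [5],
}

def probabilistic_dominating_set(graph, alpha=1.0):
    max_deg = max(len(nb) for nb in graph.values())
    # Phase 1: include each node with probability rising with its degree;
    # alpha tunes how strongly degree is favored.
    ds = {v for v, nb in graph.items()
          if random.random() < (len(nb) / max_deg) ** alpha}
    # Phase 2: repair -- any node neither selected nor adjacent to a
    # selected node is added, guaranteeing domination.
    for v, nb in graph.items():
        if v not in ds and not any(u in ds for u in nb):
            ds.add(v)
    return ds

ds = probabilistic_dominating_set(graph)
# Verify domination: every node is in ds or has a neighbour in ds.
assert all(v in ds or any(u in ds for u in graph[v]) for v in graph)
print(sorted(ds))
```

Running the selection many times and keeping the smallest valid set is the ensemble idea: the randomized phase explores many candidate sets, while the repair phase keeps every candidate feasible.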

  19. A Probabilistic Strategy for Understanding Action Selection

    PubMed Central

    Kim, Byounghoon; Basso, Michele A.

    2010-01-01

    Brain regions involved in transforming sensory signals into movement commands are the likely sites where decisions are formed. Once formed, a decision must be read out from the activity of populations of neurons to produce a choice of action. How this occurs remains unresolved. We recorded from four superior colliculus (SC) neurons simultaneously while monkeys performed a target selection task. We implemented three models to gain insight into the computational principles underlying population coding of action selection. We compared the population vector average (PVA), winner-takes-all (WTA), and a Bayesian model, the maximum a posteriori (MAP) estimate, to determine which predicted choices most often. The probabilistic model predicted more trials correctly than both the WTA and the PVA: the MAP model predicted 81.88% of trials, whereas the WTA predicted 71.11% and the PVA and OLE predicted the fewest, at 55.71% and 69.47%, respectively. Recovering MAP estimates using simulated, non-uniform priors that correlated with the monkeys' choice performance improved the accuracy of the model by 2.88%. A dynamic analysis revealed that the MAP estimate evolved over time and that the posterior probability of the saccade choice reached a maximum at the time of the saccade. MAP estimates also scaled with choice performance accuracy. Although there was overlap in the prediction abilities of all the models, we conclude that movement choice from populations of neurons may be best understood by considering frameworks based on probability. PMID:20147560
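The three read-out rules can be illustrated on a toy population. The Gaussian tuning curves, target direction, and rates below are hypothetical, not the recorded SC data:

```python
import math

# Toy decoding sketch: four "neurons" with hypothetical Gaussian tuning,
# read out with winner-takes-all, population vector average, and MAP.
preferred = [0.0, 90.0, 180.0, 270.0]  # preferred directions (degrees)

def tuning(pref, direction, base=2.0, gain=20.0, width=40.0):
    d = min(abs(direction - pref), 360 - abs(direction - pref))
    return base + gain * math.exp(-(d / width) ** 2)

true_dir = 100.0
rates = [tuning(p, true_dir) for p in preferred]  # noiseless "counts"

# Winner-takes-all: the preferred direction of the most active neuron.
wta = preferred[rates.index(max(rates))]

# Population vector average: rate-weighted circular mean.
x = sum(r * math.cos(math.radians(p)) for r, p in zip(rates, preferred))
y = sum(r * math.sin(math.radians(p)) for r, p in zip(rates, preferred))
pva = math.degrees(math.atan2(y, x)) % 360

# MAP with a flat prior over candidate directions and Poisson likelihood:
# argmax_d sum_i [ n_i * log f_i(d) - f_i(d) ], with n_i the observed rates.
def log_lik(direction):
    return sum(n * math.log(tuning(p, direction)) - tuning(p, direction)
               for n, p in zip(rates, preferred))

map_est = max(range(0, 360, 5), key=log_lik)

print(wta, round(pva, 1), map_est)
```

In this noiseless toy case the WTA snaps to the nearest preferred direction (90), the PVA interpolates between neurons (about 91), and the MAP recovers the generating direction (100); a non-uniform prior would simply add a log-prior term inside `log_lik`.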

  20. Frontal–Occipital Connectivity During Visual Search

    PubMed Central

    Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas

    2012-01-01

    Abstract Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993

  1. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as the experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions has been proposed; however, the possibility of large differences in probability distortions, not only across experimental conditions but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
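An (inverted) S-shaped weighting function and its effect on urn-ball belief revision can be sketched as follows. The weighting form and its parameter are generic assumptions (the one-parameter Tversky-Kahneman form), not the study's fitted hierarchical model:

```python
# Sketch: Bayesian belief revision in an urn-ball task, with an (inverted)
# S-shaped probability weighting function distorting the stated probabilities.
def weight(p, gamma=0.6):
    # One-parameter weighting function (Tversky & Kahneman, 1992 form);
    # gamma < 1 overweights small and underweights large probabilities.
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def posterior(prior_a, lik_a, lik_b, w=lambda p: p):
    # P(urn A | ball) with optionally distorted prior and likelihoods.
    num = w(prior_a) * w(lik_a)
    den = num + w(1 - prior_a) * w(lik_b)
    return num / den

# Hypothetical condition: urn A has prior .7 and draws a red ball with
# probability .8; urn B draws red with probability .3.
exact = posterior(0.7, 0.8, 0.3)
distorted = posterior(0.7, 0.8, 0.3, w=weight)
print(round(exact, 3), round(distorted, 3))
```

With gamma < 1 the distorted posterior sits closer to 0.5 than the normative one, i.e. conservatism; in a hierarchical model each individual's gamma would itself be drawn from a weakly informative group-level prior.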

  2. The dual reading of general conditionals: The influence of abstract versus concrete contexts.

    PubMed

    Wang, Moyun; Yao, Xinyun

    2018-04-01

    A current main issue concerning conditionals is whether the meaning of general conditionals (e.g., If a card is red, then it is round) is deterministic (exceptionless) or probabilistic (exception-tolerating). To resolve this issue, two experiments examined the influence of conditional contexts (with vs. without frequency information about truth-table cases) on the reading of general conditionals. Experiment 1 examined the direct reading of general conditionals in a possibility judgment task. Experiment 2 examined the indirect reading of general conditionals in a truth judgment task. Both the direct and the indirect reading of general conditionals exhibited a duality: a predominant deterministic semantic reading of conditionals without frequency information, and a predominant probabilistic pragmatic reading of conditionals with frequency information. The context of a general conditional thus determined its predominant reading. There were marked individual differences in reading general conditionals with frequency information. The meaning of general conditionals is relative, depending on conditional contexts, and their reading is flexible and complex enough that neither a simple deterministic nor a simple probabilistic account can explain it. The present findings thus go beyond the extant deterministic and probabilistic accounts of conditionals.

  3. Effects of delay and probability combinations on discounting in humans

    PubMed Central

    Cox, David J.; Dallery, Jesse

    2017-01-01

    To determine discount rates, researchers typically adjust the amount of an immediate or certain option relative to a delayed or uncertain option. Because this adjusting amount method can be relatively time consuming, researchers have developed more efficient procedures. One such procedure is a 5-trial adjusting delay procedure, which measures the delay at which an amount of money loses half of its value (e.g., $1000 is valued at $500 with a 10-year delay to its receipt). Experiment 1 (n = 212) used 5-trial adjusting delay or probability tasks to measure delay discounting of losses, probabilistic gains, and probabilistic losses. Experiment 2 (n = 98) assessed combined probabilistic and delayed alternatives. In both experiments, we compared results from 5-trial adjusting delay or probability tasks to traditional adjusting amount procedures. Results suggest both procedures produced similar rates of probability and delay discounting in six out of seven comparisons. A magnitude effect consistent with previous research was observed for probabilistic gains and losses, but not for delayed losses. Results also suggest that delay and probability interact to determine the value of money. Five-trial methods may allow researchers to assess discounting more efficiently as well as study more complex choice scenarios. PMID:27498073
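    The bisection logic of such a 5-trial adjusting procedure can be sketched as follows; the delay ladder, amounts, and simulated chooser are illustrative stand-ins, not the authors' materials:

```python
def five_trial_ed50(choose_delayed, delays):
    """5-trial adjusting-delay procedure (sketch).

    `delays` is an ordered ladder of candidate delays.  Each trial offers
    a smaller-sooner amount now versus a larger-later amount after the
    current delay; each choice bisects the ladder.  Returns the ED50:
    the delay at which the larger amount has lost half of its value.
    """
    lo, hi = 0, len(delays) - 1
    idx = (lo + hi) // 2
    for _ in range(5):
        if choose_delayed(delays[idx]):   # delayed option still preferred:
            lo = idx + 1                  # probe a longer delay
        else:                             # immediate option preferred:
            hi = idx - 1                  # probe a shorter delay
        if lo > hi:
            break
        idx = (lo + hi) // 2
    return delays[idx]

# A simulated participant who is indifferent at about 17 days.
ed50 = five_trial_ed50(lambda d: d < 17, list(range(1, 32)))
```

    Under hyperbolic discounting V = A/(1 + kD), indifference at V = A/2 implies k = 1/ED50, so five choices suffice to estimate a discount rate.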

  4. Probabilistic reversal learning is impaired in Parkinson's disease

    PubMed Central

    Peterson, David A.; Elliott, Christian; Song, David D.; Makeig, Scott; Sejnowski, Terrence J.; Poizner, Howard

    2009-01-01

    In many everyday settings, the relationship between our choices and their potentially rewarding outcomes is probabilistic and dynamic. In addition, the difficulty of the choices can vary widely. Although a large body of theoretical and empirical evidence suggests that dopamine mediates rewarded learning, the influence of dopamine in probabilistic and dynamic rewarded learning remains unclear. We adapted a probabilistic rewarded learning task originally used to study firing rates of dopamine cells in primate substantia nigra pars compacta (Morris et al. 2006) for use as a reversal learning task with humans. We sought to investigate how the dopamine depletion in Parkinson's disease (PD) affects probabilistic reward learning and adaptation to a reversal in reward contingencies. Over the course of 256 trials subjects learned to choose the more favorable from among pairs of images with small or large differences in reward probabilities. During a subsequent otherwise identical reversal phase, the reward probability contingencies for the stimuli were reversed. Seventeen Parkinson's disease (PD) patients of mild to moderate severity were studied off of their dopaminergic medications and compared to 15 age-matched controls. Compared to controls, PD patients had distinct pre- and post-reversal deficiencies depending upon the difficulty of the choices they had to learn. The patients also exhibited compromised adaptability to the reversal. A computational model of the subjects’ trial-by-trial choices demonstrated that the adaptability was sensitive to the gain with which patients weighted pre-reversal feedback. Collectively, the results implicate the nigral dopaminergic system in learning to make choices in environments with probabilistic and dynamic reward contingencies. PMID:19628022
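    The model class involved can be illustrated with a minimal delta-rule learner with softmax choice on a two-option task whose contingencies reverse mid-session; this is a generic sketch with made-up parameter values, not the authors' fitted model:

```python
import math
import random

def run_reversal(alpha, beta, p_reward=0.8, n_trials=256, seed=0):
    """Delta-rule learner with softmax choice on a two-option
    probabilistic task whose reward contingencies reverse midway.
    Returns the fraction of trials on which the richer option was chosen.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]                  # learned option values
    better = 0                      # index of the currently richer option
    correct = 0
    for t in range(n_trials):
        if t == n_trials // 2:
            better = 1 - better     # reversal of reward contingencies
        p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        choice = 0 if rng.random() < p0 else 1
        p_r = p_reward if choice == better else 1.0 - p_reward
        reward = 1.0 if rng.random() < p_r else 0.0
        q[choice] += alpha * (reward - q[choice])   # prediction-error update
        correct += int(choice == better)
    return correct / n_trials

acc = run_reversal(0.3, 5.0)
```

    A reduced weighting of feedback (a lower alpha or beta) leaves pre-reversal values sticky and slows adaptation after the reversal, the kind of deficit the modeling results point to.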

  5. Cholinergic stimulation enhances Bayesian belief updating in the deployment of spatial attention.

    PubMed

    Vossel, Simone; Bauer, Markus; Mathys, Christoph; Adams, Rick A; Dolan, Raymond J; Stephan, Klaas E; Friston, Karl J

    2014-11-19

    The exact mechanisms whereby the cholinergic neurotransmitter system contributes to attentional processing remain poorly understood. Here, we applied computational modeling to psychophysical data (obtained from a spatial attention task) under a psychopharmacological challenge with the cholinesterase inhibitor galantamine (Reminyl). This allowed us to characterize the cholinergic modulation of selective attention formally, in terms of hierarchical Bayesian inference. In a placebo-controlled, within-subject, crossover design, 16 healthy human subjects performed a modified version of Posner's location-cueing task in which the proportion of validly and invalidly cued targets (percentage of cue validity, % CV) changed over time. Saccadic response speeds were used to estimate the parameters of a hierarchical Bayesian model to test whether cholinergic stimulation affected the trial-wise updating of probabilistic beliefs that underlie the allocation of attention or whether galantamine changed the mapping from those beliefs to subsequent eye movements. Behaviorally, galantamine led to a greater influence of probabilistic context (% CV) on response speed than placebo. Crucially, computational modeling suggested this effect was due to an increase in the rate of belief updating about cue validity (as opposed to the increased sensitivity of behavioral responses to those beliefs). We discuss these findings with respect to cholinergic effects on hierarchical cortical processing and in relation to the encoding of expected uncertainty or precision.
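    The authors' model is hierarchical; as a deliberately simplified illustration of trial-wise belief updating about cue validity, a flat beta-Bernoulli update looks like this (a hierarchical model additionally lets the effective learning rate itself adapt, which is where the galantamine effect was located):

```python
def update_cue_validity(outcomes, a=1.0, b=1.0):
    """Trial-wise Bayesian updating of believed cue validity.

    `outcomes` is a sequence of 1 (validly cued) / 0 (invalidly cued)
    trials; (a, b) parameterize a Beta prior over the cue validity.
    Returns the posterior-mean validity estimate after each trial.
    """
    beliefs = []
    for o in outcomes:
        a += o            # running count of valid trials
        b += 1 - o        # running count of invalid trials
        beliefs.append(a / (a + b))
    return beliefs
```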

  6. Role of Central Serotonin in Anticipation of Rewarding and Punishing Outcomes: Effects of Selective Amygdala or Orbitofrontal 5-HT Depletion

    PubMed Central

    Rygula, Rafal; Clarke, Hannah F.; Cardinal, Rudolf N.; Cockcroft, Gemma J.; Xia, Jing; Dalley, Jeff W.; Robbins, Trevor W.; Roberts, Angela C.

    2015-01-01

    Understanding the role of serotonin (or 5-hydroxytryptamine, 5-HT) in aversive processing has been hampered by the contradictory findings, across studies, of increased sensitivity to punishment in terms of subsequent response choice but decreased sensitivity to punishment-induced response suppression following gross depletion of central 5-HT. To address this apparent discrepancy, the present study determined whether both effects could be found in the same animals by performing localized 5-HT depletions in the amygdala or orbitofrontal cortex (OFC) of a New World monkey, the common marmoset. 5-HT depletion in the amygdala impaired response choice on a probabilistic visual discrimination task by increasing the effectiveness of misleading, or false, punishment and reward, and decreased response suppression in a variable interval test of punishment sensitivity that employed the same reward and punisher. 5-HT depletion in the OFC also disrupted probabilistic discrimination learning and decreased response suppression. Computational modeling of behavior on the discrimination task showed that the lesions reduced reinforcement sensitivity. A novel, unitary account of the findings in terms of the causal role of 5-HT in the anticipation of both negative and positive motivational outcomes is proposed and discussed in relation to current theories of 5-HT function and our understanding of mood and anxiety disorders. PMID:24879752

  7. Beads task vs. box task: The specificity of the jumping to conclusions bias.

    PubMed

    Balzan, Ryan P; Ephraums, Rachel; Delfabbro, Paul; Andreou, Christina

    2017-09-01

    Previous research involving the probabilistic reasoning 'beads task' has consistently demonstrated a jumping-to-conclusions (JTC) bias, whereby individuals with delusions make decisions based on limited evidence. However, recent studies have suggested that miscomprehension may be confounding the beads task. The current study aimed to test the conventional beads task against a conceptually simpler probabilistic reasoning "box task." One hundred non-clinical participants completed both the beads task and the box task, as well as the Peters et al. Delusions Inventory (PDI) to assess delusion-proneness. The number of 'draws to decision' was assessed for both tasks. Additionally, the total amount of on-screen evidence was manipulated for the box task, and two new box task measures were assessed (i.e., 'proportion of evidence requested' and 'deviation from optimal solution'). Despite being conceptually similar, the two tasks did not correlate, and participants requested significantly less information on the beads task relative to the box task. High-delusion-prone participants did not make hastier decisions on either task; in fact, on the box task, this group was significantly more conservative than the low-delusion-prone group. Neither task was incentivized, and the results need replication with a clinical sample. Participants, particularly those identified as high-delusion-prone, displayed a more conservative style of responding on the novel box task relative to the beads task. The two tasks, whilst conceptually similar, appear to tap different cognitive processes. The implications of these results are discussed in relation to the JTC bias and the theoretical mechanisms thought to underlie it.
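    For reference, draws-to-decision is usually judged against the Bayesian posterior over the two jars; with the common 85:15 colour split (exact proportions vary across studies), two same-coloured beads already yield a posterior near 0.97:

```python
def jar_posterior(n_red, n_other, p=0.85, prior=0.5):
    """Posterior probability that beads come from the mostly-red jar
    (red proportion p) rather than the mostly-other jar (1 - p),
    after n_red red and n_other other-coloured draws, from a flat prior.
    """
    like_a = (p ** n_red) * ((1 - p) ** n_other)      # mostly-red jar
    like_b = ((1 - p) ** n_red) * (p ** n_other)      # mostly-other jar
    return prior * like_a / (prior * like_a + (1 - prior) * like_b)
```

    Deciding after one or two beads is therefore not necessarily irrational in the Bayesian sense, which is one reason comprehension checks and alternative tasks such as the box task matter.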

  8. Information Processing at the Memoryful and Memoryless Channel Levels in Problem-Solving and Recall Tasks.

    ERIC Educational Resources Information Center

    Fazio, Frank; Moser, Gene W.

    A probabilistic model (see SE 013 578) describing information processing during the cognitive tasks of recall and problem solving was tested, refined, and developed by testing graduate students on a number of tasks which combined oral, written, and overt "input" and "output" modes in several ways. In a verbal chain one subject…

  9. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 2: Literature surveys of critical Space Shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Rajagopal, K. R.

    1992-01-01

    The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.

  10. IEA Wind Task 36 Forecasting

    NASA Astrophysics Data System (ADS)

    Giebel, Gregor; Cline, Joel; Frank, Helmut; Shaw, Will; Pinson, Pierre; Hodge, Bri-Mathias; Kariniotakis, Georges; Sempreviva, Anna Maria; Draxl, Caroline

    2017-04-01

    Wind power forecasts have been used operationally for over 20 years. Despite this, there are still many opportunities to improve the forecasts, both on the weather prediction side and in how the forecasts are used. The new International Energy Agency (IEA) Task on Wind Power Forecasting organises international collaboration among national weather centres with an interest in and/or large projects on wind forecast improvement (NOAA, DWD, UK MetOffice, …), operational forecasters, and forecast users. The Task is divided into three work packages. First, collaboration on improving the scientific basis for the wind predictions themselves; this includes numerical weather prediction model physics, but also widely distributed information on accessible datasets for verification. Second, an international pre-standard (an IEA Recommended Practice) on benchmarking and comparing wind power forecasts, including probabilistic forecasts, aimed at industry and forecasters alike; this work package will also organise benchmarks, in cooperation with the IEA Task WakeBench. Third, engagement with end users to disseminate best practice in the use of wind power predictions, especially probabilistic ones. The Operating Agent is Gregor Giebel of DTU; the Co-Operating Agent is Joel Cline of the US Department of Energy. Collaboration in the Task is solicited from everyone interested in the forecasting business. We will collaborate with IEA Task 31 WakeBench, which developed the Windbench benchmarking platform that this Task will use for forecasting benchmarks. The Task runs for three years, 2016-2018.
Main deliverables are an up-to-date list of current projects and main project results, including datasets which can be used by researchers around the world to improve their own models; an IEA Recommended Practice on performance evaluation of probabilistic forecasts; a position paper on the use of probabilistic forecasts; and one or more benchmark studies implemented on the Windbench platform hosted at CENER. Additionally, spreading relevant information in both the forecaster and user communities is paramount. The poster also shows the work done in the first half of the Task, e.g. the collection of available datasets and the lessons learned from a public workshop on 9 June in Barcelona on Experiences with the Use of Forecasts and Gaps in Research. Participation is open to all interested parties in member states of the IEA Annex on Wind Power; see ieawind.org for the up-to-date list. For collaboration, please contact the author (grgi@dtu.dk).

  11. Low Cognitive Impulsivity Is Associated with Better Gain and Loss Learning in a Probabilistic Decision-Making Task

    PubMed Central

    Cáceres, Pablo; San Martín, René

    2017-01-01

    Many advances have been made over the last decades in describing, on the one hand, the link between reward-based learning and decision-making, and on the other hand, the link between impulsivity and decision-making. However, the association between reward-based learning and impulsivity remains poorly understood. In this study, we evaluated the association between individual differences in loss-minimizing and gain-maximizing behavior in a learning-based probabilistic decision-making task and individual differences in cognitive impulsivity. We found that low cognitive impulsivity was associated both with a better performance minimizing losses and maximizing gains during the task. These associations remained significant after controlling for mathematical skills and gender as potential confounders. We discuss potential mechanisms through which cognitive impulsivity might interact with reward-based learning and decision-making. PMID:28261137

  13. PREDICT: Privacy and Security Enhancing Dynamic Information Monitoring

    DTIC Science & Technology

    2015-08-03

    … consisting of global server-side probabilistic assignment by an untrusted server using cloaked locations, followed by feedback-loop guided local … [12]. These methods achieve high sensing coverage with low cost using cloaked locations [3]. In follow-on work, the issue of mobility is addressed.

  14. Perceptual Decision-Making as Probabilistic Inference by Neural Sampling.

    PubMed

    Haefner, Ralf M; Berkes, Pietro; Fiser, József

    2016-05-04

    We address two main challenges facing systems neuroscience today: understanding the nature and function of cortical feedback between sensory areas and of correlated variability. Starting from the old idea of perception as probabilistic inference, we show how to use knowledge of the psychophysical task to make testable predictions for the influence of feedback signals on early sensory representations. Applying our framework to a two-alternative forced choice task paradigm, we can explain multiple empirical findings that have been hard to account for by the traditional feedforward model of sensory processing, including the task dependence of neural response correlations and the diverging time courses of choice probabilities and psychophysical kernels. Our model makes new predictions and characterizes a component of correlated variability that represents task-related information rather than performance-degrading noise. It demonstrates a normative way to integrate sensory and cognitive components into physiologically testable models of perceptual decision-making.

  15. Optimism as a Prior Belief about the Probability of Future Reward

    PubMed Central

    Kalra, Aditi; Seriès, Peggy

    2014-01-01

    Optimists hold positive a priori beliefs about the future. In Bayesian statistical theory, a priori beliefs can be overcome by experience. However, optimistic beliefs can at times appear surprisingly resistant to evidence, suggesting that optimism might also influence how new information is selected and learned. Here, we use a novel Pavlovian conditioning task, embedded in a normative framework, to directly assess how trait optimism, as classically measured using self-report questionnaires, influences choices between visual targets as learning about their association with reward progresses. We find that trait optimism relates to an a priori belief about the likelihood of rewards, but not losses, in our task. Critically, this positive belief behaves like a probabilistic prior, i.e., its influence diminishes with increasing experience. Contrary to findings in the literature on unrealistic optimism and self-beliefs, it does not appear to influence the iterative learning process directly. PMID:24853098

  16. Development of Probabilistic Rigid Pavement Design Methodologies for Military Airfields.

    DTIC Science & Technology

    1983-12-01

    Project 4A161102AT22, Task AO, Work Unit 009, "Methodology for Considering Material Variability in Pavement Design." OCE Project Monitor was Mr. S. S. Gillespie. Volume I: State of the Art, Variability of Airfield Pavement Materials; Volume II: Mathematical Formulation of …; Volume IV: Probabilistic Analysis of Rigid Airfield Design by Elastic Layered Theory.

  17. Towards an Artificial Space Object Taxonomy

    DTIC Science & Technology

    2013-09-01

    We demonstrate how to implement this taxonomy in Figaro, an open source probabilistic programming language. Currently, US Space Command …

  19. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion systems components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Summarized here is the technical effort and computer code developed during the five year duration of the program for probabilistic structural analysis methods. The summary includes a brief description of the computer code manuals and a detailed description of code validation demonstration cases for random vibrations of a discharge duct, probabilistic material nonlinearities of a liquid oxygen post, and probabilistic buckling of a transfer tube liner.

  20. Selection of remedial alternatives for mine sites: a multicriteria decision analysis approach.

    PubMed

    Betrie, Getnet D; Sadiq, Rehan; Morin, Kevin A; Tesfamariam, Solomon

    2013-04-15

    The selection of remedial alternatives for mine sites is a complex task because it involves multiple criteria, often with conflicting objectives. However, the existing framework used to select remedial alternatives lacks multicriteria decision analysis (MCDA) aids and does not consider uncertainty in the selection of alternatives. The objective of this paper is to improve the existing framework by introducing deterministic and probabilistic MCDA methods. The Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods have been implemented in this study. The MCDA analysis involves preparing the inputs to the PROMETHEE methods (identifying the alternatives, defining the criteria, deriving the criteria weights using the analytic hierarchy process (AHP), defining the probability distributions of the criteria weights, and conducting Monte Carlo simulation (MCS)); running the PROMETHEE methods with these inputs; and conducting a sensitivity analysis. A case study of a mine site was presented to demonstrate the improved framework. The results showed that the improved framework provides a reliable way of selecting remedial alternatives as well as quantifying the impact of different criteria on the selection.
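    The deterministic core of PROMETHEE II can be sketched as below, using the simplest ("usual") preference function and assuming all criteria are to be maximized; the paper's framework layers AHP-derived weights, their probability distributions, and Monte Carlo simulation on top of such a core:

```python
def promethee_net_flows(matrix, weights):
    """Minimal PROMETHEE II with the 'usual' preference function
    (preference 1 if a beats b on a criterion, else 0).

    `matrix[a][c]` is alternative a's score on criterion c; all criteria
    are taken as maximization.  Returns net outranking flows, where
    higher means better.  Real applications add preference thresholds.
    """
    n = len(matrix)

    def pi(a, b):  # weighted aggregated preference of a over b
        return sum(w for x, y, w in zip(matrix[a], matrix[b], weights) if x > y)

    flows = []
    for a in range(n):
        plus = sum(pi(a, b) for b in range(n) if b != a) / (n - 1)
        minus = sum(pi(b, a) for b in range(n) if b != a) / (n - 1)
        flows.append(plus - minus)   # net flow = leaving minus entering
    return flows
```

    Alternatives are ranked by net flow; re-running the computation with weights drawn from their probability distributions turns the ranking probabilistic.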

  1. The SIMRAND methodology: Theory and application for the simulation of research and development projects

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    A research and development (R&D) project often involves a number of decisions concerning which subset of systems or tasks should be undertaken to achieve the goal of the project. To aid this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for selecting the optimal subset of systems or tasks to be undertaken on an R&D project. The SIMRAND methodology models the alternative subsets of systems or tasks under consideration as alternative networks; each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of preference. Uncertainty is incorporated by treating the variables of the equations as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND I computer program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
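    The Monte Carlo core of such a run can be sketched as follows; the samplers and the linear utility are illustrative stand-ins for the expert-assessed cumulative distribution functions and assessed cardinal utility functions the methodology prescribes:

```python
import random

def simrand_rank(paths, utility, n_sims=2000, seed=1):
    """Monte Carlo comparison of alternative network paths (sketch).

    Each path is a list of samplers for its task-level random variables;
    a path's performance on one simulation pass is the sum of one draw
    from each sampler.  Paths are ranked by mean utility of performance.
    """
    rng = random.Random(seed)
    mean_utils = []
    for samplers in paths:
        total = 0.0
        for _ in range(n_sims):
            total += utility(sum(s(rng) for s in samplers))
        mean_utils.append(total / n_sims)
    return mean_utils

# Two one-task paths; the second has uniformly higher performance.
ranking = simrand_rank(
    [[lambda r: r.uniform(0.0, 1.0)], [lambda r: r.uniform(1.0, 2.0)]],
    lambda v: v,
)
```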

  2. Quantum Tasks with Non-maximally Quantum Channels via Positive Operator-Valued Measurement

    NASA Astrophysics Data System (ADS)

    Peng, Jia-Yin; Luo, Ming-Xing; Mo, Zhi-Wen

    2013-01-01

    By using a proper positive operator-valued measure (POVM), we present two new schemes for probabilistic transmission with non-maximally entangled four-particle cluster states. In the first scheme, we demonstrate that two non-maximally entangled four-particle cluster states can be used to probabilistically share an unknown three-particle GHZ-type state at either distant agent's location. In the second protocol, we demonstrate that a non-maximally entangled four-particle cluster state can be used to teleport an arbitrary unknown multi-particle state in a probabilistic manner with appropriate unitary operations and POVMs. Moreover, the total success probabilities of these two schemes are also worked out.

  3. A probabilistic method for testing and estimating selection differences between populations

    PubMed Central

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-01-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. PMID:26463656
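    The estimator's core logic in a single-locus sketch (in the paper, the variance of the estimate, and hence the confidence intervals, come from genome-wide variants, not from one locus): under near-logistic allele-frequency dynamics, logit(p) shifts by roughly s per generation, so the between-population log odds ratio divided by the divergence time estimates the difference in selection coefficients:

```python
import math

def selection_difference(p1, p2, n_gen):
    """Rough per-generation selection-coefficient difference between two
    populations, from the log odds ratio of the allele frequencies p1
    and p2 after n_gen generations of independent evolution.
    """
    log_or = math.log(p1 / (1.0 - p1)) - math.log(p2 / (1.0 - p2))
    return log_or / n_gen
```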

  4. Simple proof of the impossibility of bit commitment in generalized probabilistic theories using cone programming

    NASA Astrophysics Data System (ADS)

    Sikora, Jamie; Selby, John

    2018-04-01

    Bit commitment is a fundamental cryptographic task, in which Alice commits a bit to Bob such that she cannot later change the value of the bit, while, simultaneously, the bit is hidden from Bob. It is known that ideal bit commitment is impossible within quantum theory. In this work, we show that it is also impossible in generalized probabilistic theories (under a small set of assumptions) by presenting a quantitative trade-off between Alice's and Bob's cheating probabilities. Our proof relies crucially on a formulation of cheating strategies as cone programs, a natural generalization of semidefinite programs. In fact, using the generality of this technique, we prove that this result holds for the more general task of integer commitment.

  5. Probability misjudgment, cognitive ability, and belief in the paranormal.

    PubMed

    Musch, Jochen; Ehrenberg, Katja

    2002-05-01

    According to the probability misjudgment account of paranormal belief (Blackmore & Troscianko, 1985), believers in the paranormal tend to wrongly attribute remarkable coincidences to paranormal causes rather than chance. Previous studies have shown that belief in the paranormal is indeed positively related to error rates in probabilistic reasoning. General cognitive ability could account for a relationship between these two variables without assuming a causal role of probabilistic reasoning in the forming of paranormal beliefs, however. To test this alternative explanation, a belief in the paranormal scale (BPS) and a battery of probabilistic reasoning tasks were administered to 123 university students. Confirming previous findings, a significant correlation between BPS scores and error rates in probabilistic reasoning was observed. This relationship disappeared, however, when cognitive ability as measured by final examination grades was controlled for. Lower cognitive ability correlated substantially with belief in the paranormal. This finding suggests that differences in general cognitive performance rather than specific probabilistic reasoning skills provide the basis for paranormal beliefs.
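    The control analysis described is, in essence, a first-order partial correlation; a minimal sketch using the standard formula (the data in the usage example are made up):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation r_xy.z: the x-y association with
    z (here: cognitive ability) partialled out.
    """
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

    If r_xy.z collapses toward zero while r_xy was reliable, the x-y association is attributable to the controlled variable, which is the pattern reported here for cognitive ability.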

  6. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
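    The "classical linear regression" baseline mentioned above can be sketched in a few lines. This is an illustrative single-predictor ordinary least squares fit (e.g., regressing a performance score on WM capacity); the variable names and the single-predictor form are assumptions for illustration, not the study's actual model.

```python
def ols_fit(x, y):
    """Ordinary least squares for one predictor: y ~ a + b*x.
    Illustrative stand-in for a classical linear regression baseline;
    x could be WM capacity, y a supervisory performance score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx      # slope: performance change per unit of predictor
    a = my - b * mx    # intercept
    return a, b
```

    A Gaussian-process or Bayesian-network model would replace this fixed linear form with a non-parametric or graphical one, which is how the abstract frames robust prediction beyond experimental conditions.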

  7. Heuristic and Analytic Processing: Age Trends and Associations with Cognitive Ability and Cognitive Styles.

    ERIC Educational Resources Information Center

    Kokis, Judite V.; Macpherson, Robyn; Toplak, Maggie E.; West, Richard F.; Stanovich, Keith E.

    2002-01-01

    Examined developmental and individual differences in tendencies to favor analytic over heuristic responses in three tasks (inductive reasoning, deduction under belief bias conditions, probabilistic reasoning) in children varying in age and cognitive ability. Found significant increases in analytic responding with development on first two tasks.…

  8. A probabilistic method for testing and estimating selection differences between populations.

    PubMed

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
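    The core estimator described above, that the log odds ratio of allele frequencies approximates the difference in selection coefficients between populations, can be sketched as follows. This is a minimal illustration of the idea rather than the authors' implementation; the normal-approximation test assumes the genome-wide standard deviation has already been estimated, as the abstract describes.

```python
import math

def log_odds_ratio(p1, p2):
    """Log odds ratio of an allele's frequencies in two populations;
    under a drift-plus-selection model this provides an estimate of
    the difference in selection coefficients (up to scaling)."""
    return math.log(p1 / (1 - p1)) - math.log(p2 / (1 - p2))

def selection_difference_test(lor, genome_wide_sd):
    """Two-sided z-test of 'no selection difference', using a standard
    deviation estimated from genome-wide variants as the null spread."""
    z = lor / genome_wide_sd
    p = math.erfc(abs(z) / math.sqrt(2))  # normal-approximation p-value
    return z, p
```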

  9. Glucocorticoid Regulation of Food-Choice Behavior in Humans: Evidence from Cushing's Syndrome.

    PubMed

    Moeller, Scott J; Couto, Lizette; Cohen, Vanessa; Lalazar, Yelena; Makotkine, Iouri; Williams, Nia; Yehuda, Rachel; Goldstein, Rita Z; Geer, Eliza B

    2016-01-01

    The mechanisms by which glucocorticoids regulate food intake and resulting body mass in humans are not well-understood. One potential mechanism could involve modulation of reward processing, but human stress models examining effects of glucocorticoids on behavior contain important confounds. Here, we studied individuals with Cushing's syndrome, a rare endocrine disorder characterized by chronic excess endogenous glucocorticoids. Twenty-three patients with Cushing's syndrome (13 with active disease; 10 with disease in remission) and 15 controls with a comparably high body mass index (BMI) completed two simulated food-choice tasks (one with "explicit" task contingencies and one with "probabilistic" task contingencies), during which they indicated their objective preference for viewing high calorie food images vs. standardized pleasant, unpleasant, and neutral images. All participants also completed measures of food craving, and approximately half of the participants provided 24-h urine samples for assessment of cortisol and cortisone concentrations. Results showed that on the explicit task (but not the probabilistic task), participants with active Cushing's syndrome made fewer food-related choices than participants with Cushing's syndrome in remission, who in turn made fewer food-related choices than overweight controls. Corroborating this group effect, higher urine cortisone was negatively correlated with food-related choice in the subsample of all participants for whom these data were available. On the probabilistic task, despite a lack of group differences, higher food-related choice correlated with higher state and trait food craving in active Cushing's patients. Taken together, relative to overweight controls, Cushing's patients, particularly those with active disease, displayed a reduced vigor of responding for food rewards that was presumably attributable to glucocorticoid abnormalities. 
Beyond Cushing's, these results may have relevance for elucidating glucocorticoid contributions to food-seeking behavior, enhancing mechanistic understanding of weight fluctuations associated with oral glucocorticoid therapy and/or chronic stress, and informing the neurobiology of neuropsychiatric conditions marked by abnormal cortisol dynamics (e.g., major depression, Alzheimer's disease).

  10. Probabilistic Models for Solar Particle Events


    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Xapsos, Michael

    2009-01-01

    Probabilistic models of Solar Particle Events (SPEs) are used in space mission design studies to describe the radiation environment that can be expected at a specified confidence level. The task of the designer is then to choose a design that will operate in the model radiation environment. Probabilistic models have already been developed for solar proton events that describe the peak flux, event-integrated fluence and mission-integrated fluence. In addition, a probabilistic model has been developed that describes the mission-integrated fluence for the Z>2 elemental spectra. This talk will focus on completing this suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements.

  11. Probabilistic Metrology Attains Macroscopic Cloning of Quantum Clocks

    NASA Astrophysics Data System (ADS)

    Gendra, B.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.; Chiribella, G.

    2014-12-01

    It has recently been shown that probabilistic protocols based on postselection boost the performance of the replication of quantum clocks and phase estimation. Here we demonstrate that the improvements in these two tasks have to match exactly in the macroscopic limit where the number of clones grows to infinity, preserving the equivalence between asymptotic cloning and state estimation for arbitrary values of the success probability. Remarkably, the cloning fidelity depends critically on the number of rationally independent eigenvalues of the clock Hamiltonian. We also prove that probabilistic metrology can simulate cloning in the macroscopic limit for arbitrary sets of states when the performance of the simulation is measured by testing small groups of clones.

  12. Acute stress selectively reduces reward sensitivity

    PubMed Central

    Berghorst, Lisa H.; Bogdan, Ryan; Frank, Michael J.; Pizzagalli, Diego A.

    2013-01-01

    Stress may promote the onset of psychopathology by disrupting reward processing. However, the extent to which stress impairs reward processing, rather than incentive processing more generally, is unclear. To evaluate the specificity of stress-induced reward processing disruption, 100 psychiatrically healthy females were administered a probabilistic stimulus selection task (PSST) that enabled comparison of sensitivity to reward-driven (Go) and punishment-driven (NoGo) learning under either “no stress” or “stress” (threat-of-shock) conditions. Cortisol samples and self-report measures were collected. Contrary to hypotheses, the groups did not differ significantly in task performance or cortisol reactivity. However, further analyses focusing only on individuals under “stress” who were high responders with regard to both cortisol reactivity and self-reported negative affect revealed reduced reward sensitivity relative to individuals tested in the “no stress” condition; importantly, these deficits were reward-specific. Overall, findings provide preliminary evidence that stress-reactive individuals show diminished sensitivity to reward, but not punishment, under stress. While such results highlight the possibility that stress-induced anhedonia might be an important mechanism linking stress to affective disorders, future studies are necessary to confirm this conjecture. PMID:23596406

  13. Combining non selective gas sensors on a mobile robot for identification and mapping of multiple chemical compounds.

    PubMed

    Bennetts, Victor Hernandez; Schaffernicht, Erik; Pomareda, Victor; Lilienthal, Achim J; Marco, Santiago; Trincavelli, Marco

    2014-09-17

    In this paper, we address the task of gas distribution modeling in scenarios where multiple heterogeneous compounds are present. Gas distribution modeling is particularly useful in emission monitoring applications, where spatial representations of the gaseous patches can be used to identify emission hot spots. In realistic environments, the presence of multiple chemicals is expected, and therefore gas discrimination has to be incorporated into the modeling process. The approach presented in this work addresses the task of gas distribution modeling by combining different non-selective gas sensors. Gas discrimination is addressed with an open sampling system, composed of an array of metal oxide sensors and a probabilistic algorithm tailored to uncontrolled environments. For each of the identified compounds, the mapping algorithm generates a calibrated gas distribution model using the classification uncertainty and the concentration readings acquired with a photoionization detector. The meta-parameters of the proposed modeling algorithm are automatically learned from the data. The approach was validated with a gas-sensitive robot patrolling outdoor and indoor scenarios where two different chemicals were released simultaneously. The experimental results show that the generated multi-compound maps can be used to accurately predict the location of emitting gas sources.

  14. Think twice: Impulsivity and decision making in obsessive-compulsive disorder.

    PubMed

    Grassi, Giacomo; Pallanti, Stefano; Righi, Lorenzo; Figee, Martijn; Mantione, Mariska; Denys, Damiaan; Piccagliani, Daniele; Rossi, Alessandro; Stratta, Paolo

    2015-12-01

    Recent studies have challenged the anxiety-avoidance model of obsessive-compulsive disorder (OCD), linking OCD to impulsivity, risky decision-making and reward-system dysfunction, features that are also found in addiction and might support the conceptualization of OCD as a behavioral addiction. Here, we conducted an exploratory investigation of the behavioral addiction model of OCD by assessing whether OCD patients show greater impulsivity, impaired decision-making, and biased probabilistic reasoning, three core dimensions of addiction, relative to healthy controls. We assessed these dimensions in 38 OCD patients and 39 healthy controls with the Barratt Impulsiveness Scale (BIS-11), the Iowa Gambling Task (IGT) and the Beads Task. OCD patients had significantly higher BIS-11 scores than controls, in particular on the cognitive subscales. They performed significantly worse than controls on the IGT, preferring immediate reward despite negative future consequences, and did not learn from losses. Finally, OCD patients demonstrated biased probabilistic reasoning, as reflected by significantly fewer draws to decision than controls on the Beads Task. OCD patients are thus more impulsive than controls and demonstrate risky decision-making and biased probabilistic reasoning. These results might suggest that other conceptualizations of OCD, such as the behavioral addiction model, may be more suitable than the anxiety-avoidance one. However, further studies directly comparing OCD and behavioral addiction patients are needed in order to scrutinize this model.

  15. Procedural learning: A developmental study of motor sequence learning and probabilistic classification learning in school-aged children.

    PubMed

    Mayor-Dubois, Claire; Zesiger, Pascal; Van der Linden, Martial; Roulet-Perez, Eliane

    2016-01-01

    In this study, we investigated motor and cognitive procedural learning in typically developing children aged 8-12 years with a serial reaction time (SRT) task and a probabilistic classification learning (PCL) task. The aims were to replicate and extend the results of previous SRT studies, to investigate PCL in school-aged children, to explore the contribution of declarative knowledge to SRT and PCL performance, to explore the strategies used by children in the PCL task via a mathematical model, and to see whether performances obtained in motor and cognitive tasks correlated. The results showed similar learning effects in the three age groups in the SRT and in the first half of the PCL tasks. Participants did not develop explicit knowledge in the SRT task whereas declarative knowledge of the cue-outcome associations correlated with the performances in the second half of the PCL task, suggesting a participation of explicit knowledge after some time of exposure in PCL. An increasing proportion of the optimal strategy use with increasing age was observed in the PCL task. Finally, no correlation appeared between cognitive and motor performance. In conclusion, we extended the hypothesis of age invariance from motor to cognitive procedural learning, which had not been done previously. The ability to adopt more efficient learning strategies with age may rely on the maturation of the fronto-striatal loops. The lack of correlation between performance in the SRT task and the first part of the PCL task suggests dissociable developmental trajectories within the procedural memory system.

  16. Discounting of food, sex, and money.

    PubMed

    Holt, Daniel D; Newquist, Matthew H; Smits, Rochelle R; Tiry, Andrew M

    2014-06-01

    Discounting is a useful framework for understanding choice involving a range of delayed and probabilistic outcomes (e.g., money, food, drugs), but relatively few studies have examined how people discount other commodities (e.g., entertainment, sex). Using a novel discounting task, where the length of a line represented the value of an outcome and was adjusted using a staircase procedure, we replicated previous findings showing that individuals discount delayed and probabilistic outcomes in a manner well described by a hyperbola-like function. In addition, we found strong positive correlations between discounting rates of delayed, but not probabilistic, outcomes. This suggests that discounting of delayed outcomes may be relatively predictable across outcome types but that discounting of probabilistic outcomes may depend more on specific contexts. The generality of delay discounting and potential context dependence of probability discounting may provide important information regarding factors contributing to choice behavior.
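    The "hyperbola-like function" referred to above is commonly written (following Green and Myerson) as V = A / (1 + kX)^s, where X is the delay or the odds against receipt. A minimal sketch, with illustrative parameter names:

```python
def discounted_value(amount, x, k, s=1.0):
    """Hyperbola-like discounting: V = A / (1 + k*x)**s.
    x is the delay (delay discounting) or the odds against receipt
    (probability discounting); k scales discounting steepness, and the
    exponent s gives the hyperbola-like generalization (s = 1 recovers
    the simple hyperbola)."""
    return amount / (1.0 + k * x) ** s
```

    For example, with k = 0.1 and s = 1, a reward of 100 delayed by 10 units retains half its value: discounted_value(100, 10, 0.1) returns 50.0.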

  17. Emergence of spontaneous anticipatory hand movements in a probabilistic environment

    PubMed Central

    Bruhn, Pernille

    2013-01-01

    In this article, we present a novel experimental approach to the study of anticipation in probabilistic cuing. We implemented a modified spatial cuing task in which participants made an anticipatory hand movement toward one of two probabilistic targets while the (x, y)-computer mouse coordinates of their hand movements were sampled. This approach allowed us to tap into anticipatory processes as they occurred, rather than just measuring their behavioral outcome through reaction time to the target. In different conditions, we varied the participants’ degree of certainty of the upcoming target position with probabilistic pre-cues. We found that participants initiated spontaneous anticipatory hand movements in all conditions, even when they had no information on the position of the upcoming target. However, participants’ hand position immediately before the target was affected by the degree of certainty concerning the target’s position. This modulation of anticipatory hand movements emerged rapidly in most participants as they encountered a constant probabilistic relation between a cue and an upcoming target position over the course of the experiment. Finally, we found individual differences in the way anticipatory behavior was modulated with an uncertain/neutral cue. Implications of these findings for probabilistic spatial cuing are discussed. PMID:23833694

  18. Probabilistic drug connectivity mapping

    PubMed Central

    2014-01-01

    Background: The aim of connectivity mapping is to match drugs using drug-treatment gene expression profiles from multiple cell lines. This can be viewed as an information retrieval task, with the goal of finding the most relevant profiles for a given query drug. We infer the relevance for retrieval by data-driven probabilistic modeling of the drug responses, resulting in probabilistic connectivity mapping, and further consider the available cell lines as different data sources. We use a special type of probabilistic model to separate what is shared and specific between the sources, in contrast to earlier connectivity mapping methods that have intentionally aggregated all available data, neglecting information about the differences between the cell lines. Results: We show that the probabilistic multi-source connectivity mapping method is superior to alternatives in finding functionally and chemically similar drugs from the Connectivity Map data set. We also demonstrate that an extension of the method is capable of retrieving combinations of drugs that match different relevant parts of the query drug response profile. Conclusions: The probabilistic modeling-based connectivity mapping method provides a promising alternative to earlier methods. Principled integration of data from different cell lines helps to identify relevant responses for specific drug repositioning applications. PMID:24742351

  19. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  20. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
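    As a toy illustration of a numerical routine that reports an uncertainty alongside its estimate, consider Monte Carlo quadrature, whose standard error quantifies the computation's own imprecision. This is only a sketch of the general idea; the probabilistic numerics described above goes further, treating the integrand itself as uncertain (e.g., Bayesian quadrature with a Gaussian-process prior).

```python
import math
import random

def mc_integrate(f, a, b, n=10000, seed=0):
    """Monte Carlo quadrature on [a, b] that returns both an estimate
    of the integral and an uncertainty (the standard error of the
    estimate), in the spirit of numerical methods that quantify their
    own error."""
    rng = random.Random(seed)
    xs = [a + (b - a) * rng.random() for _ in range(n)]
    ys = [f(x) for x in xs]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)  # sample variance
    estimate = (b - a) * mean
    stderr = (b - a) * math.sqrt(var / n)
    return estimate, stderr
```

    A downstream decision can then be conditioned on the reported uncertainty, e.g., refusing to act until the standard error falls below a tolerance.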

  1. Visuo-Motor and Cognitive Procedural Learning in Children with Basal Ganglia Pathology

    ERIC Educational Resources Information Center

    Mayor-Dubois, C.; Maeder, P.; Zesiger, P.; Roulet-Perez, E.

    2010-01-01

    We investigated procedural learning in 18 children with basal ganglia (BG) lesions or dysfunctions of various aetiologies, using a visuo-motor learning test, the Serial Reaction Time (SRT) task, and a cognitive learning test, the Probabilistic Classification Learning (PCL) task. We compared patients with early (less than 1 year old, n=9), later…

  2. Energy and Power Aware Computing Through Management of Computational Entropy

    DTIC Science & Technology

    2008-01-01

    This research focused on two sub-tasks: (1) Assessing the need and planning for a potential “Living Framework Forum” (LFF) software architecture...probabilistic switching with plausible device realizations to save energy in our patent application [35]. In [35], we showed an introverted switch in

  3. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The fourth year of technical developments on the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) system for Probabilistic Structural Analysis Methods is summarized. The effort focused on the continued expansion of the Probabilistic Finite Element Method (PFEM) code, the implementation of the Probabilistic Boundary Element Method (PBEM), and the implementation of the Probabilistic Approximate Methods (PAppM) code. The principal focus for the PFEM code is the addition of a multilevel structural dynamics capability. The strategy includes probabilistic loads, treatment of material, geometry uncertainty, and full probabilistic variables. Enhancements are included for the Fast Probability Integration (FPI) algorithms and the addition of Monte Carlo simulation as an alternate. Work on the expert system and boundary element developments continues. The enhanced capability in the computer codes is validated by applications to a turbine blade and to an oxidizer duct.

  4. Selective Estrogen Receptor Modulation Increases Hippocampal Activity during Probabilistic Association Learning in Schizophrenia

    PubMed Central

    Kindler, Jochen; Weickert, Cynthia Shannon; Skilleter, Ashley J; Catts, Stanley V; Lenroot, Rhoshel; Weickert, Thomas W

    2015-01-01

    People with schizophrenia show probabilistic association learning impairment in conjunction with abnormal neural activity. The selective estrogen receptor modulator (SERM) raloxifene preserves neural activity during memory in healthy older men and improves memory in schizophrenia. Here, we tested the extent to which raloxifene modifies neural activity during learning in schizophrenia. Nineteen people with schizophrenia participated in a twelve-week randomized, double-blind, placebo-controlled, cross-over adjunctive treatment trial of the SERM raloxifene administered orally at 120 mg daily to assess brain activity during probabilistic association learning using functional magnetic resonance imaging (fMRI). Raloxifene improved probabilistic association learning and significantly increased fMRI BOLD activity in the hippocampus and parahippocampal gyrus relative to placebo. A separate region of interest confirmatory analysis in 21 patients vs 36 healthy controls showed a positive association between parahippocampal neural activity and learning in patients, but no such relationship in the parahippocampal gyrus of healthy controls. Thus, selective estrogen receptor modulation by raloxifene concurrently increases activity in the parahippocampal gyrus and improves probabilistic association learning in schizophrenia. These results support a role for estrogen receptor modulation of mesial temporal lobe neural activity in the remediation of learning disabilities in both men and women with schizophrenia. PMID:25829142

  5. An investigation into the probabilistic combination of quasi-static and random accelerations

    NASA Technical Reports Server (NTRS)

    Schock, R. W.; Tuell, L. P.

    1984-01-01

    The development of design load factors for aerospace and aircraft components and experiment support structures, which are subject to simultaneous vehicle dynamic vibration (quasi-static) and acoustically generated random vibration, requires the selection of a combination methodology. Typically, the procedure is to define the quasi-static and the randomly generated responses separately, and then arithmetically add or root-sum-square them to obtain combined accelerations. Since the combination of a probabilistic and a deterministic function yields a probabilistic function, a viable alternative approach would be to determine the characteristics of the combined acceleration probability density function and select an appropriate percentile level for the combined acceleration. The following paper develops this mechanism and provides graphical data for selecting combined accelerations at the most popular percentile levels.
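    Under the simplest reading of this approach, if the quasi-static load is treated as deterministic and the acoustically induced response as zero-mean Gaussian, the combined acceleration is Gaussian with the quasi-static value as its mean, and a design load is read off at a chosen percentile. A minimal sketch (the Gaussian assumption and the default percentile are illustrative; the paper develops the combined density function more generally):

```python
from statistics import NormalDist

def combined_acceleration(quasi_static, random_rms, percentile=0.9987):
    """Percentile of the combined acceleration when a deterministic
    quasi-static load is superposed on a zero-mean Gaussian random
    response with the given RMS. The default percentile roughly
    corresponds to a one-sided 3-sigma level."""
    return NormalDist(mu=quasi_static, sigma=random_rms).inv_cdf(percentile)
```

    Unlike an arithmetic or root-sum-square combination, the percentile can be chosen to match a desired exceedance probability.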

  6. Simulation Of Research And Development Projects

    NASA Technical Reports Server (NTRS)

    Miles, Ralph F.

    1987-01-01

    Measures of preference for alternative project plans are calculated. The Simulation of Research and Development Projects (SIMRAND) program aids in the optimal allocation of research and development resources needed to achieve project goals. It models system subsets or project tasks as various network paths to the final goal. Each path is described in terms of such task variables as cost per hour, cost per unit, and availability of resources. Uncertainty is incorporated by treating task variables as probabilistic random variables. Written in Microsoft FORTRAN 77.

  7. Impaired sequential and partially compensated probabilistic skill learning in Parkinson's disease.

    PubMed

    Kemény, Ferenc; Demeter, Gyula; Racsmány, Mihály; Valálik, István; Lukács, Ágnes

    2018-06-08

    The striatal dopaminergic dysfunction in Parkinson's disease (PD) has been associated with deficits in skill learning in numerous studies, but some of the findings remain controversial. Our aim was to explore the generality of the learning deficit using two widely reported skill learning tasks in the same group of Parkinson's patients. Thirty-four patients with PD (mean age: 62.83 years, SD: 7.67) were compared to age-matched healthy adults. Two tasks were employed: the Serial Reaction Time Task (SRT), testing the learning of motor sequences, and the Weather Prediction (WP) task, testing non-sequential probabilistic category learning. On the SRT task, patients with PD showed no significant evidence for sequence learning. These results support and also extend previous findings, suggesting that motor skill learning is vulnerable in PD. On the WP task, the PD group showed the same amount of learning as controls, but they exploited qualitatively different strategies in predicting the target categories. While controls typically combined probabilities from multiple predicting cues, patients with PD instead focused on individual cues. We also found moderate to high correlations between the different measures of skill learning. These findings support our hypothesis that skill learning is generally impaired in PD, and can in some cases be compensated by relying on alternative learning strategies. © 2018 The Authors. Journal of Neuropsychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  8. Stimulus-response learning in long-term cocaine users: acquired equivalence and probabilistic category learning.

    PubMed

    Vadhan, Nehal P; Myers, Catherine E; Rubin, Eric; Shohamy, Daphna; Foltin, Richard W; Gluck, Mark A

    2008-01-11

    The purpose of this study was to examine stimulus-response (S-R) learning in active cocaine users. Twenty-two cocaine-dependent participants (20 males and 2 females) and 21 non-drug-using control participants (19 males and 2 females) who were similar in age and education were administered two computerized learning tasks. The Acquired Equivalence task initially requires learning of simple antecedent-consequent discriminations, but later requires generalization of this learning when the stimuli are presented in novel recombinations. The Weather Prediction task requires the prediction of a dichotomous outcome based on different stimulus combinations, where the stimuli predict the outcome only probabilistically. On the Acquired Equivalence task, cocaine users made significantly more errors than control participants when required to learn new discriminations while maintaining previously learned discriminations, but performed similarly to controls when required to generalize this learning. No group differences were seen on the Weather Prediction task. Cocaine users' learning of stimulus discriminations under conflicting response demands was impaired, but their ability to generalize this learning once they achieved criterion was intact. This performance pattern is consistent with other laboratory studies of long-term cocaine users demonstrating that established learning interfered with new learning on incremental learning tasks, relative to healthy controls, and may reflect altered dopamine transmission in the basal ganglia of long-term cocaine users.

  9. White matter integrity of the medial forebrain bundle and attention and working memory deficits following traumatic brain injury.

    PubMed

    Owens, Jacqueline A; Spitz, Gershon; Ponsford, Jennie L; Dymowski, Alicia R; Ferris, Nicholas; Willmott, Catherine

    2017-02-01

    The medial forebrain bundle (MFB) contains ascending catecholamine fibers that project to the prefrontal cortex (PFC). Damage to these fibers following traumatic brain injury (TBI) may alter extracellular catecholamine levels in the PFC and impede attention and working memory ability. This study investigated white matter microstructure of the medial MFB, specifically the supero-lateral branch (slMFB), following TBI, and its association with performance on attention and working memory tasks. Neuropsychological measures of attention and working memory were administered to 20 participants with moderate-severe TBI (posttraumatic amnesia M = 40.05 ± 37.10 days, median time since injury 10.48 months, range 3.72-87.49) and 20 healthy controls. Probabilistic tractography was used to obtain fractional anisotropy (FA) and mean diffusivity (MD) values for 17 participants with TBI and 20 healthy controls. When compared to controls, participants with TBI were found to have significantly lower FA (p < .001) and higher MD (p < .001) slMFB values, and they were slower to complete tasks including Trail Making Task-A, Hayling, a selective attention task, n-back, and the Symbol Digit Modalities Test. This study was the first to demonstrate microstructural white matter damage within the slMFB following TBI. However, no evidence was found for an association between alterations to this tract and performance on attentional tasks.

  10. Changing ideas about others’ intentions: updating prior expectations tunes activity in the human motor system

    PubMed Central

    Jacquet, Pierre O.; Roy, Alice C.; Chambon, Valérian; Borghi, Anna M.; Salemme, Roméo; Farnè, Alessandro; Reilly, Karen T.

    2016-01-01

    Predicting intentions from observing another agent’s behaviours is often thought to depend on motor resonance – i.e., the motor system’s response to a perceived movement by the activation of its stored motor counterpart, but observers might also rely on prior expectations, especially when actions take place in perceptually uncertain situations. Here we assessed motor resonance during an action prediction task using transcranial magnetic stimulation to probe corticospinal excitability (CSE) and report that experimentally-induced updates in observers’ prior expectations modulate CSE when predictions are made under situations of perceptual uncertainty. We show that prior expectations are updated on the basis of both biomechanical and probabilistic prior information and that the magnitude of the CSE modulation observed across participants is explained by the magnitude of change in their prior expectations. These findings provide the first evidence that when observers predict others’ intentions, motor resonance mechanisms adapt to changes in their prior expectations. We propose that this adaptive adjustment might reflect a regulatory control mechanism that shares some similarities with that observed during action selection. Such a mechanism could help arbitrate the competition between biomechanical and probabilistic prior information when appropriate for prediction. PMID:27243157

  11. Pharmacological Fingerprints of Contextual Uncertainty

    PubMed Central

    Ruge, Diane; Stephan, Klaas E.

    2016-01-01

    Successful interaction with the environment requires flexible updating of our beliefs about the world. By estimating the likelihood of future events, it is possible to prepare appropriate actions in advance and execute fast, accurate motor responses. According to theoretical proposals, agents track the variability arising from changing environments by computing various forms of uncertainty. Several neuromodulators have been linked to uncertainty signalling, but comprehensive empirical characterisation of their relative contributions to perceptual belief updating, and to the selection of motor responses, is lacking. Here we assess the roles of noradrenaline, acetylcholine, and dopamine within a single, unified computational framework of uncertainty. Using pharmacological interventions in a sample of 128 healthy human volunteers and a hierarchical Bayesian learning model, we characterise the influences of noradrenergic, cholinergic, and dopaminergic receptor antagonism on individual computations of uncertainty during a probabilistic serial reaction time task. We propose that noradrenaline influences learning of uncertain events arising from unexpected changes in the environment. In contrast, acetylcholine balances attribution of uncertainty to chance fluctuations within an environmental context, defined by a stable set of probabilistic associations, or to gross environmental violations following a contextual switch. Dopamine supports the use of uncertainty representations to engender fast, adaptive responses. PMID:27846219

  13. Adaptation can explain evidence for encoding of probabilistic information in macaque inferior temporal cortex.

    PubMed

    Vinken, Kasper; Vogels, Rufin

    2017-11-20

    In predictive coding theory, the brain is conceptualized as a prediction machine that constantly constructs and updates expectations of the sensory environment [1]. In the context of this theory, Bell et al. [2] recently studied the effect of the probability of task-relevant stimuli on the activity of macaque inferior temporal (IT) neurons and observed a reduced population response to expected faces in face-selective neurons. They concluded that "IT neurons encode long-term, latent probabilistic information about stimulus occurrence", supporting predictive coding. They manipulated expectation by the frequency of face versus fruit stimuli in blocks of trials. With such a design, stimulus repetition is confounded with expectation. As previous studies showed that IT neurons decrease their response with repetition [3], such adaptation (or repetition suppression), rather than the expectation suppression assumed by the authors, could explain their effects. The authors attempted to control for this alternative interpretation with a multiple regression approach. Here we show, using simulations, that adaptation can still masquerade as the expectation effects reported in [2]. Further, the results from the regression model used for most analyses cannot be trusted, because the model is not uniquely defined. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Rapid target foraging with reach or gaze: The hand looks further ahead than the eye

    PubMed Central

    2017-01-01

    Real-world tasks typically consist of a series of target-directed actions and often require choices about which targets to act on and in what order. Such choice behavior can be assessed from an optimal foraging perspective whereby target selection is shaped by a balance between rewards and costs. Here we evaluated such decision-making in a rapid movement foraging task. On a given trial, participants were presented with 15 targets of varying size and value and were instructed to harvest as much reward as possible by either moving a handle to the targets (hand task) or by briefly fixating them (eye task). The short trial duration enabled participants to harvest about half the targets, ensuring that total reward was due to choice behavior. We developed a probabilistic model to predict target-by-target harvesting choices that considered the rewards and movement-related costs (i.e., target distance and size) associated with the current target as well as future targets. In the hand task, in comparison to the eye task, target choice was more strongly influenced by movement-related costs and took into account a greater number of future targets, consistent with the greater costs associated with arm movement. In both tasks, participants exhibited near-optimal behavior, and in a constrained version of the hand task in which choices could only be based on target positions, they consistently chose among the shortest movement paths. Our results demonstrate that people can rapidly and effectively integrate values and movement-related costs associated with current and future targets when sequentially harvesting targets. PMID:28683138
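    The kind of probabilistic choice model described above can be caricatured as a softmax over target utilities that trade reward against movement-related costs. The utility form and all weights below are hypothetical illustrations, not the authors' fitted model.

```python
import math

# Sketch of a reward-vs-cost target-choice model: each candidate target's
# utility is its value minus a distance cost and a small-target handling cost,
# and choice probabilities come from a softmax. All weights are hypothetical.
def choice_probs(targets, pos, w_dist=0.5, w_size=2.0, temp=1.0):
    """Softmax over utility = value - distance cost - small-target cost."""
    utils = []
    for t in targets:
        dist = math.hypot(t["x"] - pos[0], t["y"] - pos[1])
        utils.append(t["value"] - w_dist * dist - w_size / t["size"])
    m = max(utils)                          # subtract max for numerical stability
    exps = [math.exp((u - m) / temp) for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

targets = [
    {"x": 1.0, "y": 0.0, "size": 2.0, "value": 3.0},  # near, large
    {"x": 4.0, "y": 3.0, "size": 1.0, "value": 3.0},  # same value, farther, smaller
]
probs = choice_probs(targets, pos=(0.0, 0.0))
```

    With equal values, the nearer and larger target dominates; raising `w_dist` would mimic the stronger cost sensitivity reported for the hand task.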

  15. Linguistic Constraints on Statistical Word Segmentation: The Role of Consonants in Arabic and English

    ERIC Educational Resources Information Center

    Kastner, Itamar; Adriaans, Frans

    2018-01-01

    Statistical learning is often taken to lie at the heart of many cognitive tasks, including the acquisition of language. One particular task in which probabilistic models have achieved considerable success is the segmentation of speech into words. However, these models have mostly been tested against English data, and as a result little is known…
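    The statistical segmentation models alluded to here are commonly built on transitional probabilities (TPs) between syllables, with word boundaries posited where TP dips. Below is a toy sketch under that assumption; the mini-corpus and the boundary threshold are invented, and the paper's own models additionally test consonant-based linguistic constraints.

```python
from collections import Counter

# Toy transitional-probability (TP) word segmentation: TPs are high within
# words and low across word boundaries, so boundaries are placed at TP dips.
def transitional_probs(syllables):
    """TP(a -> b) = count(a, b) / count(a) over a syllable stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: c / first_counts[pair[0]] for pair, c in pair_counts.items()}

def segment(syllables, tps, threshold=0.8):
    """Insert a word boundary wherever the TP drops below the threshold."""
    words, word = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(word))
            word = []
        word.append(b)
    words.append("".join(word))
    return words

# The stream concatenates two invented "words", golabu and tupiro.
stream = "go la bu tu pi ro go la bu go la bu tu pi ro tu pi ro go la bu".split()
tps = transitional_probs(stream)
words = segment(stream, tps)
```

    Within-word TPs here are 1.0 while cross-boundary TPs are at most 2/3, so thresholding recovers the two words.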

  16. Probabilistic grammatical model for helix‐helix contact site classification

    PubMed Central

    2013-01-01

    Background Hidden Markov Models power many state‐of‐the‐art tools in the field of protein bioinformatics. While excelling in their tasks, these methods of protein analysis do not convey directly information on medium‐ and long‐range residue‐residue interactions. This requires an expressive power of at least context‐free grammars. However, application of more powerful grammar formalisms to protein analysis has been surprisingly limited. Results In this work, we present a probabilistic grammatical framework for problem‐specific protein languages and apply it to classification of transmembrane helix‐helix pairs configurations. The core of the model consists of a probabilistic context‐free grammar, automatically inferred by a genetic algorithm from only a generic set of expert‐based rules and positive training samples. The model was applied to produce sequence based descriptors of four classes of transmembrane helix‐helix contact site configurations. The highest performance of the classifiers reached AUCROC of 0.70. The analysis of grammar parse trees revealed the ability of representing structural features of helix‐helix contact sites. Conclusions We demonstrated that our probabilistic context‐free framework for analysis of protein sequences outperforms the state of the art in the task of helix‐helix contact site classification. However, this is achieved without necessarily requiring modeling long range dependencies between interacting residues. A significant feature of our approach is that grammar rules and parse trees are human‐readable. Thus they could provide biologically meaningful information for molecular biologists. PMID:24350601

  17. Think twice: Impulsivity and decision making in obsessive–compulsive disorder

    PubMed Central

    Grassi, Giacomo; Pallanti, Stefano; Righi, Lorenzo; Figee, Martijn; Mantione, Mariska; Denys, Damiaan; Piccagliani, Daniele; Rossi, Alessandro; Stratta, Paolo

    2015-01-01

    Background and Aims Recent studies have challenged the anxiety-avoidance model of obsessive–compulsive disorder (OCD), linking OCD to impulsivity, risky decision-making and reward-system dysfunction, which can also be found in addiction and might support the conceptualization of OCD as a behavioral addiction. Here, we conducted an exploratory investigation of the behavioral addiction model of OCD by assessing impulsivity, decision-making, and probabilistic reasoning, three core dimensions of addiction, in a sample of OCD patients and healthy controls. Methods We assessed these dimensions in 38 OCD patients and 39 healthy controls with the Barratt Impulsiveness Scale (BIS-11), the Iowa Gambling Task (IGT) and the Beads Task. Results OCD patients had significantly higher BIS-11 scores than controls, in particular on the cognitive subscales. They performed significantly worse than controls on the IGT, preferring immediate reward despite negative future consequences, and did not learn from losses. Finally, OCD patients demonstrated biased probabilistic reasoning as reflected by significantly fewer draws to decision than controls on the Beads Task. Conclusions OCD patients are more impulsive than controls and demonstrate risky decision-making and biased probabilistic reasoning. These results might suggest that other conceptualizations of OCD, such as the behavioral addiction model, may be more suitable than the anxiety-avoidance one. However, further studies directly comparing OCD and behavioral addiction patients are needed in order to scrutinize this model. PMID:26690621
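    Draws to decision on the Beads Task can be benchmarked against an ideal Bayesian observer. The 85/15 urn composition and the 0.95 decision threshold below are common values in the beads-task literature, assumed here because the abstract does not state the exact parameters used.

```python
# Ideal Bayesian observer for a Beads Task: beads are drawn from one of two
# urns (A mostly "a" beads, B mostly "b" beads) and the observer decides once
# the posterior for either urn crosses a threshold. Parameters are assumed.
def posterior_urn_a(beads, p=0.85):
    """P(urn A | bead sequence) via the likelihood-ratio (odds) form."""
    odds = 1.0                      # uniform prior over the two urns
    for bead in beads:
        odds *= p / (1 - p) if bead == "a" else (1 - p) / p
    return odds / (1 + odds)

def draws_to_decision(beads, threshold=0.95):
    """Beads drawn before the posterior for either urn crosses the threshold."""
    for n in range(1, len(beads) + 1):
        post = posterior_urn_a(beads[:n])
        if post >= threshold or post <= 1 - threshold:
            return n
    return len(beads)
```

    With these parameters the ideal observer decides after two consistent beads (posterior ≈ 0.97), so "jumping to conclusions" is usually operationalized as deciding after one or two draws on harder, more ambiguous sequences.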

  18. Design of Composite Structures for Reliability and Damage Tolerance

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1999-01-01

    A summary of research conducted during the first year is presented. The research objectives were sought by conducting two tasks: (1) investigation of probabilistic design techniques for reliability-based design of composite sandwich panels, and (2) examination of strain energy density failure criterion in conjunction with response surface methodology for global-local design of damage tolerant helicopter fuselage structures. This report primarily discusses the efforts surrounding the first task and provides a discussion of some preliminary work involving the second task.

  19. Students’ difficulties in probabilistic problem-solving

    NASA Astrophysics Data System (ADS)

    Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.

    2018-03-01

    Many errors can be identified when students solve mathematics problems, particularly probabilistic problems. The present study aims to investigate students' difficulties in solving probabilistic problems, focusing on analyzing and describing students' errors during problem solving. This research used a qualitative method with a case study strategy. The subjects were ten 9th-grade students selected by purposive sampling. The data comprise students' probabilistic problem-solving results and recorded interviews regarding their difficulties in solving the problems. These data were analyzed descriptively using Miles and Huberman's steps. The results show that students' difficulties in solving probabilistic problems fall into three categories. The first relates to difficulties in understanding the probabilistic problem; the second, to difficulties in choosing and using appropriate solution strategies; and the third, to difficulties with the computational process. Based on these results, students still have difficulties in solving probabilistic problems, which means they are not yet able to apply their knowledge and abilities to such problems. It is therefore important for mathematics teachers to plan probabilistic learning that optimizes students' probabilistic thinking ability.

  20. Rasagiline in the Treatment of the Persistent Negative Symptoms of Schizophrenia.

    PubMed

    Buchanan, Robert W; Weiner, Elaine; Kelly, Deanna L; Gold, James M; Keller, William R; Waltz, James A; McMahon, Robert P; Gorelick, David A

    2015-07-01

    The current study examined the efficacy and safety of rasagiline, a selective MAO-B inhibitor, for the treatment of persistent negative symptoms. Sixty people with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, schizophrenia or schizoaffective disorder, who met a priori criteria for persistent negative symptoms, were randomized to receive rasagiline, 1mg/d (n = 31) or placebo (n = 29) in a 12-week, double-blind, placebo-controlled clinical trial. The Scale for the Assessment of Negative Symptoms (SANS) total score was used to assess change in negative symptoms. The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), N-Back test, a probabilistic learning task, and a delayed discounting task were used to assess cognition. In a mixed model analysis of covariance (MM-ANCOVA), with time as a continuous variable, there was a significant treatment × time effect for SANS total score (F = 5.61(df = 1,40.3), P = .023). The treatment × time interaction effect was also significant for the SANS avolition subscale score (F(1,40.2) = 10.41, P = .002). In post hoc MM-ANCOVA analyses, with time as a categorical variable, group differences were significant at week 12 for SANS total score (t(37.3) = 2.15; P = .04; d = -0.41) and SANS avolition subscale score (t(49.0) = 3.06; P = .004; d = -0.46). There was a significant difference in number of participants with a ≥20% reduction in SANS avolition score (χ(2)(1) = 10.94; P = .0009), but not in SANS total score (χ(2)(1) = 1.11; P = .29). There were no significant group differences on the RBANS, N-Back, probabilistic learning, or delayed discounting tasks. Study results support future studies of the utility of rasagiline for the treatment of negative symptoms, including avolition (clinicaltrials.gov trial number: NCT00492336). © The Author 2014. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved.

  2. Impaired insight in cocaine addiction: laboratory evidence and effects on cocaine-seeking behaviour

    PubMed Central

    Maloney, Thomas; Parvaz, Muhammad A.; Alia-Klein, Nelly; Woicik, Patricia A.; Telang, Frank; Wang, Gene-Jack; Volkow, Nora D.; Goldstein, Rita Z.

    2010-01-01

    Neuropsychiatric disorders are often characterized by impaired insight into behaviour. Such an insight deficit has been suggested, but never directly tested, in drug addiction. Here we tested this impaired insight hypothesis in drug addiction for the first time, and examined its potential association with drug-seeking behaviour. We also tested potential modulation of these effects by cocaine urine status, an individual difference known to impact underlying cognitive functions and prognosis. Sixteen cocaine-addicted individuals testing positive for cocaine in urine, 26 cocaine-addicted individuals testing negative for cocaine in urine, and 23 healthy controls completed a probabilistic choice task that assessed objective preference for viewing four types of pictures (pleasant, unpleasant, neutral and cocaine). This choice task concluded by asking subjects to report their most selected picture type; correspondence between subjects’ self-reports and their objective choice behaviour provided our index of behavioural insight. Results showed that the urine-positive cocaine subjects exhibited impaired insight into their own choice behaviour compared with healthy controls; this same study group also selected the most cocaine pictures (and fewest pleasant pictures) for viewing. Importantly, however, it was the urine-negative cocaine subjects whose behaviour was most influenced by insight, such that impaired insight in this subgroup only was associated with higher cocaine-related choice on the task and more severe actual cocaine use. These findings suggest that interventions to enhance insight may decrease drug-seeking behaviour, especially in urine-negative cocaine subjects, potentially improving their longer-term clinical outcomes. PMID:20395264

  3. Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occur in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
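    The procedure outlined above can be illustrated, in greatly simplified form, by direct Monte Carlo sampling: draw random material properties, loads, and geometry, propagate them through a response model, and count failures. The distributions and the algebraic stress model below are invented for the sketch; PSAM itself uses probabilistic finite element methods, not this toy formula.

```python
import random

# Toy Monte Carlo reliability estimate: failure occurs when stress demand
# exceeds random yield strength. All distributions are invented for the sketch.
def failure_probability(n=100_000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        yield_strength = rng.gauss(250.0, 20.0)   # MPa, random material property
        load = rng.gauss(30_000.0, 4_000.0)       # N, stochastic launch load
        area = rng.gauss(200.0, 5.0)              # mm^2, geometric variation
        stress = load / area                      # MPa, simple axial-stress model
        if stress > yield_strength:
            failures += 1
    return failures / n

pf = failure_probability()
```

    With these assumed distributions the safety margin is roughly 3.5 standard deviations, so the estimated failure probability lands in the 1e-4 range; a production analysis would replace the one-line stress formula with a finite element response.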

  4. The Effect of Motor Difficulty on the Acquisition of a Computer Task: A Comparison between Young and Older Adults

    ERIC Educational Resources Information Center

    Fezzani, K.; Albinet, C.; Thon, B.; Marquie, J. -C.

    2010-01-01

    The present study investigated the extent to which the impact of motor difficulty on the acquisition of a computer task varies as a function of age. Fourteen young and 14 older participants performed 352 sequences of 10 serial pointing movements with a wireless pen on a digitiser tablet. A conditional probabilistic structure governed the…

  5. Temporal and probabilistic discounting of rewards in children and adolescents: effects of age and ADHD symptoms.

    PubMed

    Scheres, Anouk; Dijkstra, Marianne; Ainslie, Eleanor; Balkan, Jaclyn; Reynolds, Brady; Sonuga-Barke, Edmund; Castellanos, F Xavier

    2006-01-01

    This study investigated whether age and ADHD symptoms affected choice preferences in children and adolescents when they chose between (1) small immediate rewards and larger delayed rewards and (2) small certain rewards and larger probabilistic uncertain rewards. A temporal discounting (TD) task and a probabilistic discounting (PD) task were used to measure the degree to which the subjective value of a large reward decreased as one had to wait longer for it (TD), and as the probability of obtaining it decreased (PD). Rewards used were small amounts of money. In the TD task, the large reward (10 cents) was delayed by between 0 and 30s, and the immediate reward varied in magnitude (0-10 cents). In the PD task, receipt of the large reward (10 cents) varied in likelihood, with probabilities of 0, 0.25, 0.5, 0.75, and 1.0 used, and the certain reward varied in magnitude (0-10 cents). Age and diagnostic group did not affect the degree of PD of rewards: All participants made choices so that total gains were maximized. As predicted, young children, aged 6-11 years (n = 25) demonstrated steeper TD of rewards than adolescents, aged 12-17 years (n = 21). This effect remained significant even when choosing the immediate reward did not shorten overall task duration. This, together with the lack of interaction between TD task version and age, suggests that steeper discounting in young children is driven by reward immediacy and not by delay aversion. Contrary to our predictions, participants with ADHD (n = 22) did not demonstrate steeper TD of rewards than controls (n = 24). These results raise the possibility that strong preferences for small immediate rewards in ADHD, as found in previous research, depend on factors such as total maximum gain and the use of fixed versus varied delay durations. The decrease in TD as observed in adolescents compared to children may be related to developmental changes in the (dorsolateral) prefrontal cortex. Future research needs to investigate these possibilities.
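    The degree of discounting in tasks like these is commonly quantified with a hyperbolic function for delay and an odds-against form for probability. Both functional forms and the k values below are standard modelling assumptions from the discounting literature, not parameters reported in this study.

```python
# Standard discounting models (assumed forms, not this study's fitted values).
def temporal_value(amount, delay, k):
    """Hyperbolic temporal discounting: V = A / (1 + k * delay)."""
    return amount / (1 + k * delay)

def probabilistic_value(amount, p, h):
    """Probability discounting over odds against: V = A / (1 + h * (1 - p) / p)."""
    odds_against = (1 - p) / p
    return amount / (1 + h * odds_against)

# A steeper discounter (larger k) devalues the delayed 10 cents faster:
child_v = temporal_value(10, delay=30, k=0.5)       # 10 / 16 = 0.625 cents
adolescent_v = temporal_value(10, delay=30, k=0.1)  # 10 / 4  = 2.5 cents
```

    Fitting k per participant turns choice data into a single discounting-rate parameter, which is how "steeper TD" in children versus adolescents is typically expressed.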

  6. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
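    The "explaining away" computation highlighted above can be demonstrated with ordinary rejection sampling in a three-node noisy-OR network. This is plain software sampling, not a spiking-network emulation, and all probabilities are invented for the illustration.

```python
import random

# Explaining away via rejection sampling: two independent causes A and B of an
# observed effect E (noisy-OR). Learning that B is active lowers P(A | E).
P_A, P_B = 0.3, 0.3          # hypothetical prior probabilities of the causes

def p_effect(a, b):
    """Noisy-OR likelihood of the effect given the binary causes."""
    return 1 - (1 - 0.8 * a) * (1 - 0.8 * b)

def estimate(n=200_000, seed=7, require_b=None):
    """Rejection sampling of P(A=1 | E=1), optionally also conditioning on B."""
    rng = random.Random(seed)
    kept = a_on = 0
    for _ in range(n):
        a = int(rng.random() < P_A)
        b = int(rng.random() < P_B)
        if require_b is not None and b != require_b:
            continue                       # condition on the observed cause B
        if rng.random() < p_effect(a, b):  # sample E; keep only samples with E=1
            kept += 1
            a_on += a
    return a_on / kept

# Exact values for this network: P(A|E) ~= 0.602, but P(A|E, B=1) ~= 0.340 --
# observing the alternative cause B "explains away" A.
```

    The same conditional structure is what the converging-arrows ("explaining away") motifs in the paper's Bayesian networks require; here the sampler is a CPU loop rather than stochastic neurons.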

  7. Psychics, aliens, or experience? Using the Anomalistic Belief Scale to examine the relationship between type of belief and probabilistic reasoning.

    PubMed

    Prike, Toby; Arnold, Michelle M; Williamson, Paul

    2017-08-01

    A growing body of research has shown people who hold anomalistic (e.g., paranormal) beliefs may differ from nonbelievers in their propensity to make probabilistic reasoning errors. The current study explored the relationship between these beliefs and performance through the development of a new measure of anomalistic belief, called the Anomalistic Belief Scale (ABS). One key feature of the ABS is that it includes a balance of both experiential and theoretical belief items. Another aim of the study was to use the ABS to investigate the relationship between belief and probabilistic reasoning errors on conjunction fallacy tasks. As expected, results showed there was a relationship between anomalistic belief and propensity to commit the conjunction fallacy. Importantly, regression analyses on the factors that make up the ABS showed that the relationship between anomalistic belief and probabilistic reasoning occurred only for beliefs about having experienced anomalistic phenomena, and not for theoretical anomalistic beliefs. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibits its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous ongoing effort to make large inverse tasks like those mentioned above manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problem at hand, namely time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.
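    The generic probabilistic-inversion machinery this abstract builds on can be sketched as Metropolis sampling of a source location from travel-time data. The 1-D geometry, wave speed, and noise level below are invented for illustration; this is not the TRMLOC algorithm, and it omits the time-reversal symmetry the abstract proposes to exploit.

```python
import math
import random

# Toy Bayesian inversion: sample the posterior over a 1-D source location
# given travel times at three stations, via random-walk Metropolis.
# Geometry, velocity, and noise level are invented for the sketch.
STATIONS = [0.0, 10.0, 20.0]   # receiver positions (km)
V, SIGMA = 5.0, 0.1            # wave speed (km/s), assumed data noise (s)
TRUE_X = 7.0
DATA = [abs(TRUE_X - s) / V for s in STATIONS]   # noise-free travel times

def log_likelihood(x):
    """Gaussian misfit between observed and predicted travel times."""
    return -sum((t - abs(x - s) / V) ** 2
                for s, t in zip(STATIONS, DATA)) / (2 * SIGMA ** 2)

def metropolis(n=20_000, seed=3):
    """Random-walk Metropolis sampling of the location posterior."""
    rng = random.Random(seed)
    x, ll = 15.0, log_likelihood(15.0)   # deliberately poor starting point
    samples = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, 0.5)
        ll_prop = log_likelihood(prop)
        if rng.random() < math.exp(min(0.0, ll_prop - ll)):
            x, ll = prop, ll_prop        # accept the proposed location
        samples.append(x)
    return samples[n // 2:]              # discard the first half as burn-in

posterior = metropolis()
```

    The posterior samples concentrate around the true location, and their spread quantifies location uncertainty; the "exhaustive computations" the abstract mentions come from needing many such forward evaluations in realistic 3-D problems.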

  9. Measuring reinforcement learning and motivation constructs in experimental animals: relevance to the negative symptoms of schizophrenia.

    PubMed

    Markou, Athina; Salamone, John D; Bussey, Timothy J; Mar, Adam C; Brunner, Daniela; Gilmour, Gary; Balsam, Peter

    2013-11-01

    The present review article summarizes and expands upon the discussions that were initiated during a meeting of the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS; http://cntrics.ucdavis.edu) meeting. A major goal of the CNTRICS meeting was to identify experimental procedures and measures that can be used in laboratory animals to assess psychological constructs that are related to the psychopathology of schizophrenia. The issues discussed in this review reflect the deliberations of the Motivation Working Group of the CNTRICS meeting, which included most of the authors of this article as well as additional participants. After receiving task nominations from the general research community, this working group was asked to identify experimental procedures in laboratory animals that can assess aspects of reinforcement learning and motivation that may be relevant for research on the negative symptoms of schizophrenia, as well as other disorders characterized by deficits in reinforcement learning and motivation. The tasks described here that assess reinforcement learning are the Autoshaping Task, Probabilistic Reward Learning Tasks, and the Response Bias Probabilistic Reward Task. The tasks described here that assess motivation are Outcome Devaluation and Contingency Degradation Tasks and Effort-Based Tasks. In addition to describing such methods and procedures, the present article provides a working vocabulary for research and theory in this field, as well as an industry perspective about how such tasks may be used in drug discovery. It is hoped that this review can aid investigators who are conducting research in this complex area, promote translational studies by highlighting shared research goals and fostering a common vocabulary across basic and clinical fields, and facilitate the development of medications for the treatment of symptoms mediated by reinforcement learning and motivational deficits. 
Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Measuring reinforcement learning and motivation constructs in experimental animals: relevance to the negative symptoms of schizophrenia

    PubMed Central

    Markou, Athina; Salamone, John D.; Bussey, Timothy; Mar, Adam; Brunner, Daniela; Gilmour, Gary; Balsam, Peter

    2013-01-01

    The present review article summarizes and expands upon the discussions that were initiated during a meeting of the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS; http://cntrics.ucdavis.edu). A major goal of the CNTRICS meeting was to identify experimental procedures and measures that can be used in laboratory animals to assess psychological constructs that are related to the psychopathology of schizophrenia. The issues discussed in this review reflect the deliberations of the Motivation Working Group of the CNTRICS meeting, which included most of the authors of this article as well as additional participants. After receiving task nominations from the general research community, this working group was asked to identify experimental procedures in laboratory animals that can assess aspects of reinforcement learning and motivation that may be relevant for research on the negative symptoms of schizophrenia, as well as other disorders characterized by deficits in reinforcement learning and motivation. The tasks described here that assess reinforcement learning are the Autoshaping Task, Probabilistic Reward Learning Tasks, and the Response Bias Probabilistic Reward Task. The tasks described here that assess motivation are Outcome Devaluation and Contingency Degradation Tasks and Effort-Based Tasks. In addition to describing such methods and procedures, the present article provides a working vocabulary for research and theory in this field, as well as an industry perspective about how such tasks may be used in drug discovery. It is hoped that this review can aid investigators who are conducting research in this complex area, promote translational studies by highlighting shared research goals and fostering a common vocabulary across basic and clinical fields, and facilitate the development of medications for the treatment of symptoms mediated by reinforcement learning and motivational deficits. PMID:23994273

  11. The role of linguistic experience in the processing of probabilistic information in production.

    PubMed

    Gustafson, Erin; Goldrick, Matthew

    2018-01-01

    Speakers track the probability that a word will occur in a particular context and utilize this information during phonetic processing. For example, content words that have high probability within a discourse tend to be realized with reduced acoustic/articulatory properties. Such probabilistic information may influence L1 and L2 speech processing in distinct ways (reflecting differences in linguistic experience across groups and the overall difficulty of L2 speech processing). To examine this issue, L1 and L2 speakers performed a referential communication task, describing sequences of simple actions. The two groups of speakers showed similar effects of discourse-dependent probabilistic information on production, suggesting that L2 speakers can successfully track discourse-dependent probabilities and use such information to modulate phonetic processing.

  12. Attenuation of dopamine-modulated prefrontal value signals underlies probabilistic reward learning deficits in old age

    PubMed Central

    Axelsson, Jan; Riklund, Katrine; Nyberg, Lars; Dayan, Peter; Bäckman, Lars

    2017-01-01

    Probabilistic reward learning is characterised by individual differences that become acute in aging. This may be due to age-related dopamine (DA) decline affecting neural processing in striatum, prefrontal cortex, or both. We examined this by administering a probabilistic reward learning task to younger and older adults, and combining computational modelling of behaviour with fMRI and PET measurements of DA D1 availability. We found that anticipatory value signals in ventromedial prefrontal cortex (vmPFC) were attenuated in older adults. The strength of this signal predicted performance beyond age and was modulated by D1 availability in nucleus accumbens. These results reveal that a value-anticipation mechanism in vmPFC declines in aging, and that this mechanism is associated with DA D1 receptor availability. PMID:28870286
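    A delta-rule learner of the kind commonly fit in such studies can illustrate how trial-wise anticipatory value signals are derived (a minimal sketch, not the authors' actual model; the reward probability, learning rate, and function name are illustrative):

```python
import random

def simulate_delta_rule(p_reward=0.8, alpha=0.3, n_trials=200, seed=1):
    """Track the anticipated value of a cue that pays off with probability p_reward."""
    random.seed(seed)
    v = 0.0          # anticipatory value (the quantity regressed against vmPFC signals)
    values = []
    for _ in range(n_trials):
        values.append(v)                  # value *before* the outcome is seen
        reward = 1.0 if random.random() < p_reward else 0.0
        v += alpha * (reward - v)         # prediction-error update
    return values

values = simulate_delta_rule()
# late-trial values hover near the true reward probability (0.8)
```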

  13. Risk taking and adult attention deficit/hyperactivity disorder: A gap between real life behavior and experimental decision making.

    PubMed

    Pollak, Yehuda; Shalit, Reut; Aran, Adi

    2018-01-01

    Adults with attention deficit/hyperactivity disorder (ADHD) are prone to suboptimal decision making and risk taking. The aim of this study was to test performance on a theoretically-based probabilistic decision making task in well-characterized adults with and without ADHD, and examine the relation between experimental risk taking and history of real-life risk-taking behavior, defined as cigarette, alcohol, and street drug use. University students with and without ADHD completed a modified version of the Cambridge Gambling Test, in which they had to choose between alternatives varied by level of risk, and reported their history of substance use. Both groups showed similar patterns of risk taking on the experimental decision making task, suggesting that ADHD is not linked to low sensitivity to risk. Past and present substance use was more prevalent in adults with ADHD. These findings question the validity of the experimental probabilistic decision-making task as a model of ADHD-related risk-taking behavior. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Agreement With Conjoined NPs Reflects Language Experience.

    PubMed

    Lorimor, Heidi; Adams, Nora C; Middleton, Erica L

    2018-01-01

    An important question within psycholinguistic research is whether grammatical features, such as number values on nouns, are probabilistic or discrete. Similarly, researchers have debated whether grammatical specifications are only set for individual lexical items, or whether certain types of noun phrases (NPs) also obtain number valuations at the phrasal level. Through a corpus analysis and an oral production task, we show that conjoined NPs can take both singular and plural verb agreement and that notional number (i.e., the numerosity of the referent of the subject noun phrase) plays an important role in agreement with conjoined NPs. In two written production tasks, we show that participants who are exposed to plural (versus singular or unmarked) agreement with conjoined NPs in a biasing story are more likely to produce plural agreement with conjoined NPs on a subsequent production task. This suggests that, in addition to their sensitivity to notional information, conjoined NPs have probabilistic grammatical specifications that reflect their distributional properties in language. These results provide important evidence that grammatical number reflects language experience, and that this language experience impacts agreement at the phrasal level, and not just the lexical level.

  15. Agreement With Conjoined NPs Reflects Language Experience

    PubMed Central

    Lorimor, Heidi; Adams, Nora C.; Middleton, Erica L.

    2018-01-01

    An important question within psycholinguistic research is whether grammatical features, such as number values on nouns, are probabilistic or discrete. Similarly, researchers have debated whether grammatical specifications are only set for individual lexical items, or whether certain types of noun phrases (NPs) also obtain number valuations at the phrasal level. Through a corpus analysis and an oral production task, we show that conjoined NPs can take both singular and plural verb agreement and that notional number (i.e., the numerosity of the referent of the subject noun phrase) plays an important role in agreement with conjoined NPs. In two written production tasks, we show that participants who are exposed to plural (versus singular or unmarked) agreement with conjoined NPs in a biasing story are more likely to produce plural agreement with conjoined NPs on a subsequent production task. This suggests that, in addition to their sensitivity to notional information, conjoined NPs have probabilistic grammatical specifications that reflect their distributional properties in language. These results provide important evidence that grammatical number reflects language experience, and that this language experience impacts agreement at the phrasal level, and not just the lexical level. PMID:29725311

  16. Attenuating GABAA Receptor Signaling in Dopamine Neurons Selectively Enhances Reward Learning and Alters Risk Preference in Mice

    PubMed Central

    Parker, Jones G.; Wanat, Matthew J.; Soden, Marta E.; Ahmad, Kinza; Zweifel, Larry S.; Bamford, Nigel S.; Palmiter, Richard D.

    2011-01-01

    Phasic dopamine transmission encodes the value of reward-predictive stimuli and influences both learning and decision-making. Altered dopamine signaling is associated with psychiatric conditions characterized by risky choices such as pathological gambling. These observations highlight the importance of understanding how dopamine neuron activity is modulated. While excitatory drive onto dopamine neurons is critical for generating phasic dopamine responses, emerging evidence suggests that inhibitory signaling also modulates these responses. To address the functional importance of inhibitory signaling in dopamine neurons, we generated mice lacking the β3 subunit of the GABAA receptor specifically in dopamine neurons (β3-KO mice) and examined their behavior in tasks that assessed appetitive learning, aversive learning, and risk preference. Dopamine neurons in midbrain slices from β3-KO mice exhibited attenuated GABA-evoked inhibitory post-synaptic currents. Furthermore, electrical stimulation of excitatory afferents to dopamine neurons elicited more dopamine release in the nucleus accumbens of β3-KO mice as measured by fast-scan cyclic voltammetry. β3-KO mice were more active than controls when given morphine, which correlated with potential compensatory upregulation of GABAergic tone onto dopamine neurons. β3-KO mice learned faster in two food-reinforced learning paradigms, but extinguished their learned behavior normally. Enhanced learning was specific for appetitive tasks, as aversive learning was unaffected in β3-KO mice. Finally, we found that β3-KO mice had enhanced risk preference in a probabilistic selection task that required mice to choose between a small certain reward and a larger uncertain reward. Collectively, these findings identify a selective role for GABAA signaling in dopamine neurons in appetitive learning and decision-making. PMID:22114279
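    The probabilistic selection task described above pits a small certain reward against a larger uncertain one; a standard way to model such choices is an expected-value comparison passed through a softmax (a hedged sketch; the softmax form, parameter values, and function name are assumptions, not the authors' analysis):

```python
import math

def choice_prob_risky(p_large, large, certain, temperature=1.0):
    """Softmax probability of picking the risky option given expected values."""
    ev_risky = p_large * large   # expected value of the larger, uncertain reward
    ev_safe = certain            # value of the small, certain reward
    z = (ev_risky - ev_safe) / temperature
    return 1.0 / (1.0 + math.exp(-z))

# a 50% chance of 4 pellets vs a certain 2 pellets is EV-matched:
p = choice_prob_risky(0.5, 4, 2)   # -> 0.5
```

    A risk-preferring animal, like the β3-KO mice, would choose the risky option more often than this EV-matched baseline predicts.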

  17. Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics

    PubMed Central

    Girshick, Ahna R.; Landy, Michael S.; Simoncelli, Eero P.

    2011-01-01

    Humans are remarkably good at performing visual tasks, but experimental measurements reveal substantial biases in the perception of basic visual attributes. An appealing hypothesis is that these biases arise through a process of statistical inference, in which information from noisy measurements is fused with a probabilistic model of the environment. But such inference is optimal only if the observer’s internal model matches the environment. Here, we provide evidence that this is the case. We measured performance in an orientation-estimation task, demonstrating the well-known fact that orientation judgements are more accurate at cardinal (horizontal and vertical) orientations, along with a new observation that judgements made under conditions of uncertainty are strongly biased toward cardinal orientations. We estimate observers’ internal models for orientation and find that they match the local orientation distribution measured in photographs. We also show how a neural population could embed probabilistic information responsible for such biases. PMID:21642976

  18. Probabilistic structural analysis methods for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are present.
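    A brute-force Monte Carlo estimate of a failure probability illustrates the quantity that a fast probability integrator like FPI computes far more efficiently (a toy sketch; the normal distributions and their parameters are assumptions, not NESSUS inputs):

```python
import random

def prob_failure_mc(n=100_000, seed=42):
    """Estimate P(stress > strength) for normally distributed stress and strength."""
    random.seed(seed)
    failures = 0
    for _ in range(n):
        strength = random.gauss(100.0, 10.0)   # resistance R
        stress = random.gauss(60.0, 15.0)      # load effect S
        if stress > strength:                  # limit state g = R - S < 0
            failures += 1
    return failures / n

pf = prob_failure_mc()
```

    For the small failure probabilities typical of aerospace structures, plain Monte Carlo needs enormous sample counts, which is precisely the motivation for specialized integrators.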

  19. Probabilistic Composite Design

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1997-01-01

    Probabilistic composite design is described in terms of a computational simulation. This simulation tracks probabilistically the composite design evolution from constituent materials, fabrication process, through composite mechanics and structural components. Comparisons with experimental data are provided to illustrate selection of probabilistic design allowables, test methods/specimen guidelines, and identification of in situ versus pristine strength. For example, results show that: in situ fiber tensile strength is 90% of its pristine strength; flat-wise long-tapered specimens are most suitable for setting ply tensile strength allowables; a composite radome can be designed with a reliability of 0.999999; and laminate fatigue exhibits wide-spread scatter at 90% cyclic-stress to static-strength ratios.

  20. Functional mechanisms of probabilistic inference in feature- and space-based attentional systems.

    PubMed

    Dombert, Pascasie L; Kuhns, Anna; Mengotti, Paola; Fink, Gereon R; Vossel, Simone

    2016-11-15

    Humans flexibly attend to features or locations and these processes are influenced by the probability of sensory events. We combined computational modeling of response times with fMRI to compare the functional correlates of (re-)orienting, and the modulation by probabilistic inference in spatial and feature-based attention systems. Twenty-four volunteers performed two task versions with spatial or color cues. Percentage of cue validity changed unpredictably. A hierarchical Bayesian model was used to derive trial-wise estimates of probability-dependent attention, entering the fMRI analysis as parametric regressors. Attentional orienting activated a dorsal frontoparietal network in both tasks, without significant parametric modulation. Spatially invalid trials activated a bilateral frontoparietal network and the precuneus, while invalid feature trials activated the left intraparietal sulcus (IPS). Probability-dependent attention modulated activity in the precuneus, left posterior IPS, middle occipital gyrus, and right temporoparietal junction for spatial attention, and in the left anterior IPS for feature-based and spatial attention. These findings provide novel insights into the generality and specificity of the functional basis of attentional control. They suggest that probabilistic inference can distinctively affect each attentional subsystem, but that there is an overlap in the left IPS, which responds to both spatial and feature-based expectancy violations. Copyright © 2016 Elsevier Inc. All rights reserved.
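    A simplified stand-in for trial-wise estimates of cue validity is a running beta-binomial update (a sketch only; the study used a richer hierarchical Bayesian model, and the function name and prior are illustrative):

```python
def trialwise_validity(outcomes, a=1.0, b=1.0):
    """Running posterior mean of cue validity under a Beta(a, b) prior.
    outcomes: list of 1 (valid trial) / 0 (invalid trial)."""
    estimates = []
    for o in outcomes:
        estimates.append(a / (a + b))   # estimate *before* seeing this trial
        a += o
        b += 1 - o
    return estimates

est = trialwise_validity([1, 1, 0, 1])
# prior mean 0.5, then 2/3, then 3/4, then 3/5
```

    Estimates like these, computed trial by trial, are what enter the fMRI analysis as parametric regressors.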

  1. ASSESSING THE ECOLOGICAL CONDITION OF A COASTAL PLAIN WATERSHED USING A PROBABILISTIC SURVEY DESIGN

    EPA Science Inventory

    Using a probabilistic survey design, we assessed the ecological condition of the Florida (USA) portion of the Escambia River watershed using selected environmental and benthic macroinvertebrate data. Macroinvertebrates were sampled at 28 sites during July-August 1996, and 3414 i...

  2. When knowledge activated from memory intrudes on probabilistic inferences from description - the case of stereotypes.

    PubMed

    Dorrough, Angela R; Glöckner, Andreas; Betsch, Tilmann; Wille, Anika

    2017-10-01

    To make decisions in probabilistic inference tasks, individuals integrate relevant information partly in an automatic manner. Thereby, potentially irrelevant stimuli that are additionally presented can intrude on the decision process (e.g., Söllner, Bröder, Glöckner, & Betsch, 2014). We investigate whether such an intrusion effect can also be caused by potentially irrelevant or even misleading knowledge activated from memory. In four studies that combine a standard information board paradigm from decision research with a standard manipulation from social psychology, we investigate the case of stereotypes and demonstrate that stereotype knowledge can yield intrusion biases in probabilistic inferences from description. The magnitude of these biases increases with stereotype accessibility and decreases with a clarification of the rational solution. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Abstract quantum computing machines and quantum computational logics

    NASA Astrophysics Data System (ADS)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
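    The distinction drawn here can be made concrete: a classical probabilistic state machine evolves a probability distribution by a stochastic matrix (nonnegative entries, rows summing to 1), whereas a quantum state machine evolves complex amplitudes unitarily. A minimal sketch of the classical case (the example matrix is illustrative):

```python
def step(dist, transition):
    """One step of a classical probabilistic state machine:
    new_dist[j] = sum_i dist[i] * transition[i][j]."""
    n = len(transition[0])
    return [sum(dist[i] * transition[i][j] for i in range(len(dist)))
            for j in range(n)]

# a two-state machine that flips state with probability 0.3
T = [[0.7, 0.3],
     [0.3, 0.7]]
d = step([1.0, 0.0], T)   # -> [0.7, 0.3]
```

    In the quantum case the entries would be complex amplitudes and probabilities would arise from squared magnitudes, which is why interference effects have no classical counterpart here.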

  4. Fear of negative evaluation biases social evaluation inference: evidence from a probabilistic learning task.

    PubMed

    Button, Katherine S; Kounali, Daphne; Stapinski, Lexine; Rapee, Ronald M; Lewis, Glyn; Munafò, Marcus R

    2015-01-01

    Fear of negative evaluation (FNE) defines social anxiety, yet the process of inferring social evaluation, and its potential role in maintaining social anxiety, is poorly understood. We developed an instrumental learning task to model social evaluation learning, predicting that FNE would specifically bias learning about the self but not others. During six test blocks (3 self-referential, 3 other-referential), participants (n = 100) met six personas and selected a word from a positive/negative pair to finish their social evaluation sentences "I think [you are / George is]…". Feedback contingencies corresponded to 3 rules, liked, neutral and disliked, with P[positive word correct] = 0.8, 0.5 and 0.2, respectively. As FNE increased participants selected fewer positive words (β = -0.4, 95% CI -0.7, -0.2, p = 0.001), which was strongest in the self-referential condition (FNE × condition 0.28, 95% CI 0.01, 0.54, p = 0.04), and the neutral and dislike rules (FNE × condition × rule, p = 0.07). At low FNE the proportion of positive words selected for self-neutral and self-disliked greatly exceeded the feedback contingency, indicating poor learning, which improved as FNE increased. FNE is associated with differences in processing social-evaluative information specifically about the self. At low FNE this manifests as insensitivity to learning negative self-referential evaluation. High FNE individuals are equally sensitive to learning positive or negative evaluation, which, although objectively more accurate, may have detrimental effects on mental health.
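    The feedback contingencies of the task can be sketched directly (function names are illustrative; the probabilities are those stated in the abstract):

```python
import random

def feedback(rule, chose_positive, rng):
    """Return True if the choice is reinforced under the task's contingencies:
    P[positive word correct] = 0.8 (liked), 0.5 (neutral), 0.2 (disliked)."""
    p_positive_correct = {"liked": 0.8, "neutral": 0.5, "disliked": 0.2}[rule]
    positive_is_correct = rng.random() < p_positive_correct
    return chose_positive == positive_is_correct

rng = random.Random(0)
hits = sum(feedback("liked", True, rng) for _ in range(10_000)) / 10_000
# always choosing the positive word for a "liked" persona is reinforced ~80% of the time
```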

  5. A solution to the static frame validation challenge problem using Bayesian model selection

    DOE PAGES

    Grigoriu, M. D.; Field, R. V.

    2007-12-23

    Within this paper, we provide a solution to the static frame validation challenge problem (see this issue) in a manner that is consistent with the guidelines provided by the Validation Challenge Workshop tasking document. The static frame problem is constructed such that variability in material properties is known to be the only source of uncertainty in the system description, but there is ignorance on the type of model that best describes this variability. Hence both types of uncertainty, aleatoric and epistemic, are present and must be addressed. Our approach is to consider a collection of competing probabilistic models for the material properties, and calibrate these models to the information provided; models of different levels of complexity and numerical efficiency are included in the analysis. A Bayesian formulation is used to select the optimal model from the collection, which is then used for the regulatory assessment. Lastly, Bayesian credible intervals are used to provide a measure of confidence to our regulatory assessment.
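    The model-selection step can be illustrated with a degenerate Bayes-factor comparison between fully specified candidate models (a toy sketch; the actual analysis calibrates the models and integrates over their parameters, and all names and data values here are illustrative):

```python
import math

def gauss_pdf(mu, sigma):
    """Density of a Normal(mu, sigma) candidate model for a material property."""
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_likelihood(data, pdf):
    return sum(math.log(pdf(x)) for x in data)

def select_model(data, models, prior=None):
    """Posterior model probabilities, computed via log-sum-exp for stability."""
    prior = prior or [1.0 / len(models)] * len(models)
    logs = [math.log(p) + log_likelihood(data, m) for p, m in zip(prior, models)]
    top = max(logs)
    w = [math.exp(l - top) for l in logs]
    s = sum(w)
    return [x / s for x in w]

data = [9.8, 10.1, 10.3, 9.9, 10.0]
post = select_model(data, [gauss_pdf(10, 0.2), gauss_pdf(12, 0.2)])
# the first model dominates, as the data sit near its mean
```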

  6. A new methodology for automated diagnosis of mild cognitive impairment (MCI) using magnetoencephalography (MEG).

    PubMed

    Amezquita-Sanchez, Juan P; Adeli, Anahita; Adeli, Hojjat

    2016-05-15

    Mild cognitive impairment (MCI) is a cognitive disorder characterized by memory impairment, greater than expected by age. A new methodology is presented to identify MCI patients during a working memory task using MEG signals. The methodology consists of four steps: In step 1, the complete ensemble empirical mode decomposition (CEEMD) is used to decompose the MEG signal into a set of adaptive sub-bands according to its contained frequency information. In step 2, a nonlinear dynamics measure based on permutation entropy (PE) analysis is employed to analyze the sub-bands and detect features to be used for MCI detection. In step 3, an analysis of variance (ANOVA) is used for feature selection. In step 4, the enhanced probabilistic neural network (EPNN) classifier is applied to the selected features to distinguish between MCI patients and healthy controls. The usefulness and effectiveness of the proposed methodology are validated using the MEG data obtained experimentally from 18 MCI and 19 control patients. Copyright © 2016 Elsevier B.V. All rights reserved.
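    Step 2's permutation entropy has a compact definition: the Shannon entropy of the ordinal patterns occurring in the signal (a generic sketch, not the authors' implementation; the embedding dimension m = 3 is an assumption):

```python
import math

def permutation_entropy(signal, m=3):
    """Normalised permutation entropy in [0, 1]: Shannon entropy of the
    length-m ordinal patterns, divided by its maximum log(m!)."""
    counts = {}
    for i in range(len(signal) - m + 1):
        window = signal[i:i + m]
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))  # argsort of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))

# a monotone ramp contains a single ordinal pattern, so its entropy is 0
flat = permutation_entropy(list(range(10)))
```

    Irregular signals populate many ordinal patterns and push the value toward 1, which is what makes PE useful as a complexity feature per sub-band.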

  7. Probabilistic pathway construction.

    PubMed

    Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha

    2011-07-01

    Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
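    The three selection schemes can be sketched as weighted sampling over candidate reactions (a sketch under assumptions; the function name, reaction labels, and connectivity values are illustrative):

```python
import random

def pick_reaction(reactions, connectivity, scheme, rng):
    """Choose the next reaction under one of the three selection schemes:
    'high' biases toward well-connected metabolites, 'low' away from them,
    'uniform' ignores connectivity entirely."""
    if scheme == "uniform":
        weights = [1.0] * len(reactions)
    elif scheme == "high":
        weights = [connectivity[r] for r in reactions]
    else:  # "low"
        weights = [1.0 / connectivity[r] for r in reactions]
    return rng.choices(reactions, weights=weights, k=1)[0]

rng = random.Random(0)
conn = {"r1": 10, "r2": 1}
picks = [pick_reaction(["r1", "r2"], conn, "high", rng) for _ in range(1000)]
# under the 'high' scheme, r1 is chosen roughly 10x as often as r2
```

    Repeating such draws until the target metabolite is reached yields one candidate pathway; the abstract's finding is that the uniform scheme gave the best average maximum yield.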

  8. Efficient feature subset selection with probabilistic distance criteria. [pattern recognition

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Recursive expressions are derived for efficiently computing the commonly used probabilistic distance measures as a change in the criteria both when a feature is added to and when a feature is deleted from the current feature subset. A combinatorial algorithm for generating all possible r feature combinations from a given set of s features in C(s, r) steps with a change of a single feature at each step is presented. These expressions can also be used for both forward and backward sequential feature selection.
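    The forward sequential selection these expressions support can be sketched generically, with the criterion evaluated as a change when a feature is added (a toy additive score stands in for the probabilistic distance measures; all names are illustrative):

```python
def forward_select(features, criterion, k):
    """Greedy forward sequential selection: at each step add the feature whose
    addition most increases the criterion, mirroring the add-one-feature
    recursion described in the abstract."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining,
                   key=lambda f: criterion(selected + [f]) - criterion(selected))
        selected.append(best)
        remaining.remove(best)
    return selected

# toy criterion: the value of a feature set is the sum of per-feature scores
scores = {"a": 3.0, "b": 1.0, "c": 2.0}
chosen = forward_select(scores, lambda s: sum(scores[f] for f in s), 2)
# -> ['a', 'c']
```

    The paper's contribution is that, for real probabilistic distance criteria, the marginal change computed inside the loop can be obtained recursively instead of by full re-evaluation.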

  9. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  10. Speech processing using maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.E.

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  11. The Development of Categorization: Effects of Classification and Inference Training on Category Representation

    PubMed Central

    Deng, Wei (Sophia); Sloutsky, Vladimir M.

    2015-01-01

    Does category representation change in the course of development? And if so, how and why? The current study attempted to answer these questions by examining category learning and category representation. In Experiment 1, 4-year-olds, 6-year-olds, and adults were trained with either a classification task or an inference task and their categorization performance and memory for items were tested. Adults and 6-year-olds exhibited an important asymmetry: they relied on a single deterministic feature during classification training, but not during inference training. In contrast, regardless of the training condition, 4-year-olds relied on multiple probabilistic features. In Experiment 2, 4-year-olds were presented with classification training and their attention was explicitly directed to the deterministic feature. Under this condition, their categorization performance was similar to that of older participants in Experiment 1, yet their memory performance pointed to a similarity-based representation, which was similar to that of 4-year-olds in Experiment 1. These results are discussed in relation to theories of categorization and the role of selective attention in the development of category learning. PMID:25602938

  12. Striatal dopamine release and impaired reinforcement learning in adults with 22q11.2 deletion syndrome.

    PubMed

    van Duin, Esther D A; Kasanova, Zuzana; Hernaus, Dennis; Ceccarini, Jenny; Heinzel, Alexander; Mottaghy, Felix; Mohammadkhani-Shali, Siamak; Winz, Oliver; Frank, Michael; Beck, Merrit C H; Booij, Jan; Myin-Germeys, Inez; van Amelsvoort, Thérèse

    2018-06-01

    22q11.2 deletion syndrome (22q11DS) is a genetic disorder caused by a microdeletion on chromosome 22q11.2 and associated with an increased risk for developing psychosis. The catechol-O-methyltransferase (COMT) gene is located in the deleted region and involved in dopamine (DA) breakdown. Impaired reinforcement learning (RL) is a recurrent feature in psychosis and thought to be related to abnormal striatal DA function. This study aims to examine RL and the potential association with striatal DA-ergic neuromodulation in 22q11DS. Twelve non-psychotic adults with 22q11DS and 16 healthy controls (HC) were included. A dopamine D2/3 receptor [18F]fallypride positron emission tomography (PET) scan was acquired while participants performed a modified version of the probabilistic stimulus selection task. RL-task performance was significantly worse in 22q11DS compared to HC. There were no group differences in striatal nondisplaceable binding potential (BPND) or task-induced DA release. In HC, striatal task-induced DA release was positively associated with task performance, but no such relation was found in 22q11DS subjects. Moreover, higher caudate nucleus task-induced DA release was found in COMT Met hemizygotes relative to Val hemizygotes. This study is the first to show impairments in RL in 22q11DS. It suggests that motivational impairments may be present not only in psychosis, but also in this genetic high-risk group. These deficits may be underlain by abnormal striatal task-induced DA release, perhaps as a consequence of COMT haplo-insufficiency. Copyright © 2018 Elsevier B.V. and ECNP. All rights reserved.

  13. A Multiatlas Segmentation Using Graph Cuts with Applications to Liver Segmentation in CT Scans

    PubMed Central

    2014-01-01

    An atlas-based segmentation approach is presented that combines low-level operations, an affine probabilistic atlas, and a multiatlas-based segmentation. The proposed combination provides highly accurate segmentation due to registrations and atlas selections based on the regions of interest (ROIs) and coarse segmentations. Our approach shares the following common elements between the probabilistic atlas and multiatlas segmentation: (a) the spatial normalisation and (b) the segmentation method, which is based on minimising a discrete energy function using graph cuts. The method is evaluated for the segmentation of the liver in computed tomography (CT) images. Low-level operations define a ROI around the liver from an abdominal CT. We generate a probabilistic atlas using an affine registration based on geometry moments from manually labelled data. Next, a coarse segmentation of the liver is obtained from the probabilistic atlas with low computational effort. Then, a multiatlas segmentation approach improves the accuracy of the segmentation. Both the atlas selections and the nonrigid registrations of the multiatlas approach use a binary mask defined by coarse segmentation. We experimentally demonstrate that this approach performs better than atlas selections and nonrigid registrations in the entire ROI. The segmentation results are comparable to those obtained by human experts and to other recently published results. PMID:25276219

  14. Integrated Risk-Informed Decision-Making for an ALMR PRISM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhlheim, Michael David; Belles, Randy; Denning, Richard S.

    Decision-making is the process of identifying decision alternatives, assessing those alternatives based on predefined metrics, selecting an alternative (i.e., making a decision), and then implementing that alternative. The generation of decisions requires a structured, coherent decision-making process. The overall objective of this work is for the generalized framework to be adopted into an autonomous decision-making framework and tailored to the specific requirements of various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks rationally, consistently, and reliably. Risk-informed decision-making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The probabilistic portion of the decision-making engine of the supervisory control system is based on the control actions associated with an ALMR PRISM. Newly incorporated into the probabilistic models are the prognostic/diagnostic models developed by Pacific Northwest National Laboratory. These allow decisions to incorporate the health of components into the decision-making process. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic portion of the decision-making engine uses thermal-hydraulic modeling and components for an advanced liquid-metal reactor Power Reactor Inherently Safe Module.
The deterministic multi-attribute decision-making framework uses various sensor data (e.g., reactor outlet temperature, steam generator drum level) and calculates its position within the challenge state, its trajectory, and its margin within the controllable domain, using utility functions to evaluate the current and projected plant state space for different control decisions. The metrics that are evaluated are based on reactor trip set points. The integration of deterministic calculations using multi-physics analyses with probabilistic safety calculations allows for the examination and quantification of margin recovery strategies; the thermal-hydraulics analyses thus validate the control options identified from the probabilistic assessment. Future work includes evaluating other possible metrics and computational efficiencies, and developing a user interface to mimic display panels at a modern nuclear power plant.

  15. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system structural components

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.

    1987-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
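    The cumulative probability of exceedance described above can be sketched with a plain Monte Carlo estimate; the load and geometry distributions below are hypothetical, not the NESSUS stochastic models:

```python
import random

def exceedance_probability(threshold, n=100_000, seed=1):
    """Monte Carlo estimate of P(max stress > threshold) with
    hypothetical Gaussian load and section-area distributions."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        load = rng.gauss(100.0, 10.0)   # applied load, hypothetical units
        area = rng.gauss(2.0, 0.1)      # section area, hypothetical units
        if load / area > threshold:
            count += 1
    return count / n

# one point of the exceedance CDF; sweeping `threshold` traces the curve
p_60 = exceedance_probability(60.0)
```

    Sweeping the threshold over a grid yields the full exceedance CDF, and repeated runs with different seeds give crude confidence bounds of the kind the PSAM packages report.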

  16. Probabilistic Structural Analysis Methods for select space propulsion system structural components (PSAM)

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.

    1988-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  17. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knio, Omar

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
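    The subdomain-exchange idea can be illustrated on a toy 1-D Laplace problem: each subdomain solve is exact, and the initially uncertain interface values are refined by iterating until neighbouring solutions agree. This is a deterministic analogue for illustration, not the project's probabilistic formulation:

```python
# Two overlapping subdomains of the 1-D Laplace problem u'' = 0 on [0, 1]
# with u(0) = 0, u(1) = 1; the exact solution is u(x) = x. Each subdomain
# solve is linear interpolation between its two boundary values.

def solve(n_iter=50):
    left_bc, right_bc = 0.0, 1.0
    u_mid_left = 0.0   # guessed value at x = 0.6, right BC of subdomain [0, 0.6]
    u_mid_right = 0.0  # guessed value at x = 0.4, left BC of subdomain [0.4, 1]
    for _ in range(n_iter):
        # left solve on [0, 0.6], sampled at x = 0.4, updates the right BC
        u_mid_right = left_bc + (u_mid_left - left_bc) * (0.4 / 0.6)
        # right solve on [0.4, 1], sampled at x = 0.6, updates the left BC
        u_mid_left = u_mid_right + (right_bc - u_mid_right) * ((0.6 - 0.4) / (1.0 - 0.4))
    return u_mid_left, u_mid_right

u06, u04 = solve()
```

    The interface values converge to the exact solution values u(0.6) = 0.6 and u(0.4) = 0.4; in the project's formulation the guessed interface values are distributions rather than numbers, and the same agreement condition shrinks their spread.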

  18. Minimum time search in uncertain dynamic domains with complex sensorial platforms.

    PubMed

    Lanillos, Pablo; Besada-Portas, Eva; Lopez-Orozco, Jose Antonio; de la Cruz, Jesus Manuel

    2014-08-04

    The minimum time search in uncertain domains is a search task that appears in real-world problems, such as natural disasters and sea rescue operations, where a target has to be found as soon as possible by a set of sensor-equipped searchers. The automation of this task, where the time to detect the target is critical, can be achieved by new probabilistic techniques that directly minimize the Expected Time (ET) to detect a dynamic target using the observation probability models and actual observations collected by the sensors on board the searchers. The selected technique, described in algorithmic form in this paper for completeness, has previously been only partially tested, with an ideal binary detection model, in spite of being designed to deal with complex non-linear/non-differentiable sensorial models. This paper covers the gap, testing its performance and applicability over different search tasks with searchers equipped with different complex sensors. The sensorial models under test vary from stepped detection probabilities to continuous/discontinuous, differentiable/non-differentiable detection probabilities dependent on distance, orientation, and structured maps. The analysis of the simulated results of several static and dynamic scenarios performed in this paper validates the applicability of the technique with different types of sensor models.
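    The Expected Time criterion can be sketched for a single candidate search path: given per-step conditional detection probabilities, ET weights each step by the probability that the first detection occurs there. This is an illustrative reduction; the paper's formulation also involves target dynamics and full sensor models:

```python
def expected_time(detection_probs, miss_penalty=None):
    """Expected Time (ET) of first detection along a search path.
    detection_probs[t] is the probability of detecting the target at
    step t+1 given no earlier detection; undetected probability mass
    is charged `miss_penalty` steps (default: the path length)."""
    if miss_penalty is None:
        miss_penalty = len(detection_probs)
    et, survive = 0.0, 1.0
    for t, p in enumerate(detection_probs, start=1):
        et += t * survive * p      # first detection exactly at step t
        survive *= 1.0 - p         # still undetected after step t
    return et + survive * miss_penalty

# a path that visits high-detection-probability cells first has lower ET
early = expected_time([0.8, 0.1, 0.1])
late = expected_time([0.1, 0.1, 0.8])
```

    Minimising ET therefore front-loads the most promising sensing actions, which is the behaviour the searched trajectories exhibit.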

  19. Minimum Time Search in Uncertain Dynamic Domains with Complex Sensorial Platforms

    PubMed Central

    Lanillos, Pablo; Besada-Portas, Eva; Lopez-Orozco, Jose Antonio; de la Cruz, Jesus Manuel

    2014-01-01

    The minimum time search in uncertain domains is a search task that appears in real-world problems, such as natural disasters and sea rescue operations, where a target has to be found as soon as possible by a set of sensor-equipped searchers. The automation of this task, where the time to detect the target is critical, can be achieved by new probabilistic techniques that directly minimize the Expected Time (ET) to detect a dynamic target using the observation probability models and actual observations collected by the sensors on board the searchers. The selected technique, described in algorithmic form in this paper for completeness, has previously been only partially tested, with an ideal binary detection model, in spite of being designed to deal with complex non-linear/non-differentiable sensorial models. This paper covers the gap, testing its performance and applicability over different search tasks with searchers equipped with different complex sensors. The sensorial models under test vary from stepped detection probabilities to continuous/discontinuous, differentiable/non-differentiable detection probabilities dependent on distance, orientation, and structured maps. The analysis of the simulated results of several static and dynamic scenarios performed in this paper validates the applicability of the technique with different types of sensor models. PMID:25093345

  20. Impaired insight in cocaine addiction: laboratory evidence and effects on cocaine-seeking behaviour

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moeller, S.J.; Maloney, T.

    Neuropsychiatric disorders are often characterized by impaired insight into behaviour. Such an insight deficit has been suggested, but never directly tested, in drug addiction. Here we tested for the first time this impaired insight hypothesis in drug addiction, and examined its potential association with drug-seeking behaviour. We also tested potential modulation of these effects by cocaine urine status, an individual difference known to impact underlying cognitive functions and prognosis. Sixteen cocaine addicted individuals testing positive for cocaine in urine, 26 cocaine addicted individuals testing negative for cocaine in urine, and 23 healthy controls completed a probabilistic choice task that assessed objective preference for viewing four types of pictures (pleasant, unpleasant, neutral and cocaine). This choice task concluded by asking subjects to report their most selected picture type; correspondence between subjects' self-reports and their objective choice behaviour provided our index of behavioural insight. Results showed that the urine positive cocaine subjects exhibited impaired insight into their own choice behaviour compared with healthy controls; this same study group also selected the most cocaine pictures (and fewest pleasant pictures) for viewing. Importantly, however, it was the urine negative cocaine subjects whose behaviour was most influenced by insight, such that impaired insight in this subgroup only was associated with higher cocaine-related choice on the task and more severe actual cocaine use. These findings suggest that interventions to enhance insight may decrease drug-seeking behaviour, especially in urine negative cocaine subjects, potentially to improve their longer-term clinical outcomes.

  1. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 1

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chinyere; Onyebueke, Landon

    1996-01-01

    This program report is the final report covering all the work done on this project. The goal of this project is technology transfer of methodologies to improve design process. The specific objectives are: 1. To learn and understand the Probabilistic design analysis using NESSUS. 2. To assign Design Projects to either undergraduate or graduate students on the application of NESSUS. 3. To integrate the application of NESSUS into some selected senior level courses in Civil and Mechanical Engineering curricula. 4. To develop courseware in Probabilistic Design methodology to be included in a graduate level Design Methodology course. 5. To study the relationship between the Probabilistic design methodology and Axiomatic design methodology.

  2. The Use of the Direct Optimized Probabilistic Calculation Method in Design of Bolt Reinforcement for Underground and Mining Workings

    PubMed Central

    Krejsa, Martin; Janas, Petr; Yilmaz, Işık; Marschalko, Marian; Bouchal, Tomas

    2013-01-01

    The load-carrying system of each construction should fulfill several conditions which represent reliability criteria in the assessment procedure. It is the theory of structural reliability which determines the probability of a construction keeping its required properties. Using this theory, it is possible to apply probabilistic computations based on probability theory and mathematical statistics. These methods have become increasingly popular; they are used, in particular, in designs of load-carrying structures with a required level of reliability when at least some input variables in the design are random. The objective of this paper is to indicate the current scope which might be covered by the new method, Direct Optimized Probabilistic Calculation (DOProC), in assessments of the reliability of load-carrying structures. DOProC uses a purely numerical approach without any simulation techniques. This provides more accurate solutions to probabilistic tasks, and, in some cases, such an approach results in considerably faster completion of computations. DOProC can be used to solve a number of probabilistic computations efficiently. A very good sphere of application for DOProC is the assessment of bolt reinforcement in underground and mining workings. For the purposes above, a special software application, "Anchor", has been developed. PMID:23935412

  3. Location error uncertainties - an advanced using of probabilistic inverse theory

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2016-04-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the location of earthquake foci is relatively simple, quantitatively estimating the location accuracy is a challenging task, even when the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this presentation we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we illustrate an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
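    The Shannon entropy of a discretised a posteriori location distribution can be computed directly; a broad (uncertain) posterior yields a larger entropy than a sharply peaked one:

```python
import math

def shannon_entropy(posterior):
    """Shannon entropy (in bits) of a discretised a posteriori
    distribution over candidate source locations."""
    return -sum(p * math.log2(p) for p in posterior if p > 0)

# a sharply peaked posterior versus a maximally uncertain one
sharp = shannon_entropy([0.97, 0.01, 0.01, 0.01])
broad = shannon_entropy([0.25, 0.25, 0.25, 0.25])
```

    For a uniform posterior over four cells the entropy is exactly 2 bits; the concentrated posterior scores far lower, which is how the entropy serves as a scalar summary of location uncertainty.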

  4. Probabilistic sizing of laminates with uncertainties

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Liaw, D. G.; Chamis, C. C.

    1993-01-01

    A reliability based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of constituent materials (fiber and matrix) to predict macroscopic behavior are simulated using probabilistic theory. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load and environment dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into a computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000, to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.

  5. Common Randomness Principles of Secrecy

    ERIC Educational Resources Information Center

    Tyagi, Himanshu

    2013-01-01

    This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…

  6. Guided SAR image despeckling with probabilistic non local weights

    NASA Astrophysics Data System (ADS)

    Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny

    2017-12-01

    SAR images are generally corrupted by granular disturbances called speckle, which make visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces parametric constants based on heuristics in the GGF-BNLM method with dynamically derived values based on image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, achieve significant improvement in performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.

  7. Is there a basis for preferring characteristic earthquakes over a Gutenberg–Richter distribution in probabilistic earthquake forecasting?

    USGS Publications Warehouse

    Parsons, Thomas E.; Geist, Eric L.

    2009-01-01

    The idea that faults rupture in repeated, characteristic earthquakes is central to most probabilistic earthquake forecasts. The concept is elegant in its simplicity, and if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice however, assembling a fault-segmented characteristic earthquake rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic earthquakes against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies key data constraints used for earthquake forecasting equally well as a characteristic model. Therefore, judicious use of instrumental and historical earthquake catalogs enables large-earthquake-rate calculations with quantifiable uncertainty that should get at least equal weighting in probabilistic forecasting.
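    The Gutenberg–Richter extrapolation used in the comparison takes the familiar form log10 N = a - b*M; with illustrative (hypothetical) a and b values, the implied rate and mean recurrence interval of large events follow directly:

```python
def gr_annual_rate(a, b, m):
    """Cumulative annual rate of earthquakes with magnitude >= m from
    the Gutenberg-Richter law log10 N = a - b*m (illustrative values)."""
    return 10.0 ** (a - b * m)

# with a hypothetical a = 4.0 and the canonical b = 1.0:
rate_m6 = gr_annual_rate(4.0, 1.0, 6.0)   # events per year with M >= 6
recurrence = 1.0 / rate_m6                # mean recurrence interval, years
```

    Fitting a and b to a fault zone's instrumental and historical catalog, rather than to a global region, is the "extrapolation to individual fault zones" the abstract weighs against segmented characteristic-earthquake models.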

  8. Cognitive procedural learning among children and adolescents with or without spastic cerebral palsy: the differential effect of age.

    PubMed

    Gofer-Levi, M; Silberg, T; Brezner, A; Vakil, E

    2014-09-01

    Children learn to engage their surroundings skillfully, adopting implicit knowledge of complex regularities and associations. Probabilistic classification learning (PCL) is a type of cognitive procedural learning in which different cues are probabilistically associated with specific outcomes. Little is known about the effects of developmental disorders on cognitive skill acquisition. Twenty-four children and adolescents with cerebral palsy (CP) were compared to 24 typically developing (TD) youth in their ability to learn probabilistic associations. Performance was examined in relation to general cognitive abilities, level of motor impairment and age. Improvement in PCL was observed for all participants, with no relation to IQ. An age effect was found only among TD children. Learning curves of children with CP on a cognitive procedural learning task differ from those of TD peers and do not appear to be age sensitive. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Attention as Inference: Selection Is Probabilistic; Responses Are All-or-None Samples

    ERIC Educational Resources Information Center

    Vul, Edward; Hanus, Deborah; Kanwisher, Nancy

    2009-01-01

    Theories of probabilistic cognition postulate that internal representations are made up of multiple simultaneously held hypotheses, each with its own probability of being correct (henceforth, "probability distributions"). However, subjects make discrete responses and report the phenomenal contents of their mind to be all-or-none states rather than…

  10. Distinct Roles of Dopamine and Subthalamic Nucleus in Learning and Probabilistic Decision Making

    ERIC Educational Resources Information Center

    Coulthard, Elizabeth J.; Bogacz, Rafal; Javed, Shazia; Mooney, Lucy K.; Murphy, Gillian; Keeley, Sophie; Whone, Alan L.

    2012-01-01

    Even simple behaviour requires us to make decisions based on combining multiple pieces of learned and new information. Making such decisions requires both learning the optimal response to each given stimulus as well as combining probabilistic information from multiple stimuli before selecting a response. Computational theories of decision making…

  11. Reduced activation in the ventral striatum during probabilistic decision-making in patients in an at-risk mental state

    PubMed Central

    Rausch, Franziska; Mier, Daniela; Eifler, Sarah; Fenske, Sabrina; Schirmbeck, Frederike; Englisch, Susanne; Schilling, Claudia; Meyer-Lindenberg, Andreas; Kirsch, Peter; Zink, Mathias

    2015-01-01

    Background Patients with schizophrenia display metacognitive impairments, such as hasty decision-making during probabilistic reasoning — the “jumping to conclusion” bias (JTC). Our recent fMRI study revealed reduced activations in the right ventral striatum (VS) and the ventral tegmental area (VTA) to be associated with decision-making in patients with schizophrenia. It is unclear whether these functional alterations occur in the at-risk mental state (ARMS). Methods We administered the classical beads task and fMRI among ARMS patients and healthy controls matched for age, sex, education and premorbid verbal intelligence. None of the ARMS patients was treated with antipsychotics. Both tasks request probabilistic decisions after a variable number of stimuli. We evaluated activation during decision-making under certainty versus uncertainty and the process of final decision-making. Results We included 24 ARMS patients and 24 controls in our study. Compared with controls, ARMS patients tended to draw fewer beads and showed significantly more JTC bias in the classical beads task, mirroring findings in patients with schizophrenia. During fMRI, ARMS patients did not demonstrate JTC bias on the behavioural level, but showed a significant hypoactivation in the right VS during the decision stage. Limitations Owing to the cross-sectional design of the study, results are constrained to a better insight into the neurobiology of risk constellations, but not pre-psychotic stages. Nine of the ARMS patients were treated with antidepressants and/or lorazepam. Conclusion As in patients with schizophrenia, a striatal hypoactivation was found in ARMS patients. Confounding effects of antipsychotic medication can be excluded. Our findings indicate that error prediction signalling and reward anticipation may be linked to striatal dysfunction during prodromal stages and should be examined for their utility in predicting transition risk. PMID:25622039

  12. Reduced activation in the ventral striatum during probabilistic decision-making in patients in an at-risk mental state.

    PubMed

    Rausch, Franziska; Mier, Daniela; Eifler, Sarah; Fenske, Sabrina; Schirmbeck, Frederike; Englisch, Susanne; Schilling, Claudia; Meyer-Lindenberg, Andreas; Kirsch, Peter; Zink, Mathias

    2015-05-01

    Patients with schizophrenia display metacognitive impairments, such as hasty decision-making during probabilistic reasoning - the "jumping to conclusion" bias (JTC). Our recent fMRI study revealed reduced activations in the right ventral striatum (VS) and the ventral tegmental area (VTA) to be associated with decision-making in patients with schizophrenia. It is unclear whether these functional alterations occur in the at-risk mental state (ARMS). We administered the classical beads task and fMRI among ARMS patients and healthy controls matched for age, sex, education and premorbid verbal intelligence. None of the ARMS patients was treated with antipsychotics. Both tasks request probabilistic decisions after a variable number of stimuli. We evaluated activation during decision-making under certainty versus uncertainty and the process of final decision-making. We included 24 ARMS patients and 24 controls in our study. Compared with controls, ARMS patients tended to draw fewer beads and showed significantly more JTC bias in the classical beads task, mirroring findings in patients with schizophrenia. During fMRI, ARMS patients did not demonstrate JTC bias on the behavioural level, but showed a significant hypoactivation in the right VS during the decision stage. Owing to the cross-sectional design of the study, results are constrained to a better insight into the neurobiology of risk constellations, but not pre-psychotic stages. Nine of the ARMS patients were treated with antidepressants and/or lorazepam. As in patients with schizophrenia, a striatal hypoactivation was found in ARMS patients. Confounding effects of antipsychotic medication can be excluded. Our findings indicate that error prediction signalling and reward anticipation may be linked to striatal dysfunction during prodromal stages and should be examined for their utility in predicting transition risk.
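    The classical beads task admits a simple Bayesian analysis: with the usual 85:15 bead ratio (assumed here), the posterior over jars after each draw follows from Bayes' rule, which is what makes a decision after only one or two draws count as "jumping to conclusions":

```python
def posterior_after_draws(draws, p_majority=0.85, prior=0.5):
    """Posterior probability that beads come from the mostly-pink jar
    after a sequence of draws ('p' = pink, 'g' = green), assuming the
    common 85:15 beads-task ratio."""
    p = prior
    for bead in draws:
        like_pink_jar = p_majority if bead == "p" else 1.0 - p_majority
        like_green_jar = 1.0 - p_majority if bead == "p" else p_majority
        num = like_pink_jar * p
        p = num / (num + like_green_jar * (1.0 - p))   # Bayes update
    return p

# deciding after just two same-colour draws already reflects ~97% posterior
p2 = posterior_after_draws("pp")
```

    The JTC bias is usually operationalised as "draws to decision": fewer draws before committing corresponds to accepting a lower (or merely earlier-reached) posterior threshold.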

  13. Composite Load Spectra for Select Space Propulsion Structural Components

    NASA Technical Reports Server (NTRS)

    Ho, Hing W.; Newell, James F.

    1994-01-01

    Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of the Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for dynamic, acoustic, high-pressure, and high-rotational-speed load simulation using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components, in conjunction with PSAM (Probabilistic Structural Analysis Method) to perform probabilistic load evaluations and life predictions, are also described to illustrate the effectiveness of the coupled-model approach.

  14. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate [Formula: see text] (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
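    The two ingredients the abstract attributes to the OGM, a search direction combining the current gradient with historical gradients and a step size set from the Lipschitz constant, appear in generic Nesterov-style accelerated gradient methods. The sketch below is such a generic method on a toy quadratic, not the authors' exact OGM:

```python
def accelerated_minimize(grad, lipschitz, x0, iters=100):
    """Generic Nesterov-style accelerated gradient descent: the iterate
    `y` mixes the newest point with momentum from past iterates, and the
    step size is 1/L for a Lipschitz bound L on the gradient."""
    x, y, t = x0, x0, 1.0
    for _ in range(iters):
        x_next = y - grad(y) / lipschitz                    # 1/L gradient step
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)    # historical-gradient momentum
        x, t = x_next, t_next
    return x

# minimise f(x) = x**2 (gradient 2x); 4.0 is a valid Lipschitz upper bound
xmin = accelerated_minimize(lambda x: 2.0 * x, 4.0, x0=10.0)
```

    It is this momentum-plus-1/L-step structure that yields the accelerated O(1/k^2)-type guarantee for smooth convex objectives, versus O(1/k) for plain gradient descent.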

  15. Optimal quantum operations at zero energy cost

    NASA Astrophysics Data System (ADS)

    Chiribella, Giulio; Yang, Yuxiang

    2017-08-01

    Quantum technologies are developing powerful tools to generate and manipulate coherent superpositions of different energy levels. Envisaging a new generation of energy-efficient quantum devices, here we explore how coherence can be manipulated without exchanging energy with the surrounding environment. We start from the task of converting a coherent superposition of energy eigenstates into another. We identify the optimal energy-preserving operations, both in the deterministic and in the probabilistic scenario. We then design a recursive protocol, wherein a branching sequence of energy-preserving filters increases the probability of success while reaching maximum fidelity at each iteration. Building on the recursive protocol, we construct efficient approximations of the optimal fidelity-probability trade-off, by taking coherent superpositions of the different branches generated by probabilistic filtering. The benefits of this construction are illustrated in applications to quantum metrology, quantum cloning, coherent state amplification, and ancilla-driven computation. Finally, we extend our results to transitions where the input state is generally mixed and we apply our findings to the task of purifying quantum coherence.

  16. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    PubMed

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-06-24

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In the latter case, the learner must be able to build a model of the agent's behavior. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
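    As a concrete illustration of how a PFA supports Behavioral Recognition, the sketch below scores an observed action sequence under two hypothetical strategy automata using the forward algorithm, then picks the strategy that best explains the trace. The automata, states, and action names are invented for illustration:

```python
def sequence_probability(pfa, seq):
    """Probability that a PFA generates the observed action sequence.

    pfa = (initial_dist, transitions), where
    transitions[state][symbol] -> list of (next_state, probability).
    This is the forward algorithm: propagate probability mass over states.
    """
    initial, transitions = pfa
    alpha = dict(initial)  # state -> accumulated probability mass
    for symbol in seq:
        nxt = {}
        for state, p in alpha.items():
            for new_state, q in transitions.get(state, {}).get(symbol, []):
                nxt[new_state] = nxt.get(new_state, 0.0) + p * q
        alpha = nxt
        if not alpha:          # no state can emit this symbol
            return 0.0
    return sum(alpha.values())

def recognize(pfas, seq):
    # Behavioral Recognition: pick the strategy whose PFA best explains seq
    return max(pfas, key=lambda name: sequence_probability(pfas[name], seq))

# two hypothetical strategies for a cleaning agent: "spiral" vs "wall-follow"
spiral = ({"s": 1.0}, {"s": {"turn": [("s", 0.8)], "forward": [("s", 0.2)]}})
wall   = ({"w": 1.0}, {"w": {"turn": [("w", 0.1)], "forward": [("w", 0.9)]}})
obs = ["forward", "forward", "turn", "forward"]
best = recognize({"spiral": spiral, "wall-follow": wall}, obs)  # "wall-follow"
```

Behavioral Cloning would instead estimate the transition probabilities themselves from the observed traces; the scoring step above stays the same.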

  17. Asking better questions: How presentation formats influence information search.

    PubMed

    Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D

    2017-08-01

    While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
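    A useful way to see what makes one query "better" than another is to compute its expected classification accuracy: for each query outcome the decision maker picks the most probable class, so expected accuracy is the sum over outcomes of the larger joint class-outcome probability. The sketch below compares two binary queries with invented numbers (not the study's stimuli):

```python
def expected_accuracy(prior, likelihoods):
    """Expected classification accuracy after one binary query.

    prior           -- P(class) for the two classes
    likelihoods[c]  -- [P(outcome 0 | class c), P(outcome 1 | class c)]
    For each outcome the classifier picks the most probable class, so
    accuracy = sum over outcomes of max_c P(class c, outcome o).
    """
    return sum(
        max(prior[c] * likelihoods[c][o] for c in range(2))
        for o in range(2)
    )

prior = [0.7, 0.3]                   # hypothetical class base rates
query_A = [[0.9, 0.1], [0.3, 0.7]]   # hypothetical outcome likelihoods, test A
query_B = [[0.6, 0.4], [0.5, 0.5]]   # hypothetical outcome likelihoods, test B
acc_A = expected_accuracy(prior, query_A)   # 0.63 + 0.21 = 0.84
acc_B = expected_accuracy(prior, query_B)   # 0.42 + 0.28 = 0.70
```

An accuracy-maximizing searcher should therefore prefer query A here; the paper's point is that people can reach such choices through simple heuristics rather than by explicitly computing these posteriors.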

  18. Facilitating normative judgments of conditional probability: frequency or nested sets?

    PubMed

    Yamagishi, Kimihiko

    2003-01-01

    Recent probability judgment research contrasts two opposing views. Some theorists have emphasized the role of frequency representations in facilitating probabilistic correctness; opponents have noted that visualizing the probabilistic structure of the task sufficiently facilitates normative reasoning. In the current experiment, the following conditional probability task, an isomorph of the "Problem of Three Prisoners," was tested. "A factory manufactures artificial gemstones. Each gemstone has a 1/3 chance of being blurred, a 1/3 chance of being cracked, and a 1/3 chance of being clear. An inspection machine removes all cracked gemstones, and retains all clear gemstones. However, the machine removes 1/2 of the blurred gemstones. What is the chance that a gemstone is blurred after the inspection?" A 2 × 2 design was administered. The first variable was the use of frequency instruction. The second manipulation was the use of a roulette-wheel diagram that illustrated a "nested-sets" relationship between the prior and the posterior probabilities. Results from two experiments showed that frequency alone had modest effects, while the nested-sets instruction achieved a superior facilitation of normative reasoning. The third experiment compared the roulette-wheel diagram to tree diagrams that also showed the nested-sets relationship. The roulette-wheel diagram outperformed the tree diagrams in facilitation of probabilistic reasoning. Implications for understanding the nature of intuitive probability judgments are discussed.
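    The normative answer to the gemstone task follows from a single application of Bayes' rule, and, as in the Problem of Three Prisoners, the posterior equals the 1/3 prior even though half of the blurred gemstones are removed:

```python
# Priors over gemstone states and per-state probability of surviving inspection
prior = {"blurred": 1/3, "cracked": 1/3, "clear": 1/3}
p_retained = {"blurred": 0.5, "cracked": 0.0, "clear": 1.0}

# Bayes' rule: P(blurred | retained) = P(blurred, retained) / P(retained)
joint = {state: prior[state] * p_retained[state] for state in prior}
posterior_blurred = joint["blurred"] / sum(joint.values())
# (1/3 * 1/2) / (1/6 + 0 + 1/3) = (1/6) / (1/2) = 1/3
```

The nested-sets diagrams in the study make exactly this structure visible: the retained gemstones (1/2 of the total) form the new reference set, within which the blurred ones (1/6 of the total) are nested.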

  19. Linking Individual Learning Styles to Approach-Avoidance Motivational Traits and Computational Aspects of Reinforcement Learning

    PubMed Central

    Carl Aberg, Kristoffer; Doell, Kimberly C.; Schwartz, Sophie

    2016-01-01

    Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedback, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depends on associated reward values and internal motivational drives, possibly determined by personality traits. PMID:27851807
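    One common computational-modelling approach to such asymmetries (a sketch under assumptions, not necessarily the authors' exact model) is Q-learning with separate learning rates for positive and negative prediction errors: an approach-dominant learner updates more strongly from wins, an avoidance-dominant learner from losses.

```python
import math
import random

def simulate_learner(alpha_pos, alpha_neg, p_reward=(0.8, 0.2),
                     n_trials=500, beta=5.0, seed=0):
    """Q-learning with separate learning rates for positive and negative
    prediction errors on a two-option probabilistic selection task
    (illustrative parameters throughout)."""
    rng = random.Random(seed)
    q = [0.5, 0.5]  # initial value estimates for the two options
    for _ in range(n_trials):
        # softmax (logistic) choice between the two options
        p_choose_0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        choice = 0 if rng.random() < p_choose_0 else 1
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        delta = reward - q[choice]  # prediction error
        alpha = alpha_pos if delta > 0 else alpha_neg
        q[choice] += alpha * delta
    return q

# approach-dominant learner: learns more from positive prediction errors
q_approach = simulate_learner(alpha_pos=0.3, alpha_neg=0.05)
# avoidance-dominant learner: learns more from negative prediction errors
q_avoid = simulate_learner(alpha_pos=0.05, alpha_neg=0.3)
```

Fitting the two learning rates to individual choice data is one way studies of this kind relate learning style to trait measures: the ratio of the fitted rates indexes whether a participant learns predominantly from positive or negative outcomes.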

  20. ISE-based sensor array system for classification of foodstuffs

    NASA Astrophysics Data System (ADS)

    Ciosek, Patrycja; Sobanski, Tomasz; Augustyniak, Ewa; Wróblewski, Wojciech

    2006-01-01

    A system composed of an array of polymeric membrane ion-selective electrodes and a pattern recognition block—a so-called 'electronic tongue'—was used for the classification of liquid samples: milk, fruit juice and tonic. The task of this system was to automatically recognize a brand of the product. To analyze the measurement set-up responses, various non-parametric classifiers such as k-nearest neighbours, a feedforward neural network and a probabilistic neural network were used. In order to enhance the classification ability of the system, standard model solutions of salts were measured (in order to take into account any variation in time of the working parameters of the sensors). This system was capable of recognizing the brand of the products with accuracy ranging from 68% to 100% (in the case of the best classifier).

  1. Emergence of β-Band Oscillations in the Aged Rat Amygdala during Discrimination Learning and Decision Making Tasks

    PubMed Central

    Samson, Rachel D.; Duarte, Leroy; Venkatesh, Anu

    2017-01-01

    Older adults tend to use strategies that differ from those used by young adults to solve decision-making tasks. MRI experiments suggest that altered strategy use during aging can be accompanied by a change in extent of activation of a given brain region, inter-hemispheric bilateralization or added brain structures. It has been suggested that these changes reflect compensation for less effective networks to enable optimal performance. One way that communication can be influenced within and between brain networks is through oscillatory events that help structure and synchronize incoming and outgoing information. It is unknown how aging impacts local oscillatory activity within the basolateral complex of the amygdala (BLA). The present study recorded local field potentials (LFPs) and single units in old and young rats during the performance of tasks that involve discrimination learning and probabilistic decision making. We found task- and age-specific increases in power selectively within the β range (15–30 Hz). The increased β power occurred after lever presses, as old animals reached the goal location. Periods of high-power β developed over training days in the aged rats, and were greatest in early trials of a session. β Power was also greater after pressing for the large reward option. These data suggest that aging of BLA networks results in strengthened synchrony of β oscillations when older animals are learning or deciding between rewards of different size. Whether this increased synchrony reflects the neural basis of a compensatory strategy change of old animals in reward-based decision-making tasks remains to be verified. PMID:29034315

  2. Probabilistic double guarantee kidnapping detection in SLAM.

    PubMed

    Tian, Yang; Ma, Shugen

    2016-01-01

    For determining whether kidnapping has happened, and which type of kidnapping it is, while a robot performs autonomous tasks in an unknown environment, a double guarantee kidnapping detection (DGKD) method has been proposed. The good performance of DGKD in a relatively small environment has been shown. However, a limitation of DGKD in a large-scale environment was found by our recent work. In order to increase the adaptability of DGKD in a large-scale environment, an improved method called probabilistic double guarantee kidnapping detection is proposed in this paper to combine the probabilities of features' positions and the robot's posture. Simulation results demonstrate the validity and accuracy of the proposed method.

  3. Ongoing behavioral state information signaled in the lateral habenula guides choice flexibility in freely moving rats

    PubMed Central

    Baker, Phillip M.; Oh, Sujean E.; Kidder, Kevan S.; Mizumori, Sheri J. Y.

    2015-01-01

    The lateral habenula (LHb) plays a role in a wide variety of behaviors ranging from maternal care, to sleep, to various forms of cognition. One prominent theory with ample supporting evidence is that the LHb serves to relay basal ganglia and limbic signals about negative outcomes to midbrain monoaminergic systems. This makes it likely that the LHb is critically involved in behavioral flexibility as all of these systems have been shown to contribute when flexible behavior is required. Behavioral flexibility is commonly examined across species and is impaired in various neuropsychiatric conditions including autism, depression, addiction, and schizophrenia; conditions in which the LHb is thought to play a role. Therefore, a thorough examination of the role of the LHb in behavioral flexibility serves multiple functions including understanding possible connections with neuropsychiatric illnesses and additional insight into its role in cognition in general. Here, we assess the LHb’s role in behavioral flexibility through comparisons of the roles its afferent and efferent pathways are known to play. Additionally, we provide new evidence supporting the LHb contributions to behavioral flexibility through organization of specific goal directed actions under cognitively demanding conditions. Specifically, in the first experiment, a majority of neurons recorded from the LHb were found to correlate with velocity on a spatial navigation task and did not change significantly when reward outcomes were manipulated. Additionally, measurements of local field potential (LFP) in the theta band revealed significant changes in power relative to velocity and reward location. In a second set of experiments, inactivation of the LHb with the gamma-aminobutyric acid (GABA) agonists baclofen and muscimol led to an impairment in a spatial/response based repeated probabilistic reversal learning task. 
Control experiments revealed that this impairment was likely due to the demands of repeated switching behaviors, as rats were unimpaired on initial discrimination acquisition or retention of probabilistic learning. Taken together, these novel findings complement other work discussed supporting a role for the LHb in action selection when cognitive or emotional demands are increased. Finally, we discuss mechanisms by which a better understanding of the LHb could be obtained through additional examination of behavioral flexibility tasks. PMID:26582981

  4. Intuitive Interference in Probabilistic Reasoning

    ERIC Educational Resources Information Center

    Babai, Reuven; Brecher, Tali; Stavy, Ruth; Tirosh, Dina

    2006-01-01

    One theoretical framework which addresses students' conceptions and reasoning processes in mathematics and science education is the intuitive rules theory. According to this theory, students' reasoning is affected by intuitive rules when they solve a wide variety of conceptually non-related mathematical and scientific tasks that share some common…

  5. Naïve and Robust: Class-Conditional Independence in Human Classification Learning

    ERIC Educational Resources Information Center

    Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.

    2018-01-01

    Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…

  6. Asking Better Questions: How Presentation Formats Influence Information Search

    ERIC Educational Resources Information Center

    Wu, Charley M.; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D.

    2017-01-01

    While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search…

  7. Fear of Negative Evaluation Biases Social Evaluation Inference: Evidence from a Probabilistic Learning Task

    PubMed Central

    Button, Katherine S.; Kounali, Daphne; Stapinski, Lexine; Rapee, Ronald M.; Lewis, Glyn; Munafò, Marcus R.

    2015-01-01

    Background: Fear of negative evaluation (FNE) defines social anxiety, yet the process of inferring social evaluation, and its potential role in maintaining social anxiety, is poorly understood. We developed an instrumental learning task to model social evaluation learning, predicting that FNE would specifically bias learning about the self but not others. Methods: During six test blocks (3 self-referential, 3 other-referential), participants (n = 100) met six personas and selected a word from a positive/negative pair to finish their social evaluation sentences “I think [you are / George is]…”. Feedback contingencies corresponded to 3 rules, liked, neutral and disliked, with P[positive word correct] = 0.8, 0.5 and 0.2, respectively. Results: As FNE increased, participants selected fewer positive words (β = −0.4, 95% CI −0.7, −0.2, p = 0.001), an effect that was strongest in the self-referential condition (FNE × condition 0.28, 95% CI 0.01, 0.54, p = 0.04) and in the neutral and dislike rules (FNE × condition × rule, p = 0.07). At low FNE the proportion of positive words selected for self-neutral and self-disliked greatly exceeded the feedback contingency, indicating poor learning, which improved as FNE increased. Conclusions: FNE is associated with differences in processing social-evaluative information specifically about the self. At low FNE this manifests as insensitivity to learning negative self-referential evaluation. High FNE individuals are equally sensitive to learning positive or negative evaluation, which, although objectively more accurate, may have detrimental effects on mental health. PMID:25853835

  8. A randomised approach for NARX model identification based on a multivariate Bernoulli distribution

    NASA Astrophysics Data System (ADS)

    Bianchi, F.; Falsone, A.; Prandini, M.; Piroddi, L.

    2017-04-01

    The identification of polynomial NARX models is typically performed by incremental model building techniques. These methods assess the importance of each regressor based on the evaluation of partial individual models, which may ultimately lead to erroneous model selections. A more robust assessment of the significance of a specific model term can be obtained by considering ensembles of models, as done by the RaMSS algorithm. In that context, the identification task is formulated in a probabilistic fashion and a Bernoulli distribution is employed to represent the probability that a regressor belongs to the target model. Then, samples of the model distribution are collected to gather reliable information to update it, until convergence to a specific model. The basic RaMSS algorithm employs multiple independent univariate Bernoulli distributions associated with the different candidate model terms, thus overlooking the correlations between different terms, which are typically important in the selection process. Here, a multivariate Bernoulli distribution is employed, in which the sampling of a given term is conditioned by the sampling of the others. The added complexity inherent in considering the regressor correlation properties is more than compensated by the achievable improvements in terms of accuracy of the model selection process.
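    The basic RaMSS idea can be sketched as follows: sample regressor subsets from per-term Bernoulli inclusion probabilities, score each fitted sub-model, and move the distribution toward the best-scoring samples. The simplified sketch below uses independent Bernoullis, which is the baseline the paper improves on by introducing a multivariate distribution; all data, scores, and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy regression with 5 candidate regressors; the true model uses x0 and x2
X = rng.standard_normal((200, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.01 * rng.standard_normal(200)

mu = np.full(5, 0.5)  # Bernoulli inclusion probability of each regressor
for _ in range(60):
    masks, scores = [], []
    for _ in range(30):
        mask = rng.random(5) < mu          # independent Bernoulli draws
        if not mask.any():
            continue                       # skip the empty model
        coef = np.linalg.lstsq(X[:, mask], y, rcond=None)[0]
        mse = np.mean((y - X[:, mask] @ coef) ** 2)
        scores.append(-(mse + 0.01 * mask.sum()))  # penalize extra terms
        masks.append(mask)
    elite_idx = np.argsort(scores)[-10:]   # keep the 10 best samples
    elite_freq = np.mean([masks[i] for i in elite_idx], axis=0)
    mu = 0.7 * mu + 0.3 * elite_freq       # move distribution toward elites
selected = mu > 0.5                        # converges to the true support
```

Because each term is sampled independently here, the update cannot express "include x_a only together with x_b"; conditioning each draw on the other draws, as the multivariate Bernoulli of the paper does, is what captures such correlations.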

  9. The Emergence of Selective Attention through Probabilistic Associations between Stimuli and Actions.

    PubMed

    Simione, Luca; Nolfi, Stefano

    2016-01-01

    In this paper we show how a multilayer neural network trained to master a context-dependent task, in which the action co-varies with a certain stimulus in a first context and with a second stimulus in an alternative context, exhibits selective attention, i.e., the filtering out of irrelevant information. This effect is rather robust and is observed in several variations of the experiment in which the characteristics of the network as well as of the training procedure have been varied. Our result demonstrates how the filtering out of irrelevant information can originate spontaneously as a consequence of the regularities present in the context-dependent training set and therefore does not necessarily depend on specific architectural constraints. The post-evaluation of the network in an instructed-delay experimental scenario shows how the behaviour of the network is consistent with the data collected in neuropsychological studies. The analysis of the network at the end of the training process indicates how selective attention originates from the effects caused by relevant and irrelevant stimuli, mediated by context-dependent and context-independent bidirectional associations between stimuli and actions that are extracted by the network during learning.

  10. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models was implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code, including fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.

  11. Self-reported strategies in decisions under risk: role of feedback, reasoning abilities, executive functions, short-term-memory, and working memory.

    PubMed

    Schiebener, Johannes; Brand, Matthias

    2015-11-01

    In decisions under objective risk conditions, information about the decision options' possible outcomes and the rules for the outcomes' occurrence is provided. Thus, deciders can base decision-making strategies on probabilistic laws. In many laboratory decision-making tasks, choosing the option with the highest winning probability in all trials (the maximization strategy) is regarded as the most rational behavior in probabilistic terms. However, individuals often behave less optimally, especially when they have lower cognitive functioning or when no feedback about consequences is provided in the situation. It is still unclear which cognitive functions particularly predispose individuals to using successful strategies, and which strategies profit from feedback. We investigated 195 individuals with two decision-making paradigms, the Game of Dice Task (GDT) (with and without feedback) and the Card Guessing Game. Thereafter, participants reported which strategies they had applied. Interaction effects (feedback × strategy), effect sizes, and uncorrected single group comparisons suggest that feedback in the GDT tended to be more beneficial to individuals reporting exploratory strategies (e.g., using intuition). In both tasks, the self-reported use of more principled and more rational strategies was accompanied by better decision-making performance and better performance on reasoning and executive functioning tasks. The strategy groups did not differ significantly on most short-term and working-memory tasks. Thus, individual differences in reasoning and executive functions in particular seem to predispose individuals toward particular decision-making strategies. Feedback seems to be useful for individuals who explore the decision-making situation rather than following a fixed plan.
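    The maximization strategy can be made concrete with expected values. In a Game-of-Dice-like gamble over one die roll, betting on more numbers wins more often but pays less; with the illustrative payoffs below (assumed amounts, not the task's exact values), the highest-probability option is also the only one with a positive expected value:

```python
def expected_value(p_win, gain, loss):
    # EV of a gamble that wins `gain` with probability p_win, else loses `loss`
    return p_win * gain - (1 - p_win) * loss

# one die roll; options loosely modeled on the Game of Dice Task
# (payoff amounts are illustrative assumptions, not the task's exact values)
options = {
    "single number": (1/6, 1000, 1000),
    "two numbers":   (2/6, 500, 500),
    "three numbers": (3/6, 200, 200),
    "four numbers":  (4/6, 100, 100),
}
evs = {name: expected_value(*args) for name, args in options.items()}
# the maximization strategy always picks the highest-probability option
best = max(evs, key=evs.get)  # "four numbers"
```

Under this payoff structure, consistently choosing the four-number option is the probabilistically rational strategy the abstract refers to, while repeatedly betting on a single number has a strongly negative expected value.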

  12. Optimization of Contrast Detection Power with Probabilistic Behavioral Information

    PubMed Central

    Cordes, Dietmar; Herzmann, Grit; Nandy, Rajesh; Curran, Tim

    2012-01-01

    Recent progress in the experimental design for event-related fMRI experiments made it possible to find the optimal stimulus sequence for maximum contrast detection power using a genetic algorithm. In this study, a novel algorithm is proposed for optimization of contrast detection power by including probabilistic behavioral information, based on pilot data, in the genetic algorithm. As a particular application, a recognition memory task is studied and the design matrix optimized for contrasts involving the familiarity of individual items (pictures of objects) and the recollection of qualitative information associated with the items (left/right orientation). Optimization of contrast efficiency is a complicated issue whenever subjects’ responses are not deterministic but probabilistic. Contrast efficiencies are not predictable unless behavioral responses are included in the design optimization. However, available software for design optimization does not include options for probabilistic behavioral constraints. If the anticipated behavioral responses are included in the optimization algorithm, the design is optimal for the assumed behavioral responses, and the resulting contrast efficiency is greater than what either a block design or a random design can achieve. Furthermore, improvements of contrast detection power depend strongly on the behavioral probabilities, the perceived randomness, and the contrast of interest. The present genetic algorithm can be applied to any case in which fMRI contrasts are dependent on probabilistic responses that can be estimated from pilot data. PMID:22326984

  13. A Probabilistic Embedding Clustering Method for Urban Structure Detection

    NASA Astrophysics Data System (ADS)

    Lin, X.; Li, H.; Zhang, Y.; Gao, L.; Zhao, L.; Deng, M.

    2017-09-01

    Urban structure detection is a basic task in urban geography. Clustering is a core technology for detecting patterns of urban spatial structure, urban functional regions, and so on. In the big data era, diverse urban sensing datasets recording information such as human behaviour and human social activity suffer from both high dimensionality and high noise, and, unfortunately, state-of-the-art clustering methods do not handle high dimensionality and high noise concurrently. In this paper, a probabilistic embedding clustering method is proposed. First, we propose a Probabilistic Embedding Model (PEM) to find latent features in high-dimensional urban sensing data by learning a probabilistic model. The latent features capture essential structure hidden in the high-dimensional data, known as patterns, while the probabilistic model reduces the uncertainty caused by high noise. Second, by tuning the parameters, our model can discover two kinds of urban structure, homophily and structural equivalence, i.e., communities with intensive interaction or communities occupying the same roles in the urban structure. Experiments on real-world data from Shanghai (China) confirmed that our method discovers both kinds of structure.

  14. Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.

    PubMed

    Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong

    2015-11-01

    Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach.

  15. Jumping to conclusions and persecutory delusions.

    PubMed

    Startup, Helen; Freeman, Daniel; Garety, Philippa A

    2008-09-01

    It is unknown whether a 'jumping to conclusions' (JTC) data-gathering bias is apparent in specific delusion sub-types. A group with persecutory delusions is compared with a sample of non-clinical controls on a probabilistic reasoning task. Results suggest JTC is apparent in individuals with the persecutory sub-type of delusions.

  16. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…

  17. Cognitive flexibility impairment and reduced frontal cortex BDNF expression in the ouabain model of mania

    PubMed Central

    Amodeo, Dionisio A.; Grospe, Gena; Zang, Hui; Dwivedi, Yogesh; Ragozzino, Michael E.

    2016-01-01

    Central infusion of the Na+/K+-ATPase inhibitor, ouabain, in rats serves as an animal model of mania because it leads to hyperactivity, as well as reproduces ion dysregulation and reduced BDNF levels similar to those observed in bipolar disorder. Bipolar disorder is also associated with cognitive inflexibility and working memory deficits. It is unknown whether ouabain treatment in rats leads to similar cognitive flexibility and working memory deficits. The present study examined the effects of an intracerebroventricular infusion of ouabain in rats on spontaneous alternation, probabilistic reversal learning and BDNF expression levels in the frontal cortex. Ouabain treatment significantly increased locomotor activity, but did not affect alternation performance in a Y-maze. Ouabain treatment selectively impaired reversal learning in a spatial discrimination task using an 80/20 probabilistic reinforcement procedure. The reversal learning deficit in ouabain-treated rats resulted from an impaired ability to maintain a new choice pattern (increased regressive errors). Ouabain treatment also decreased sensitivity to negative feedback during the initial phase of reversal learning. Expression of BDNF mRNA and protein levels was downregulated in the frontal cortex, which also negatively correlated with regressive errors. These findings suggest that the ouabain model of mania may be useful in understanding the neuropathophysiology that contributes to cognitive flexibility deficits and in testing potential treatments to alleviate cognitive deficits in bipolar disorder. PMID:27267245

  18. Delay or probability discounting in a model of impulsive behavior: effect of alcohol.

    PubMed Central

    Richards, J B; Zhang, L; Mitchell, S H; de Wit, H

    1999-01-01

    Little is known about the acute effects of drugs of abuse on impulsivity and self-control. In this study, impulsivity was assessed in humans using a computer task that measured delay and probability discounting. Discounting describes how much the value of a reward (or punisher) is decreased when its occurrence is either delayed or uncertain. Twenty-four healthy adult volunteers ingested a moderate dose of ethanol (0.5 or 0.8 g/kg; n = 12 at each dose) or placebo before completing the discounting task. In the task the participants were given a series of choices between a small, immediate, certain amount of money and $10 that was either delayed (0, 2, 30, 180, or 365 days) or probabilistic (i.e., certainty of receipt was 1.0, .9, .75, .5, or .25). The point at which each individual was indifferent between the smaller immediate or certain reward and the $10 delayed or probabilistic reward was identified using an adjusting-amount procedure. The results indicated that (a) delay and probability discounting were well described by a hyperbolic function; (b) delay and probability discounting were positively correlated within subjects; (c) delay and probability discounting were moderately correlated with personality measures of impulsivity; and (d) alcohol had no effect on discounting. PMID:10220927
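    The hyperbolic function referred to in (a) has the standard form V = A/(1 + kD) for delay discounting, and V = A/(1 + hθ) for probability discounting, where θ = (1 − p)/p is the odds against receiving the reward. A short sketch with an assumed discount rate (k = 0.05 is illustrative, not a fitted value from the study):

```python
def hyperbolic_delay_value(amount, delay, k):
    # V = A / (1 + k*D): subjective value of a reward delayed by D
    return amount / (1.0 + k * delay)

def hyperbolic_probability_value(amount, p, h):
    # theta = (1 - p) / p is the odds against receipt; V = A / (1 + h*theta)
    theta = (1.0 - p) / p
    return amount / (1.0 + h * theta)

# $10 at the study's delays, with an assumed discount rate k = 0.05 per day
delays = [0, 2, 30, 180, 365]
values = [hyperbolic_delay_value(10.0, d, k=0.05) for d in delays]
```

Fitting k (or h) to each participant's indifference points yields an individual discounting rate; steeper fitted curves (larger k) are the conventional index of greater impulsivity in this paradigm.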

  19. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.

  20. Attentional load and implicit sequence learning.

    PubMed

    Shanks, David R; Rowland, Lee A; Ranger, Mandeep S

    2005-06-01

    A widely employed conceptualization of implicit learning hypothesizes that it makes minimal demands on attentional resources. This conjecture was investigated by comparing learning under single-task and dual-task conditions in the sequential reaction time (SRT) task. Participants learned probabilistic sequences, with dual-task participants additionally having to perform a counting task using stimuli that were targets in the SRT display. Both groups were then tested for sequence knowledge under single-task (Experiments 1 and 2) or dual-task (Experiment 3) conditions. Participants also completed a free generation task (Experiments 2 and 3) under inclusion or exclusion conditions to determine if sequence knowledge was conscious or unconscious in terms of its access to intentional control. The experiments revealed that the secondary task impaired sequence learning and that sequence knowledge was consciously accessible. These findings disconfirm both the notion that implicit learning proceeds normally under conditions of divided attention and the notion that the acquired knowledge is inaccessible to consciousness. A unitary framework for conceptualizing implicit and explicit learning is proposed.

  1. Task-related functional connectivity of the caudate mediates the association between trait mindfulness and implicit learning in older adults.

    PubMed

    Stillman, Chelsea M; You, Xiaozhen; Seaman, Kendra L; Vaidya, Chandan J; Howard, James H; Howard, Darlene V

    2016-08-01

    Accumulating evidence shows a positive relationship between mindfulness and explicit cognitive functioning, i.e., that which occurs with conscious intent and awareness. However, recent evidence suggests that there may be a negative relationship between mindfulness and implicit types of learning, or those that occur without conscious awareness or intent. Here we examined the neural mechanisms underlying the recently reported negative relationship between dispositional mindfulness and implicit probabilistic sequence learning in both younger and older adults. We tested the hypothesis that the relationship is mediated by communication, or functional connectivity, of brain regions traditionally considered central to dissociable learning systems: the caudate, medial temporal lobe (MTL), and prefrontal cortex (PFC). We first replicated the negative relationship between mindfulness and implicit learning in a sample of healthy older adults (60-90 years old) who completed three event-related runs of an implicit sequence learning task. Then, using a seed-based connectivity approach, we identified task-related connectivity associated with individual differences in both learning and mindfulness. The main finding was that caudate-MTL connectivity (bilaterally) was positively correlated with learning and negatively correlated with mindfulness. Further, the strength of task-related connectivity between these regions mediated the negative relationship between mindfulness and learning. This pattern of results was limited to the older adults. Thus, at least in healthy older adults, the functional communication between two interactive learning-relevant systems can account for the relationship between mindfulness and implicit probabilistic sequence learning.
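
    The mediation result reported in this record (connectivity statistically accounting for the mindfulness-learning link) follows the standard indirect-effect decomposition. A minimal sketch on synthetic data, assuming a simple Baron-Kenny-style linear mediation; the variable names and effect sizes below are invented for illustration, not taken from the study:

```python
import numpy as np

def mediation(x, m, y):
    """Indirect effect = a*b, where a is the x->m slope and b is the
    m->y slope controlling for x; c' is the remaining direct effect."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    b, c_prime = coef[2], coef[1]
    return a * b, c_prime

rng = np.random.default_rng(3)
x = rng.normal(size=500)                          # e.g. trait mindfulness
m = -0.8 * x + rng.normal(scale=0.5, size=500)    # e.g. caudate-MTL connectivity
y = 0.9 * m + rng.normal(scale=0.5, size=500)     # e.g. implicit learning score
indirect, direct = mediation(x, m, y)
# Theoretical indirect effect here is -0.8 * 0.9 = -0.72; the direct
# effect is zero by construction (full mediation).
print(round(indirect, 2), round(direct, 2))
```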

  2. Iowa Gambling Task (IGT): twenty years after – gambling disorder and IGT

    PubMed Central

    Brevers, Damien; Bechara, Antoine; Cleeremans, Axel; Noël, Xavier

    2013-01-01

    The Iowa Gambling Task (IGT) involves probabilistic learning via monetary rewards and punishments, where advantageous task performance requires subjects to forego potential large immediate rewards for small longer-term rewards to avoid larger losses. Pathological gamblers (PG) perform worse on the IGT compared to controls, relating to their persistent preference toward high, immediate, and uncertain rewards despite experiencing larger losses. In this contribution, we review studies that investigated processes associated with poor IGT performance in PG. Findings from these studies seem to fit with recent neurocognitive models of addiction, which argue that the diminished ability of addicted individuals to ponder short-term against long-term consequences of a choice may be the product of a hyperactive automatic attentional and memory system for signaling the presence of addiction-related cues (e.g., high uncertain rewards associated with disadvantageous deck selection during the IGT) and for attributing to such cues pleasure and excitement. This incentive-salience associated with gambling-related choice in PG may be so high that it could literally “hijack” resources [“hot” executive functions (EFs)] involved in emotional self-regulation and necessary to allow the enactment of further elaborate decontextualized problem-solving abilities (“cool” EFs). A framework for future research is also proposed, which highlights the need for studies examining how these processes contribute specifically to the aberrant choice profile displayed by PG on the IGT. PMID:24137138

  3. Building a high-resolution T2-weighted MR-based probabilistic model of tumor occurrence in the prostate.

    PubMed

    Nagarajan, Mahesh B; Raman, Steven S; Lo, Pechin; Lin, Wei-Chan; Khoshnoodi, Pooria; Sayre, James W; Ramakrishna, Bharath; Ahuja, Preeti; Huang, Jiaoti; Margolis, Daniel J A; Lu, David S K; Reiter, Robert E; Goldin, Jonathan G; Brown, Matthew S; Enzmann, Dieter R

    2018-02-19

    We present a method for generating a T2 MR-based probabilistic model of tumor occurrence in the prostate to guide the selection of anatomical sites for targeted biopsies and serve as a diagnostic tool to aid radiological evaluation of prostate cancer. In our study, the prostate and any radiological findings within were segmented retrospectively on 3D T2-weighted MR images of 266 subjects who underwent radical prostatectomy. Subsequent histopathological analysis determined both the ground truth and the Gleason grade of the tumors. A randomly chosen subset of 19 subjects was used to generate a multi-subject-derived prostate template. Subsequently, a cascading registration algorithm involving both affine and non-rigid B-spline transforms was used to register the prostate of every subject to the template. Corresponding transformation of radiological findings yielded a population-based probabilistic model of tumor occurrence. The quality of our probabilistic model building approach was statistically evaluated by measuring the proportion of correct placements of tumors in the prostate template, i.e., the number of tumors that maintained their anatomical location within the prostate after their transformation into the prostate template space. The probabilistic model built with tumors deemed clinically significant demonstrated a heterogeneous distribution of tumors, with higher likelihood of tumor occurrence at the mid-gland anterior transition zone and the base-to-mid-gland posterior peripheral zones. Of 250 MR lesions analyzed, 248 maintained their original anatomical location with respect to the prostate zones after transformation to the prostate template space. We present a robust method for generating a probabilistic model of tumor occurrence in the prostate that could aid clinical decision making, such as selection of anatomical sites for MR-guided prostate biopsies.

  4. The development of categorization: effects of classification and inference training on category representation.

    PubMed

    Deng, Wei Sophia; Sloutsky, Vladimir M

    2015-03-01

    Does category representation change in the course of development? And if so, how and why? The current study attempted to answer these questions by examining category learning and category representation. In Experiment 1, 4-year-olds, 6-year-olds, and adults were trained with either a classification task or an inference task and their categorization performance and memory for items were tested. Adults and 6-year-olds exhibited an important asymmetry: they relied on a single deterministic feature during classification training, but not during inference training. In contrast, regardless of the training condition, 4-year-olds relied on multiple probabilistic features. In Experiment 2, 4-year-olds were presented with classification training and their attention was explicitly directed to the deterministic feature. Under this condition, their categorization performance was similar to that of older participants in Experiment 1, yet their memory performance pointed to a similarity-based representation, which was similar to that of 4-year-olds in Experiment 1. These results are discussed in relation to theories of categorization and the role of selective attention in the development of category learning. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  5. 78 FR 15746 - Compendium of Analyses To Investigate Select Level 1 Probabilistic Risk Assessment End-State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... NUCLEAR REGULATORY COMMISSION [NRC-2013-0047] Compendium of Analyses To Investigate Select Level 1...) has issued for public comment a document entitled: Compendium of Analyses to Investigate Select Level... begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search...

  6. Feature selection using probabilistic prediction of support vector regression.

    PubMed

    Yang, Jian-Bo; Ong, Chong-Jin

    2011-06-01

    This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating the difference, over the feature space, of the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison to several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with notable advantage when the dataset is sparse.
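
    The paper's importance measure aggregates differences between the SVR's predictive densities with and without each feature. A simplified numpy-only sketch of the wrapper idea, substituting ordinary least squares for probabilistic SVR and a mean squared prediction shift for the density difference (so this illustrates the scheme, not the paper's exact measure):

```python
import numpy as np

def feature_importance(X, y):
    """Wrapper-style importance: for each feature, refit the model
    without it and score how much the predictions shift."""
    n, d = X.shape
    full = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    scores = []
    for j in range(d):
        Xr = np.delete(X, j, axis=1)
        reduced = Xr @ np.linalg.lstsq(Xr, y, rcond=None)[0]
        # mean squared change in prediction when feature j is dropped
        scores.append(np.mean((full - reduced) ** 2))
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=200)
scores = feature_importance(X, y)
print(scores.argmax())  # → 0 (the dominant feature)
```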

  7. Probabilistic analysis of the torsional effects on the tall building resistance due to earthquake event

    NASA Astrophysics Data System (ADS)

    Králik, Juraj; Králik, Juraj

    2017-07-01

    The paper presents the results of the deterministic and probabilistic analysis of the accidental torsional effect of reinforced concrete tall buildings due to an earthquake event. The core-column structural system was considered with various configurations in plan. The methodology of the seismic analysis of building structures in Eurocode 8 and JCSS 2000 is discussed. The possibility of utilizing the LHS method to analyze extensive and robust FEM tasks is presented. The influence of various input parameters (material, geometry, soil, masses and others) is considered. The deterministic and probabilistic analyses of the seismic resistance of the structure were calculated in the ANSYS program.
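
    The LHS (Latin hypercube sampling) method mentioned above stratifies each input variable so that even a small sample covers every part of each marginal distribution, which is what makes it attractive for expensive FEM runs. A minimal sketch on the unit hypercube:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Latin hypercube sample on [0, 1)^d: each variable's range is split
    into n_samples equal strata, one point is drawn per stratum, and the
    strata are independently shuffled across variables."""
    rng = np.random.default_rng(rng)
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

pts = latin_hypercube(10, 4, rng=42)
# Every column has exactly one point per decile stratum.
print(np.all(np.sort((pts * 10).astype(int), axis=0) == np.arange(10)[:, None]))
```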

  8. Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller.

    PubMed

    Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin

    2014-06-01

    Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language model and (d) dynamic stopping. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influences of the involved components (a)-(d) are investigated. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP applications of BCI.

  9. Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller

    NASA Astrophysics Data System (ADS)

    Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin

    2014-06-01

    Objective. Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language model and (d) dynamic stopping. Approach. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influences of the involved components (a)-(d) are investigated. Main results. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. Significance. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP applications of BCI.

  10. Dopamine neurons learn relative chosen value from probabilistic rewards

    PubMed Central

    Lak, Armin; Stauffer, William R; Schultz, Wolfram

    2016-01-01

    Economic theories posit reward probability as one of the factors defining reward value. Individuals learn the value of cues that predict probabilistic rewards from experienced reward frequencies. Building on the notion that responses of dopamine neurons increase with reward probability and expected value, we asked how dopamine neurons in monkeys acquire this value signal that may represent an economic decision variable. We found in a Pavlovian learning task that reward probability-dependent value signals arose from experienced reward frequencies. We then assessed neuronal response acquisition during choices among probabilistic rewards. Here, dopamine responses became sensitive to the value of both chosen and unchosen options. Both experiments also showed novelty responses of dopamine neurons that decreased as learning advanced. These results show that dopamine neurons acquire predictive value signals from the frequency of experienced rewards. This flexible and fast signal reflects a specific decision variable and could update neuronal decision mechanisms. DOI: http://dx.doi.org/10.7554/eLife.18044.001 PMID:27787196
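
    The acquisition of probability-dependent value signals from experienced reward frequencies is commonly modeled with a prediction-error (Rescorla-Wagner) update, which is one standard reading of dopamine findings like these. A minimal sketch; the learning rate and trial counts are illustrative choices, not the study's parameters:

```python
import random

def learn_cue_value(p_reward, alpha=0.1, trials=4000, seed=1):
    """Delta-rule update V <- V + alpha*(r - V). With a probabilistic
    reward, V fluctuates around the experienced reward frequency; we
    report its average over the second half of training."""
    random.seed(seed)
    v, tail = 0.0, []
    for t in range(trials):
        r = 1.0 if random.random() < p_reward else 0.0
        v += alpha * (r - v)        # reward prediction error drives learning
        if t >= trials // 2:
            tail.append(v)
    return sum(tail) / len(tail)

for p in (0.25, 0.5, 0.75):
    print(p, round(learn_cue_value(p), 2))
```

With enough trials the learned value tracks the cue's reward probability, i.e. its expected value.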

  11. Frontostriatal white matter integrity mediates adult age differences in probabilistic reward learning.

    PubMed

    Samanez-Larkin, Gregory R; Levens, Sara M; Perry, Lee M; Dougherty, Robert F; Knutson, Brian

    2012-04-11

    Frontostriatal circuits have been implicated in reward learning, and emerging findings suggest that frontal white matter structural integrity and probabilistic reward learning are reduced in older age. This cross-sectional study examined whether age differences in frontostriatal white matter integrity could account for age differences in reward learning in a community life span sample of human adults. By combining diffusion tensor imaging with a probabilistic reward learning task, we found that older age was associated with decreased reward learning and decreased white matter integrity in specific pathways running from the thalamus to the medial prefrontal cortex and from the medial prefrontal cortex to the ventral striatum. Further, white matter integrity in these thalamocorticostriatal paths could statistically account for age differences in learning. These findings suggest that the integrity of frontostriatal white matter pathways critically supports reward learning. The findings also raise the possibility that interventions that bolster frontostriatal integrity might improve reward learning and decision making.

  12. A simulation-based probabilistic design method for arctic sea transport systems

    NASA Astrophysics Data System (ADS)

    Bergström, Martin; Erikstad, Stein Ove; Ehlers, Sören

    2016-12-01

    When designing an arctic cargo ship, it is necessary to consider multiple stochastic factors. This paper evaluates the merits of a simulation-based probabilistic design method specifically developed to deal with this challenge. The outcome of the paper indicates that the incorporation of simulations and probabilistic design parameters into the design process enables more informed design decisions. For instance, it enables the assessment of the stochastic transport capacity of an arctic ship, as well as of its long-term ice exposure that can be used to determine an appropriate level of ice-strengthening. The outcome of the paper also indicates that significant gains in transport system cost-efficiency can be obtained by extending the boundaries of the design task beyond the individual vessel. In the case of industrial shipping, this allows for instance the consideration of port-based cargo storage facilities allowing for temporary shortages in transport capacity and thus a reduction in the required fleet size / ship capacity.

  13. Perceptual learning as improved probabilistic inference in early sensory areas.

    PubMed

    Bejjanki, Vikranth R; Beck, Jeffrey M; Lu, Zhong-Lin; Pouget, Alexandre

    2011-05-01

    Extensive training on simple tasks such as fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous models have argued that perceptual learning is due to either sharpening and amplification of tuning curves in early visual areas or to improved probabilistic inference in later visual areas (at the decision stage). However, early theories are inconsistent with the conclusions of psychophysical experiments manipulating external noise, whereas late theories cannot explain the changes in neural responses that have been reported in cortical areas V1 and V4. Here we show that we can capture both the neurophysiological and behavioral aspects of perceptual learning by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas. The resulting network shows modest changes in tuning curves, in line with neurophysiological reports, along with a marked reduction in the amplitude of pairwise noise correlations.

  14. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios which best represent probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture by minimizing the error between hazard curves driven by the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking into account probabilistic seismic hazard for Tehran city as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could reduce the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes but requires far less computation power. The authors have used this approach for risk assessment towards identifying the effectiveness and profitability of risk mitigation measures, using an optimization model for resource allocation. Based on the error-computation trade-off, 62 earthquake scenarios were chosen for this purpose.
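
    The core idea, matching a full hazard curve with a small reweighted subset of events, can be illustrated with a greedy stand-in for the paper's mixed-integer program. The synthetic intensities, rates, and thresholds below are invented for illustration:

```python
import numpy as np

def hazard_curve(intensities, rates, thresholds):
    """Annual rate of exceeding each ground-motion threshold."""
    return np.array([rates[intensities >= t].sum() for t in thresholds])

def greedy_reduction(intensities, rates, thresholds, k):
    """Greedy forward selection of k scenarios whose rescaled hazard
    curve best matches the full curve (a simple stand-in for the MILP)."""
    target = hazard_curve(intensities, rates, thresholds)
    chosen = []
    for _ in range(k):
        step_best, step_err = None, np.inf
        for i in range(len(rates)):
            if i in chosen:
                continue
            idx = np.array(chosen + [i])
            scale = rates.sum() / rates[idx].sum()   # conserve total event rate
            err = np.linalg.norm(
                hazard_curve(intensities[idx], rates[idx] * scale, thresholds)
                - target)
            if err < step_err:
                step_best, step_err = i, err
        chosen.append(step_best)
    return chosen, step_err

rng = np.random.default_rng(7)
intensities = rng.lognormal(0.0, 0.8, 300)   # synthetic peak ground motions
rates = rng.uniform(1e-4, 1e-3, 300)         # synthetic annual event rates
thresholds = np.quantile(intensities, np.linspace(0.05, 0.95, 15))
subset, err = greedy_reduction(intensities, rates, thresholds, 25)
full = np.linalg.norm(hazard_curve(intensities, rates, thresholds))
print(len(subset), err < 0.5 * full)
```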

  15. SIMRAND I- SIMULATION OF RESEARCH AND DEVELOPMENT PROJECTS

    NASA Technical Reports Server (NTRS)

    Miles, R. F.

    1994-01-01

    The Simulation of Research and Development Projects program (SIMRAND) aids in the optimal allocation of R&D resources needed to achieve project goals. SIMRAND models the system subsets or project tasks as various network paths to a final goal. Each path is described in terms of task variables such as cost per hour, cost per unit, availability of resources, etc. Uncertainty is incorporated by treating task variables as probabilistic random variables. SIMRAND calculates the measure of preference for each alternative network. The networks yielding the highest utility function (or certainty equivalent) are then ranked as the optimal network paths. SIMRAND has been used in several economic potential studies at NASA's Jet Propulsion Laboratory involving solar dish power systems and photovoltaic array construction. However, any project having tasks which can be reduced to equations and related by measures of preference can be modeled. SIMRAND analysis consists of three phases: reduction, simulation, and evaluation. In the reduction phase, analytical techniques from probability theory and simulation techniques are used to reduce the complexity of the alternative networks. In the simulation phase, a Monte Carlo simulation is used to derive statistics on the variables of interest for each alternative network path. In the evaluation phase, the simulation statistics are compared and the networks are ranked in preference by a selected decision rule. The user must supply project subsystems in terms of equations based on variables (for example, parallel and series assembly line tasks in terms of number of items, cost factors, time limits, etc). The associated cumulative distribution functions and utility functions for each variable must also be provided (allowable upper and lower limits, group decision factors, etc). SIMRAND is written in Microsoft FORTRAN 77 for batch execution and has been implemented on an IBM PC series computer operating under DOS.
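
    The simulation and evaluation phases, Monte Carlo draws over each alternative network followed by ranking under a decision rule, can be sketched as follows. This is a toy reconstruction, not SIMRAND itself: task costs are uniform random variables combined in series, and the decision rule is an exponential-utility certainty equivalent of total cost (lower is better).

```python
import math
import random

def simulate_network(task_costs, n_draws=5000, seed=0):
    """Monte Carlo phase: total cost of a network whose tasks are in
    series, each cost drawn from a (low, high) uniform distribution."""
    random.seed(seed)
    return [sum(random.uniform(lo, hi) for lo, hi in task_costs)
            for _ in range(n_draws)]

def rank_networks(networks, risk_aversion=0.01):
    """Evaluation phase: rank alternatives by the certainty equivalent
    of random cost under an exponential utility."""
    scored = []
    for name, tasks in networks.items():
        draws = simulate_network(tasks)
        ce = math.log(sum(math.exp(risk_aversion * c) for c in draws)
                      / len(draws)) / risk_aversion
        scored.append((ce, name))
    return sorted(scored)

networks = {
    "path_A": [(10, 20), (5, 15)],   # mean total cost ~25
    "path_B": [(12, 30), (8, 20)],   # mean total cost ~35
}
print(rank_networks(networks)[0][1])  # → path_A
```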

  16. Better Learning with More Error: Probabilistic Feedback Increases Sensitivity to Correlated Cues in Categorization

    ERIC Educational Resources Information Center

    Little, Daniel R.; Lewandowsky, Stephan

    2009-01-01

    Despite the fact that categories are often composed of correlated features, the evidence that people detect and use these correlations during intentional category learning has been overwhelmingly negative to date. Nonetheless, on other categorization tasks, such as feature prediction, people show evidence of correlational sensitivity. A…

  17. Sonification and Visualization of Predecisional Information Search: Identifying Toolboxes in Children

    ERIC Educational Resources Information Center

    Betsch, Tilmann; Wünsche, Kirsten; Großkopf, Armin; Schröder, Klara; Stenmans, Rachel

    2018-01-01

    Prior evidence has suggested that preschoolers and elementary schoolers search information largely with no systematic plan when making decisions in probabilistic environments. However, this finding might be due to the insensitivity of standard classification methods that assume a lack of variance in decision strategies for tasks of the same kind.…

  18. Assessing Logo Programming among Jordanian Seventh Grade Students through Turtle Geometry

    ERIC Educational Resources Information Center

    Khasawneh, Amal A.

    2009-01-01

    The present study is concerned with assessing Logo programming experiences among seventh grade students. A formal multiple-choice test and five performance tasks were used to collect data. The results showed that students' performance was better than the score expected by probabilistic laws, and a very low correlation between their Logo…

  19. Reducing Probabilistic Weather Forecasts to the Worst-Case Scenario: Anchoring Effects

    ERIC Educational Resources Information Center

    Joslyn, Susan; Savelli, Sonia; Nadav-Greenberg, Limor

    2011-01-01

    Many weather forecast providers believe that forecast uncertainty in the form of the worst-case scenario would be useful for general public end users. We tested this suggestion in 4 studies using realistic weather-related decision tasks involving high winds and low temperatures. College undergraduates, given the statistical equivalent of the…

  20. Spatial Working Memory Interferes with Explicit, but Not Probabilistic Cuing of Spatial Attention

    ERIC Educational Resources Information Center

    Won, Bo-Yeong; Jiang, Yuhong V.

    2015-01-01

    Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal…

  1. Developmental and Gender Related Differences in Response Switches after Nonrepresentative Negative Feedback

    ERIC Educational Resources Information Center

    Jansen, Brenda R. J.; van Duijvenvoorde, Anna C. K.; Huizenga, Hilde M.

    2014-01-01

    In many decision making tasks negative feedback is probabilistic and, as a consequence, may be given when the decision is actually correct. This feedback can be referred to as nonrepresentative negative feedback. In the current study, we investigated developmental and gender related differences in such switching after nonrepresentative negative…

  2. Automated Test Case Generator for Phishing Prevention Using Generative Grammars and Discriminative Methods

    ERIC Educational Resources Information Center

    Palka, Sean

    2015-01-01

    This research details a methodology designed for creating content in support of various phishing prevention tasks including live exercises and detection algorithm research. Our system uses probabilistic context-free grammars (PCFG) and variable interpolation as part of a multi-pass method to create diverse and consistent phishing email content on…

  3. Probabilistic self-localisation on a qualitative map based on occlusions

    NASA Astrophysics Data System (ADS)

    Santos, Paulo E.; Martins, Murilo F.; Fenelon, Valquiria; Cozman, Fabio G.; Dee, Hannah M.

    2016-09-01

    Spatial knowledge plays an essential role in human reasoning, permitting tasks such as locating objects in the world (including oneself), reasoning about everyday actions and describing perceptual information. This is also the case in the field of mobile robotics, where one of the most basic (and essential) tasks is the autonomous determination of the pose of a robot with respect to a map, given its perception of the environment. This is the problem of robot self-localisation (or simply the localisation problem). This paper presents a probabilistic algorithm for robot self-localisation that is based on a topological map constructed from the observation of spatial occlusion. Distinct locations on the map are defined by means of a classical formalism for qualitative spatial reasoning, whose base definitions are closer to the human categorisation of space than traditional, numerical, localisation procedures. The approach herein proposed was systematically evaluated through experiments using a mobile robot equipped with an RGB-D sensor. The results obtained show that the localisation algorithm is successful in locating the robot in qualitatively distinct regions.
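
    Probabilistic self-localisation over a topological map is, at its core, a discrete Bayes filter over qualitative regions. A minimal sketch with three hypothetical regions, an invented motion model, and an invented occlusion-likelihood vector:

```python
def predict(belief, transition):
    """Motion update: belief'_j = sum_i T[i][j] * belief_i."""
    n = len(belief)
    return [sum(transition[i][j] * belief[i] for i in range(n)) for j in range(n)]

def correct(belief, likelihood):
    """Observation update (Bayes rule): posterior ∝ prior × likelihood."""
    post = [b * l for b, l in zip(belief, likelihood)]
    z = sum(post)
    return [p / z for p in post]

# Three qualitative regions distinguished by their occlusion signatures.
belief = [1/3, 1/3, 1/3]
transition = [[0.8, 0.2, 0.0],    # mostly stay put, sometimes move on
              [0.0, 0.8, 0.2],
              [0.2, 0.0, 0.8]]
likelihood = [0.1, 0.2, 0.7]      # observed occlusion pattern fits region 2 best
belief = correct(predict(belief, transition), likelihood)
print(belief.index(max(belief)))  # → 2
```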

  4. Impaired risk evaluation in people with Internet gaming disorder: fMRI evidence from a probability discounting task.

    PubMed

    Lin, Xiao; Zhou, Hongli; Dong, Guangheng; Du, Xiaoxia

    2015-01-02

    This study examined how Internet gaming disorder (IGD) subjects modulate reward and risk at the neural level during a probability-discounting task, using functional magnetic resonance imaging (fMRI). Behavioral and imaging data were collected from 19 IGD subjects (22.2 ± 3.08 years) and 21 healthy controls (HC, 22.8 ± 3.5 years). Behavioral results showed that IGD subjects preferred the probabilistic options to fixed ones and showed shorter reaction times compared to HC. The fMRI results revealed that IGD subjects showed decreased activation in the inferior frontal gyrus and the precentral gyrus when choosing the probabilistic options compared to HC. Correlations were also calculated between behavioral performance and brain activity in relevant brain regions. Both the behavioral performance and the fMRI results indicate that people with IGD show impaired risk evaluation, which might be the reason why IGD subjects continue playing online games despite the risk of widely known negative consequences. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Constrained dictionary learning and probabilistic hypergraph ranking for person re-identification

    NASA Astrophysics Data System (ADS)

    He, You; Wu, Song; Pu, Nan; Qian, Li; Xiao, Guoqiang

    2018-04-01

    Person re-identification is a fundamental and inevitable task in public security. In this paper, we propose a novel framework to improve the performance of this task. First, two different types of descriptors are extracted to represent a pedestrian: (1) appearance-based superpixel features, which consist mainly of conventional color features and are extracted from superpixels rather than the whole picture, and (2) deep features extracted by a feature fusion network, used because of the limited discrimination of appearance features. Second, a view-invariant subspace is learned by dictionary learning constrained by the minimum negative sample (termed DL-cMN) to reduce the noise in the appearance-based superpixel feature domain. Then, we use the deep features and the sparse codes transformed from the appearance-based features to establish hyperedges separately by k-nearest neighbors, rather than simply concatenating different features. Finally, a final ranking is performed by a probabilistic hypergraph ranking algorithm. Extensive experiments on three challenging datasets (VIPeR, PRID450S and CUHK01) demonstrate the advantages and effectiveness of our proposed algorithm.

  6. Dopamine D3 Receptor Availability Is Associated with Inflexible Decision Making.

    PubMed

    Groman, Stephanie M; Smith, Nathaniel J; Petrullli, J Ryan; Massi, Bart; Chen, Lihui; Ropchan, Jim; Huang, Yiyun; Lee, Daeyeol; Morris, Evan D; Taylor, Jane R

    2016-06-22

    Dopamine D2/3 receptor signaling is critical for flexible adaptive behavior; however, it is unclear whether D2, D3, or both receptor subtypes modulate precise signals of feedback and reward history that underlie optimal decision making. Here, PET with the radioligand [(11)C]-(+)-PHNO was used to quantify individual differences in putative D3 receptor availability in rodents trained on a novel three-choice spatial acquisition and reversal-learning task with probabilistic reinforcement. Binding of [(11)C]-(+)-PHNO in the midbrain was negatively related to the ability of rats to adapt to changes in rewarded locations, but not to the initial learning. Computational modeling of choice behavior in the reversal phase indicated that [(11)C]-(+)-PHNO binding in the midbrain was related to the learning rate and sensitivity to positive, but not negative, feedback. Administration of a D3-preferring agonist likewise impaired reversal performance by reducing the learning rate and sensitivity to positive feedback. These results demonstrate a previously unrecognized role for D3 receptors in select aspects of reinforcement learning and suggest that individual variation in midbrain D3 receptors influences flexible behavior. Our combined neuroimaging, behavioral, pharmacological, and computational approach implicates the dopamine D3 receptor in decision-making processes that are altered in psychiatric disorders. Flexible decision-making behavior is dependent upon dopamine D2/3 signaling in corticostriatal brain regions. However, the role of D3 receptors in adaptive, goal-directed behavior has not been thoroughly investigated. By combining PET imaging with the D3-preferring radioligand [(11)C]-(+)-PHNO, pharmacology, a novel three-choice probabilistic discrimination and reversal task and computational modeling of behavior in rats, we report that naturally occurring variation in [(11)C]-(+)-PHNO receptor availability relates to specific aspects of flexible decision making. 
We confirm these relationships using a D3-preferring agonist, thus identifying a unique role of midbrain D3 receptors in decision-making processes. Copyright © 2016 the authors 0270-6474/16/366732-10$15.00/0.
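    The computational modeling referred to in this record, a learning rate plus asymmetric sensitivity to positive versus negative feedback, can be sketched with a simple reinforcement-learning agent on a three-choice probabilistic task. Parameter names, values, and reward probabilities below are illustrative, not the authors' fitted model:

```python
import math, random

def softmax(q, beta):
    z = [math.exp(beta * v) for v in q]
    s = sum(z)
    return [v / s for v in z]

def run_agent(reward_probs, trials, alpha_pos, alpha_neg, beta, seed=0):
    """Q-learning with separate learning rates for positive and negative
    prediction errors on a three-choice probabilistic task."""
    rng = random.Random(seed)
    q = [0.0] * len(reward_probs)
    best_choices = 0
    for _ in range(trials):
        probs = softmax(q, beta)
        r, c = rng.random(), 0.0
        choice = len(probs) - 1            # fallback for float rounding
        for i, p in enumerate(probs):
            c += p
            if r < c:
                choice = i
                break
        reward = 1.0 if rng.random() < reward_probs[choice] else 0.0
        delta = reward - q[choice]         # prediction error
        q[choice] += (alpha_pos if delta > 0 else alpha_neg) * delta
        best_choices += choice == 0
    return best_choices / trials

# option 0 pays off 80% of the time; a healthy learner should prefer it
rate = run_agent([0.8, 0.2, 0.2], trials=500,
                 alpha_pos=0.3, alpha_neg=0.3, beta=3.0)
print(round(rate, 2))
```

Lowering alpha_pos in such a model mimics the reduced sensitivity to positive feedback that the record associates with higher D3 availability.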

  7. Reduced activation in ventral striatum and ventral tegmental area during probabilistic decision-making in schizophrenia.

    PubMed

    Rausch, Franziska; Mier, Daniela; Eifler, Sarah; Esslinger, Christine; Schilling, Claudia; Schirmbeck, Frederike; Englisch, Susanne; Meyer-Lindenberg, Andreas; Kirsch, Peter; Zink, Mathias

    2014-07-01

    Patients with schizophrenia suffer from deficits in monitoring and controlling their own thoughts. Within these so-called metacognitive impairments, alterations in probabilistic reasoning might be one cognitive phenomenon disposing to delusions. However, so far little is known about alterations in associated brain functionality. A previously established task for functional magnetic resonance imaging (fMRI), which requires a probabilistic decision after a variable amount of stimuli, was applied to 23 schizophrenia patients and 28 healthy controls matched for age, gender and educational levels. We compared activation patterns during decision-making under conditions of certainty versus uncertainty and evaluated the process of final decision-making in ventral striatum (VS) and ventral tegmental area (VTA). We replicated a pre-described extended cortical activation pattern during probabilistic reasoning. During final decision-making, activations in several fronto- and parietocortical areas, as well as in VS and VTA became apparent. In both of these regions schizophrenia patients showed a significantly reduced activation. These results further define the network underlying probabilistic decision-making. The observed hypo-activation in regions commonly associated with dopaminergic neurotransmission fits into current concepts of disrupted prediction error signaling in schizophrenia and suggests functional links to reward anticipation. Forthcoming studies with patients at risk for psychosis and drug-naive first episode patients are necessary to elucidate the development of these findings over time and the interplay with associated clinical symptoms. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Latent Profile Analysis of Schizotypy and Paranormal Belief: Associations with Probabilistic Reasoning Performance

    PubMed Central

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew

    2018-01-01

    This study assessed the extent to which within-individual variation in schizotypy and paranormal belief influenced performance on probabilistic reasoning tasks. A convenience sample of 725 non-clinical adults completed measures assessing schizotypy (Oxford-Liverpool Inventory of Feelings and Experiences; O-Life brief), belief in the paranormal (Revised Paranormal Belief Scale; RPBS) and probabilistic reasoning (perception of randomness, conjunction fallacy, paranormal perception of randomness, and paranormal conjunction fallacy). Latent profile analysis (LPA) identified four distinct groups: class 1, low schizotypy and low paranormal belief (43.9% of sample); class 2, moderate schizotypy and moderate paranormal belief (18.2%); class 3, moderate schizotypy (high cognitive disorganization) and low paranormal belief (29%); and class 4, moderate schizotypy and high paranormal belief (8.9%). Identification of homogeneous classes provided a nuanced understanding of the relative contribution of schizotypy and paranormal belief to differences in probabilistic reasoning performance. Multivariate analysis of covariance revealed that groups with lower levels of paranormal belief (classes 1 and 3) performed significantly better on perception of randomness, but not conjunction problems. Schizotypy had only a negligible effect on performance. Further analysis indicated that framing perception of randomness and conjunction problems in a paranormal context facilitated performance for all groups but class 4. PMID:29434562
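    Latent profile analysis fits a finite mixture to the indicator variables and assigns each respondent to the most probable class. A toy, one-indicator, two-class EM fit sketches the mechanics; real LPA uses several indicators, more classes, and information criteria to choose the class count:

```python
import math, random

def em_two_classes(x, iters=60):
    """Toy 1-D, two-class Gaussian mixture fit by EM; a stand-in for the
    multi-indicator latent profile analysis in the record."""
    mu = [min(x), max(x)]          # initialize means at the data extremes
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each class for each observation
        resp = []
        for xi in x:
            d = [w[k] / sd[k] * math.exp(-(xi - mu[k]) ** 2 / (2 * sd[k] ** 2))
                 for k in range(2)]
            s = sum(d)
            resp.append([dk / s for dk in d])
        # M-step: re-estimate weights, means and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            sd[k] = max(1e-3, math.sqrt(
                sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk))
    return mu, w

rng = random.Random(2)
data = ([rng.gauss(1.0, 0.3) for _ in range(40)]
        + [rng.gauss(5.0, 0.3) for _ in range(40)])
mu, w = em_two_classes(data)
print(sorted(round(m, 1) for m in mu))
```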

  9. Latent Profile Analysis of Schizotypy and Paranormal Belief: Associations with Probabilistic Reasoning Performance.

    PubMed

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew

    2018-01-01

    This study assessed the extent to which within-individual variation in schizotypy and paranormal belief influenced performance on probabilistic reasoning tasks. A convenience sample of 725 non-clinical adults completed measures assessing schizotypy (Oxford-Liverpool Inventory of Feelings and Experiences; O-Life brief), belief in the paranormal (Revised Paranormal Belief Scale; RPBS) and probabilistic reasoning (perception of randomness, conjunction fallacy, paranormal perception of randomness, and paranormal conjunction fallacy). Latent profile analysis (LPA) identified four distinct groups: class 1, low schizotypy and low paranormal belief (43.9% of sample); class 2, moderate schizotypy and moderate paranormal belief (18.2%); class 3, moderate schizotypy (high cognitive disorganization) and low paranormal belief (29%); and class 4, moderate schizotypy and high paranormal belief (8.9%). Identification of homogeneous classes provided a nuanced understanding of the relative contribution of schizotypy and paranormal belief to differences in probabilistic reasoning performance. Multivariate analysis of covariance revealed that groups with lower levels of paranormal belief (classes 1 and 3) performed significantly better on perception of randomness, but not conjunction problems. Schizotypy had only a negligible effect on performance. Further analysis indicated that framing perception of randomness and conjunction problems in a paranormal context facilitated performance for all groups but class 4.

  10. Probabilistic Simulation of Multi-Scale Composite Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2012-01-01

    A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties, laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented into the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data of composite laminate behavior at all scales fall within the scatter predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and the probabilistic design of a composite redome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the redome reliability by several orders of magnitude without increasing the laminate thickness--a unique feature of structural composites. The age of the underlying reference indicates that nothing fundamental has changed since that time.
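    The Monte Carlo baseline against which PICAN is compared amounts to sampling constituent properties and propagating them through a micromechanics relation. A minimal sketch using the rule-of-mixtures longitudinal modulus; all means and scatters below are invented for illustration:

```python
import random, statistics

def mc_ply_modulus(n=5000, seed=42):
    """Monte Carlo propagation of constituent scatter to a ply-level
    property via rule-of-mixtures E1 = Vf*Ef + (1 - Vf)*Em."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        ef = rng.gauss(230e9, 0.05 * 230e9)               # fiber modulus, Pa
        em = rng.gauss(3.5e9, 0.10 * 3.5e9)               # matrix modulus, Pa
        vf = min(0.75, max(0.45, rng.gauss(0.60, 0.02)))  # fiber volume fraction
        samples.append(vf * ef + (1 - vf) * em)
    return statistics.mean(samples), statistics.stdev(samples)

mean_e1, sd_e1 = mc_ply_modulus()
print(f"E1 = {mean_e1 / 1e9:.1f} GPa, 1-sigma scatter = {sd_e1 / 1e9:.1f} GPa")
```

Closed-form probabilistic methods like PICAN aim to reproduce such distributions at a fraction of this sampling cost.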

  11. Design of Probabilistic Random Forests with Applications to Anticancer Drug Sensitivity Prediction

    PubMed Central

    Rahman, Raziur; Haider, Saad; Ghosh, Souparno; Pal, Ranadip

    2015-01-01

    Random forests consisting of an ensemble of regression trees with equal weights are frequently used for design of predictive models. In this article, we consider an extension of the methodology by representing the regression trees in the form of probabilistic trees and analyzing the nature of heteroscedasticity. The probabilistic tree representation allows for analytical computation of confidence intervals (CIs), and the tree weight optimization is expected to provide stricter CIs with comparable performance in mean error. We approached the ensemble of probabilistic trees’ prediction from the perspectives of a mixture distribution and as a weighted sum of correlated random variables. We applied our methodology to the drug sensitivity prediction problem on synthetic and Cancer Cell Line Encyclopedia datasets and illustrated that tree weights can be selected to reduce the average length of the CI without an increase in mean error. PMID:27081304
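    The mixture-distribution view of the ensemble is what makes the confidence intervals analytical: if each tree contributes a predictive mean and variance, the weighted mixture's mean and variance follow from the law of total variance. A minimal sketch with toy numbers, not the paper's trees:

```python
def mixture_mean_var(means, variances, weights):
    """Mean and variance of a weighted mixture of per-tree predictive
    distributions (law of total variance). In the paper each probabilistic
    tree supplies leaf-level means and variances."""
    s = sum(weights)
    w = [wi / s for wi in weights]
    mu = sum(wi * mi for wi, mi in zip(w, means))
    var = sum(wi * (vi + (mi - mu) ** 2)
              for wi, mi, vi in zip(w, means, variances))
    return mu, var

# up-weighting the lower-variance tree shrinks the mixture variance,
# which is what tightens the confidence interval
print(mixture_mean_var([1.0, 3.0], [0.5, 2.0], [1, 1]))  # (2.0, 2.25)
print(mixture_mean_var([1.0, 3.0], [0.5, 2.0], [3, 1]))  # (1.5, 1.625)
```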

  12. From information processing to decisions: Formalizing and comparing psychologically plausible choice models.

    PubMed

    Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten

    2017-08-01

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can be compared directly using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
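    The contrast between take-the-best and weighted-additive can be made concrete on binary cue profiles. The deterministic versions below are a sketch; the probabilistic TTB in this record additionally assigns a rank order of error probabilities to the cues:

```python
def take_the_best(cues_a, cues_b, validity_order):
    """Deterministic TTB: inspect cues in order of validity and decide on
    the first cue that discriminates; guess if none does."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "guess"

def weighted_additive(cues_a, cues_b, weights):
    # integrate all cues; the sign of the weighted difference decides
    score = sum(w * (a - b) for w, a, b in zip(weights, cues_a, cues_b))
    return "A" if score > 0 else "B" if score < 0 else "guess"

a, b = (1, 0, 0), (0, 1, 1)                              # binary cue profiles
print(take_the_best(a, b, validity_order=[0, 1, 2]))     # "A": cue 0 decides
print(weighted_additive(a, b, weights=[0.5, 0.4, 0.4]))  # "B": sum favors b
```

Cases like this, where the two strategies disagree, are what allow strategy classification from choice data.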

  13. Differential theory of learning for efficient neural network pattern recognition

    NASA Astrophysics Data System (ADS)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-09-01

    We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  14. Differential theory of learning for efficient neural network pattern recognition

    NASA Astrophysics Data System (ADS)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-08-01

    We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  15. Fully Automated Prostate Magnetic Resonance Imaging and Transrectal Ultrasound Fusion via a Probabilistic Registration Metric.

    PubMed

    Sparks, Rachel; Bloch, B Nicolas; Feleppa, Ernest; Barratt, Dean; Madabhushi, Anant

    2013-03-08

    In this work, we present a novel, automated, registration method to fuse magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) images of the prostate. Our methodology consists of: (1) delineating the prostate on MRI, (2) building a probabilistic model of prostate location on TRUS, and (3) aligning the MRI prostate segmentation to the TRUS probabilistic model. TRUS-guided needle biopsy is the current gold standard for prostate cancer (CaP) diagnosis. Up to 40% of CaP lesions appear isoechoic on TRUS, hence TRUS-guided biopsy cannot reliably target CaP lesions and is associated with a high false negative rate. MRI is better able to distinguish CaP from benign prostatic tissue, but requires special equipment and training. MRI-TRUS fusion, whereby MRI is acquired pre-operatively and aligned to TRUS during the biopsy procedure, allows for information from both modalities to be used to help guide the biopsy. The use of MRI and TRUS in combination to guide biopsy at least doubles the yield of positive biopsies. Previous work on MRI-TRUS fusion has involved aligning manually determined fiducials or prostate surfaces to achieve image registration. The accuracy of these methods is dependent on the reader's ability to determine fiducials or prostate surfaces with minimal error, which is a difficult and time-consuming task. Our novel, fully automated MRI-TRUS fusion method represents a significant advance over the current state-of-the-art because it does not require manual intervention after TRUS acquisition. All necessary preprocessing steps (i.e. delineation of the prostate on MRI) can be performed offline prior to the biopsy procedure. We evaluated our method on seven patient studies, with B-mode TRUS and a 1.5 T surface coil MRI. Our method has a root mean square error (RMSE) for expertly selected fiducials (consisting of the urethra, calcifications, and the centroids of CaP nodules) of 3.39 ± 0.85 mm.

  16. Misfortune may be a blessing in disguise: Fairness perception and emotion modulate decision making.

    PubMed

    Liu, Hong-Hsiang; Hwang, Yin-Dir; Hsieh, Ming H; Hsu, Yung-Fong; Lai, Wen-Sung

    2017-08-01

    Fairness perception and equality during social interactions frequently elicit affective arousal and affect decision making. By integrating the dictator game and a probabilistic gambling task, this study aimed to investigate the effects of a negative experience induced by perceived unfairness on decision making using behavioral, model fitting, and electrophysiological approaches. Participants were randomly assigned to the neutral, harsh, or kind groups, which consisted of various asset allocation scenarios to induce different levels of perceived unfairness. The monetary gain was subsequently considered the initial asset in a negatively rewarded, probabilistic gambling task in which the participants were instructed to maintain as much asset as possible. Our behavioral results indicated that the participants in the harsh group exhibited increased levels of negative emotions but retained greater total game scores than the participants in the other two groups. Parameter estimation of a reinforcement learning model using a Bayesian approach indicated that these participants were more loss aversive and consistent in decision making. Data from simultaneous ERP recordings further demonstrated that these participants exhibited larger feedback-related negativity to unexpected outcomes in the gambling task, which suggests enhanced reward sensitivity and signaling of reward prediction error. Collectively, our study suggests that a negative experience may be an advantage in the modulation of reward-based decision making. © 2017 Society for Psychophysiological Research.

  17. Decision-making in healthy children, adolescents and adults explained by the use of increasingly complex proportional reasoning rules.

    PubMed

    Huizenga, Hilde M; Crone, Eveline A; Jansen, Brenda J

    2007-11-01

    In the standard Iowa Gambling Task (IGT), participants have to choose repeatedly from four options. Each option is characterized by a constant gain, and by the frequency and amount of a probabilistic loss. Crone and van der Molen (2004) reported that school-aged children and even adolescents show marked deficits in IGT performance. In this study, we have re-analyzed the data with a multivariate normal mixture analysis to show that these developmental changes can be explained by a shift from unidimensional to multidimensional proportional reasoning (Siegler, 1981; Jansen & van der Maas, 2002). More specifically, the results show a gradual shift with increasing age from (a) guessing with a slight tendency to consider frequency of loss to (b) focusing on frequency of loss, to (c) considering both frequency and amount of probabilistic loss. In the latter case, participants only considered options with low-frequency loss and then chose the option with the lowest amount of loss. Performance improved in a reversed task, in which punishment was placed up front and gain was delivered unexpectedly. In this reversed task, young children are guessing with already a slight tendency to consider both the frequency and amount of gain; this strategy becomes more pronounced with age. We argue that these findings have important implications for the interpretation of IGT performance, as well as for methods to analyze this performance.

  18. Perspectives of Probabilistic Inferences: Reinforcement Learning and an Adaptive Network Compared

    ERIC Educational Resources Information Center

    Rieskamp, Jorg

    2006-01-01

    The assumption that people possess a strategy repertoire for inferences has been raised repeatedly. The strategy selection learning theory specifies how people select strategies from this repertoire. The theory assumes that individuals select strategies proportional to their subjective expectations of how well the strategies solve particular…

  19. TASK-RELATED FUNCTIONAL CONNECTIVITY OF THE CAUDATE MEDIATES THE ASSOCIATION BETWEEN TRAIT MINDFULNESS AND IMPLICIT LEARNING IN OLDER ADULTS

    PubMed Central

    Stillman, Chelsea M.; You, Xiaozhen; Seaman, Kendra L.; Vaidya, Chandan J.; Howard, James H.; Howard, Darlene V.

    2016-01-01

    Accumulating evidence shows a positive relationship between mindfulness and explicit cognitive functioning, i.e., that which occurs with conscious intent and awareness. However, recent evidence suggests that there may be a negative relationship between mindfulness and implicit types of learning, or those that occur without conscious awareness or intent. Here we examined the neural mechanisms underlying the recently reported negative relationship between dispositional mindfulness and implicit probabilistic sequence learning in both younger and older adults. We tested the hypothesis that the relationship is mediated by communication, or functional connectivity, of brain regions once traditionally considered to be central to dissociable learning systems: the caudate, medial temporal lobe (MTL), and prefrontal cortex (PFC). We first replicated the negative relationship between mindfulness and implicit learning in a sample of healthy older adults (60–90 years old) who completed three event-related runs of an implicit sequence learning task. Then, using a seed-based connectivity approach, we identified task-related connectivity associated with individual differences in both learning and mindfulness. The main finding was that caudate-MTL connectivity (bilaterally) was positively correlated with learning and negatively correlated with mindfulness. Further, the strength of task-related connectivity between these regions mediated the negative relationship between mindfulness and learning. This pattern of results was limited to the older adults. Thus, at least in healthy older adults, the functional communication between two interactive learning-relevant systems can account for the relationship between mindfulness and implicit probabilistic sequence learning. PMID:27121302

  20. Application of the LEPS technique for Quantitative Precipitation Forecasting (QPF) in Southern Italy: a preliminary study

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Walko, R. L.

    2006-03-01

    This paper reports preliminary results for a Limited area model Ensemble Prediction System (LEPS), based on RAMS (Regional Atmospheric Modelling System), for eight case studies of moderate-intense precipitation over Calabria, the southernmost tip of the Italian peninsula. LEPS aims to transfer the benefits of a probabilistic forecast from global to regional scales in countries where local orographic forcing is a key factor to force convection. To accomplish this task and to limit computational time in an operational implementation of LEPS, we perform a cluster analysis of ECMWF-EPS runs. Starting from the 51 members that form the ECMWF-EPS we generate five clusters. For each cluster a representative member is selected and used to provide initial and dynamic boundary conditions to RAMS, whose integrations generate LEPS. RAMS runs have 12-km horizontal resolution. To analyze the impact of enhanced horizontal resolution on quantitative precipitation forecasts, LEPS forecasts are compared to a full Brute Force (BF) ensemble. This ensemble is based on RAMS, has 36 km horizontal resolution and is generated by 51 members, nested in each ECMWF-EPS member. LEPS and BF results are compared subjectively and by objective scores. Subjective analysis is based on precipitation and probability maps of case studies whereas objective analysis is made by deterministic and probabilistic scores. Scores and maps are calculated by comparing ensemble precipitation forecasts against reports from the Calabria regional raingauge network. Results show that LEPS provided better rainfall predictions than BF for all case studies selected. This strongly suggests the importance of the enhanced horizontal resolution, compared to ensemble population, for Calabria for these cases. To further explore the impact of local physiographic features on QPF (Quantitative Precipitation Forecasting), LEPS results are also compared with a 6-km horizontal resolution deterministic forecast. 
Due to local and mesoscale forcing, the high resolution forecast (Hi-Res) has better performance compared to the ensemble mean for rainfall thresholds larger than 10 mm, but it tends to overestimate precipitation for lower amounts. This yields larger false alarms that have a detrimental effect on objective scores for lower thresholds. To exploit the advantages of a probabilistic forecast compared to a deterministic one, the relation between the ECMWF-EPS 700 hPa geopotential height spread and LEPS performance is analyzed. Results are promising even if additional studies are required.
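    The probabilistic scores used to rank LEPS against BF operate on ensemble-derived event probabilities. A minimal sketch of an exceedance probability and the Brier score; member values and thresholds below are illustrative:

```python
def exceedance_prob(members_mm, threshold_mm):
    # fraction of ensemble members forecasting rain above the threshold
    return sum(m > threshold_mm for m in members_mm) / len(members_mm)

def brier_score(forecast_probs, observed):
    """Mean squared difference between forecast probability and the 0/1
    outcome; lower is better. A standard probabilistic verification score
    for ensembles such as LEPS vs. BF."""
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, observed)) \
        / len(forecast_probs)

members = [2.0, 12.0, 15.0, 8.0, 30.0]  # one site's 5-member forecast, mm
p10 = exceedance_prob(members, 10.0)    # 3 of 5 members exceed 10 mm -> 0.6
print(p10)
print(brier_score([p10, 0.2], [1, 0]))  # event observed at site 1 only
```

In practice such scores are aggregated over all raingauge sites and case studies before comparing ensembles.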

  1. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H. W.; Kurth, R. E.

    1991-01-01

    The work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods is described. Three methods were implemented in the engine system influence model. RASCAL was chosen as the principal method, as most component load models were implemented with it. Validation of RASCAL was performed; accuracy comparable to the Monte Carlo method can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, whether a real flight or a test, has three mission phases: the engine start transient phase, the steady-state phase, and the engine cutoff transient phase. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is for engine transient phase calculations. In addition, a few generic probabilistic load models were developed for specific conditions: the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads under specific conditions. For SSME components, turbine blades, transfer ducts, the LOX post, and the high-pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static and dynamic pressure loads for all four components, centrifugal force for the turbine blade, temperatures or thermal loads for all four components, and structural vibration loads for the ducts and LOX posts.

  2. Probabilistic simulation of the human factor in structural reliability

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1993-01-01

    A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainties of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results obtained show that the uncertainties range from five to thirty percent for the most optimistic case, assuming 100 percent for no error.

  3. Probabilistic simulation of the human factor in structural reliability

    NASA Astrophysics Data System (ADS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-09-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).

  4. Probabilistic micromechanics for metal matrix composites

    NASA Astrophysics Data System (ADS)

    Engelstad, S. P.; Reddy, J. N.; Hopkins, Dale A.

    A probabilistic micromechanics-based nonlinear analysis procedure is developed to predict and quantify the variability in the properties of high temperature metal matrix composites. Monte Carlo simulation is used to model the probabilistic distributions of the constituent level properties, including fiber, matrix, and interphase properties, volume and void ratios, strengths, fiber misalignment, and nonlinear empirical parameters. The procedure predicts the resultant ply properties and quantifies their statistical scatter. Graphite/copper and silicon carbide/titanium aluminide (SCS-6/Ti-15) unidirectional plies are considered to demonstrate the predictive capabilities. The procedure is believed to have a high potential for use in material characterization and selection to precede and assist in experimental studies of new high temperature metal matrix composites.

  5. Probabilistic Simulation of the Human Factor in Structural Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Singhal, Surendra N.

    1994-01-01

    The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
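    The MFIE referred to in these records is usually written as a product of factor terms, each raised to an empirical exponent. A minimal sketch of that product form; factor values and exponents are invented for illustration, not taken from the cited work:

```python
def mfie_ratio(factors):
    """Multifactor interaction equation in its usual product form:
    each factor contributes [(A_F - A) / (A_F - A0)] ** n, where A_F is
    the limiting value, A0 the reference value, and n an empirical
    exponent. Values below are invented for illustration."""
    ratio = 1.0
    for a_final, a, a_ref, n in factors:
        ratio *= ((a_final - a) / (a_final - a_ref)) ** n
    return ratio

# a factor at its reference condition leaves the property ratio at 1.0;
# moving it toward the limiting value A_F degrades the ratio
print(mfie_ratio([(1.0, 0.0, 0.0, 0.5)]))  # 1.0
print(mfie_ratio([(1.0, 0.5, 0.0, 0.5)]))  # ~0.707
```

In the human-factor studies above, each primitive variable (work load, health, etc.) would supply one such term, and the parametric studies amount to sweeping A and n over judged ranges.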

  6. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR)

    PubMed Central

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-01-01

    Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. 
Conclusion This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy. PMID:17543100

  7. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR).

    PubMed

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-06-01

    Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. 
This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy.
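The deliberate cluster-selection step described above can be sketched numerically: pick the combination of clusters whose pooled distribution of known determinants best resembles the city-wide distribution. The sketch below is illustrative only — the census profiles, cluster sizes, and the choice of four clusters are invented, and the distance criterion (sum of absolute differences between the pooled cluster profile and the overall profile of literacy and nationality) is one plausible formalization, not the authors' actual procedure.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical census data: for each of 12 candidate clusters, the
# proportion of residents who are literate and who hold Lao nationality.
clusters = rng.uniform(0.3, 0.9, size=(12, 2))              # columns: [literacy, nationality]
sizes = rng.integers(500, 2000, size=12)                    # cluster populations
city_profile = np.average(clusters, axis=0, weights=sizes)  # city-wide distribution

def profile_distance(combo):
    """Distance between the pooled profile of a cluster combination
    and the city-wide profile (smaller is better)."""
    pooled = np.average(clusters[list(combo)], axis=0, weights=sizes[list(combo)])
    return np.abs(pooled - city_profile).sum()

# Exhaustive search over all 4-cluster combinations: pick the one whose
# pooled determinant distribution best resembles the whole city.
best = min(itertools.combinations(range(12), 4), key=profile_distance)
print("selected clusters:", best, "distance:", round(profile_distance(best), 4))
```

With only 12 candidate clusters an exhaustive search is cheap; for larger problems a greedy or simulated-annealing search over combinations would serve the same purpose.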

  8. The Effects of Aging on the Neural Basis of Implicit Associative Learning in a Probabilistic Triplets Learning Task

    ERIC Educational Resources Information Center

    Simon, Jessica R.; Vaidya, Chandan J.; Howard, James H., Jr.; Howard, Darlene V.

    2012-01-01

    Few studies have investigated how aging influences the neural basis of implicit associative learning, and available evidence is inconclusive. One emerging behavioral pattern is that age differences increase with practice, perhaps reflecting the involvement of different brain regions with training. Many studies report hippocampal involvement early…

  9. Personal Experiences and Beliefs in Early Probabilistic Reasoning: Implications for Research

    ERIC Educational Resources Information Center

    Sharma, Sashi

    2005-01-01

    The data reported in this paper are part of a larger study which explored form five (14- to 16-year-old) students' ideas in probability and statistics. This paper presents and discusses the ways in which students made sense of a task involving the independence construct, drawing on data obtained from the individual interviews. The findings revealed that many of the students…

  10. A Joint Probabilistic Classification Model of Relevant and Irrelevant Sentences in Mathematical Word Problems

    ERIC Educational Resources Information Center

    Cetintas, Suleyman; Si, Luo; Xin, Yan Ping; Zhang, Dake; Park, Joo Young; Tzur, Ron

    2010-01-01

    Estimating the difficulty level of math word problems is an important task for many educational applications. Identification of relevant and irrelevant sentences in math word problems is an important step for calculating the difficulty levels of such problems. This paper addresses a novel application of text categorization to identify two types of…

  11. Probabilistic Orthographic Cues to Grammatical Category in the Brain

    ERIC Educational Resources Information Center

    Arciuli, Joanne; McMahon, Katie; de Zubicaray, Greig

    2012-01-01

    What helps us determine whether a word is a noun or a verb, without conscious awareness? We report on cues in the way individual English words are spelled, and, for the first time, identify their neural correlates via functional magnetic resonance imaging (fMRI). We used a lexical decision task with trisyllabic nouns and verbs containing…

  12. What Can Student Work Show? From Playing a Game to Exploring Probability Theory

    ERIC Educational Resources Information Center

    Taylor, Merilyn; Hawera, Ngarewa

    2016-01-01

    Rich learning tasks embedded within a familiar context allow students to work like mathematicians while making sense of the mathematics. This article demonstrates how 11-12 year-old students were able to employ all of the proficiency strands while demonstrating a deep understanding of some of the "big ideas" of probabilistic thinking.

  13. Glucocorticoid Regulation of Food-Choice Behavior in Humans: Evidence from Cushing's Syndrome

    PubMed Central

    Moeller, Scott J.; Couto, Lizette; Cohen, Vanessa; Lalazar, Yelena; Makotkine, Iouri; Williams, Nia; Yehuda, Rachel; Goldstein, Rita Z.; Geer, Eliza B.

    2016-01-01

    The mechanisms by which glucocorticoids regulate food intake and resulting body mass in humans are not well-understood. One potential mechanism could involve modulation of reward processing, but human stress models examining effects of glucocorticoids on behavior contain important confounds. Here, we studied individuals with Cushing's syndrome, a rare endocrine disorder characterized by chronic excess endogenous glucocorticoids. Twenty-three patients with Cushing's syndrome (13 with active disease; 10 with disease in remission) and 15 controls with a comparably high body mass index (BMI) completed two simulated food-choice tasks (one with “explicit” task contingencies and one with “probabilistic” task contingencies), during which they indicated their objective preference for viewing high calorie food images vs. standardized pleasant, unpleasant, and neutral images. All participants also completed measures of food craving, and approximately half of the participants provided 24-h urine samples for assessment of cortisol and cortisone concentrations. Results showed that on the explicit task (but not the probabilistic task), participants with active Cushing's syndrome made fewer food-related choices than participants with Cushing's syndrome in remission, who in turn made fewer food-related choices than overweight controls. Corroborating this group effect, higher urine cortisone was negatively correlated with food-related choice in the subsample of all participants for whom these data were available. On the probabilistic task, despite a lack of group differences, higher food-related choice correlated with higher state and trait food craving in active Cushing's patients. Taken together, relative to overweight controls, Cushing's patients, particularly those with active disease, displayed a reduced vigor of responding for food rewards that was presumably attributable to glucocorticoid abnormalities. 
Beyond Cushing's, these results may have relevance for elucidating glucocorticoid contributions to food-seeking behavior, enhancing mechanistic understanding of weight fluctuations associated with oral glucocorticoid therapy and/or chronic stress, and informing the neurobiology of neuropsychiatric conditions marked by abnormal cortisol dynamics (e.g., major depression, Alzheimer's disease). PMID:26903790

  14. Taking a gamble or playing by the rules: Dissociable prefrontal systems implicated in probabilistic versus deterministic rule-based decisions

    PubMed Central

    Bhanji, Jamil P.; Beer, Jennifer S.; Bunge, Silvia A.

    2014-01-01

    A decision may be difficult because complex information processing is required to evaluate choices according to deterministic decision rules and/or because it is not certain which choice will lead to the best outcome in a probabilistic context. Factors that tax decision making such as decision rule complexity and low decision certainty should be disambiguated for a more complete understanding of the decision making process. Previous studies have examined the brain regions that are modulated by decision rule complexity or by decision certainty but have not examined these factors together in the context of a single task or study. In the present functional magnetic resonance imaging study, both decision rule complexity and decision certainty were varied in comparable decision tasks. Further, the level of certainty about which choice to make (choice certainty) was varied separately from certainty about the final outcome resulting from a choice (outcome certainty). Lateral prefrontal cortex, dorsal anterior cingulate cortex, and bilateral anterior insula were modulated by decision rule complexity. Anterior insula was engaged more strongly by low than high choice certainty decisions, whereas ventromedial prefrontal cortex showed the opposite pattern. These regions showed no effect of the independent manipulation of outcome certainty. The results disambiguate the influence of decision rule complexity, choice certainty, and outcome certainty on activity in diverse brain regions that have been implicated in decision making. Lateral prefrontal cortex plays a key role in implementing deterministic decision rules, ventromedial prefrontal cortex in probabilistic rules, and anterior insula in both. PMID:19781652

  15. Predictive coarse-graining

    NASA Astrophysics Data System (ADS)

    Schöberl, Markus; Zabaras, Nicholas; Koutsourelakis, Phaedon-Stelios

    2017-03-01

    We propose a data-driven, coarse-graining formulation in the context of equilibrium statistical mechanics. In contrast to existing techniques which are based on a fine-to-coarse map, we adopt the opposite strategy by prescribing a probabilistic coarse-to-fine map. This corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale (all-atom) data. From an information-theoretic perspective, the framework proposed provides an improvement upon the relative entropy method [1] and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the coarse-graining process. Furthermore, it can be readily extended to a fully Bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. The latter can be used to produce not only point estimates of fine-scale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. Predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarse-graining. The issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarse-grained model. A flexible and parallelizable Monte Carlo - Expectation-Maximization (MC-EM) scheme is proposed for carrying out inference and learning tasks. A comparative assessment of the proposed methodology is presented for a lattice spin system and the SPC/E water model.

  16. Predictive coarse-graining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöberl, Markus, E-mail: m.schoeberl@tum.de; Zabaras, Nicholas; Department of Aerospace and Mechanical Engineering, University of Notre Dame, 365 Fitzpatrick Hall, Notre Dame, IN 46556

    We propose a data-driven, coarse-graining formulation in the context of equilibrium statistical mechanics. In contrast to existing techniques which are based on a fine-to-coarse map, we adopt the opposite strategy by prescribing a probabilistic coarse-to-fine map. This corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale (all-atom) data. From an information-theoretic perspective, the framework proposed provides an improvement upon the relative entropy method and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the coarse-graining process. Furthermore, it can be readily extended to a fully Bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. The latter can be used to produce not only point estimates of fine-scale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. Predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarse-graining. The issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarse-grained model. A flexible and parallelizable Monte Carlo – Expectation–Maximization (MC-EM) scheme is proposed for carrying out inference and learning tasks. A comparative assessment of the proposed methodology is presented for a lattice spin system and the SPC/E water model.

  17. Probabilistic Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Strack, William C.; Nagpal, Vinod K.

    2010-01-01

    PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method, at the component level, and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine their effects on the stress state within each component. Geometric variations include chord length and height for the blade, and inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using developing software packages such as System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
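The component-level response surface step can be illustrated with a toy surrogate workflow: evaluate an expensive deterministic model at a small design of experiments, fit a cheap polynomial response surface, then run Monte Carlo on the surrogate. Everything here is hypothetical — `fea_stress` stands in for an FEA run, and the quadratic basis and input scatter are assumptions, not PRODAF's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def fea_stress(chord, load):
    """Stand-in for a deterministic FEA run (hypothetical closed form)."""
    return 200.0 * load / chord + 5.0 * chord**2

# 1) Evaluate the expensive model at a small design of experiments.
chords = rng.uniform(0.9, 1.1, 30)
loads  = rng.uniform(0.8, 1.2, 30)
y = fea_stress(chords, loads)

# 2) Fit a quadratic response surface by least squares.
X = np.column_stack([np.ones(30), chords, loads,
                     chords**2, loads**2, chords * loads])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def surrogate(c, l):
    return np.column_stack([np.ones_like(c), c, l, c**2, l**2, c * l]) @ coef

# 3) Cheap Monte Carlo on the surrogate: propagate input scatter to stress.
c = rng.normal(1.0, 0.02, 100_000)
l = rng.normal(1.0, 0.05, 100_000)
s = surrogate(c, l)
print("P(stress > 230) ~", (s > 230.0).mean())
```

The surrogate replaces thousands of FEA calls with a single least-squares fit; the exceedance probability is then a one-line array reduction.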

  18. fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization

    NASA Astrophysics Data System (ADS)

    Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda

    2010-03-01

    Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.

  19. Heuristic and analytic processing: age trends and associations with cognitive ability and cognitive styles.

    PubMed

    Kokis, Judite V; Macpherson, Robyn; Toplak, Maggie E; West, Richard F; Stanovich, Keith E

    2002-09-01

    Developmental and individual differences in the tendency to favor analytic responses over heuristic responses were examined in children of two different ages (10- and 11-year-olds versus 13-year-olds), and of widely varying cognitive ability. Three tasks were examined that all required analytic processing to override heuristic processing: inductive reasoning, deductive reasoning under conditions of belief bias, and probabilistic reasoning. Significant increases in analytic responding with development were observed on the first two tasks. Cognitive ability was associated with analytic responding on all three tasks. Cognitive style measures such as actively open-minded thinking and need for cognition explained variance in analytic responding on the tasks after variance shared with cognitive ability had been controlled. The implications for dual-process theories of cognition and cognitive development are discussed.

  20. Probabilistic Common Spatial Patterns for Multichannel EEG Analysis

    PubMed Central

    Chen, Zhe; Gao, Xiaorong; Li, Yuanqing; Brown, Emery N.; Gao, Shangkai

    2015-01-01

    Common spatial patterns (CSP) is a well-known spatial filtering algorithm for multichannel electroencephalogram (EEG) analysis. In this paper, we cast the CSP algorithm in a probabilistic modeling setting. Specifically, probabilistic CSP (P-CSP) is proposed as a generic EEG spatio-temporal modeling framework that subsumes the CSP and regularized CSP algorithms. The proposed framework enables us to resolve the overfitting issue of CSP in a principled manner. We derive statistical inference algorithms that can alleviate the issue of local optima. In particular, an efficient algorithm based on eigendecomposition is developed for maximum a posteriori (MAP) estimation in the case of isotropic noise. For more general cases, a variational algorithm is developed for group-wise sparse Bayesian learning for the P-CSP model and for automatically determining the model size. The two proposed algorithms are validated on a simulated data set. Their practical efficacy is also demonstrated by successful applications to single-trial classifications of three motor imagery EEG data sets and by the spatio-temporal pattern analysis of one EEG data set recorded in a Stroop color naming task. PMID:26005228
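For contrast with the probabilistic P-CSP framework, the classic CSP algorithm it subsumes can be written in a few lines: whiten the composite covariance, then eigendecompose one class's covariance in the whitened space. This is a generic textbook sketch, not the paper's P-CSP code, and the toy data are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Classic (non-probabilistic) CSP: spatial filters that maximize the
    variance ratio between two classes, via whitening + eigendecomposition."""
    def avg_cov(trials):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    d, E = np.linalg.eigh(Ca + Cb)           # whiten the composite covariance
    P = (E / np.sqrt(d)).T
    _, U = np.linalg.eigh(P @ Ca @ P.T)      # eigenvectors sorted by eigenvalue
    W = U.T @ P                              # rows are spatial filters
    return np.vstack([W[:n_pairs], W[-n_pairs:]])

# Toy data: 4 channels; class A has extra variance on channel 0, class B on channel 3.
def make_trials(scale):
    return [np.diag(scale) @ rng.standard_normal((4, 200)) for _ in range(20)]

W = csp_filters(make_trials([3.0, 1.0, 1.0, 1.0]),
                make_trials([1.0, 1.0, 1.0, 3.0]))
print(W.shape)   # one filter per end of the eigenvalue spectrum
```

The overfitting the paper addresses shows up here when few trials are available: the sample covariances, and hence the filters, become unstable — which is what the regularized and Bayesian variants are designed to control.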

  1. Tracking Virus Particles in Fluorescence Microscopy Images Using Multi-Scale Detection and Multi-Frame Association.

    PubMed

    Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl

    2015-11-01

    Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
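The Kalman filter backbone of such a probabilistic tracker can be sketched for a single particle with a constant-velocity motion model (a common choice; the paper's exact model and noise settings are not given here, so the matrices below are illustrative).

```python
import numpy as np

# Constant-velocity Kalman filter for one particle (state: x, y, vx, vy).
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # only position is observed
Q = 0.01 * np.eye(4)                                # process noise (assumed)
R = 0.25 * np.eye(2)                                # detection noise (assumed)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the detection z associated to this track
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for t in range(1, 20):                 # particle moving at (1, 0.5) px/frame
    z = np.array([t * 1.0, t * 0.5])
    x, P = kalman_step(x, P, z)
print(np.round(x[2:], 2))              # velocity estimate approaches (1.0, 0.5)
```

In the full tracker, the multi-frame association step decides which detection `z` belongs to which track before this update is applied.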

  2. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    PubMed

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, no established method exists for eliminating noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, this work compares the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, whose data were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). 
The false-positive error of the last obtained depiction was also significantly lower in probabilistic than in deterministic tracking (p < 0.001). The HCP data yielded significantly better results in terms of the Dice coefficient in probabilistic tracking (p < 0.001, Mann-Whitney U-test) and in deterministic tracking (p = 0.02). The false-positive errors were smaller in HCP data in deterministic tracking (p < 0.001) and showed a strong trend toward significance in probabilistic tracking (p = 0.06). In the clinical cases, the probabilistic method visualized 7 of 10 attempted CNs accurately, compared with 3 correct depictions with deterministic tracking. CONCLUSIONS High angular resolution DTI scans are preferable for the DTI-based depiction of the cranial nerves. Probabilistic tracking with a gradual PICo threshold increase is more effective for this task than the previously described deterministic tracking with a gradual FA threshold increase and might represent a method that is useful for depicting cranial nerves with DTI since it eliminates the erroneous fibers without manual intervention.
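The gradual threshold increase idea transfers directly to code: raise the PICo threshold step by step until no erroneous (false-positive) voxels remain, then report the overlap with ground truth. The synthetic map below is invented for illustration; real PICo maps come from probabilistic tractography.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for a probabilistic index of connectivity (PICo) map:
# high values along the true nerve, low-level noise elsewhere.
truth = np.zeros((40, 40), bool)
truth[18:22, :] = True                       # "ground truth" nerve mask
pico = rng.uniform(0, 0.3, (40, 40))         # spurious streamline support
pico[truth] = rng.uniform(0.5, 1.0, truth.sum())

def false_positive_error(mask, truth):
    return (mask & ~truth).sum() / max(mask.sum(), 1)

def dice(mask, truth):
    return 2 * (mask & truth).sum() / (mask.sum() + truth.sum())

# Gradually raise the PICo threshold until the depiction is free of
# erroneous voxels (the paper's idea, transplanted from FA thresholds).
for thr in np.arange(0.05, 1.0, 0.05):
    mask = pico >= thr
    if false_positive_error(mask, truth) == 0:
        break
print(f"threshold={thr:.2f}  Dice={dice(mask, truth):.2f}")
```

In the paper the same stopping rule runs on FA for deterministic tracking and on PICo for probabilistic tracking, and the final depictions are compared via exactly these two overlap measures.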

  3. Spared internal but impaired external reward prediction error signals in major depressive disorder during reinforcement learning.

    PubMed

    Bakic, Jasmina; Pourtois, Gilles; Jepma, Marieke; Duprat, Romain; De Raedt, Rudi; Baeken, Chris

    2017-01-01

    Major depressive disorder (MDD) creates debilitating effects on a wide range of cognitive functions, including reinforcement learning (RL). In this study, we sought to assess whether reward processing as such, or alternatively the complex interplay between motivation and reward might potentially account for the abnormal reward-based learning in MDD. A total of 35 treatment resistant MDD patients and 44 age matched healthy controls (HCs) performed a standard probabilistic learning task. RL was characterized using behavioral data, computational modeling, and event-related brain potentials (ERPs). MDD patients showed learning rates comparable to those of HCs. However, they showed decreased lose-shift responses as well as blunted subjective evaluations of the reinforcers used during the task, relative to HCs. Moreover, MDD patients showed normal internal (at the level of error-related negativity, ERN) but abnormal external (at the level of feedback-related negativity, FRN) reward prediction error (RPE) signals during RL, selectively when additional efforts had to be made to establish learning. Collectively, these results lend support to the assumption that MDD does not impair reward processing per se during RL. Instead, it seems to alter the processing of the emotional value of (external) reinforcers during RL, when additional intrinsic motivational processes have to be engaged. © 2016 Wiley Periodicals, Inc.
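A standard probabilistic learning task of this kind is usually modeled with a delta-rule learner whose teaching signal is the reward prediction error. A minimal Rescorla-Wagner sketch follows; the reward probabilities and learning rate are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Rescorla-Wagner learner on a probabilistic learning task:
# stimulus A is rewarded 80% of the time, stimulus B 20%.
reward_prob = {"A": 0.8, "B": 0.2}
value = {"A": 0.0, "B": 0.0}
alpha = 0.1                       # learning rate

for _ in range(500):
    stim = rng.choice(["A", "B"])
    reward = float(rng.random() < reward_prob[stim])
    rpe = reward - value[stim]    # reward prediction error
    value[stim] += alpha * rpe    # value update driven by the RPE

print({k: round(v, 2) for k, v in value.items()})  # near {'A': 0.8, 'B': 0.2}
```

In computational-modeling analyses like the one above, the fitted learning rate is the quantity on which the MDD and HC groups were compared, and the trial-by-trial RPE is the regressor typically related to the FRN.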

  4. Developmental Change in Feedback Processing as Reflected by Phasic Heart Rate Changes

    ERIC Educational Resources Information Center

    Crone, Eveline A.; Jennings, J. Richard; Van der Molen, Maurits W.

    2004-01-01

    Heart rate was recorded from 3 age groups (8-10, 12, and 20-26 years) while they performed a probabilistic learning task. Stimuli had to be sorted by pressing a left versus right key, followed by positive or negative feedback. Adult heart rate slowed following negative feedback when stimuli were consistently mapped onto the left or right key…

  5. A probabilistic model of debris-flow delivery to stream channels, demonstrated for the Coast Range of Oregon, USA

    Treesearch

    Daniel J. Miller; Kelly M. Burnett

    2008-01-01

    Debris flows are important geomorphic agents in mountainous terrains that shape channel environments and add a dynamic element to sediment supply and channel disturbance. Identification of channels susceptible to debris-flow inputs of sediment and organic debris, and quantification of the likelihood and magnitude of those inputs, are key tasks for characterizing...

  6. Counterfactually Mediated Emotions: A Developmental Study of Regret and Relief in a Probabilistic Gambling Task

    ERIC Educational Resources Information Center

    Habib, M.; Cassotti, M.; Borst, G.; Simon, G.; Pineau, A.; Houde, O.; Moutier, S.

    2012-01-01

    Regret and relief are related to counterfactual thinking and rely on comparison processes between what has been and what might have been. In this article, we study the development of regret and relief from late childhood to adulthood (11.2-20.2 years), and we examine how these two emotions affect individuals' willingness to retrospectively…

  7. Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data

    NASA Astrophysics Data System (ADS)

    Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei

    2009-03-01

    We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consists of some conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model both matches the experimental data and is consistent with classical probability theory.
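The paper's point — that folding the probabilities of selecting PBS settings into one joint distribution yields a single, complete probability space — can be illustrated with a toy computation. The ±0.7 correlations are hypothetical stand-ins for singlet-state values; the construction, not the numbers, is what matters.

```python
import itertools

# Fold the probabilities of choosing polarizer settings into one joint
# (Kolmogorov) probability space, instead of treating each setting pair
# as its own unconditioned experiment.
settings = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2")]
p_setting = {s: 0.25 for s in settings}            # random, independent PBS choices

# Hypothetical conditional outcome distributions P(x, y | setting), x, y in {+1, -1}
def p_outcome(x, y, s):
    corr = {"a1b1": 0.7, "a1b2": 0.7, "a2b1": 0.7, "a2b2": -0.7}[s[0] + s[1]]
    return (1 + x * y * corr) / 4                  # marginals stay uniform

# One complete probability space over (setting, x, y): total mass is 1.
total = sum(p_setting[s] * p_outcome(x, y, s)
            for s in settings
            for x, y in itertools.product([1, -1], repeat=2))
print("total probability:", total)

# Conditional correlations E(xy | setting) recovered from the joint space.
# They still violate the CHSH bound (|S| = 2.8 > 2), even though the joint
# space is a perfectly ordinary Kolmogorov space.
for s in settings:
    e = sum(x * y * p_outcome(x, y, s) for x, y in itertools.product([1, -1], repeat=2))
    print(s, round(e, 2))
```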

  8. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it operates at the most elemental level. A multifactor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural components are modeled by finite elements. The solution method for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two rotor engine is about 0.0001 and the risk for the composite built-up structure is also about 0.0001.

  9. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it operates at the most elemental level. A multi-factor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural components are modeled by finite elements. The solution method for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two rotor engine is about 0.0001 and the risk for the composite built-up structure is also about 0.0001.

  10. Probabilistic Modeling of Settlement Risk at Land Disposal Facilities - 12304

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foye, Kevin C.; Soong, Te-Yang

    2012-07-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass (caused by inconsistent compaction, void space distribution, debris-soil mix ratio, waste material stiffness, time-dependent primary compression of the fine-grained soil matrix, long-term creep settlement of the soil matrix and the debris, etc.) at most land disposal facilities. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties which control differential settlement. An alternative, probabilistic solution is to use random fields to model the waste and sub-grade properties. The modeling effort informs the design, construction, operation, and maintenance of land disposal facilities. A probabilistic method to establish design criteria for waste placement and compaction is introduced using the model. Random fields are ideally suited to problems of differential settlement modeling of highly heterogeneous foundations, such as waste. Random fields model the seemingly random spatial distribution of a design parameter, such as compressibility. When used for design, the use of these models prompts the need for probabilistic design criteria. It also allows for a statistical approach to waste placement acceptance criteria. An example design evaluation was performed, illustrating the use of the probabilistic differential settlement simulation methodology to assemble a design guidance chart. The purpose of this design evaluation is to enable the designer to select optimal initial combinations of design slopes and quality control acceptance criteria that yield an acceptable proportion of post-settlement slopes meeting some design minimum. 
For this specific example, relative density, which can be determined through field measurements, was selected as the field quality control parameter for waste placement. This technique can be extended to include a rigorous performance-based methodology using other parameters (void space criteria, debris-soil mix ratio, pre-loading, etc.). As shown in this example, each parameter range, or sets of parameter ranges can be selected such that they can result in an acceptable, long-term differential settlement according to the probabilistic model. The methodology can also be used to re-evaluate the long-term differential settlement behavior at closed land disposal facilities to identify, if any, problematic facilities so that remedial action (e.g., reinforcement of upper and intermediate waste layers) can be implemented. Considering the inherent spatial variability in waste and earth materials and the need for engineers to apply sound quantitative practices to engineering analysis, it is important to apply the available probabilistic techniques to problems of differential settlement. One such method to implement probability-based differential settlement analyses for the design of landfill final covers has been presented. The design evaluation technique presented is one tool to bridge the gap from deterministic practice to probabilistic practice. (authors)
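The core idea of this record - sample a spatially correlated random field of settlement, compute post-settlement cover slopes, and count the fraction meeting a design minimum - can be sketched with a crude Monte Carlo. All parameters below (design slope, settlement mean/std, correlation window) are hypothetical placeholders, not values from the paper, and the moving-average field is a stand-in for a properly calibrated random field:

```python
import random
import math

def correlated_field(n, mean, std, window=5, rng=random):
    """Crude 1-D spatially correlated random field: moving average of
    iid Gaussians, rescaled so each point has the target std."""
    raw = [rng.gauss(0.0, 1.0) for _ in range(n + window - 1)]
    smooth = [sum(raw[i:i + window]) / window for i in range(n)]
    scale = std * math.sqrt(window)  # undo the sqrt(window) shrinkage of the average
    return [mean + scale * s for s in smooth]

def fraction_of_acceptable_slopes(design_slope=0.04, min_slope=0.02,
                                  spacing=10.0, n_points=50, n_trials=2000,
                                  settle_mean=0.5, settle_std=0.15, seed=1):
    """Monte Carlo estimate of the proportion of post-settlement cover
    slopes between adjacent profile points that still meet the minimum."""
    rng = random.Random(seed)
    ok = total = 0
    for _ in range(n_trials):
        settlement = correlated_field(n_points, settle_mean, settle_std, rng=rng)
        for i in range(n_points - 1):
            # elevation drop over one spacing, after differential settlement
            dz = design_slope * spacing - (settlement[i + 1] - settlement[i])
            total += 1
            if dz / spacing >= min_slope:
                ok += 1
    return ok / total
```

Sweeping `design_slope` and the quality-control-driven `settle_std` over grids of values would reproduce the shape of a design guidance chart, pairing each candidate design with its probability of acceptable long-term slopes.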

  11. A probabilistic topic model for clinical risk stratification from electronic health records.

    PubMed

    Huang, Zhengxing; Dong, Wei; Duan, Huilong

    2015-12-01

    Risk stratification aims to provide physicians with the accurate assessment of a patient's clinical risk such that an individualized prevention or management strategy can be developed and delivered. Existing risk stratification techniques mainly focus on predicting the overall risk of an individual patient in a supervised manner, and, at the cohort level, often offer little insight beyond a flat score-based segmentation from the labeled clinical dataset. To this end, in this paper, we propose a new approach for risk stratification by exploring a large volume of electronic health records (EHRs) in an unsupervised fashion. Along this line, this paper proposes a novel probabilistic topic modeling framework called probabilistic risk stratification model (PRSM) based on Latent Dirichlet Allocation (LDA). The proposed PRSM recognizes a patient clinical state as a probabilistic combination of latent sub-profiles, and generates sub-profile-specific risk tiers of patients from their EHRs in a fully unsupervised fashion. The achieved stratification results can be easily recognized as high-, medium- and low-risk, respectively. In addition, we present an extension of PRSM, called weakly supervised PRSM (WS-PRSM) by incorporating minimum prior information into the model, in order to improve the risk stratification accuracy, and to make our models highly portable to risk stratification tasks of various diseases. We verify the effectiveness of the proposed approach on a clinical dataset containing 3463 coronary heart disease (CHD) patient instances. Both PRSM and WS-PRSM were compared with two established supervised risk stratification algorithms, i.e., logistic regression and support vector machine, and showed the effectiveness of our models in risk stratification of CHD in terms of the Area Under the receiver operating characteristic Curve (AUC) analysis. 
As well, in comparison with PRSM, WS-PRSM has over 2% performance gain, on the experimental dataset, demonstrating that incorporating risk scoring knowledge as prior information can improve the performance in risk stratification. Experimental results reveal that our models achieve competitive performance in risk stratification in comparison with existing supervised approaches. In addition, the unsupervised nature of our models makes them highly portable to the risk stratification tasks of various diseases. Moreover, patient sub-profiles and sub-profile-specific risk tiers generated by our models are coherent and informative, and provide significant potential to be explored for the further tasks, such as patient cohort analysis. We hypothesize that the proposed framework can readily meet the demand for risk stratification from a large volume of EHRs in an open-ended fashion. Copyright © 2015 Elsevier Inc. All rights reserved.
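The generative assumption behind PRSM - a patient's clinical state is a probabilistic mixture of latent sub-profiles, each carrying a risk tier - can be illustrated with a toy posterior computation. The sub-profiles, feature distributions, and tier labels below are entirely hypothetical; in the actual model they would be learned from EHRs via LDA rather than written down by hand:

```python
from collections import Counter

# Hypothetical sub-profiles: a distribution over clinical features plus a
# risk tier (in PRSM these are latent and inferred from EHRs, not hand-set).
SUB_PROFILES = {
    "high":   ({"chest_pain": 0.5, "st_elevation": 0.4, "dyspnea": 0.1}, "high-risk"),
    "medium": ({"chest_pain": 0.3, "st_elevation": 0.1, "dyspnea": 0.6}, "medium-risk"),
    "low":    ({"chest_pain": 0.1, "st_elevation": 0.05, "dyspnea": 0.85}, "low-risk"),
}

def mixture_and_tier(features):
    """Posterior mixture over sub-profiles for one patient record:
    multinomial likelihood times a uniform prior, normalized."""
    counts = Counter(features)
    scores = {}
    for name, (dist, _) in SUB_PROFILES.items():
        p = 1.0
        for feat, c in counts.items():
            p *= dist.get(feat, 1e-6) ** c
        scores[name] = p
    z = sum(scores.values())
    mixture = {k: v / z for k, v in scores.items()}
    dominant = max(mixture, key=mixture.get)
    return mixture, SUB_PROFILES[dominant][1]
```

A weakly supervised variant in the spirit of WS-PRSM would replace the uniform prior with one biased by existing risk scoring knowledge.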

  12. Dynamic shaping of dopamine signals during probabilistic Pavlovian conditioning.

    PubMed

    Hart, Andrew S; Clark, Jeremy J; Phillips, Paul E M

    2015-01-01

    Cue- and reward-evoked phasic dopamine activity during Pavlovian and operant conditioning paradigms is well correlated with reward-prediction errors from formal reinforcement learning models, which feature teaching signals in the form of discrepancies between actual and expected reward outcomes. Additionally, in learning tasks where conditioned cues probabilistically predict rewards, dopamine neurons show sustained cue-evoked responses that are correlated with the variance of reward and are maximal to cues predicting rewards with a probability of 0.5. Therefore, it has been suggested that sustained dopamine activity after cue presentation encodes the uncertainty of impending reward delivery. In the current study we examined the acquisition and maintenance of these neural correlates using fast-scan cyclic voltammetry in rats implanted with carbon fiber electrodes in the nucleus accumbens core during probabilistic Pavlovian conditioning. The advantage of this technique is that we can sample from the same animal and recording location throughout learning with single trial resolution. We report that dopamine release in the nucleus accumbens core contains correlates of both expected value and variance. A quantitative analysis of these signals throughout learning, and during the ongoing updating process after learning in probabilistic conditions, demonstrates that these correlates are dynamically encoded during these phases. Peak CS-evoked responses are correlated with expected value and predominate during early learning while a variance-correlated sustained CS signal develops during the post-asymptotic updating phase. Copyright © 2014 Elsevier Inc. All rights reserved.
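The two correlates in this record map onto simple quantities from reinforcement learning: a delta-rule estimate of expected value, and a Bernoulli reward variance v(1 - v) that peaks at v = 0.5, matching the maximal sustained response to 0.5-probability cues. A minimal sketch (generic delta-rule learning, not the authors' voltammetry analysis; learning rate and trial counts are arbitrary):

```python
import random

def learn_value(outcomes, v0=0.5, alpha=0.05):
    """Rescorla-Wagner-style value update driven by reward-prediction errors."""
    v = v0
    for r in outcomes:
        v += alpha * (r - v)  # delta rule: prediction error = actual - expected
    return v

def reward_uncertainty(v):
    """Variance of a Bernoulli reward with expected value v; maximal at v = 0.5."""
    return v * (1.0 - v)

rng = random.Random(0)
learned = {p: learn_value([1 if rng.random() < p else 0 for _ in range(2000)])
           for p in (0.25, 0.5, 0.75)}
```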

  13. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
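The Noisy-OR combination described above reduces to a one-line probability calculation: an interaction receives prior support unless every information source independently fails to support it. A minimal generic sketch (not the authors' implementation; support values are illustrative):

```python
def noisy_or(supports, leak=0.0):
    """Noisy-OR: combined prior that an interaction exists, given per-source
    support probabilities q_i; the event occurs unless all sources 'miss'."""
    p_none = 1.0 - leak
    for q in supports:
        p_none *= (1.0 - q)
    return 1.0 - p_none
```

The strongest single source dominates the result, which matches the description of Noisy-OR "picking up the strongest support" among heterogeneous sources.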

  14. Does expert knowledge improve automatic probabilistic classification of gait joint motion patterns in children with cerebral palsy?

    PubMed Central

    Papageorgiou, Eirini; Nieuwenhuys, Angela; Desloovere, Kaat

    2017-01-01

    Background This study aimed to improve the automatic probabilistic classification of joint motion gait patterns in children with cerebral palsy by using the expert knowledge available via a recently developed Delphi-consensus study. To this end, this study applied both Naïve Bayes and Logistic Regression classification with varying degrees of usage of the expert knowledge (expert-defined and discretized features). A database of 356 patients and 1719 gait trials was used to validate the classification performance of eleven joint motions. Hypotheses Two main hypotheses stated that: (1) Joint motion patterns in children with CP, obtained through a Delphi-consensus study, can be automatically classified following a probabilistic approach, with an accuracy similar to clinical expert classification, and (2) The inclusion of clinical expert knowledge in the selection of relevant gait features and the discretization of continuous features increases the performance of automatic probabilistic joint motion classification. Findings This study provided objective evidence supporting the first hypothesis. Automatic probabilistic gait classification using the expert knowledge available from the Delphi-consensus study resulted in accuracy (91%) similar to that obtained with two expert raters (90%), and higher accuracy than that obtained with non-expert raters (78%). Regarding the second hypothesis, this study demonstrated that the use of more advanced machine learning techniques such as automatic feature selection and discretization instead of expert-defined and discretized features can result in slightly higher joint motion classification performance. However, the increase in performance is limited and does not outweigh the additional computational cost and the higher risk of loss of clinical interpretability, which threatens the clinical acceptance and applicability. PMID:28570616
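The Naïve Bayes arm of this study - classifying gait patterns from expert-discretized features - can be sketched generically. The toy features and class labels below are hypothetical stand-ins, not the Delphi-consensus feature set:

```python
from collections import defaultdict
import math

class DiscreteNaiveBayes:
    """Naive Bayes over discretized features with Laplace smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        self.counts = defaultdict(lambda: defaultdict(int))  # (class, feat) -> value -> n
        self.totals = defaultdict(int)
        self.values = defaultdict(set)
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.counts[(yi, j)][v] += 1
                self.totals[(yi, j)] += 1
                self.values[j].add(v)
        return self

    def predict(self, x):
        best, best_lp = None, -math.inf
        for c in self.classes:
            lp = math.log(self.prior[c])
            for j, v in enumerate(x):
                k = len(self.values[j])  # Laplace smoothing over observed values
                lp += math.log((self.counts[(c, j)][v] + 1) /
                               (self.totals[(c, j)] + k))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Discretizing continuous joint angles into expert-defined categories ("increased"/"normal"/"decreased") before fitting is exactly the kind of expert-knowledge injection the second hypothesis evaluates.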

  15. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
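The kind of sensitivity ranking described here - which random inputs drive blade tip displacement scatter - can be illustrated with a one-at-a-time Monte Carlo on a cantilever surrogate. This is not NESSUS or fast probability integration, and all numbers (loads, dimensions, coefficients of variation) are hypothetical:

```python
import random
import statistics

def tip_displacement(E, t, F=100.0, L=0.1, w=0.02):
    """Cantilever surrogate for a blade: delta = F*L^3 / (3*E*I), I = w*t^3/12."""
    I = w * t ** 3 / 12.0
    return F * L ** 3 / (3.0 * E * I)

def scatter(vary, n=5000, seed=0):
    """Std of tip displacement when only one input is random."""
    rng = random.Random(seed)
    E0, t0 = 200e9, 0.002  # nominal modulus (Pa) and thickness (m), illustrative
    out = []
    for _ in range(n):
        E = rng.gauss(E0, 0.10 * E0) if vary == "E" else E0  # 10% CoV
        t = rng.gauss(t0, 0.02 * t0) if vary == "t" else t0  # 2% CoV
        out.append(tip_displacement(E, t))
    return statistics.stdev(out)
```

Comparing `scatter("E")` against `scatter("t")` gives a rational basis for deciding which parameters deserve probabilistic treatment, mirroring the paper's finding that some variables (modulus) matter and others (Poisson's ratio) do not.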

  16. Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.


  17. Probabilistic Modeling of High-Temperature Material Properties of a 5-Harness 0/90 Sylramic Fiber/ CVI-SiC/ MI-SiC Woven Composite

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.; Tong, Michael; Murthy, P. L. N.; Mital, Subodh

    1998-01-01

An integrated probabilistic approach has been developed to assess composites for high temperature applications. This approach was used to determine thermal and mechanical properties and their probabilistic distributions of a 5-harness 0/90 Sylramic fiber/CVI-SiC/Mi-SiC woven Ceramic Matrix Composite (CMC) at high temperatures. The purpose of developing this approach was to generate quantitative probabilistic information on this CMC to help complete the evaluation for its potential application for an HSCT combustor liner. This approach quantified the influences of uncertainties inherent in constituent properties, called primitive variables, on selected key response variables of the CMC at 2200 F. The quantitative information is presented in the form of Cumulative Distribution Functions (CDFs), Probability Density Functions (PDFs), and primitive variable sensitivities of the response. Results indicate that the scatters in response variables were reduced by 30-50% when the uncertainties in the primitive variables, which showed the most influence, were reduced by 50%.

  18. Risk analysis of analytical validations by probabilistic modification of FMEA.

    PubMed

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

Risk analysis is a valuable addition to validation of an analytical chemistry process, enabling not only detecting technical risks, but also risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequency and maintaining the categorical scoring of severity. In an example, the results of traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeited tablets are re-interpreted using this probabilistic modification of FMEA. Using this probabilistic modification of FMEA, the frequency of occurrence of undetected failure mode(s) can be estimated quantitatively, for each individual failure mode, for a set of failure modes, and the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
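The probabilistic replacement for the categorical RPN reduces to two probability calculations: per mode, the frequency of an undetected failure is p(occurrence) × (1 − p(detection)); across independent modes, the procedure-level frequency is the complement of every mode staying clear. A sketch with hypothetical estimates (not the paper's NIR data):

```python
def undetected_failure_frequency(modes):
    """Probabilistic FMEA: per-mode and procedure-level frequency of an
    undetected failure, assuming independent failure modes."""
    per_mode = {name: p_occ * (1.0 - p_det)
                for name, (p_occ, p_det) in modes.items()}
    p_clear = 1.0
    for f in per_mode.values():
        p_clear *= (1.0 - f)
    return per_mode, 1.0 - p_clear

modes = {  # hypothetical (occurrence, detection) probability estimates
    "wrong_reference_spectrum": (0.01, 0.95),
    "operator_mislabels_sample": (0.02, 0.80),
}
```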

  19. Probabilistic Reinforcement Learning in Adults with Autism Spectrum Disorders

    PubMed Central

    Solomon, Marjorie; Smith, Anne C.; Frank, Michael J.; Ly, Stanford; Carter, Cameron S.

    2017-01-01

    Background Autism spectrum disorders (ASDs) can be conceptualized as disorders of learning, however there have been few experimental studies taking this perspective. Methods We examined the probabilistic reinforcement learning performance of 28 adults with ASDs and 30 typically developing adults on a task requiring learning relationships between three stimulus pairs consisting of Japanese characters with feedback that was valid with different probabilities (80%, 70%, and 60%). Both univariate and Bayesian state–space data analytic methods were employed. Hypotheses were based on the extant literature as well as on neurobiological and computational models of reinforcement learning. Results Both groups learned the task after training. However, there were group differences in early learning in the first task block where individuals with ASDs acquired the most frequently accurately reinforced stimulus pair (80%) comparably to typically developing individuals; exhibited poorer acquisition of the less frequently reinforced 70% pair as assessed by state–space learning curves; and outperformed typically developing individuals on the near chance (60%) pair. Individuals with ASDs also demonstrated deficits in using positive feedback to exploit rewarded choices. Conclusions Results support the contention that individuals with ASDs are slower learners. Based on neurobiology and on the results of computational modeling, one interpretation of this pattern of findings is that impairments are related to deficits in flexible updating of reinforcement history as mediated by the orbito-frontal cortex, with spared functioning of the basal ganglia. This hypothesis about the pathophysiology of learning in ASDs can be tested using functional magnetic resonance imaging. PMID:21425243

  20. Probabilistic Reward- and Punishment-based Learning in Opioid Addiction: Experimental and Computational Data

    PubMed Central

    Myers, Catherine E.; Sheynin, Jony; Baldson, Tarryn; Luzardo, Andre; Beck, Kevin D.; Hogarth, Lee; Haber, Paul; Moustafa, Ahmed A.

    2016-01-01

Addiction is the continuation of a habit in spite of negative consequences. A vast literature gives evidence that this poor decision-making behavior in individuals addicted to drugs also generalizes to laboratory decision making tasks, suggesting that the impairment in decision-making is not limited to decisions about taking drugs. In the current experiment, opioid-addicted individuals and matched controls with no history of illicit drug use were administered a probabilistic classification task that embeds both reward-based and punishment-based learning trials, and a computational model of decision making was applied to understand the mechanisms describing individuals’ performance on the task. Although behavioral results showed that opioid-addicted individuals performed as well as controls on both reward- and punishment-based learning, the modeling results suggested subtle differences in how decisions were made between the two groups. Specifically, the opioid-addicted group showed decreased tendency to repeat prior responses, meaning that they were more likely to “chase reward” when expectancies were violated, whereas controls were more likely to stick with a previously-successful response rule, despite occasional expectancy violations. This tendency to chase short-term reward, potentially at the expense of developing rules that maximize reward over the long term, may be a contributing factor to opioid addiction. Further work is indicated to better understand whether this tendency arises as a result of brain changes in the wake of continued opioid use/abuse, or might be a pre-existing factor that may contribute to risk for addiction. PMID:26381438

  1. Probabilistic fault tree analysis of a radiation treatment system.

    PubMed

    Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter

    2007-12-01

    Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
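Fault tree probabilities of the kind elicited here combine through standard OR/AND gate algebra for independent events. The tree structure and probabilities below are hypothetical, chosen only to show how an estimate in the reported 0.1-0.7% range could arise:

```python
def or_gate(probs):
    """Top event occurs if any independent input event occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probs):
    """Top event occurs only if all independent input events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical Preparation-domain tree: an incident reaches the patient if
# (simulation error OR prescription error) AND the quality check misses it.
p_incident = and_gate([or_gate([0.02, 0.001]), 0.2])
```

With these illustrative inputs the top-event probability comes out near 0.4%, the same order as the expert-elicited estimate in the record.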

  2. Problem Solving as Probabilistic Inference with Subgoaling: Explaining Human Successes and Pitfalls in the Tower of Hanoi

    PubMed Central

    Donnarumma, Francesco; Maisto, Domenico; Pezzulo, Giovanni

    2016-01-01

    How do humans and other animals face novel problems for which predefined solutions are not available? Human problem solving links to flexible reasoning and inference rather than to slow trial-and-error learning. It has received considerable attention since the early days of cognitive science, giving rise to well known cognitive architectures such as SOAR and ACT-R, but its computational and brain mechanisms remain incompletely known. Furthermore, it is still unclear whether problem solving is a “specialized” domain or module of cognition, in the sense that it requires computations that are fundamentally different from those supporting perception and action systems. Here we advance a novel view of human problem solving as probabilistic inference with subgoaling. In this perspective, key insights from cognitive architectures are retained such as the importance of using subgoals to split problems into subproblems. However, here the underlying computations use probabilistic inference methods analogous to those that are increasingly popular in the study of perception and action systems. To test our model we focus on the widely used Tower of Hanoi (ToH) task, and show that our proposed method can reproduce characteristic idiosyncrasies of human problem solvers: their sensitivity to the “community structure” of the ToH and their difficulties in executing so-called “counterintuitive” movements. Our analysis reveals that subgoals have two key roles in probabilistic inference and problem solving. First, prior beliefs on (likely) useful subgoals carve the problem space and define an implicit metric for the problem at hand—a metric to which humans are sensitive. Second, subgoals are used as waypoints in the probabilistic problem solving inference and permit to find effective solutions that, when unavailable, lead to problem solving deficits. 
Our study thus suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits. PMID:27074140
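The waypoint role of subgoals can be shown with a deliberately simplified sketch: split a search problem at a subgoal and chain the two subproblem solutions. The paper performs probabilistic inference over trajectories; plain BFS on a toy graph stands in for that machinery here, and the graph itself is hypothetical:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Shortest path by BFS; a stand-in for inference over trajectories."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def solve_with_subgoal(graph, start, goal, subgoal):
    """Subgoaling: treat the subgoal as a waypoint, solve the two
    subproblems, and chain the solutions."""
    first = bfs_path(graph, start, subgoal)
    second = bfs_path(graph, subgoal, goal)
    return first + second[1:] if first and second else None

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
```

A well-placed subgoal leaves the solution intact while shrinking each search; a badly-placed one forces detours, which is the mechanism the paper invokes for "counterintuitive" move difficulties.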

  3. Problem Solving as Probabilistic Inference with Subgoaling: Explaining Human Successes and Pitfalls in the Tower of Hanoi.

    PubMed

    Donnarumma, Francesco; Maisto, Domenico; Pezzulo, Giovanni

    2016-04-01

    How do humans and other animals face novel problems for which predefined solutions are not available? Human problem solving links to flexible reasoning and inference rather than to slow trial-and-error learning. It has received considerable attention since the early days of cognitive science, giving rise to well known cognitive architectures such as SOAR and ACT-R, but its computational and brain mechanisms remain incompletely known. Furthermore, it is still unclear whether problem solving is a "specialized" domain or module of cognition, in the sense that it requires computations that are fundamentally different from those supporting perception and action systems. Here we advance a novel view of human problem solving as probabilistic inference with subgoaling. In this perspective, key insights from cognitive architectures are retained such as the importance of using subgoals to split problems into subproblems. However, here the underlying computations use probabilistic inference methods analogous to those that are increasingly popular in the study of perception and action systems. To test our model we focus on the widely used Tower of Hanoi (ToH) task, and show that our proposed method can reproduce characteristic idiosyncrasies of human problem solvers: their sensitivity to the "community structure" of the ToH and their difficulties in executing so-called "counterintuitive" movements. Our analysis reveals that subgoals have two key roles in probabilistic inference and problem solving. First, prior beliefs on (likely) useful subgoals carve the problem space and define an implicit metric for the problem at hand-a metric to which humans are sensitive. Second, subgoals are used as waypoints in the probabilistic problem solving inference and permit to find effective solutions that, when unavailable, lead to problem solving deficits. 
Our study thus suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits.

  4. Applying quantitative bias analysis to estimate the plausible effects of selection bias in a cluster randomised controlled trial: secondary analysis of the Primary care Osteoarthritis Screening Trial (POST).

    PubMed

    Barnett, L A; Lewis, M; Mallen, C D; Peat, G

    2017-12-04

Selection bias is a concern when designing cluster randomised controlled trials (c-RCT). Despite addressing potential issues at the design stage, bias cannot always be eradicated from a trial design. The application of bias analysis presents an important step forward in evaluating whether trial findings are credible. The aim of this paper is to give an example of the technique to quantify potential selection bias in c-RCTs. This analysis uses data from the Primary care Osteoarthritis Screening Trial (POST). The primary aim of this trial was to test whether screening for anxiety and depression, and providing appropriate care for patients consulting their GP with osteoarthritis would improve clinical outcomes. Quantitative bias analysis is a seldom-used technique that can quantify types of bias present in studies. Due to lack of information on the selection probability, probabilistic bias analysis with a range of triangular distributions was used, applied at all three follow-up time points: 3, 6, and 12 months post consultation. A simple bias analysis was also applied to the study. Worse pain outcomes were observed among intervention participants than control participants (crude odds ratio at 3, 6, and 12 months: 1.30 (95% CI 1.01, 1.67), 1.39 (1.07, 1.80), and 1.17 (95% CI 0.90, 1.53), respectively). Probabilistic bias analysis suggested that the observed effect became statistically non-significant if the selection probability ratio was between 1.2 and 1.4. Selection probability ratios of > 1.8 were needed to mask a statistically significant benefit of the intervention. The use of probabilistic bias analysis in this c-RCT suggested that worse outcomes observed in the intervention arm could plausibly be attributed to selection bias. A very large degree of selection bias was needed to mask a beneficial effect of intervention making this interpretation less plausible.
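A bare-bones version of this probabilistic bias analysis draws the unknown selection probability ratio from a triangular distribution and divides it out of the observed odds ratio. The triangular parameters below are illustrative, and dividing the crude OR by the sampled ratio is a simplification of the paper's full adjustment:

```python
import random
import statistics

def bias_adjusted_ors(observed_or, tri_low, tri_mode, tri_high,
                      n=10000, seed=42):
    """Probabilistic bias analysis sketch: sample a selection probability
    ratio from a triangular distribution and adjust the observed OR."""
    rng = random.Random(seed)
    return [observed_or / rng.triangular(tri_low, tri_high, tri_mode)
            for _ in range(n)]

# 6-month crude OR from the trial (1.39), hypothetical triangular bounds
adj = bias_adjusted_ors(1.39, 1.0, 1.2, 1.4)
median_adj = statistics.median(adj)
```

The spread of the adjusted ORs, rather than a single corrected point estimate, is what lets the analyst judge how much selection bias would be needed to explain the observed effect.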

  5. Entropy of Movement Outcome in Space-Time.

    PubMed

    Lai, Shih-Chiung; Hsieh, Tsung-Yu; Newell, Karl M

    2015-07-01

Information entropy of the joint spatial and temporal (space-time) probability of discrete movement outcome was investigated in two experiments as a function of different movement strategies (space-time, space, and time instructional emphases), task goals (point-aiming and target-aiming) and movement speed-accuracy constraints. The variance of the movement spatial and temporal errors was reduced by instructional emphasis on the respective spatial or temporal dimension, but increased on the other dimension. The space-time entropy was lower in the target-aiming task than in the point-aiming task but did not differ between instructional emphases. However, the joint probabilistic measure of spatial and temporal entropy showed that spatial error is traded for timing error in tasks with space-time criteria and that the pattern of movement error depends on the dimension of the measurement process. The unified entropy measure of movement outcome in space-time reveals a new relation for the speed-accuracy trade-off.
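The unified measure is a Shannon entropy over a discretized joint distribution of spatial and temporal errors. A minimal sketch (bin widths are arbitrary placeholders, not the paper's):

```python
import math
from collections import Counter

def joint_entropy(spatial_errors, temporal_errors, bin_s=5.0, bin_t=10.0):
    """Shannon entropy (bits) of the joint space-time outcome distribution,
    after binning spatial (e.g. mm) and temporal (e.g. ms) errors."""
    cells = Counter((round(s / bin_s), round(t / bin_t))
                    for s, t in zip(spatial_errors, temporal_errors))
    n = sum(cells.values())
    return -sum((c / n) * math.log2(c / n) for c in cells.values())
```

Because the entropy is computed on the joint cells, a strategy that tightens spatial error while loosening timing error can leave the space-time entropy unchanged, which is the trade the abstract describes.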

  6. Probabilistic motor sequence learning in a virtual reality serial reaction time task.

    PubMed

    Sense, Florian; van Rijn, Hedderik

    2018-01-01

    The serial reaction time task is widely used to study learning and memory. The task is traditionally administered by showing target positions on a computer screen and collecting responses using a button box or keyboard. By comparing response times to random or sequenced items or by using different transition probabilities, various forms of learning can be studied. However, this traditional laboratory setting limits the number of possible experimental manipulations. Here, we present a virtual reality version of the serial reaction time task and show that learning effects emerge as expected despite the novel way in which responses are collected. We also show that response times are distributed as expected. The current experiment was conducted in a blank virtual reality room to verify these basic principles. For future applications, the technology can be used to modify the virtual reality environment in any conceivable way, permitting a wide range of previously impossible experimental manipulations.
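Stimulus generation for a probabilistic serial reaction time task can be sketched independently of the display technology (screen or VR): each trial either follows a fixed pattern with some transition probability or jumps to a random position. The pattern and probability below are hypothetical, not the study's design:

```python
import random

def probabilistic_sequence(n_trials, positions=4, p_sequence=0.85, seed=0):
    """Generate SRT target positions: with probability p_sequence the next
    target follows a fixed repeating pattern, otherwise it is random."""
    rng = random.Random(seed)
    pattern = [0, 2, 1, 3]  # hypothetical repeating pattern
    seq, labels = [], []
    idx = rng.randrange(len(pattern))
    for _ in range(n_trials):
        if rng.random() < p_sequence:
            idx = (idx + 1) % len(pattern)
            pos, label = pattern[idx], "probable"
        else:
            pos, label = rng.randrange(positions), "improbable"
        seq.append(pos)
        labels.append(label)
    return seq, labels
```

Comparing response times on "probable" versus "improbable" trials is then the standard index of sequence learning, regardless of whether responses come from a keyboard or a VR controller.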

  7. Wave scheduling - Decentralized scheduling of task forces in multicomputers

    NASA Technical Reports Server (NTRS)

    Van Tilborg, A. M.; Wittie, L. D.

    1984-01-01

    Decentralized operating systems that control large multicomputers need techniques to schedule competing parallel programs called task forces. Wave scheduling is a probabilistic technique that uses a hierarchical distributed virtual machine to schedule task forces by recursively subdividing and issuing wavefront-like commands to processing elements capable of executing individual tasks. Wave scheduling is highly resistant to processing element failures because it uses many distributed schedulers that dynamically assign scheduling responsibilities among themselves. The scheduling technique is trivially extensible as more processing elements join the host multicomputer. A simple model of scheduling cost is used by every scheduler node to distribute scheduling activity and minimize wasted processing capacity by using perceived workload to vary decentralized scheduling rules. At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.
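The recursive subdivision at the heart of wave scheduling can be sketched as a tree walk: each scheduler node takes what it can run locally and delegates the remainder downward, wavefront-style. This toy omits the probabilistic load estimates and failure handling the paper describes; the tree shape and capacities are hypothetical:

```python
def wave_schedule(tasks, node, assignment=None):
    """Recursive wave-scheduling sketch: a node claims up to its local
    capacity, then splits the remaining tasks among its children."""
    if assignment is None:
        assignment = {}
    take = min(len(tasks), node["capacity"])
    assignment[node["id"]] = tasks[:take]
    rest = tasks[take:]
    kids = node.get("children", [])
    if rest and kids:
        per_kid = (len(rest) + len(kids) - 1) // len(kids)  # ceil split
        for i, kid in enumerate(kids):
            wave_schedule(rest[i * per_kid:(i + 1) * per_kid], kid, assignment)
    return assignment

tree = {"id": "root", "capacity": 0, "children": [
    {"id": "pe1", "capacity": 2},
    {"id": "pe2", "capacity": 2},
]}
```

Because each subtree schedules independently, adding processing elements only extends the tree, which is the extensibility property the record emphasizes.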

  8. Delay and probability discounting of sexual and monetary outcomes in individuals with cocaine use disorders and matched controls.

    PubMed

    Johnson, Matthew W; Johnson, Patrick S; Herrmann, Evan S; Sweeney, Mary M

    2015-01-01

Individuals with cocaine use disorders are disproportionately affected by HIV/AIDS, partly due to higher rates of unprotected sex. Recent research suggests delay discounting of condom use is a factor in sexual HIV risk. Delay discounting is a behavioral economic concept describing how delaying an event reduces that event's value or impact on behavior. Probability discounting is a related concept describing how the uncertainty of an event decreases its impact on behavior. Individuals with cocaine use disorders (n = 23) and matched non-cocaine-using controls (n = 24) were compared in decision-making tasks involving hypothetical outcomes: delay discounting of condom-protected sex (Sexual Delay Discounting Task), delay discounting of money, the effect of sexually transmitted infection (STI) risk on likelihood of condom use (Sexual Probability Discounting Task), and probability discounting of money. The Cocaine group discounted delayed condom-protected sex (i.e., were more likely to have unprotected sex vs. wait for a condom) significantly more than controls in two of four Sexual Delay Discounting Task partner conditions. The Cocaine group also discounted delayed money (i.e., preferred smaller immediate amounts over larger delayed amounts) significantly more than controls. In the Sexual Probability Discounting Task, both groups showed sensitivity to STI risk; however, the groups did not differ. The Cocaine group did not consistently discount probabilistic money more or less than controls. Steeper discounting of delayed, but not probabilistic, sexual outcomes may contribute to greater rates of sexual HIV risk among individuals with cocaine use disorders. Probability discounting of sexual outcomes may contribute to risk of unprotected sex in both groups. Correlations showed sexual and monetary results were unrelated, for both delay and probability discounting. 
The results highlight the importance of studying specific behavioral processes (e.g., delay and probability discounting) with respect to specific outcomes (e.g., monetary and sexual) to understand decision making in problematic behavior.
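The abstract does not spell out the discounting model, but work in this area conventionally uses Mazur-style hyperbolic functions, with probability discounting expressed over the odds against receiving the outcome. A minimal sketch with hypothetical parameter values:

```python
def delay_discount(amount, delay, k):
    # Hyperbolic delay discounting: subjective value falls with delay.
    # A larger k means steeper discounting.
    return amount / (1.0 + k * delay)

def probability_discount(amount, p, h):
    # Probability discounting over the odds against the outcome:
    # theta = (1 - p) / p; a certain outcome (p = 1) is undiscounted.
    theta = (1.0 - p) / p
    return amount / (1.0 + h * theta)

# Hypothetical parameters: a steep discounter (k = 0.10) values a
# 30-day delayed $100 far less than a shallow discounter (k = 0.01).
shallow = delay_discount(100.0, 30.0, k=0.01)
steep = delay_discount(100.0, 30.0, k=0.10)
```

The single free parameter per function (k or h) is what tasks like those above estimate per participant, so "steeper discounting" in the abstract corresponds to a larger fitted k.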

  10. Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge.

    PubMed

    Lau, Jey Han; Clark, Alexander; Lappin, Shalom

    2017-07-01

    The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property, has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgments are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large-scale experiments using crowd-sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state-of-the-art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic. Copyright © 2016 Cognitive Science Society, Inc.
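One well-known measure in the family of normalizations the abstract describes is the syntactic log-odds ratio (SLOR), which subtracts the lexical (unigram) log probability and divides by sentence length. A sketch on hypothetical log probabilities:

```python
def slor(sentence_logprob, unigram_logprobs):
    # Syntactic log-odds ratio: remove the lexical-frequency component
    # (sum of unigram log probabilities) and normalize by length --
    # the two confounds the abstract identifies.
    return (sentence_logprob - sum(unigram_logprobs)) / len(unigram_logprobs)

# A sentence the model finds no more likely than its words in isolation
# scores 0; a sentence the model prefers to its unigram baseline scores
# higher, independent of length and word frequency.
baseline = slor(-12.0, [-4.0, -4.0, -4.0])
wellformed = slor(-9.0, [-4.0, -4.0, -4.0])
```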

  11. Incorporating psychological influences in probabilistic cost analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kujawski, Edouard; Alvaro, Mariana; Edwards, William

    2004-01-08

    Today's typical probabilistic cost analysis assumes an "ideal" project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. In the real world, "Money Allocated Is Money Spent" (the MAIMS principle): cost underruns are rarely available to protect against cost overruns, while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy, including budget allocation. Psychological influences, such as overconfidence in assessing uncertainties, and dependencies among cost elements and risks are other important considerations that are generally not addressed. It should then be no surprise that actual project costs often exceed the initial estimates and that projects are delivered late and/or with a reduced scope. This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @Risk and Crystal Ball®. The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability-of-success values of importance to contractors and procuring activities. The proposed approach provides a framework for developing a viable cost management strategy for allocating baseline budgets and contingencies. Given the scope and magnitude of the cost-overrun problem, the benefits are likely to be significant.
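A minimal Monte Carlo sketch of the MAIMS idea, with hypothetical budgets and Weibull parameters and, for brevity, independent cost elements (the paper itself specifies a full correlation matrix): because allocated money is spent, each element costs at least its budget, so underruns never offset overruns.

```python
import math
import random

def weibull3(shape, scale, loc, u):
    # Inverse CDF of a three-parameter Weibull distribution
    return loc + scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def simulate_total_cost(elements, budgets, trials=10000, seed=1):
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        total = 0.0
        for (shape, scale, loc), budget in zip(elements, budgets):
            cost = weibull3(shape, scale, loc, rng.random())
            # MAIMS: an underrun is not recovered, while an overrun
            # is passed on to the total project cost.
            total += max(cost, budget)
        totals.append(total)
    return sorted(totals)

# Hypothetical three-element project: (shape, scale, location) per
# element, with allocated budgets set near each element's median cost.
elements = [(2.0, 10.0, 50.0), (1.5, 8.0, 30.0), (2.5, 12.0, 40.0)]
budgets = [58.0, 37.0, 51.0]
totals = simulate_total_cost(elements, budgets)
p80 = totals[int(0.8 * len(totals))]  # 80th-percentile total cost
```

Under MAIMS the simulated total can never fall below the sum of the allocated budgets, which is exactly why an analysis ignoring it underestimates cost.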

  12. Uncertainty Reduction for Stochastic Processes on Complex Networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Castellano, Claudio

    2018-05-01

    Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.
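The objective in this record can be illustrated on a toy joint distribution: choose the observation node that leaves the least residual Shannon entropy over the unobserved nodes. The brute-force enumeration below is only a sketch of the objective, not the paper's sparsity-exploiting algorithm, and the toy network is hypothetical.

```python
import math
from itertools import product

def entropy(dist):
    # Shannon entropy (bits) of a distribution given as {state: prob}
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def residual_entropy(joint, obs_index):
    # H(unobserved | observed) = H(all nodes) - H(observed node)
    marginal = {}
    for state, p in joint.items():
        key = state[obs_index]
        marginal[key] = marginal.get(key, 0.0) + p
    return entropy(joint) - entropy(marginal)

# Toy stochastic system: node 0 probabilistically drives nodes 1 and 2
joint = {}
for x0, x1, x2 in product((0, 1), repeat=3):
    p0 = 0.7 if x0 == 0 else 0.3
    p1 = 0.9 if x1 == x0 else 0.1
    p2 = 0.8 if x2 == x0 else 0.2
    joint[(x0, x1, x2)] = p0 * p1 * p2

# Observe the node whose observation minimizes uncertainty on the rest
best = min(range(3), key=lambda i: residual_entropy(joint, i))
```

Enumeration is exponential in the number of nodes, which is precisely the cost the paper's method reduces to almost quadratic by exploiting network sparsity.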

  13. Continuous track paths reveal additive evidence integration in multistep decision making.

    PubMed

    Buc Calderon, Cristian; Dewulf, Myrtille; Gevers, Wim; Verguts, Tom

    2017-10-03

    Multistep decision making pervades daily life, but its underlying mechanisms remain obscure. We distinguish four prominent models of multistep decision making, namely serial stage, hierarchical evidence integration, hierarchical leaky competing accumulation (HLCA), and probabilistic evidence integration (PEI). To empirically disentangle these models, we design a two-step reward-based decision paradigm and implement it in a reaching task experiment. In a first step, participants choose between two potential upcoming choices, each associated with two rewards. In a second step, participants choose between the two rewards selected in the first step. Strikingly, as predicted by the HLCA and PEI models, the first-step decision dynamics were initially biased toward the choice representing the highest sum/mean before being redirected toward the choice representing the maximal reward (i.e., initial dip). Only HLCA and PEI predicted this initial dip, suggesting that first-step decision dynamics depend on additive integration of competing second-step choices. Our data suggest that potential future outcomes are progressively unraveled during multistep decision making.
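The leaky-competing-accumulation dynamics at the heart of the HLCA model can be sketched as units that integrate their input, decay, and mutually inhibit one another; all parameter values below are hypothetical.

```python
import random

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise=0.05,
              dt=0.1, steps=200, seed=0):
    # Leaky competing accumulation: each unit integrates its input,
    # decays (leak), is suppressed by the other units (inhibition),
    # and is clipped at zero, in the Usher-McClelland style.
    rng = random.Random(seed)
    x = [0.0] * len(inputs)
    for _ in range(steps):
        total = sum(x)
        x = [max(0.0, xi + dt * (inp - leak * xi
                                 - inhibition * (total - xi))
                 + rng.gauss(0.0, noise))
             for xi, inp in zip(x, inputs)]
    return x

# The unit receiving the strongest input wins the competition;
# transient dynamics along the way produce effects like the
# initial dip described in the abstract.
final = lca_trial([1.0, 0.6, 0.4])
```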

  14. Constraints on decision making: implications from genetics, personality, and addiction.

    PubMed

    Baker, Travis E; Stockwell, Tim; Holroyd, Clay B

    2013-09-01

    An influential neurocomputational theory of the biological mechanisms of decision making, the "basal ganglia go/no-go model," holds that individual variability in decision making is determined by differences in the makeup of a striatal system for approach and avoidance learning. The model has been tested empirically with the probabilistic selection task (PST), which determines whether individuals learn better from positive or negative feedback. In accordance with the model, in the present study we examined whether an individual's ability to learn from positive and negative reinforcement can be predicted by genetic factors related to the midbrain dopamine system. We also asked whether psychiatric and personality factors related to substance dependence and dopamine affect PST performance. Although we found characteristics that predicted individual differences in approach versus avoidance learning, these observations were qualified by additional findings that appear inconsistent with the predictions of the go/no-go model. These results highlight a need for future research to validate the PST as a measure of basal ganglia reward learning.
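The Probabilistic Selection Task trains choices between stimulus pairs under probabilistic feedback (classically A/B at 80/20, C/D at 70/30, E/F at 60/40). A toy reinforcement-learning simulation of the training phase, with hypothetical learning rates and exploration, looks like this:

```python
import random

def train_pst(alpha_gain=0.2, alpha_loss=0.2, trials=600, seed=7):
    # Stimulus pairs with P(positive feedback) for the first item
    pairs = [("A", "B", 0.8), ("C", "D", 0.7), ("E", "F", 0.6)]
    rng = random.Random(seed)
    q = {s: 0.5 for s in "ABCDEF"}
    for _ in range(trials):
        s1, s2, p = rng.choice(pairs)
        hi, lo = (s1, s2) if q[s1] >= q[s2] else (s2, s1)
        choice = hi if rng.random() < 0.8 else lo  # soft exploration
        p_correct = p if choice == s1 else 1.0 - p
        feedback = 1.0 if rng.random() < p_correct else 0.0
        # Separate learning rates for positive vs. negative feedback:
        # the asymmetry the go/no-go model ties to striatal dopamine
        alpha = alpha_gain if feedback > q[choice] else alpha_loss
        q[choice] += alpha * (feedback - q[choice])
    return q

q = train_pst()
```

In the test phase, "Choose A" and "Avoid B" accuracy on novel pairings index positive- and negative-feedback learning respectively; in this sketch they follow from the learned values q["A"] and q["B"].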

  15. Reinforcement Learning Deficits in People with Schizophrenia Persist after Extended Trials

    PubMed Central

    Cicero, David C.; Martin, Elizabeth A.; Becker, Theresa M.; Kerns, John G.

    2014-01-01

    Previous research suggests that people with schizophrenia have difficulty learning from positive feedback and when learning needs to occur rapidly. However, they seem to have relatively intact learning from negative feedback when learning occurs gradually. Participants are typically given a limited number of acquisition trials to learn the reward contingencies and are then tested on what they learned. The current study examined whether participants with schizophrenia continue to display these deficits when given extra time to learn the contingencies. Participants with schizophrenia and matched healthy controls completed the Probabilistic Selection Task, which measures positive and negative feedback learning separately. Participants with schizophrenia showed a deficit in learning from both positive and negative feedback. These reward learning deficits persisted even when people with schizophrenia were given extra time (up to 10 blocks of 60 trials) to learn the reward contingencies. These results suggest that the observed deficits cannot be attributed solely to slower learning and instead reflect a specific deficit in reinforcement learning. PMID:25172610

  17. Differential coding of reward and movement information in the dorsomedial striatal direct and indirect pathways.

    PubMed

    Shin, Jung Hwan; Kim, Dohoung; Jung, Min Whan

    2018-01-26

    The direct and indirect pathways of the basal ganglia have long been thought to mediate behavioral promotion and inhibition, respectively. However, this classic dichotomous model has been recently challenged. To better understand neural processes underlying reward-based learning and movement control, we recorded from direct (dSPNs) and indirect (iSPNs) pathway spiny projection neurons in the dorsomedial striatum of D1-Cre and D2-Cre mice performing a probabilistic Pavlovian conditioning task. dSPNs tend to increase activity while iSPNs decrease activity as a function of reward value, suggesting the striatum represents value in the relative activity levels of dSPNs versus iSPNs. Lick offset-related activity increase is largely dSPN selective, suggesting dSPN involvement in suppressing ongoing licking behavior. Rapid responses to negative outcome and previous reward-related responses are more frequent among iSPNs than dSPNs, suggesting stronger contributions of iSPNs to outcome-dependent behavioral adjustment. These findings provide new insights into striatal neural circuit operations.

  18. Intact implicit learning in autism spectrum conditions.

    PubMed

    Brown, Jamie; Aczel, Balazs; Jiménez, Luis; Kaufman, Scott Barry; Grant, Kate Plaisted

    2010-09-01

    Individuals with autism spectrum condition (ASC) have diagnostic impairments in skills that are associated with implicit acquisition; however, it is not clear whether ASC individuals show specific implicit learning deficits. We compared ASC and typically developing (TD) individuals matched for IQ on five learning tasks: four implicit learning tasks--contextual cueing, serial reaction time, artificial grammar learning, and probabilistic classification learning--that used procedures expressly designed to minimize the use of explicit strategies, and one comparison explicit learning task, paired associates learning. We found implicit learning to be intact in ASC. Beyond the absence of differences, there was evidence of statistical equivalence between the groups on all the implicit learning tasks. This was not a consequence of compensation by explicit learning ability or IQ. Furthermore, there was no evidence relating implicit learning to ASC symptomatology. We conclude that implicit mechanisms are preserved in ASC and propose that it is disruption by other atypical processes that impacts negatively on the development of skills associated with implicit acquisition.

  19. Decision-Making in Healthy Children, Adolescents and Adults Explained by the Use of Increasingly Complex Proportional Reasoning Rules

    ERIC Educational Resources Information Center

    Huizenga, Hilde M.; Crone, Eveline A.; Jansen, Brenda J.

    2007-01-01

    In the standard Iowa Gambling Task (IGT), participants have to choose repeatedly from four options. Each option is characterized by a constant gain, and by the frequency and amount of a probabilistic loss. Crone and van der Molen (2004) reported that school-aged children and even adolescents show marked deficits in IGT performance. In this study,…

  20. Probabilistic Discounting of Hypothetical Monetary Gains: University Students Differ in How They Discount "Won" and "Owed" Money

    ERIC Educational Resources Information Center

    Weatherly, Jeffrey N.; Derenne, Adam

    2013-01-01

    The present study tested whether participants would discount "won" money differently than they would "owed" money in a probability-discounting task. Participants discounted $1000 or $100,000 that they had won in a sweepstakes or that was owed to them using the multiple-choice (Experiment 1) or fill-in-the-blank (Experiment 2) method of collecting…

  1. Mathematics Test, Numerical Task and Mathematics Course as Determinants of Anxiety toward Math on College Students

    ERIC Educational Resources Information Center

    García-Santillán, Arturo; Rojas-Kramer, Carlos; Moreno-García, Elena; Ramos-Hernández, Jesica

    2017-01-01

    The aim of this study was to determine the variables that explain anxiety towards mathematics in college students. For this purpose, we used the RMARS scale, which integrates 25 items. The sample was non-probabilistic, selected by convenience, and the questionnaire was applied to 100 students enrolled in the "Instituto Tecnológico de Veracruz"…

  2. Fuzzy set methods for object recognition in space applications

    NASA Technical Reports Server (NTRS)

    Keller, James M.

    1992-01-01

    Progress on the following tasks is reported: feature calculation; membership calculation; clustering methods (including initial experiments on pose estimation); and acquisition of images (including camera calibration information for digitization of model). The report consists of 'stand alone' sections, describing the activities in each task. We would like to highlight the fact that during this quarter, we believe that we have made a major breakthrough in the area of fuzzy clustering. We have discovered a method to remove the probabilistic constraints that the sum of the memberships across all classes must add up to 1 (as in the fuzzy c-means). A paper, describing this approach, is included.
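The constraint removal described in this progress report corresponds to what was later published as possibilistic c-means (Krishnapuram and Keller), in which each membership measures typicality to one cluster independently rather than being normalized across clusters. A hedged sketch of that membership function:

```python
def possibilistic_membership(dist_sq, eta, m=2.0):
    # Typicality of a point to a single cluster, given the squared
    # distance to the cluster center and a per-cluster scale eta.
    # Unlike fuzzy c-means, memberships need not sum to 1 across
    # clusters, so an outlier can be atypical of every cluster.
    return 1.0 / (1.0 + (dist_sq / eta) ** (1.0 / (m - 1.0)))

# A point on a cluster center is fully typical ...
on_center = possibilistic_membership(0.0, eta=1.0)
# ... while a far-away point gets low membership in both of two
# equidistant clusters (fuzzy c-means would force 0.5 in each).
outlier = possibilistic_membership(9.0, eta=1.0)
```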

  3. Quantifying interindividual variability and asymmetry of face-selective regions: a probabilistic functional atlas.

    PubMed

    Zhen, Zonglei; Yang, Zetian; Huang, Lijie; Kong, Xiang-Zhen; Wang, Xu; Dang, Xiaobin; Huang, Yangyue; Song, Yiying; Liu, Jia

    2015-06-01

    Face-selective regions (FSRs) are among the most widely studied functional regions in the human brain. However, individual variability of the FSRs has not been well quantified. Here we use functional magnetic resonance imaging (fMRI) to localize the FSRs and quantify their spatial and functional variabilities in 202 healthy adults. The occipital face area (OFA), posterior and anterior fusiform face areas (pFFA and aFFA), posterior continuation of the superior temporal sulcus (pcSTS), and posterior and anterior STS (pSTS and aSTS) were delineated for each individual with a semi-automated procedure. A probabilistic atlas was constructed to characterize their interindividual variability, revealing that the FSRs were highly variable in location and extent across subjects. The variability of FSRs was further quantified on both functional (i.e., face selectivity) and spatial (i.e., volume, location of peak activation, and anatomical location) features. Considerable interindividual variability and rightward asymmetry were found in all FSRs on these features. Taken together, our work presents the first effort to characterize comprehensively the variability of FSRs in a large sample of healthy subjects, and invites future work on the origin of the variability and its relation to individual differences in behavioral performance. Moreover, the probabilistic functional atlas will provide an adequate spatial reference for mapping the face network. Copyright © 2015 Elsevier Inc. All rights reserved.
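A probabilistic functional atlas of this kind reduces, voxel by voxel, to the proportion of subjects whose individually delineated region covers that voxel. A minimal sketch over flattened binary masks with hypothetical data:

```python
def probability_map(masks):
    # Voxelwise fraction of subjects in whom the region is present;
    # each mask is a flat list of 0/1 voxel labels for one subject.
    n = len(masks)
    n_voxels = len(masks[0])
    return [sum(mask[v] for mask in masks) / n for v in range(n_voxels)]

# Three hypothetical subjects, four voxels: the delineated region
# overlaps fully at voxel 0 and not at all at voxel 3.
masks = [
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
]
atlas = probability_map(masks)
```

High interindividual variability shows up directly as a map with few high-probability voxels, which is what makes such an atlas a useful spatial reference.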

  4. Ensembles of Spiking Neurons with Noise Support Optimal Probabilistic Inference in a Dynamically Changing Environment

    PubMed Central

    Legenstein, Robert; Maass, Wolfgang

    2014-01-01

    It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information. PMID:25340749
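The ensemble idea can be illustrated with a toy readout: if each of K neurons spikes independently within a time bin with the encoded probability, the spiking fraction tracks that probability within a single bin, and larger ensembles track it with less variance. This is only a caricature of the paper's neural sampling model.

```python
import random

def ensemble_readout(p_encoded, n_neurons, rng):
    # Each neuron emits a spike in this time bin with probability
    # p_encoded; the readout is the fraction of neurons that spiked.
    spikes = sum(1 for _ in range(n_neurons) if rng.random() < p_encoded)
    return spikes / n_neurons

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
# Readouts from a small vs. a large ensemble over 200 time bins
small = [ensemble_readout(0.3, 10, rng) for _ in range(200)]
large = [ensemble_readout(0.3, 1000, rng) for _ in range(200)]
```

The variance advantage of the large ensemble is what allows rapidly varying probabilities to be represented bin by bin instead of by averaging one neuron over a long window.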

  5. Risk assessment for construction projects of transport infrastructure objects

    NASA Astrophysics Data System (ADS)

    Titarenko, Boris

    2017-10-01

    The paper analyzes and compares different methods of risk assessment for construction projects of transport objects. The management of such projects demands the application of special probabilistic methods due to the large level of uncertainty in their implementation. Risk management in these projects requires the use of probabilistic and statistical methods. The aim of the work is to develop a methodology for using traditional methods in combination with robust methods that allow reliable risk assessments to be obtained in projects. The robust approach is based on the principle of maximum likelihood and allows the researcher to obtain reliable risk estimates in situations of great uncertainty. The application of robust procedures makes it possible to carry out a quantitative assessment of the main risk indicators when solving the tasks of managing innovation-investment projects. Calculation of the damage from the onset of a risky event can be performed by any competent specialist, but an assessment of the probability of occurrence of a risky event requires the involvement of special probabilistic methods based on the proposed robust approaches. Practice shows the effectiveness and reliability of the results. The methodology developed in the article can be used to create information technologies and to apply them in automated control systems for complex projects.

  6. A probabilistic approach to information retrieval in heterogeneous databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, A.; Segev, A.

    During the past decade, organizations have increased their scope and operations beyond their traditional geographic boundaries. At the same time, they have adopted heterogeneous and incompatible information systems, independent of each other, without careful consideration that one day they might need to be integrated. As a result of this diversity, many important business applications today require access to data stored in multiple autonomous databases. This paper examines a problem of inter-database information retrieval in a heterogeneous environment, where conventional techniques are no longer efficient. To solve the problem, broader definitions for the join, union, intersection and selection operators are proposed. Also, a probabilistic method to specify the selectivity of these operators is discussed. An algorithm to compute these probabilities is provided in pseudocode.

  7. Boosting Probabilistic Graphical Model Inference by Incorporating Prior Knowledge from Multiple Sources

    PubMed Central

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291
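The Noisy-OR combination the authors describe has a standard closed form: an interaction goes unsupported only if every information source independently fails to support it. A sketch with hypothetical support values:

```python
def noisy_or(supports):
    # P(edge) = 1 - prod_i (1 - q_i): the strongest source dominates,
    # while weaker sources still raise the combined support.
    p_no_support = 1.0
    for q in supports:
        p_no_support *= (1.0 - q)
    return 1.0 - p_no_support

# Hypothetical sources: a pathway database gives support 0.5, GO-term
# similarity gives 0.4, and protein domain data is silent (0.0).
prior = noisy_or([0.5, 0.4, 0.0])  # 1 - 0.5 * 0.6 * 1.0 = 0.7
```

The resulting values can serve as the consensus structure prior over candidate edges in the subsequent Bayesian network inference.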

  8. Multisensory decisions provide support for probabilistic number representations.

    PubMed

    Kanitscheider, Ingmar; Brown, Amanda; Pouget, Alexandre; Churchland, Anne K

    2015-06-01

    A large body of evidence suggests that an approximate number sense allows humans to estimate numerosity in sensory scenes. This ability is widely observed in humans, including those without formal mathematical training. Despite this, many outstanding questions remain about the nature of the numerosity representation in the brain. Specifically, it is not known whether approximate numbers are represented as scalar estimates of numerosity or, alternatively, as probability distributions over numerosity. In the present study, we used a multisensory decision task to distinguish these possibilities. We trained human subjects to decide whether a test stimulus had a larger or smaller numerosity compared with a fixed reference. Depending on the trial, the numerosity was presented as either a sequence of visual flashes or a sequence of auditory tones, or both. To test for a probabilistic representation, we varied the reliability of the stimulus by adding noise to the visual stimuli. In accordance with a probabilistic representation, we observed a significant improvement in multisensory compared with unisensory trials. Furthermore, a trial-by-trial analysis revealed that although individual subjects showed strategic differences in how they leveraged auditory and visual information, all subjects exploited the reliability of unisensory cues. An alternative, nonprobabilistic model, in which subjects combined cues without regard for reliability, was not able to account for these trial-by-trial choices. These findings provide evidence that the brain relies on a probabilistic representation for numerosity decisions. Copyright © 2015 the American Physiological Society.
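The multisensory improvement the abstract reports is the signature of reliability-weighted (inverse-variance) cue combination: in the ideal-observer form, the combined estimate is more reliable than either cue alone. A sketch with hypothetical numbers:

```python
def combine_cues(mu_a, sigma_a, mu_v, sigma_v):
    # Inverse-variance weighting: the more reliable cue gets more
    # weight, and the combined sigma falls below either cue's sigma.
    w_a = 1.0 / sigma_a ** 2
    w_v = 1.0 / sigma_v ** 2
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    sigma = (w_a + w_v) ** -0.5
    return mu, sigma

# Hypothetical numerosity estimates: auditory 12 +/- 2, visual 10 +/- 1
mu, sigma = combine_cues(12.0, 2.0, 10.0, 1.0)
```

A nonprobabilistic observer that averaged cues without regard for reliability would weight both estimates equally; the trial-by-trial analysis in the abstract is what distinguishes these two strategies.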

  9. Lung Cancer Assistant: a hybrid clinical decision support application for lung cancer care.

    PubMed

    Sesen, M Berkan; Peake, Michael D; Banares-Alcantara, Rene; Tse, Donald; Kadir, Timor; Stanley, Roz; Gleeson, Fergus; Brady, Michael

    2014-09-06

    Multidisciplinary team (MDT) meetings are becoming the model of care for cancer patients worldwide. While MDTs have improved the quality of cancer care, the meetings impose substantial time pressure on the members, who generally attend several such MDTs. We describe Lung Cancer Assistant (LCA), a clinical decision support (CDS) prototype designed to assist the experts in the treatment selection decisions in the lung cancer MDTs. A novel feature of LCA is its ability to provide rule-based and probabilistic decision support within a single platform. The guideline-based CDS is based on clinical guideline rules, while the probabilistic CDS is based on a Bayesian network trained on the English Lung Cancer Audit Database (LUCADA). We assess rule-based and probabilistic recommendations based on their concordances with the treatments recorded in LUCADA. Our results reveal that the guideline rule-based recommendations perform well in simulating the recorded treatments with exact and partial concordance rates of 0.57 and 0.79, respectively. On the other hand, the exact and partial concordance rates achieved with probabilistic results are relatively poorer with 0.27 and 0.76. However, probabilistic decision support fulfils a complementary role in providing accurate survival estimations. Compared to recorded treatments, both CDS approaches promote higher resection rates and multimodality treatments.
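The exact and partial concordance rates used for evaluation are simple proportions; in this hypothetical sketch, a recommendation is exactly concordant when it matches the recorded treatment plan and partially concordant when the plans share any modality.

```python
def concordance_rates(recommended, recorded):
    # Exact: the recommended plan equals the recorded plan.
    # Partial: the two plans share at least one treatment modality.
    n = len(recommended)
    exact = sum(a == b for a, b in zip(recommended, recorded)) / n
    partial = sum(bool(a & b) for a, b in zip(recommended, recorded)) / n
    return exact, partial

# Hypothetical treatment plans (as sets of modalities) for 3 patients
recommended = [{"surgery"}, {"chemotherapy", "radiotherapy"}, {"chemotherapy"}]
recorded = [{"surgery"}, {"chemotherapy"}, {"radiotherapy"}]
exact, partial = concordance_rates(recommended, recorded)
```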

  10. Probabilistic simple sticker systems

    NASA Astrophysics Data System (ADS)

    Selvarajoo, Mathuri; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod

    2017-04-01

    A model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced by L. Kari, G. Paun, G. Rozenberg, A. Salomaa, and S. Yu in "DNA computing, sticker systems and universality," Acta Informatica, vol. 35, pp. 401-420, 1998. A sticker system uses the Watson-Crick complementarity of DNA molecules: starting from incomplete double-stranded sequences, sticking operations are applied iteratively until a complete double-stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of sticker systems. Recently, a variant of restricted sticker systems, called probabilistic sticker systems, has been introduced [4]. In this variant, probabilities are initially associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings in the computation of the string. Strings of the language are selected according to some probabilistic requirement. In this paper, we study fundamental properties of probabilistic simple sticker systems. We prove that the probabilistic enhancement increases the computational power of simple sticker systems.
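The probability bookkeeping described above is simple to state: each axiom carries a probability, a derived string's probability is the product over all occurrences of axioms used in its derivation, and a probabilistic requirement (here, a hypothetical cut-point threshold) decides membership in the generated language.

```python
def string_probability(derivation, axiom_probs):
    # Multiply the probabilities of all occurrences of the initial
    # strands (axioms) used in the computation of the string.
    p = 1.0
    for axiom in derivation:
        p *= axiom_probs[axiom]
    return p

axiom_probs = {"u": 0.5, "v": 0.25}  # hypothetical axioms
p = string_probability(["u", "u", "v"], axiom_probs)  # 0.5 * 0.5 * 0.25
in_language = p > 0.05  # cut-point: keep strings above the threshold
```

Varying the threshold changes which strings survive, which is one way such probabilistic requirements can carve non-regular languages out of a regular set of derivations.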

  11. Are there Benefits to Combining Regional Probabilistic Survey and Historic Targeted Environmental Monitoring Data to Improve Our Understanding of Overall Regional Estuary Environmental Status?

    NASA Astrophysics Data System (ADS)

    Dasher, D. H.; Lomax, T. J.; Bethe, A.; Jewett, S.; Hoberg, M.

    2016-02-01

    A regional probabilistic survey of 20 randomly selected stations, where water and sediments were sampled, was conducted over an area of Simpson Lagoon and Gwydyr Bay in the Beaufort Sea adjacent to Prudhoe Bay, Alaska, in 2014. Sampling parameters included the water column (temperature, salinity, dissolved oxygen, chlorophyll a, and nutrients) and sediments (macroinvertebrates, chemistry, i.e., trace metals and hydrocarbons, and grain size). The 2014 probabilistic survey design allows inferences to be made about environmental status, for instance the spatial or areal distribution of sediment trace metals within the design area sampled. Historically, since the 1970s a number of monitoring studies have been conducted in this estuary area using a targeted rather than regional probabilistic design. Targeted non-random designs were used to assess specific points of interest and cannot be used to make inferences about distributions of environmental parameters. Because of the differences in monitoring objectives between probabilistic and targeted designs, there has been limited assessment of whether benefits exist to combining the two approaches. This study evaluates whether a combined approach using the 2014 probabilistic survey sediment trace metal and macroinvertebrate results and historical targeted monitoring data can provide a new perspective on better understanding the environmental status of these estuaries.

  12. Probabilistic computer model of optimal runway turnoffs

    NASA Technical Reports Server (NTRS)

    Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.

    1985-01-01

    Landing delays are currently a problem at major air carrier airports, and many forecasters agree that airport congestion will worsen by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model, which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category, is defined. The model includes an algorithm for lateral ride comfort limits.

  13. Probabilistic Characterization of Adversary Behavior in Cyber Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyers, C A; Powers, S S; Faissol, D M

    2009-10-08

    The objective of this SMS effort is to provide a probabilistic characterization of adversary behavior in cyber security. This includes both quantitative (data analysis) and qualitative (literature review) components. A set of real LLNL email data was obtained for this study, consisting of several years' worth of unfiltered traffic sent to a selection of addresses at ciac.org. The email data was subjected to three interrelated analyses: a textual study of the header data and subject matter, an examination of threats present in message attachments, and a characterization of the maliciousness of embedded URLs.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji Zhengfeng; Feng Yuan; Ying Mingsheng

    Local quantum operations and classical communication (LOCC) put considerable constraints on many quantum information processing tasks such as cloning and discrimination. Surprisingly, however, discrimination of any two pure states survives such constraints in some sense. We show that cloning is not that lucky; namely, probabilistic LOCC cloning of two product states is strictly less efficient than global cloning. We prove our result by giving explicitly the efficiency formula of local cloning of any two product states.

  15. PROBABILISTIC PROGRAMMING FOR ADVANCED MACHINE LEARNING (PPAML) DISCRIMINATIVE LEARNING FOR GENERATIVE TASKS (DILIGENT)

    DTIC Science & Technology

    2017-11-29

    Structural connections of the frames (fragments) in the knowledge. We call the fundamental elements of the knowledge a limited number of elements...This report is the result of contracted fundamental research deemed exempt from public affairs security and policy review in accordance with SAF/AQR memorandum dated...Approved for Public Release; Distribution Unlimited.

  16. Functionally Graded Designer Viscoelastic Materials Tailored to Perform Prescribed Tasks with Probabilistic Failures and Lifetimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilton, Harry H.

    Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival time.

  17. Probabilistic Tracklet Characterization and Prioritization Using Admissible Regions

    DTIC Science & Technology

    2014-09-01

    of determining the potential threat of the object and obtaining further measurements. The solution to this problem is confounded in scenarios with...association and track initiation tasks. Well before their use in data association for asteroids and SOs, admissible regions have been used in stochastic...logic resource management. Milani et al. first proposed using ARs to assist in the optical detection and discrimination of asteroids. This work is

  18. The influence of number line estimation precision and numeracy on risky financial decision making.

    PubMed

    Park, Inkyung; Cho, Soohyun

    2018-01-10

    This study examined whether different aspects of mathematical proficiency influence one's ability to make adaptive financial decisions. "Numeracy" refers to the ability to process numerical and probabilistic information and is commonly reported as an important factor which contributes to financial decision-making ability. The precision of mental number representation (MNR), measured with the number line estimation (NLE) task, has been reported to be another critical factor. This study aimed to examine the contribution of these mathematical proficiencies while controlling for the influence of fluid intelligence, math anxiety and personality factors. In our decision-making task, participants chose between two options offering probabilistic monetary gain or loss. Sensitivity to expected value (EV) was measured as an index for the ability to discriminate between optimal versus suboptimal options. Partial correlation and hierarchical regression analyses revealed that NLE precision better explained EV sensitivity compared to numeracy, after controlling for all covariates. These results suggest that individuals with more precise MNR are capable of making more rational financial decisions. We also propose that the measurement of "numeracy," which is commonly used interchangeably with general mathematical proficiency, should include more diverse aspects of mathematical cognition, including a basic understanding of number magnitude. © 2018 International Union of Psychological Science.

  19. Weakly Supervised Dictionary Learning

    NASA Astrophysics Data System (ADS)

    You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub

    2018-05-01

    We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.

  20. Reliability, Risk and Cost Trade-Offs for Composite Designs

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1996-01-01

    Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of uncertain variables to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing range of scatter) of uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of fiber modulus (design parameter) in the longitudinal direction.

  1. Nonlinear probabilistic finite element models of laminated composite shells

    NASA Technical Reports Server (NTRS)

    Engelstad, S. P.; Reddy, J. N.

    1993-01-01

    A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed, and results are presented in the form of the mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation from a macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data are compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.

  2. Probabilistic Analysis of Gas Turbine Field Performance

    NASA Technical Reports Server (NTRS)

    Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.

    2002-01-01

    A gas turbine thermodynamic cycle was computationally simulated and probabilistically evaluated in view of the several uncertainties in the performance parameters, which are indices of gas turbine health. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design, enhance performance, increase system availability and make it cost effective. The analysis leads to the selection of the appropriate measurements to be used in the gas turbine health determination and to the identification of both the most critical measurements and parameters. Probabilistic analysis aims at unifying and improving the control and health monitoring of gas turbine aero-engines by increasing the quality and quantity of information available about the engine's health and performance.

  3. Probabilistic failure assessment with application to solid rocket motors

    NASA Technical Reports Server (NTRS)

    Jan, Darrell L.; Davidson, Barry D.; Moore, Nicholas R.

    1990-01-01

    A quantitative methodology is being developed for assessment of risk of failure of solid rocket motors. This probabilistic methodology employs best available engineering models and available information in a stochastic framework. The framework accounts for incomplete knowledge of governing parameters, intrinsic variability, and failure model specification error. Earlier case studies have been conducted on several failure modes of the Space Shuttle Main Engine. Work in progress on application of this probabilistic approach to large solid rocket boosters such as the Advanced Solid Rocket Motor for the Space Shuttle is described. Failure due to debonding has been selected as the first case study for large solid rocket motors (SRMs) since it accounts for a significant number of historical SRM failures. Impact of incomplete knowledge of governing parameters and failure model specification errors is expected to be important.

  4. A computational and neural model of momentary subjective well-being

    PubMed Central

    Rutledge, Robb B.; Skandali, Nikolina; Dayan, Peter; Dolan, Raymond J.

    2014-01-01

    The subjective well-being or happiness of individuals is an important metric for societies. Although happiness is influenced by life circumstances and population demographics such as wealth, we know little about how the cumulative influence of daily life events are aggregated into subjective feelings. Using computational modeling, we show that emotional reactivity in the form of momentary happiness in response to outcomes of a probabilistic reward task is explained not by current task earnings, but by the combined influence of recent reward expectations and prediction errors arising from those expectations. The robustness of this account was evident in a large-scale replication involving 18,420 participants. Using functional MRI, we show that the very same influences account for task-dependent striatal activity in a manner akin to the influences underpinning changes in happiness. PMID:25092308
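
    The account described above, in which momentary happiness tracks recent reward expectations and prediction errors rather than cumulative earnings, can be sketched as an exponentially discounted weighted sum (a hedged reconstruction: the weights, decay parameter, and exact functional form here are assumptions for illustration, not the paper's fitted values).

```python
# Sketch (form assumed): momentary happiness as a weighted sum of
# exponentially decaying recent expected values (EVs) and reward prediction
# errors (RPEs), so recent events count more than older ones.

def momentary_happiness(evs, rpes, w0=0.0, w_ev=0.5, w_rpe=0.7, gamma=0.6):
    t = len(evs)                          # events indexed 0..t-1, newest last
    decay = lambda j: gamma ** (t - 1 - j)
    return (w0
            + w_ev * sum(decay(j) * ev for j, ev in enumerate(evs))
            + w_rpe * sum(decay(j) * rpe for j, rpe in enumerate(rpes)))

# A recent positive RPE moves the estimate more than an older one of equal size:
recent = momentary_happiness(evs=[0.5, 0.5], rpes=[0.0, 1.0])
older  = momentary_happiness(evs=[0.5, 0.5], rpes=[1.0, 0.0])
```

    Because of the decay term, the same prediction error contributes more to momentary happiness when it is recent than when it is old.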

  5. Chronic cocaine but not chronic amphetamine use is associated with perseverative responding in humans

    PubMed Central

    Roiser, Jonathan P.; Robbins, Trevor W.; Sahakian, Barbara J.

    2013-01-01

    Rationale Chronic drug use has been associated with increased impulsivity and maladaptive behaviour, but the underlying mechanisms of this impairment remain unclear. We investigated the ability to adapt behaviour according to changes in reward contingencies, using a probabilistic reversal-learning task, in chronic drug users and controls. Materials and methods Five groups were compared: chronic amphetamine users (n = 30); chronic cocaine users (n = 27); chronic opiate users (n = 42); former drug users of psychostimulants and opiates (n = 26); and healthy non-drug-taking control volunteers (n = 25). Participants had to make a forced choice between two alternative stimuli on each trial to acquire a stimulus–reward association on the basis of degraded feedback and subsequently to reverse their responses when the reward contingencies changed. Results Chronic cocaine users demonstrated little behavioural change in response to the change in reward contingencies, as reflected by perseverative responding to the previously rewarded stimulus. Perseverative responding was observed in cocaine users regardless of whether they completed the reversal stage successfully. Task performance in chronic users of amphetamines and opiates, as well as in former drug users, was not measurably impaired. Conclusion Our findings provide convincing evidence for response perseveration in cocaine users during probabilistic reversal-learning. Pharmacological differences between amphetamine and cocaine, in particular their respective effects on the 5-HT system, may account for the divergent task performance between the two psychostimulant user groups. The inability to reverse responses according to changes in reinforcement contingencies may underlie the maladaptive behaviour patterns observed in chronic cocaine users but not in chronic users of amphetamines or opiates. PMID:18214445

  6. Sleep-Dependent Consolidation of Rewarded Behavior Is Diminished in Children with Attention Deficit Hyperactivity Disorder and a Comorbid Disorder of Social Behavior

    PubMed Central

    Wiesner, Christian D.; Molzow, Ina; Prehn-Kristensen, Alexander; Baving, Lioba

    2017-01-01

    Children suffering from attention-deficit hyperactivity disorder (ADHD) often also display impaired learning and memory. Previous research has documented aberrant reward processing in ADHD as well as impaired sleep-dependent consolidation of declarative memory. We investigated whether sleep also fosters the consolidation of behavior learned by probabilistic reward and whether ADHD patients with a comorbid disorder of social behavior show deficits in this memory domain, too. A group of 17 ADHD patients with comorbid disorders of social behavior aged 8–12 years and healthy controls matched for age, IQ, and handedness took part in the experiment. During the encoding task, children worked on a probabilistic learning task acquiring behavioral preferences for stimuli rewarded most often. After a 12-hr retention interval of either sleep at night or wakefulness during the day, a reversal task was presented where the contingencies were reversed. Consolidation of rewarded behavior is indicated by greater resistance to reversal learning. We found that healthy children consolidate rewarded behavior better during a night of sleep than during a day awake and that the sleep-dependent consolidation of rewarded behavior by trend correlates with non-REM sleep but not with REM sleep. In contrast, children with ADHD and comorbid disorders of social behavior do not show sleep-dependent consolidation of rewarded behavior. Moreover, their consolidation of rewarded behavior does not correlate with sleep. The results indicate that dysfunctional sleep in children suffering from ADHD and disorders of social behavior might be a crucial factor in the consolidation of behavior learned by reward. PMID:28228742

  7. Pathological Imitative Behavior and Response Preparation in Schizophrenia.

    PubMed

    Dankinas, Denisas; Melynyte, Sigita; Siurkute, Aldona; Dapsys, Kastytis

    2017-08-01

    Pathological imitative behavior (echopraxia) is occasionally observed in schizophrenia patients. However, only a severe form of echopraxia can be detected by direct observation. Therefore, our goal was to study a latent form of pathological imitative behavior in this disorder, which is indicated by an increase of imitative tendencies. In our study, 14 schizophrenia patients and 15 healthy subjects performed two tasks: (a) an imitative task, in which they had to copy a hand action seen on a screen; (b) a counter-imitative task, in which they had to make a different movement (which requires inhibition of the prepotent imitative tendency that is impaired in pathological imitative behavior). Imitative tendencies were assessed by an interference score - the difference between counter-imitative and imitative response parameters. We also studied response preparation in both groups by employing precueing probabilistic information. Our results revealed that schizophrenia patients were able to use probabilistic information to properly prepare not only imitative but also counter-imitative responses, the same as the healthy subjects did. Nevertheless, we detected increased prepotent imitative tendencies in schizophrenia patients, which indicates latent pathological imitative behavior in this disorder. The obtained results suggest that in schizophrenia, problems with pathological imitative behavior occur in the executive rather than the preparatory stage of response. Our findings can help to detect latent echopraxia in schizophrenia patients that cannot be revealed by direct observation. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Probabilistic reward- and punishment-based learning in opioid addiction: Experimental and computational data.

    PubMed

    Myers, Catherine E; Sheynin, Jony; Balsdon, Tarryn; Luzardo, Andre; Beck, Kevin D; Hogarth, Lee; Haber, Paul; Moustafa, Ahmed A

    2016-01-01

    Addiction is the continuation of a habit in spite of negative consequences. A vast literature gives evidence that this poor decision-making behavior in individuals addicted to drugs also generalizes to laboratory decision making tasks, suggesting that the impairment in decision-making is not limited to decisions about taking drugs. In the current experiment, opioid-addicted individuals and matched controls with no history of illicit drug use were administered a probabilistic classification task that embeds both reward-based and punishment-based learning trials, and a computational model of decision making was applied to understand the mechanisms describing individuals' performance on the task. Although behavioral results showed that opioid-addicted individuals performed as well as controls on both reward- and punishment-based learning, the modeling results suggested subtle differences in how decisions were made between the two groups. Specifically, the opioid-addicted group showed decreased tendency to repeat prior responses, meaning that they were more likely to "chase reward" when expectancies were violated, whereas controls were more likely to stick with a previously-successful response rule, despite occasional expectancy violations. This tendency to chase short-term reward, potentially at the expense of developing rules that maximize reward over the long term, may be a contributing factor to opioid addiction. Further work is indicated to better understand whether this tendency arises as a result of brain changes in the wake of continued opioid use/abuse, or might be a pre-existing factor that may contribute to risk for addiction. Copyright © 2015 Elsevier B.V. All rights reserved.
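
    The response patterns discussed above, sticking with a previously successful response versus switching after a violated expectancy, are commonly summarized as win-stay and lose-shift rates. A generic computation over raw trial data might look like this (variable names and the sample data are hypothetical, not the study's):

```python
# Illustrative win-stay / lose-shift summary: how often a participant repeats
# a choice after a rewarded trial, and switches after a non-rewarded trial.

def win_stay_lose_shift(choices, rewards):
    """choices: list of responses; rewards: list of 0/1 outcomes per trial."""
    win_stay = win_n = lose_shift = lose_n = 0
    for t in range(1, len(choices)):
        if rewards[t - 1]:                   # previous trial was rewarded
            win_n += 1
            win_stay += choices[t] == choices[t - 1]
        else:                                # previous trial was not rewarded
            lose_n += 1
            lose_shift += choices[t] != choices[t - 1]
    return win_stay / win_n, lose_shift / lose_n

ws, ls = win_stay_lose_shift(choices=["A", "A", "B", "B", "A"],
                             rewards=[1, 0, 1, 0, 1])
```

    In this toy sequence the participant always repeats after reward and always switches after non-reward, giving rates of 1.0 for both measures.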

  9. Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report

    NASA Technical Reports Server (NTRS)

    Valentine, Mark; Boyer, Roger; Cross, Bob; Hamlin, Teri; Roelant, Henk; Stewart, Mike; Bigler, Mark; Winter, Scott; Reistle, Bruce; Heydorn, Dick

    2009-01-01

    The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decisionmakers with the necessary information to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes nuclear power (both commercial and navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information to the group when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next-generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques.
The Analysis Section's tasks include teaching classes and instituting personnel qualification processes to enhance the professional abilities of our analysts as well as performing major probabilistic assessments used to support flight rationale and help establish program requirements. During 2008, the Analysis Group performed more than 70 assessments. Although all these assessments were important, some were instrumental in the decisionmaking processes for the Shuttle and Constellation Programs. Two of the more significant tasks were the Space Transportation System (STS)-122 Low Level Cutoff PRA for the SSP and the Orion Pad Abort One (PA-1) PRA for the CxP. These two activities, along with the numerous other tasks the Analysis Group performed in 2008, are summarized in this report. This report also highlights several ongoing and upcoming efforts to provide crucial statistical and probabilistic assessments, such as the Extravehicular Activity (EVA) PRA for the Hubble Space Telescope service mission and the first fully integrated PRAs for the CxP's Lunar Sortie and ISS missions.

  10. [The research protocol III. Study population].

    PubMed

    Arias-Gómez, Jesús; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe

    2016-01-01

    The study population is defined as a set of cases that is determined, limited, and accessible, that will constitute the subjects for the selection of the sample, and that must fulfill several characteristics and distinct criteria. The objectives of this manuscript are focused on specifying each one of the elements required to select the participants of a research project during the elaboration of the protocol, including the concepts of study population, sample, selection criteria and sampling methods. After delineating the study population, the researcher must specify the criteria with which each participant has to comply. The criteria that comprise the specific characteristics are called selection or eligibility criteria. These criteria are inclusion, exclusion and elimination criteria, and they delineate the eligible population. The sampling methods are divided into two large groups: 1) probabilistic or random sampling and 2) non-probabilistic sampling. The difference lies in the use of statistical methods to select the subjects. In every research project, it is necessary to establish at the beginning the specific number of participants to be included to achieve the objectives of the study. This number is the sample size, and it can be calculated or estimated with mathematical formulas and statistical software.
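
    As one concrete instance of the mathematical formulas mentioned above, a common textbook sample-size calculation for estimating a proportion can be sketched as follows (an illustration of the general idea, not the specific formula the authors prescribe): n = z^2 * p * (1 - p) / e^2, with z the normal quantile for the confidence level, p the expected proportion, and e the accepted margin of error.

```python
import math

# Textbook sample-size formula for estimating a proportion:
#   n = z^2 * p * (1 - p) / e^2
# z: normal quantile (1.96 for 95% confidence), p: expected proportion
# (0.5 is the worst case), e: margin of error. Round up to a whole subject.

def sample_size_proportion(z=1.96, p=0.5, e=0.05):
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

n = sample_size_proportion()  # worst-case p = 0.5, 95% confidence, 5% error
```

    With the worst-case assumption p = 0.5, the formula gives the familiar figure of 385 participants for a 5% margin of error at 95% confidence.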

  11. Working memory and reward association learning impairments in obesity.

    PubMed

    Coppin, Géraldine; Nolan-Poupart, Sarah; Jones-Gotman, Marilyn; Small, Dana M

    2014-12-01

    Obesity has been associated with impaired executive functions including working memory. Less explored is the influence of obesity on learning and memory. In the current study we assessed stimulus reward association learning, explicit learning and memory and working memory in healthy weight, overweight and obese individuals. Explicit learning and memory did not differ as a function of group. In contrast, working memory was significantly and similarly impaired in both overweight and obese individuals compared to the healthy weight group. In the first reward association learning task the obese, but not healthy weight or overweight participants consistently formed paradoxical preferences for a pattern associated with a negative outcome (fewer food rewards). To determine if the deficit was specific to food reward a second experiment was conducted using money. Consistent with Experiment 1, obese individuals selected the pattern associated with a negative outcome (fewer monetary rewards) more frequently than healthy weight individuals and thus failed to develop a significant preference for the most rewarded patterns as was observed in the healthy weight group. Finally, on a probabilistic learning task, obese compared to healthy weight individuals showed deficits in negative, but not positive outcome learning. Taken together, our results demonstrate deficits in working memory and stimulus reward learning in obesity and suggest that obese individuals are impaired in learning to avoid negative outcomes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. [Uncertainty characterization approaches for ecological risk assessment of polycyclic aromatic hydrocarbon in Taihu Lake].

    PubMed

    Guo, Guang-Hui; Wu, Feng-Chang; He, Hong-Ping; Feng, Cheng-Lian; Zhang, Rui-Qing; Li, Hui-Xian

    2012-04-01

    Probabilistic approaches, such as Monte Carlo Sampling (MCS) and Latin Hypercube Sampling (LHS), and non-probabilistic approaches, such as interval analysis, fuzzy set theory and variance propagation, were used to characterize uncertainties associated with risk assessment of sigma PAH8 in surface water of Taihu Lake. The results from MCS and LHS were represented by probability distributions of hazard quotients of sigma PAH8 in surface waters of Taihu Lake. The probabilistic distributions of the hazard quotient obtained from MCS and LHS indicated that the confidence intervals of the hazard quotient at the 90% confidence level were 0.00018-0.89 and 0.00017-0.92, with means of 0.37 and 0.35, respectively. In addition, the probabilities that the hazard quotients from MCS and LHS exceed the threshold of 1 were 9.71% and 9.68%, respectively. The sensitivity analysis suggested the toxicity data contributed the most to the resulting distribution of quotients. The hazard quotient of sigma PAH8 to aquatic organisms ranged from 0.00017 to 0.99 using interval analysis. The confidence interval at the 90% confidence level was (0.0015, 0.0163) calculated using fuzzy set theory, and (0.00016, 0.88) based on variance propagation. These results indicated that the ecological risk of sigma PAH8 to aquatic organisms was low. Each method has its own advantages and limitations because each is based on a different theory; therefore, the appropriate method should be selected on a case-by-case basis to quantify the effects of uncertainties on the ecological risk assessment. The approach based on probabilistic theory was selected as the most appropriate method to assess the risk of sigma PAH8 in surface water of Taihu Lake, providing an important scientific foundation for risk management and control of organic pollutants in water.
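
    The contrast between the two probabilistic approaches can be sketched in a few lines (the exposure and toxicity distributions below are invented placeholders, not the Taihu Lake data): Monte Carlo sampling draws each input independently, while Latin hypercube sampling takes exactly one draw from each equal-probability stratum of each input, giving more even coverage for the same sample count. Both yield a distribution of the hazard quotient HQ = exposure / toxicity and an exceedance probability P(HQ > 1).

```python
import random, statistics

random.seed(0)
N = 10_000

def hq(u_exp, u_tox):
    exposure = 0.1 + 1.9 * u_exp   # placeholder: uniform exposure on [0.1, 2.0]
    toxicity = 1.0 + 4.0 * u_tox   # placeholder: uniform toxicity on [1.0, 5.0]
    return exposure / toxicity

# Monte Carlo sampling: independent uniform draws for each variable.
mcs = [hq(random.random(), random.random()) for _ in range(N)]

# Latin hypercube sampling: one draw per equal-probability stratum,
# shuffled independently for each variable.
def lhs_column(n):
    col = [(i + random.random()) / n for i in range(n)]
    random.shuffle(col)
    return col

lhs = [hq(u1, u2) for u1, u2 in zip(lhs_column(N), lhs_column(N))]

mean_mcs, mean_lhs = statistics.mean(mcs), statistics.mean(lhs)
exceed_mcs = sum(q > 1 for q in mcs) / N   # P(HQ > 1)
exceed_lhs = sum(q > 1 for q in lhs) / N
```

    With enough samples the two estimators agree on the mean hazard quotient and the exceedance probability; LHS typically reaches a given precision with fewer samples.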

  13. Bayesian accounts of covert selective attention: A tutorial review.

    PubMed

    Vincent, Benjamin T

    2015-05-01

    Decision making and optimal observer models offer an important theoretical approach to the study of covert selective attention. While their probabilistic formulation allows quantitative comparison to human performance, the models can be complex and their insights are not always immediately apparent. Part 1 establishes the theoretical appeal of the Bayesian approach, and introduces the way in which probabilistic approaches can be applied to covert search paradigms. Part 2 presents novel formulations of Bayesian models of 4 important covert attention paradigms, illustrating optimal observer predictions over a range of experimental manipulations. Graphical model notation is used to present models in an accessible way and Supplementary Code is provided to help bridge the gap between model theory and practical implementation. Part 3 reviews a large body of empirical and modelling evidence showing that many experimental phenomena in the domain of covert selective attention are a set of by-products. These effects emerge as the result of observers conducting Bayesian inference with noisy sensory observations, prior expectations, and knowledge of the generative structure of the stimulus environment.
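The ingredients named here (noisy sensory observations, prior expectations, knowledge of the generative structure) can be made concrete with a minimal optimal-observer sketch for covert search: which of several locations contains the target, given one noisy sample per location? The signal and noise values are invented for illustration, not taken from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)
n_loc, signal, noise_sd = 4, 1.0, 2.0   # illustrative parameters

true_loc = rng.integers(n_loc)
x = rng.normal(0.0, noise_sd, n_loc)    # noisy observation at each location
x[true_loc] += signal                   # the target adds a known signal

# Bayes' rule with a uniform prior: terms constant across hypotheses cancel,
# leaving log-likelihoods proportional to (x_j * signal - signal^2 / 2).
log_like = (x * signal - signal**2 / 2) / noise_sd**2
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()            # P(target at location j | x)
choice = int(np.argmax(posterior))      # maximum a posteriori response
```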

  14. The influence of trial order on learning from reward vs. punishment in a probabilistic categorization task: experimental and computational analyses.

    PubMed

    Moustafa, Ahmed A; Gluck, Mark A; Herzallah, Mohammad M; Myers, Catherine E

    2015-01-01

    Previous research has shown that trial ordering affects cognitive performance, but this has not been tested using category-learning tasks that differentiate learning from reward and punishment. Here, we tested two groups of healthy young adults using a probabilistic category learning task of reward and punishment in which there are two types of trials (reward, punishment) and three possible outcomes: (1) positive feedback for correct responses in reward trials; (2) negative feedback for incorrect responses in punishment trials; and (3) no feedback for incorrect answers in reward trials and correct answers in punishment trials. Hence, trials without feedback are ambiguous, and may represent either successful avoidance of punishment or failure to obtain reward. In Experiment 1, the first group of subjects received an intermixed task in which reward and punishment trials were presented in the same block, as a standard baseline task. In Experiment 2, a second group completed the separated task, in which reward and punishment trials were presented in separate blocks. Additionally, in order to understand the mechanisms underlying performance in the experimental conditions, we fit individual data using a Q-learning model. Results from Experiment 1 show that subjects who completed the intermixed task paradoxically valued the no-feedback outcome as a reinforcer when it occurred on reinforcement-based trials, and as a punisher when it occurred on punishment-based trials. This is supported by patterns of empirical responding, where subjects showed more win-stay behavior following an explicit reward than following an omission of punishment, and more lose-shift behavior following an explicit punisher than following an omission of reward. In Experiment 2, results showed similar performance whether subjects received reward-based or punishment-based trials first. 
However, when the Q-learning model was applied to these data, there were differences between subjects in the reward-first and punishment-first conditions on the relative weighting of neutral feedback. Specifically, early training on reward-based trials led to omission of reward being treated as similar to punishment, but prior training on punishment-based trials led to omission of reward being treated more neutrally. This suggests that early training on one type of trials, specifically reward-based trials, can create a bias in how neutral feedback is processed, relative to those receiving early punishment-based training or training that mixes positive and negative outcomes.
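The delta-rule core of a Q-learning model for this kind of task fits in a few lines; the modelling question raised by these results is what value the ambiguous no-feedback outcome should carry, represented here as a free parameter `r_neutral`. The learning rate and outcome codes are illustrative assumptions, not fitted values.

```python
# Minimal Q-learning sketch for a two-trial-type (reward/punishment) task.
alpha, r_neutral = 0.3, 0.0   # learning rate; value of no feedback (assumed)

Q = {(trial, action): 0.0
     for trial in ("reward", "punish") for action in (0, 1)}

def update(trial, action, outcome):
    """outcome: +1 explicit reward, -1 explicit punishment, None = no feedback."""
    r = r_neutral if outcome is None else outcome
    Q[(trial, action)] += alpha * (r - Q[(trial, action)])

update("reward", 0, +1)     # rewarded correct response
update("punish", 1, None)   # omitted punishment, valued at r_neutral
```

Fitting `r_neutral` per participant is what lets such a model say whether an omitted reward is treated as punishment-like or as neutral.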

  15. Probabilistic Modeling and Visualization of the Flexibility in Morphable Models

    NASA Astrophysics Data System (ADS)

    Lüthi, M.; Albrecht, T.; Vetter, T.

Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
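The "remaining flexibility" computation amounts to Gaussian conditioning: fix the observed part of the shape vector and take the conditional mean and covariance of the rest. A toy sketch with an invented 6-dimensional PPCA-style model (dimensions and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a PPCA shape model: x ~ N(mu, C) with C = W W^T + s2*I.
d, q, s2 = 6, 2, 0.05
W = rng.normal(size=(d, q))
mu = np.zeros(d)
C = W @ W.T + s2 * np.eye(d)

obs = np.array([0, 1, 2])            # indices of the fixed (observed) part
hid = np.array([3, 4, 5])            # remaining flexible part
x_obs = np.array([0.5, -0.2, 1.0])   # the fixed shape values

# Gaussian conditioning: mean reconstruction and residual uncertainty.
K = C[np.ix_(hid, obs)] @ np.linalg.inv(C[np.ix_(obs, obs)])
mean_hid = mu[hid] + K @ (x_obs - mu[obs])
cov_hid = C[np.ix_(hid, hid)] - K @ C[np.ix_(obs, hid)]

# Pointwise uncertainty -- the quantity one would visualize on the shape.
flexibility = np.sqrt(np.diag(cov_hid))
```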

  16. bnstruct: an R package for Bayesian Network structure learning in the presence of missing data.

    PubMed

    Franzin, Alberto; Sambo, Francesco; Di Camillo, Barbara

    2017-04-15

    A Bayesian Network is a probabilistic graphical model that encodes probabilistic dependencies between a set of random variables. We introduce bnstruct, an open source R package to (i) learn the structure and the parameters of a Bayesian Network from data in the presence of missing values and (ii) perform reasoning and inference on the learned Bayesian Networks. To the best of our knowledge, there is no other open source software that provides methods for all of these tasks, particularly the manipulation of missing data, which is a common situation in practice. The software is implemented in R and C and is available on CRAN under a GPL licence. francesco.sambo@unipd.it. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  17. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    PubMed

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
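The normatively optimal policy for a task of this shape (only the final state matters, over a five-decision horizon) is computable by backward induction. The states, actions, and outcome probabilities below are invented for illustration; the structure, not the numbers, is the point.

```python
# Backward induction for a 5-step foraging sketch: survive (energy > 0)
# at the horizon.  Energy levels, actions, and probabilities are assumed.
STATES = range(0, 11)                       # energy levels 0..10
ACTIONS = {"safe":  [(1.0, -1)],            # (probability, energy change)
           "risky": [(0.5, +2), (0.5, -3)]}
HORIZON = 5

def clamp(e):
    return max(0, min(10, e))

# V[t][e] = survival probability from energy e at step t, acting optimally.
V = [[0.0] * 11 for _ in range(HORIZON + 1)]
for e in STATES:
    V[HORIZON][e] = 1.0 if e > 0 else 0.0

policy = {}
for t in reversed(range(HORIZON)):
    for e in STATES:
        def expected(a):
            return sum(p * V[t + 1][clamp(e + de)] for p, de in ACTIONS[a])
        best = max(ACTIONS, key=expected)
        policy[(t, e)] = best
        V[t][e] = expected(best)
```

Even this toy version shows the qualitative pattern: with low energy near the horizon the optimal policy gambles, while with a safe buffer it does not; a simple heuristic that ignores the horizon would miss that switch.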

  18. Learning classification with auxiliary probabilistic information

    PubMed Central

    Nguyen, Quang; Valizadegan, Hamed; Hauskrecht, Milos

    2012-01-01

Finding ways of incorporating auxiliary information or auxiliary data into the learning process has been the topic of active data mining and machine learning research in recent years. In this work we study and develop a new framework for the classification learning problem in which, in addition to class labels, the learner is provided with auxiliary (probabilistic) information that reflects how strongly the expert feels about the class label. This approach can be extremely useful for many practical classification tasks that rely on subjective label assessment and where the cost of acquiring additional auxiliary information is negligible when compared to the cost of the example analysis and labelling. We develop classification algorithms capable of using the auxiliary information to make the learning process more efficient in terms of the sample complexity. We demonstrate the benefit of the approach on a number of synthetic and real world data sets by comparing it to learning with class labels only. PMID:25309141
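One simple way to exploit such auxiliary probabilistic labels is to train directly against the expert's confidence rather than a hard 0/1 label: the cross-entropy gradient accepts soft targets unchanged. A self-contained sketch on synthetic data (all parameter values are illustrative, and this is a generic soft-label scheme, not the paper's specific algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: soft labels approximate the true class probability.
n, d = 200, 2
X = rng.normal(size=(n, d))
true_w = np.array([2.0, -1.0])
p_true = 1 / (1 + np.exp(-X @ true_w))
soft_y = np.clip(p_true + rng.normal(0, 0.05, n), 0, 1)   # expert confidence

# Logistic regression by gradient descent on cross-entropy with soft targets.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - soft_y) / n   # same gradient form as with hard labels
```

Because each soft label carries graded information about the class boundary, fewer examples are typically needed than with hard labels alone, which is the sample-complexity benefit the abstract describes.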

  19. ProMotE: an efficient algorithm for counting independent motifs in uncertain network topologies.

    PubMed

    Ren, Yuanfang; Sarkar, Aisharjya; Kahveci, Tamer

    2018-06-26

    Identifying motifs in biological networks is essential in uncovering key functions served by these networks. Finding non-overlapping motif instances is however a computationally challenging task. The fact that biological interactions are uncertain events further complicates the problem, as it makes the existence of an embedding of a given motif an uncertain event as well. In this paper, we develop a novel method, ProMotE (Probabilistic Motif Embedding), to count non-overlapping embeddings of a given motif in probabilistic networks. We utilize a polynomial model to capture the uncertainty. We develop three strategies to scale our algorithm to large networks. Our experiments demonstrate that our method scales to large networks in practical time with high accuracy where existing methods fail. Moreover, our experiments on cancer and degenerative disease networks show that our method helps in uncovering key functional characteristics of biological networks.

  20. CARES/Life Software for Designing More Reliable Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.

    1997-01-01

Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
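The statistical backbone of this kind of probabilistic brittle-material design is the two-parameter Weibull strength distribution. A minimal sketch with illustrative parameters (the modulus m and characteristic strength sigma_0 are assumed values, not CARES outputs):

```python
import math

def failure_probability(stress, m=10.0, sigma_0=400.0):
    """Two-parameter Weibull: P_f = 1 - exp(-(sigma/sigma_0)^m), stress in MPa."""
    return 1.0 - math.exp(-((stress / sigma_0) ** m))

# A low Weibull modulus (wide strength scatter) means substantial failure
# risk well below the characteristic strength:
pf_low_scatter  = failure_probability(300.0, m=20.0)   # tight scatter
pf_high_scatter = failure_probability(300.0, m=5.0)    # wide scatter
```

At the characteristic strength itself, P_f = 1 - e^(-1), about 63%, regardless of m; the modulus controls how quickly risk falls off below that point.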

  1. Probabilistic retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  2. Probabilistic Seeking Prediction in P2P VoD Systems

    NASA Astrophysics Data System (ADS)

    Wang, Weiwei; Xu, Tianyin; Gao, Yang; Lu, Sanglu

In P2P VoD streaming systems, user behavior modeling is critical to help optimise user experience as well as system throughput. However, it still remains a challenging task due to the dynamic characteristics of user viewing behavior. In this paper, we consider the problem of user seeking prediction, which is to predict the user's next seeking position so that the system can respond proactively. We present a novel method for solving this problem. In our method, frequent sequential pattern mining is first performed to extract abstract states that do not overlap and together cover the whole video file. After mapping the raw training dataset to state transitions according to the abstract states, we use a simple probabilistic contingency table to build the prediction model. We evaluate the method in an experiment on a synthetic P2P VoD dataset. The results demonstrate the effectiveness of our method.
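The contingency-table prediction step is the simplest part to sketch: count observed transitions between abstract states and predict the most frequent successor. The state names and training transitions below are invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical seek transitions between abstract video states.
transitions = [("s1", "s3"), ("s1", "s3"), ("s1", "s2"),
               ("s2", "s4"), ("s3", "s1"), ("s3", "s1")]

# Contingency table: current state -> counts of next states.
table = defaultdict(Counter)
for cur, nxt in transitions:
    table[cur][nxt] += 1

def predict(state):
    """Most frequently observed next state from `state`."""
    return table[state].most_common(1)[0][0]
```

Normalizing each row of the table by its total turns the counts into the transition probabilities the system can use to prefetch content proactively.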

  3. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Kurth, R. E.; Ho, H.

    1991-01-01

    The objective of this program is to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen posts and system ducting. The first approach will consist of using state of the art probabilistic methods to describe the individual loading conditions and combinations of these loading conditions to synthesize the composite load spectra simulation. The second approach will consist of developing coupled models for composite load spectra simulation which combine the deterministic models for composite load dynamic, acoustic, high pressure, and high rotational speed, etc., load simulation using statistically varying coefficients. These coefficients will then be determined using advanced probabilistic simulation methods with and without strategically selected experimental data.

  4. Ecohydrology of agroecosystems: probabilistic description of yield reduction risk under limited water availability

    NASA Astrophysics Data System (ADS)

    Vico, Giulia; Porporato, Amilcare

    2013-04-01

Supplemental irrigation represents one of the main strategies to mitigate the effects of climate variability and stabilize yields. Irrigated agriculture currently provides 40% of food production and its relevance is expected to further increase in the near future, in the face of the projected alterations of rainfall patterns and increase in food, fiber, and biofuel demand. Because of the significant investments and water requirements involved in irrigation, strategic choices are needed to preserve productivity and profitability, while maintaining a sustainable water management - a nontrivial task given the unpredictability of the rainfall forcing. To facilitate decision making under uncertainty, a widely applicable probabilistic framework is proposed. The occurrence of rainfall events and irrigation applications are linked probabilistically to crop development during the growing season and yields at harvest. Based on these linkages, the probability density function of yields and the corresponding probability density function of required irrigation volumes, as well as the probability density function of yields under the most common case of limited water availability, are obtained analytically as functions of irrigation strategy, climate, soil and crop parameters. The full probabilistic description of the frequency of occurrence of yields and water requirements is a crucial tool for decision making under uncertainty, e.g., via expected utility analysis. Furthermore, the knowledge of the probability density function of yield allows us to quantify the yield reduction hydrologic risk. Two risk indices are defined and quantified: the long-term risk index, suitable for long-term irrigation strategy assessment and investment planning, and the real-time risk index, providing a rigorous probabilistic quantification of the emergence of drought conditions during a single growing season in an agricultural setting.
Our approach employs relatively few parameters and is thus easily and broadly applicable to different crops and sites, under current and future climate scenarios. Hence, the proposed probabilistic framework provides a quantitative tool to assess the impact of irrigation strategy and water allocation on the risk of not meeting a certain target yield, thus guiding the optimal allocation of water resources for human and environmental needs.

  5. Delusional Ideation, Cognitive Processes and Crime Based Reasoning.

    PubMed

    Wilkinson, Dean J; Caulfield, Laura S

    2017-08-01

Probabilistic reasoning biases have been widely associated with levels of delusional belief ideation (Galbraith, Manktelow, & Morris, 2010; Lincoln, Ziegler, Mehl, & Rief, 2010; Speechley, Whitman, & Woodward, 2010; White & Mansell, 2009); however, little research has focused on biases occurring during everyday reasoning (Galbraith, Manktelow, & Morris, 2011) and moral and crime-based reasoning (Wilkinson, Caulfield, & Jones, 2014; Wilkinson, Jones, & Caulfield, 2011). 235 participants were recruited across four experiments exploring crime-based reasoning through different modalities and dual-processing tasks. Study one explored delusional ideation when completing a visually presented crime-based reasoning task. Study two explored the same task in an auditory presentation. Study three utilised a dual-task paradigm to explore modality and executive functioning. Study four extended this paradigm to the auditory modality. The results indicated that modality and delusional ideation have a significant effect on individuals' reasoning about violent and non-violent crime (p < .05), which could have implications for the presentation of evidence in applied settings such as the courtroom.

  6. Delusional Ideation, Cognitive Processes and Crime Based Reasoning

    PubMed Central

    Wilkinson, Dean J.; Caulfield, Laura S.

    2017-01-01

Probabilistic reasoning biases have been widely associated with levels of delusional belief ideation (Galbraith, Manktelow, & Morris, 2010; Lincoln, Ziegler, Mehl, & Rief, 2010; Speechley, Whitman, & Woodward, 2010; White & Mansell, 2009); however, little research has focused on biases occurring during everyday reasoning (Galbraith, Manktelow, & Morris, 2011) and moral and crime-based reasoning (Wilkinson, Caulfield, & Jones, 2014; Wilkinson, Jones, & Caulfield, 2011). 235 participants were recruited across four experiments exploring crime-based reasoning through different modalities and dual-processing tasks. Study one explored delusional ideation when completing a visually presented crime-based reasoning task. Study two explored the same task in an auditory presentation. Study three utilised a dual-task paradigm to explore modality and executive functioning. Study four extended this paradigm to the auditory modality. The results indicated that modality and delusional ideation have a significant effect on individuals' reasoning about violent and non-violent crime (p < .05), which could have implications for the presentation of evidence in applied settings such as the courtroom. PMID:28904598

  7. Prefrontal cortex dysfunction and 'Jumping to Conclusions': bias or deficit?

    PubMed

    Lunt, Laura; Bramham, Jessica; Morris, Robin G; Bullock, Peter R; Selway, Richard P; Xenitidis, Kiriakos; David, Anthony S

    2012-03-01

    The 'beads task' is used to measure the cognitive basis of delusions, namely the 'Jumping to Conclusions' (JTC) reasoning bias. However, it is not clear whether the task merely taps executive dysfunction - known to be impaired in patients with schizophrenia - such as planning and resistance to impulse. To study this, 19 individuals with neurosurgical excisions to the prefrontal cortex, 21 unmedicated adults with Attention Deficit Hyperactivity Disorder (ADHD), and 25 healthy controls completed two conditions of the beads task, in addition to tests of memory and executive function as well as control tests of probabilistic reasoning ability. The results indicated that the prefrontal lobe group (in particular, those with left-sided lesions) demonstrated a JTC bias relative to the ADHD and control groups. Further exploratory analyses indicated that JTC on the beads task was associated with poorer performance in certain executive domains. The results are discussed in terms of the executive demands of the beads task and possible implications for the model of psychotic delusions based on the JTC bias. ©2011 The British Psychological Society.
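For the beads task, the Bayesian benchmark against which "jumping to conclusions" is judged is a two-line likelihood-ratio calculation: beads are drawn from one of two jars with complementary colour proportions (85/15 is a common choice, assumed here).

```python
def posterior_jar_a(draws, p=0.85):
    """P(jar A | draws) with a uniform prior; bead 'a' favours jar A."""
    like_a = like_b = 1.0
    for bead in draws:
        like_a *= p if bead == "a" else 1 - p
        like_b *= (1 - p) if bead == "a" else p
    return like_a / (like_a + like_b)
```

After two same-coloured beads the posterior already exceeds 0.96, so "deciding after one or two beads" is not far from the normative answer with these proportions; what makes the JTC bias diagnostic is requesting fewer draws than an individual's own uncertainty warrants, which is why the executive demands of the task matter for interpretation.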

  8. Using Structured Knowledge Representation for Context-Sensitive Probabilistic Modeling

    DTIC Science & Technology

    2008-01-01

[Only reference-list fragments and report-documentation form fields survive in this record: a Morgan Kaufmann title from 1988; [24] J. Pearl, Causality: Models, Reasoning, and Inference, Cambridge University Press, 2000; and [25] a citation to Piaget's theory.]

  9. Future Modelling and Simulation Challenges (Defis futurs pour la modelisation et la simulation)

    DTIC Science & Technology

    2002-11-01

[Only snippet fragments survive in this record: a caption, "Figure 2: Location of the simulation center within the MEC"; "...This logic can be probabilistic (branching is randomised, which is useful for modelling error), tactical (a branch goes to the task with the..."; and "...language and a collection of simulation tools that can be used to create human and team behaviour models to meet users' needs."]

  10. Probabilistic Fretting Fatigue Life Prediction of Ti-6Al-4V (PREPRINT)

    DTIC Science & Technology

    2010-01-01

[Only report-documentation fragments survive in this record: Patrick J. Golden (AFRL/RXLM), Wright-Patterson AFB, OH 45433, USA; Harry R. Millwater and Xiaobin Yang, University of Texas at San Antonio, San Antonio, TX 78249, USA; January 2010.]

  11. Towards a General-Purpose Belief Maintenance System.

    DTIC Science & Technology

    1987-04-01

...reason using normal two- or three-valued logic or using probabilistic values to represent partial belief. The design of the Belief Maintenance System is...as simply a generalization of Truth Maintenance Systems, whose possible reasoning tasks are a superset of those for a TMS. ...become support links in that they provide partial evidence in favor of a node. The basic design consists of three parts: (1) the conceptual control...

  12. Using Response Times for Item Selection in Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2008-01-01

    Response times on items can be used to improve item selection in adaptive testing provided that a probabilistic model for their distribution is available. In this research, the author used a hierarchical modeling framework with separate first-level models for the responses and response times and a second-level model for the distribution of the…

  13. A Selective Role for Dopamine in Learning to Maximize Reward But Not to Minimize Effort: Evidence from Patients with Parkinson's Disease.

    PubMed

    Skvortsova, Vasilisa; Degos, Bertrand; Welter, Marie-Laure; Vidailhet, Marie; Pessiglione, Mathias

    2017-06-21

    Instrumental learning is a fundamental process through which agents optimize their choices, taking into account various dimensions of available options such as the possible reward or punishment outcomes and the costs associated with potential actions. Although the implication of dopamine in learning from choice outcomes is well established, less is known about its role in learning the action costs such as effort. Here, we tested the ability of patients with Parkinson's disease (PD) to maximize monetary rewards and minimize physical efforts in a probabilistic instrumental learning task. The implication of dopamine was assessed by comparing performance ON and OFF prodopaminergic medication. In a first sample of PD patients ( n = 15), we observed that reward learning, but not effort learning, was selectively impaired in the absence of treatment, with a significant interaction between learning condition (reward vs effort) and medication status (OFF vs ON). These results were replicated in a second, independent sample of PD patients ( n = 20) using a simplified version of the task. According to Bayesian model selection, the best account for medication effects in both studies was a specific amplification of reward magnitude in a Q-learning algorithm. These results suggest that learning to avoid physical effort is independent from dopaminergic circuits and strengthen the general idea that dopaminergic signaling amplifies the effects of reward expectation or obtainment on instrumental behavior. SIGNIFICANCE STATEMENT Theoretically, maximizing reward and minimizing effort could involve the same computations and therefore rely on the same brain circuits. Here, we tested whether dopamine, a key component of reward-related circuitry, is also implicated in effort learning. We found that patients suffering from dopamine depletion due to Parkinson's disease were selectively impaired in reward learning, but not effort learning. 
Moreover, anti-parkinsonian medication restored the ability to maximize reward, but had no effect on effort minimization. This dissociation suggests that the brain has evolved separate, domain-specific systems for instrumental learning. These results help to disambiguate the motivational role of prodopaminergic medications: they amplify the impact of reward without affecting the integration of effort cost. Copyright © 2017 the authors 0270-6474/17/376087-11$15.00/0.

  14. The strategic control of prospective memory monitoring in response to complex and probabilistic contextual cues.

    PubMed

    Bugg, Julie M; Ball, B Hunter

    2017-07-01

Participants use simple contextual cues to reduce deployment of costly monitoring processes in contexts in which prospective memory (PM) targets are not expected. This study investigated whether this strategic monitoring pattern is observed in response to complex and probabilistic contextual cues. Participants performed a lexical decision task in which words or nonwords were presented in upper or lower locations on screen. The specific condition was informed that PM targets ("tor" syllable) would occur only in words in the upper location, whereas the nonspecific condition was informed that targets could occur in any location or word type. Context was blocked such that word type and location changed every 8 trials. In Experiment 1, the specific condition used the complex contextual cue to reduce monitoring in unexpected contexts relative to the nonspecific condition. This pattern was largely not evidenced when the complex contextual cue was probabilistic (Experiment 2). Experiment 3 confirmed that strategic monitoring is observed for a complex cue that is deterministic, but not one that is probabilistic. Additionally, Experiments 1 and 3 demonstrated a disadvantage associated with strategic monitoring: namely, that the specific condition was less likely to respond to a PM target in an unexpected context. Experiment 3 provided evidence that this disadvantage is attributable to impaired noticing of the target. The novel findings suggest that use of a complex contextual cue per se is not a boundary condition for the strategic, context-specific allocation of monitoring processes to support prospective remembering; however, strategic monitoring is constrained by the predictive utility of the complex contextual cue.

  15. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Sequentially, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
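The Hermite-expansion machinery can be sketched for a scalar toy case: expand a model output f(X), X ~ N(0, 1), in probabilists' Hermite polynomials, with coefficients computed by Gauss quadrature. Here f, the expansion order, and the quadrature size are illustrative stand-ins for a real hydrological model response.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

f = np.exp                               # toy model response, for illustration
order, nq = 12, 40
x, w = He.hermegauss(nq)                 # nodes/weights for weight exp(-x^2/2)

def coeff(n):
    """c_n = E[f(X) He_n(X)] / n!, via Gauss-HermiteE quadrature."""
    he_n = He.hermeval(x, np.eye(n + 1)[n])          # He_n at the nodes
    return (w * f(x) * he_n).sum() / (math.sqrt(2 * math.pi) * math.factorial(n))

coeffs = np.array([coeff(n) for n in range(order + 1)])

def surrogate(z):
    """Cheap polynomial proxy for f, usable in place of the full model."""
    return He.hermeval(z, coeffs)

mean_estimate = coeffs[0]                # equals E[f(X)] by orthogonality
```

Once the coefficients are in hand, moments and forecast distributions come from the polynomial surrogate at negligible cost, which is the computational advantage over MCMC claimed above.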

  16. A computational framework to empower probabilistic protein design

    PubMed Central

    Fromer, Menachem; Yanover, Chen

    2008-01-01

    Motivation: The task of engineering a protein to perform a target biological function is known as protein design. A commonly used paradigm casts this functional design problem as a structural one, assuming a fixed backbone. In probabilistic protein design, positional amino acid probabilities are used to create a random library of sequences to be simultaneously screened for biological activity. Clearly, certain choices of probability distributions will be more successful in yielding functional sequences. However, since the number of sequences is exponential in protein length, computational optimization of the distribution is difficult. Results: In this paper, we develop a computational framework for probabilistic protein design following the structural paradigm. We formulate the distribution of sequences for a structure using the Boltzmann distribution over their free energies. The corresponding probabilistic graphical model is constructed, and we apply belief propagation (BP) to calculate marginal amino acid probabilities. We test this method on a large structural dataset and demonstrate the superiority of BP over previous methods. Nevertheless, since the results obtained by BP are far from optimal, we thoroughly assess the paradigm using high-quality experimental data. We demonstrate that, for small scale sub-problems, BP attains identical results to those produced by exact inference on the paradigmatic model. However, quantitative analysis shows that the distributions predicted significantly differ from the experimental data. These findings, along with the excellent performance we observed using BP on the smaller problems, suggest potential shortcomings of the paradigm. We conclude with a discussion of how it may be improved in the future. Contact: fromer@cs.huji.ac.il PMID:18586717

  17. Probability Theory Plus Noise: Descriptive Estimation and Inferential Judgment.

    PubMed

    Costello, Fintan; Watts, Paul

    2018-01-01

    We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but it is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. This random noise has an anti-regressive effect in inferential judgment, however. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. This model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining systematic biases away from normatively correct probabilities seen in other tasks. Copyright © 2018 Cognitive Science Society, Inc.
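
    The regressive effect described above can be simulated under the simplest reading of a probability-plus-noise model, in which each sampled memory "trace" of the event is misread with some probability d. The simulation below is an illustrative assumption-laden sketch, not the authors' exact formulation.

```python
import random

def noisy_estimate(p, d, n_samples=10000, seed=0):
    """Mean descriptive probability estimate when each of n_samples
    sampled traces of the event is read correctly with probability
    1 - d and flipped with probability d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        occurred = rng.random() < p
        if rng.random() < d:        # random read error flips the trace
            occurred = not occurred
        hits += occurred
    return hits / n_samples

# Analytically E[estimate] = (1 - 2d) * p + d: the noise pulls
# descriptive estimates regressively toward 0.5.
est = noisy_estimate(0.9, 0.1)      # lands below the true 0.9
```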

  18. Irrelevant reward and selection histories have different influences on task-relevant attentional selection.

    PubMed

    MacLean, Mary H; Giesbrecht, Barry

    2015-07-01

    Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.

  19. A Framework for Probabilistic Evaluation of Interval Management Tolerance in the Terminal Radar Control Area

    NASA Technical Reports Server (NTRS)

    Herencia-Zapana, Heber; Hagen, George E.; Neogi, Natasha

    2012-01-01

    Projections of future traffic in the national airspace show that most of the hub airports and their attendant airspace will need to undergo significant redevelopment and redesign in order to accommodate any significant increase in traffic volume. Even though closely spaced parallel approaches increase throughput into a given airport, controller workload in oversubscribed metroplexes is further taxed by these approaches that require stringent monitoring in a saturated environment. The interval management (IM) concept in the TRACON area is designed to shift some of the operational burden from the control tower to the flight deck, placing the flight crew in charge of implementing the required speed changes to maintain a relative spacing interval. The interval management tolerance is a measure of the allowable deviation from the desired spacing interval for the IM aircraft (and its target aircraft). For this complex task, Formal Methods can help to ensure better design and system implementation. In this paper, we propose a probabilistic framework to quantify the uncertainty and performance associated with the major components of the IM tolerance. The analytical basis for this framework may be used to formalize both correctness and probabilistic system safety claims in a modular fashion at the algorithmic level in a way compatible with several Formal Methods tools.

  20. The ticking time bomb: Using eye-tracking methodology to capture attentional processing during gradual time constraints.

    PubMed

    Franco-Watkins, Ana M; Davis, Matthew E; Johnson, Joseph G

    2016-11-01

    Many decisions are made under suboptimal circumstances, such as time constraints. We examined how different experiences of time constraints affected decision strategies on a probabilistic inference task and whether individual differences in working memory accounted for complex strategy use across different levels of time. To examine information search and attentional processing, we used an interactive eye-tracking paradigm where task information was occluded and only revealed by an eye fixation to a given cell. Our results indicate that although participants change search strategies during the most restricted times, the occurrence of the shift in strategies depends both on how the constraints are applied as well as individual differences in working memory. This suggests that, in situations that require making decisions under time constraints, one can influence performance by being sensitive to working memory and, potentially, by acclimating people to the task time gradually.

  1. [Exposure to whole body vibrations in workers moving heavy items by mechanical vehicles in the warehouse of a large retail outlet].

    PubMed

    Siciliano, E; Rossi, A; Nori, L

    2007-01-01

    Efficient warehouse management and item transportation are of fundamental importance in the commercial outlet under examination. Whole-body vibrations were measured in various types of machines, some of which have not yet been widely studied, such as the electrical pallet truck. In some tasks (forklift drivers) vibrations propagate through the driving seat, whereas in other tasks (electrical pallet trucks, stackers), operated in a standing posture, vibrations propagate through the lower limbs. Results are provided for homogeneous job tasks. In particular conditions, the action level of the Italian national (and European) regulations on occupational exposure to WBV may be exceeded. The authors propose a simple system of probabilistic classification of the risk of exposure to whole-body vibrations, based on the respective areas of the distribution that lie within the three risk classes.

  2. Design flood estimation in ungauged basins: probabilistic extension of the design-storm concept

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2016-04-01

    Design flood estimation in ungauged basins is an important hydrological task, which is in engineering practice typically solved with the design storm concept. However, neglecting the uncertainty in the hydrological response of the catchment through the assumption of average-recurrence-interval (ARI) neutrality between rainfall and runoff can lead to flawed design flood estimates. Additionally, selecting a single critical rainfall duration neglects the contribution of other rainfall durations to the probability of extreme flood events. In this study, the design flood problem is approached with concepts from structural reliability that enable a consistent treatment of multiple uncertainties in estimating the design flood. The uncertainties of key model parameters are represented probabilistically, and the First-Order Reliability Method (FORM) is used to compute the flood exceedance probability. As an important by-product, the FORM analysis provides the most likely parameter combination to lead to a flood with a certain exceedance probability; i.e., it enables one to find representative scenarios for, e.g., a 100-year or a 1000-year flood. Possible different rainfall durations are incorporated by formulating the event of a given design flood as a series system. The method is directly applicable in practice, since for the description of the rainfall depth-duration characteristics, the same inputs as for the classical design storm methods are needed, which are commonly provided by meteorological services. The proposed methodology is applied to a case study of the Trauchgauer Ach catchment in Bavaria, where SCS Curve Number (CN) and unit hydrograph models are used for modeling the hydrological process. The results indicate, in accordance with past experience, that the traditional design storm concept underestimates design floods.
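
    The FORM computation at the heart of the approach has a closed form in the simplest case, a linear limit state in standard normal space: the reliability index is the distance from the origin to the failure surface, and the design point is the most likely failure scenario. The two-factor flood example below is hypothetical.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def form_linear(a, b):
    """FORM for a linear limit state g(u) = b - a.u in standard normal
    space: reliability index beta = b / ||a||, exceedance probability
    Phi(-beta), and the design point beta * a / ||a||."""
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    beta = b / norm_a
    p_exceed = norm_cdf(-beta)
    design_point = [beta * ai / norm_a for ai in a]
    return beta, p_exceed, design_point

# Hypothetical example: standardized rainfall-depth and runoff factors,
# flood exceedance when 3*u1 + 4*u2 > 15.
beta, p_f, u_star = form_linear(a=[3.0, 4.0], b=15.0)
```

    Here beta = 3, roughly a 0.13% exceedance probability; nonlinear limit states, as in the catchment model, require an iterative search for the design point rather than this closed form.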

  3. Judgment under uncertainty; a probabilistic evaluation framework for decision-making about sanitation systems in low-income countries.

    PubMed

    Malekpour, Shirin; Langeveld, Jeroen; Letema, Sammy; Clemens, François; van Lier, Jules B

    2013-03-30

    This paper introduces the probabilistic evaluation framework to enable transparent and objective decision-making in technology selection for sanitation solutions in low-income countries. The probabilistic framework recognizes the often poor quality of the available data for evaluations. Within this framework, the evaluations are done based on the probabilities that the expected outcomes occur in practice, considering the uncertainties in evaluation parameters. Consequently, the outcome of evaluations is not a single point estimate; rather, there is a range of possible outcomes. A first trial application of this framework for evaluation of sanitation options in the Nyalenda settlement in Kisumu, Kenya, showed how the range of values that an evaluation parameter may take in practice would influence the evaluation outcomes. In addition, as the probabilistic evaluation requires various site-specific data, sensitivity analysis was performed to determine the influence of the quality of each data set on the evaluation outcomes. Based on that, data collection activities could be (re)directed, in a trade-off between the required investments in those activities and the resolution of the decisions that are to be made. Copyright © 2013 Elsevier Ltd. All rights reserved.
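
    The probabilistic evaluation idea, replacing a point estimate with the probability that an option meets its target under uncertain inputs, can be sketched with a Monte Carlo loop. All distributions and figures below are hypothetical, not data from the Kisumu case study.

```python
import random

def prob_target_met(n_draws=20000, seed=1):
    """Probability that a sanitation option's capacity covers demand,
    given uncertain per-capita flow and served population
    (all figures hypothetical)."""
    rng = random.Random(seed)
    capacity = 650.0                                  # design capacity, m3/day
    successes = 0
    for _ in range(n_draws):
        per_capita_flow = rng.uniform(40.0, 80.0)     # L/person/day, uncertain
        population = rng.gauss(10000.0, 1000.0)       # served population, uncertain
        demand = per_capita_flow * population / 1000.0  # m3/day
        successes += demand <= capacity
    return successes / n_draws

p = prob_target_met()   # a probability, not a yes/no verdict
```

    A decision-maker then compares options by these probabilities, and a sensitivity analysis over the input distributions shows which data set most deserves further collection effort.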

  4. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.

  5. "iSS-Hyb-mRMR": Identification of splicing sites using hybrid space of pseudo trinucleotide and pseudo tetranucleotide composition.

    PubMed

    Iqbal, Muhammad; Hayat, Maqsood

    2016-05-01

    Gene splicing is a vital source of protein diversity. Precise removal of introns and joining of exons is a prominent task in eukaryotic gene expression, as exons are usually interrupted by introns. Identification of splicing sites through experimental techniques is a complicated and time-consuming task. With the avalanche of genome sequences generated in the post-genomic age, it remains a complicated and challenging task to develop an automatic, robust and reliable computational method for fast and effective identification of splicing sites. In this study, a hybrid model "iSS-Hyb-mRMR" is proposed for quick and accurate identification of splicing sites. Two sample-representation methods, namely pseudo trinucleotide composition (PseTNC) and pseudo tetranucleotide composition (PseTetraNC), were used to extract numerical descriptors from DNA sequences. A hybrid model was developed by concatenating PseTNC and PseTetraNC. In order to select highly discriminative features, the minimum redundancy maximum relevance algorithm was applied to the hybrid feature space. The performance of these feature representation methods was tested using various classification algorithms, including K-nearest neighbor, probabilistic neural network, general regression neural network, and fitting network. The jackknife test was used for evaluation of performance on two benchmark datasets, S1 and S2, respectively. The predictor proposed in the current study achieved an accuracy of 93.26%, sensitivity of 88.77%, and specificity of 97.78% for S1, and an accuracy of 94.12%, sensitivity of 87.14%, and specificity of 98.64% for S2. The performance of the proposed model is higher than that of existing methods in the literature so far, and it will be fruitful for research on the mechanism of RNA splicing and related areas. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
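
    The minimum-redundancy maximum-relevance (mRMR) step can be sketched with mutual information on discrete toy features, using the common difference form (relevance minus mean redundancy). The data below are illustrative stand-ins, not PseTNC/PseTetraNC descriptors.

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Mutual information (bits) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def mrmr_select(features, labels, k):
    """Greedy mRMR: pick the feature maximizing MI(f, labels) minus its
    mean MI with the features already chosen."""
    chosen, remaining = [], list(range(len(features)))
    while remaining and len(chosen) < k:
        def score(i):
            rel = mutual_info(features[i], labels)
            red = (sum(mutual_info(features[i], features[j]) for j in chosen)
                   / len(chosen)) if chosen else 0.0
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = [
    [0, 0, 0, 1, 0, 1, 1, 1],   # relevant
    [0, 0, 0, 1, 0, 1, 1, 1],   # identical, fully redundant copy
    [0, 0, 1, 0, 1, 1, 1, 0],   # equally relevant, independent of the first
]
picked = mrmr_select(features, labels, k=2)
```

    The redundancy penalty makes the greedy search skip the duplicate feature in favor of the independent one, which is the point of mRMR over plain relevance ranking.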

  6. Domain-general sequence learning deficit in specific language impairment.

    PubMed

    Lukács, Agnes; Kemény, Ferenc

    2014-05-01

    Grammar-specific accounts of specific language impairment (SLI) have been challenged by recent claims that language problems are a consequence of impairments in domain-general mechanisms of learning that also play a key role in the process of language acquisition. Our studies were designed to test the generality and nature of this learning deficit by focusing on both sequential and nonsequential, and on verbal and nonverbal, domains. Twenty-nine children with SLI were compared with age-matched typically developing (TD) control children using (a) a serial reaction time task (SRT), testing the learning of motor sequences; (b) an artificial grammar learning (AGL) task, testing the extraction of regularities from auditory sequences; and (c) a weather prediction task (WP), testing probabilistic category learning in a nonsequential task. For the 2 sequence learning tasks, a significantly smaller proportion of children showed evidence of learning in the SLI than in the TD group (χ2 tests, p < .001 for the SRT task, p < .05 for the AGL task), whereas the proportion of learners on the WP task was the same in the 2 groups. The level of learning for SLI learners was comparable with that of TD children on all tasks (with great individual variation). Taken together, these findings suggest that domain-general processes of implicit sequence learning tend to be impaired in SLI. Further research is needed to clarify the relationship of deficits in implicit learning and language.

  7. Alcohol Increases Delay and Probability Discounting of Condom-Protected Sex: A Novel Vector for Alcohol-Related HIV Transmission.

    PubMed

    Johnson, Patrick S; Sweeney, Mary M; Herrmann, Evan S; Johnson, Matthew W

    2016-06-01

    Alcohol use, especially at binge levels, is associated with sexual HIV risk behavior, but the mechanisms through which alcohol increases sexual risk taking are not well-examined. Delay discounting, that is, devaluation of future consequences as a function of delay to their occurrence, has been implicated in a variety of problem behaviors, including risky sexual behavior. Probability discounting is studied with a similar framework as delay discounting, but is a distinct process in which a consequence is devalued because it is uncertain or probabilistic. Twenty-three nondependent alcohol users (13 male, 10 female; mean age = 25.3 years) orally consumed alcohol (1 g/kg) or placebo in 2 separate experimental sessions. During sessions, participants completed tasks examining delay and probability discounting of hypothetical condom-protected sex (Sexual Delay Discounting Task, Sexual Probability Discounting Task) and of hypothetical and real money. Alcohol decreased the likelihood that participants would wait to have condom-protected sex versus having immediate, unprotected sex. Alcohol also decreased the likelihood that participants would use an immediately available condom given a specified level of sexually transmitted infection (STI) risk. Alcohol did not affect delay discounting of money, but it did increase participants' preferences for larger, probabilistic monetary rewards over smaller, certain rewards. Acute, binge-level alcohol intoxication may increase sexual HIV risk by decreasing willingness to delay sex in order to acquire a condom in situations where one is not immediately available, and by decreasing sensitivity to perceived risk of STI contraction. These findings suggest that delay and probability discounting are critical, but heretofore unrecognized, processes that may mediate the relations between alcohol use and HIV risk. Copyright © 2016 by the Research Society on Alcoholism.
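
    Probability discounting of the kind measured here is commonly modeled hyperbolically in the odds against the outcome. A small sketch, with hypothetical discounting parameters standing in for the two conditions:

```python
def discounted_value(amount, p, h):
    """Hyperbolic probability discounting: subjective value of a
    probabilistic outcome falls with the odds against it,
    theta = (1 - p) / p, scaled by a person-specific parameter h."""
    theta = (1.0 - p) / p
    return amount / (1.0 + h * theta)

# Hypothetical illustration: a smaller h (shallower discounting of
# uncertain outcomes) raises the subjective value of a risky reward,
# the direction of the effect reported under alcohol.
sober_value = discounted_value(100.0, p=0.5, h=2.0)   # 100 / 3
intox_value = discounted_value(100.0, p=0.5, h=0.5)   # 100 / 1.5
```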

  8. Probabilistic classification learning with corrective feedback is associated with in vivo striatal dopamine release in the ventral striatum, while learning without feedback is not

    PubMed Central

    Wilkinson, Leonora; Tai, Yen Foung; Lin, Chia Shu; Lagnado, David Albert; Brooks, David James; Piccini, Paola; Jahanshahi, Marjan

    2014-01-01

    The basal ganglia (BG) mediate certain types of procedural learning, such as probabilistic classification learning on the ‘weather prediction task’ (WPT). Patients with Parkinson's disease (PD), who have BG dysfunction, are impaired at WPT-learning, but it remains unclear what component of the WPT is important for learning to occur. We tested the hypothesis that learning through processing of corrective feedback is the essential component and is associated with release of striatal dopamine. We employed two WPT paradigms, either involving learning via processing of corrective feedback (FB) or in a paired associate manner (PA). To test the prediction that learning on the FB but not PA paradigm would be associated with dopamine release in the striatum, we used serial 11C-raclopride (RAC) positron emission tomography (PET), to investigate striatal dopamine release during FB and PA WPT-learning in healthy individuals. Two groups, FB, (n = 7) and PA (n = 8), underwent RAC PET twice, once while performing the WPT and once during a control task. Based on a region-of-interest approach, striatal RAC-binding potentials reduced by 13–17% in the right ventral striatum when performing the FB compared to control task, indicating release of synaptic dopamine. In contrast, right ventral striatal RAC binding non-significantly increased by 9% during the PA task. While differences between the FB and PA versions of the WPT in effort and decision-making are also relevant, we conclude striatal dopamine is released during FB-based WPT-learning, implicating the striatum and its dopamine connections in mediating learning with FB. PMID:24777947

  9. Quantitative analysis of task selection for brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Llera, Alberto; Gómez, Vicenç; Kappen, Hilbert J.

    2014-10-01

    Objective. To assess quantitatively the impact of task selection in the performance of brain-computer interfaces (BCI). Approach. We consider the task-pairs derived from multi-class BCI imagery movement tasks in three different datasets. We analyze for the first time the benefits of task selection on a large-scale basis (109 users) and evaluate the possibility of transferring task-pair information across days for a given subject. Main results. Selecting the subject-dependent optimal task-pair among three different imagery movement tasks results in approximately 20% potential increase in the number of users that can be expected to control a binary BCI. The improvement is observed with respect to the best task-pair fixed across subjects. The best task-pair selected for each subject individually during a first day of recordings is generally a good task-pair in subsequent days. In general, task learning from the user side has a positive influence in the generalization of the optimal task-pair, but special attention should be given to inexperienced subjects. Significance. These results add significant evidence to existing literature that advocates task selection as a necessary step towards usable BCIs. This contribution motivates further research focused on deriving adaptive methods for task selection on larger sets of mental tasks in practical online scenarios.

  10. NESTOR: A Computer-Based Medical Diagnostic Aid That Integrates Causal and Probabilistic Knowledge.

    DTIC Science & Technology

    1984-11-01

    ... individual conditional probabilities between one cause node and its effect node, but less common to know a joint conditional probability between a ... (OCR fragment). Author: Gregory F. Cooper. Contract: ONR N00014-81-K-0004. Performing organization: Department of Computer Science, Stanford University, Stanford, CA 94305, USA.

  11. DYT1 dystonia increases risk taking in humans.

    PubMed

    Arkadir, David; Radulescu, Angela; Raymond, Deborah; Lubarr, Naomi; Bressman, Susan B; Mazzoni, Pietro; Niv, Yael

    2016-06-01

    It has been difficult to link synaptic modification to overt behavioral changes. Rodent models of DYT1 dystonia, a motor disorder caused by a single gene mutation, demonstrate increased long-term potentiation and decreased long-term depression in corticostriatal synapses. Computationally, such asymmetric learning predicts risk taking in probabilistic tasks. Here we demonstrate abnormal risk taking in DYT1 dystonia patients, which is correlated with disease severity, thereby supporting striatal plasticity in shaping choice behavior in humans.
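
    The computational claim, that asymmetric synaptic plasticity predicts risk taking, can be sketched with a reinforcement-learning rule that updates faster on positive than on negative prediction errors. Parameters and the gamble are hypothetical.

```python
import random

def learned_value(alpha_pos, alpha_neg, p_win=0.5, win=1.0, loss=-1.0,
                  n_trials=4000, seed=0):
    """Average value learned for a 50/50 gamble when positive and
    negative prediction errors update at different rates -- a simple
    stand-in for enhanced LTP / reduced LTD at corticostriatal synapses."""
    rng = random.Random(seed)
    q, tail = 0.0, []
    for t in range(n_trials):
        outcome = win if rng.random() < p_win else loss
        delta = outcome - q
        q += (alpha_pos if delta > 0 else alpha_neg) * delta
        if t >= n_trials // 2:          # average after burn-in
            tail.append(q)
    return sum(tail) / len(tail)

balanced = learned_value(alpha_pos=0.1, alpha_neg=0.1)     # near the true EV of 0
asymmetric = learned_value(alpha_pos=0.1, alpha_neg=0.02)  # inflated value
```

    With faster learning from gains than from losses, the zero-expected-value gamble acquires a positive learned value, i.e., the agent becomes risk seeking.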

  12. Neurodynamical model of collective brain

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1992-01-01

    A dynamical system which mimics collective purposeful activities of a set of units of intelligence is introduced and discussed. A global control of the unit activities is replaced by the probabilistic correlations between them. These correlations are learned during a long term period of performing collective tasks, and are stored in the synaptic interconnections. The model is represented by a system of ordinary differential equations with terminal attractors and repellers, and does not contain any man-made digital devices.

  13. Learning to Obtain Reward, but Not Avoid Punishment, Is Affected by Presence of PTSD Symptoms in Male Veterans: Empirical Data and Computational Model

    DTIC Science & Technology

    2013-08-27

    ... Rutgers, The State University of New Jersey, Newark, New Jersey, United States of America; Department of Psychology, Rutgers, The State University of New Jersey; Marcs Institute for Brain and Behaviour & School of Social Sciences and Psychology, University of Western Sydney, Sydney (affiliation fragments). ... for current, severe PTSD symptoms (PTSS) were tested on a probabilistic classification task [19] that interleaves reward learning and punishment ...

  14. Modeling Syntax for Parsing and Translation

    DTIC Science & Technology

    2003-12-15

    Figure 2.1: Part of a dictionary. ... along with their training algorithms: a monolingual generative model of sentence structure, and a model of the relationship between the structure of a ... tasks of monolingual parsing and word-level bilingual corpus alignment, they are demonstrated in two additional applications. First, a new statistical ...

  15. Quantitative knowledge acquisition for expert systems

    NASA Technical Reports Server (NTRS)

    Belkin, Brenda L.; Stengel, Robert F.

    1991-01-01

    A common problem in the design of expert systems is the definition of rules from data obtained in system operation or simulation. While it is relatively easy to collect data and to log the comments of human operators engaged in experiments, generalizing such information to a set of rules has not previously been a direct task. A statistical method is presented for generating rule bases from numerical data, motivated by an example based on aircraft navigation with multiple sensors. The specific objective is to design an expert system that selects a satisfactory suite of measurements from a dissimilar, redundant set, given an arbitrary navigation geometry and possible sensor failures. The systematic development of a Navigation Sensor Management (NSM) Expert System from Kalman filter covariance data is described. The method invokes two statistical techniques: Analysis of Variance (ANOVA) and the ID3 Algorithm. The ANOVA technique indicates whether variations of problem parameters give statistically different covariance results, and the ID3 algorithm identifies the relationships between the problem parameters using probabilistic knowledge extracted from a simulation example set. Both are detailed.
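
    The ID3 step rests on information gain, the entropy reduction from splitting the examples on one attribute. A minimal sketch with a hypothetical sensor-selection table (not the paper's NSM data):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    """ID3 splitting criterion: entropy reduction from partitioning the
    examples by one attribute's values."""
    n = len(labels)
    groups = {}
    for a, l in zip(attribute, labels):
        groups.setdefault(a, []).append(l)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Hypothetical table: does the geometry class predict whether a sensor
# suite is satisfactory? (The failure-mode column is uninformative.)
satisfactory = ["yes", "yes", "no", "no"]
geometry     = ["g1", "g1", "g2", "g2"]   # perfectly predictive
failures     = ["f1", "f2", "f1", "f2"]   # uninformative

gain_geometry = information_gain(geometry, satisfactory)
gain_failures = information_gain(failures, satisfactory)
```

    ID3 greedily splits on the highest-gain attribute at each node, which is how it extracts rule-like structure from the simulation example set.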

  16. Decision making with uncertain reinforcement in children with attention deficit/hyperactivity disorder (ADHD).

    PubMed

    Drechsler, Renate; Rizzo, Patrizia; Steinhausen, Hans-Christoph

    2010-01-01

    Reward-related processes are impaired in children with ADHD. Whether these deficits can be ascribed to an aversion to delay or to an altered responsiveness to magnitude, frequency, valence, or the probability of rewards still needs to be explored. In the present study, children with ADHD and normal controls aged 7 to 10 years performed a simple probabilistic discounting task. They had to choose between alternatives where the magnitude of rewards was inversely related to the probability of outcomes. As a result, children with ADHD opted more frequently for less likely but larger rewards than normal controls. Shifts of the response category after positive or negative feedback, however, occurred as often in children with ADHD as in control children. In children with ADHD, the frequency of risky choices was correlated with neuropsychological measures of response time variability but unrelated to measures of inhibitory control. It is concluded that the tendency to select less likely but larger rewards possibly represents a separate facet of dysfunctional reward processing, independent of delay aversion or altered responsiveness to feedback.

  17. Potential advantages associated with implementing a risk-based inspection program by a nuclear facility

    NASA Astrophysics Data System (ADS)

    McNeill, Alexander, III; Balkey, Kenneth R.

    1995-05-01

    The current inservice inspection activities at a U.S. nuclear facility are based upon the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section XI. The Code selects examination locations based upon sampling criteria that include component geometry, stress, and usage, among other factors. This can result in a significant number of required examinations. As a result of regulatory action, each nuclear facility has conducted probabilistic risk assessments (PRA) or individual plant examinations (IPE), producing plant-specific risk-based information. Several initiatives have been introduced to apply this new plant risk information. Among these initiatives is risk-based inservice inspection. A code case has been introduced for piping inspections based upon this new risk-based technology. This effort, brought forward to the ASME Section XI Code committee, has been initiated and championed by the ASME Research Task Force on Risk-Based Inspection Guidelines -- LWR Nuclear Power Plant Application. Preliminary assessments associated with the code case have revealed that potential advantages exist in a risk-based inservice inspection program with regard to the number of exams, risk, personnel exposure, and cost.

  18. Neural network and wavelet average framing percentage energy for atrial fibrillation classification.

    PubMed

    Daqrouq, K; Alkhateeb, A; Ajour, M N; Morfeq, A

    2014-03-01

    ECG signals are an important source of information in the diagnosis of atrial conduction pathology. Nevertheless, diagnosis by visual inspection is a difficult task. This work introduces a novel wavelet feature extraction method for atrial fibrillation derived from the average framing percentage energy (AFE) of terminal wavelet packet transform (WPT) sub-signals. A probabilistic neural network (PNN) is used for classification. The presented method is shown to be a potentially effective discriminator in an automated diagnostic process. The ECG signals taken from the MIT-BIH database are used to classify different arrhythmias together with normal ECG. Several published methods were investigated for comparison. The best recognition rate was obtained with AFE. The classification achieved an accuracy of 97.92%. The presented system was also analyzed in an additive white Gaussian noise (AWGN) environment, achieving 55.14% accuracy at 0 dB and 92.53% at 5 dB. It was concluded that the proposed approach of automating classification is worth pursuing with larger samples to validate and extend the present study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Conservative forgetful scholars: How people learn causal structure through sequences of interventions.

    PubMed

    Bramley, Neil R; Lagnado, David A; Speekenbrink, Maarten

    2015-05-01

    Interacting with a system is key to uncovering its causal structure. A computational framework for interventional causal learning has been developed over the last decade, but how real causal learners might achieve or approximate the computations entailed by this framework is still poorly understood. Here we describe an interactive computer task in which participants were incentivized to learn the structure of probabilistic causal systems through free selection of multiple interventions. We develop models of participants' intervention choices and online structure judgments, using expected utility gain, probability gain, and information gain and introducing plausible memory and processing constraints. We find that successful participants are best described by a model that acts to maximize information (rather than expected score or probability of being correct); that forgets much of the evidence received in earlier trials; but that mitigates this by being conservative, preferring structures consistent with earlier stated beliefs. We explore 2 heuristics that partly explain how participants might be approximating these models without explicitly representing or updating a hypothesis space. (c) 2015 APA, all rights reserved.
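
    The information-gain account of intervention choice can be sketched directly: score an intervention by the expected reduction in entropy over the hypothesis space. The two-structure example and its outcome likelihoods below are hypothetical.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction over causal hypotheses from one
    intervention; likelihoods[h][o] = P(outcome o | hypothesis h,
    intervention)."""
    gain = 0.0
    for o in range(len(likelihoods[0])):
        p_o = sum(prior[h] * likelihoods[h][o] for h in range(len(prior)))
        if p_o == 0:
            continue
        posterior = [prior[h] * likelihoods[h][o] / p_o for h in range(len(prior))]
        gain += p_o * (entropy(prior) - entropy(posterior))
    return gain

# Two hypothetical structures: H0 "A causes B" (B = 1 with prob 0.8 after
# do(A=1)) vs H1 "B causes A" (B stays at its 0.5 base rate).
prior = [0.5, 0.5]
gain_do_A = expected_info_gain(prior, [[0.2, 0.8], [0.5, 0.5]])
```

    An information-maximizing learner would choose whichever available intervention maximizes this score; the paper's models add forgetting and conservatism on top of this core computation.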

  20. Rethinking volitional control over task choice in multitask environments: use of a stimulus set selection strategy in voluntary task switching.

    PubMed

    Arrington, Catherine M; Weaver, Starla M

    2015-01-01

    Under conditions of volitional control in multitask environments, subjects may engage in a variety of strategies to guide task selection. The current research examines whether subjects may sometimes use a top-down control strategy of selecting a task-irrelevant stimulus dimension, such as location, to guide task selection. We term this approach a stimulus set selection strategy. Using a voluntary task switching procedure, subjects voluntarily switched between categorizing letter and number stimuli that appeared in two, four, or eight possible target locations. Effects of stimulus availability, manipulated by varying the stimulus onset asynchrony between the two target stimuli, and of location repetition were analysed to assess the use of a stimulus set selection strategy. Considered across position conditions, Experiment 1 showed effects of both stimulus availability and location repetition on task choice, suggesting that subjects may have been using a stimulus set selection strategy on some trials, but only in the 2-position condition, where selection based on location always results in a target at the selected location. Experiment 2 replicated and extended these findings in a visually more cluttered environment. These results indicate that, contrary to current models of task selection in voluntary task switching, the top-down control of task selection may occur in the absence of the formation of an intention to perform a particular task.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
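
    The abstract does not specify which convex approximation of the chance constraints is used; one standard distribution-free option with exactly the stated property (conservative bounds valid for arbitrary forecasting-error distributions) is the Cantelli, i.e. one-sided Chebyshev, reformulation. The sketch below, with made-up per-unit voltage numbers, evaluates the resulting deterministic margin:

```python
import math

def robust_voltage_margin(mean_v, std_v, v_max, eps=0.05):
    # Cantelli bound: if mean + sqrt((1 - eps) / eps) * std <= v_max,
    # then P(V > v_max) <= eps for ANY error distribution with these
    # first two moments. Returns the slack of that deterministic surrogate.
    k = math.sqrt((1 - eps) / eps)
    return v_max - (mean_v + k * std_v)

# Illustrative per-unit values: forecast voltage 1.02 pu, limit 1.05 pu.
print(robust_voltage_margin(1.02, 0.005, 1.05, eps=0.05) >= 0)  # True: constraint satisfied
```

    A larger forecast uncertainty (e.g. std 0.02 pu) drives the margin negative, forcing the OPF to adjust setpoints — which is how the probabilistic constraint shapes the convex program.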

  2. The New Italian Seismic Hazard Model

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.

    2017-12-01

    In 2015, the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community in elaborating a new reference seismic hazard model, mainly aimed at updating the seismic building code. The CPS designed a roadmap for releasing, within three years, a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with experts on earthquake engineering who will then participate in the revision of the building code. The activities were organized in 6 tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task selected the most updated information about seismicity (historical and instrumental), seismogenic faults, and deformation (from both seismicity and geodetic data). The seismicity models were elaborated in terms of classic source areas, fault sources, and gridded seismicity, based on different approaches. The GMPEs task selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase was planned to design statistical procedures for testing, against the available data, the whole seismic hazard model as well as single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. 
We use a weighting scheme of the different components of the PSHA model that has been built through three different independent steps: a formal experts' elicitation, the outcomes of the testing phase, and the correlation between the outcomes. Finally, we explore through different techniques the influence on seismic hazard of the declustering procedure.

  3. Attention and implicit memory in the category-verification and lexical decision tasks.

    PubMed

    Mulligan, Neil W; Peterson, Daniel

    2008-05-01

    Prior research on implicit memory appeared to support 3 generalizations: conceptual tests are affected by divided attention, perceptual tasks are affected by certain divided-attention manipulations, and all types of priming are affected by selective attention. These generalizations are challenged in experiments using the implicit tests of category verification and lexical decision. First, both tasks were unaffected by divided-attention tasks known to impact other priming tasks. Second, both tasks were unaffected by a manipulation of selective attention in which colored words were either named or their colors identified. Thus, category verification, unlike other conceptual tasks, appears unaffected by divided attention and some selective-attention tasks; and lexical decision, unlike other perceptual tasks, appears unaffected by a difficult divided-attention task and some selective-attention tasks. Finally, both tasks were affected by a selective-attention task in which attention was manipulated across objects (rather than within objects), indicating some susceptibility to selective attention. The results contradict an analysis based on the conceptual-perceptual distinction, as well as other more specific hypotheses, but are consistent with the distinction between production and identification priming.

  4. On problems of analyzing aerodynamic properties of blunted rotary bodies with small random surface distortions under supersonic and hypersonic flows

    NASA Astrophysics Data System (ADS)

    Degtyar, V. G.; Kalashnikov, S. T.; Mokin, Yu. A.

    2017-10-01

    The paper considers problems of analyzing aerodynamic properties (ADP) of reentry vehicles (RV) treated as blunted rotary bodies with small random surface distortions. The interrelated problems of mathematical simulation of surface distortions, selection of tools for predicting the ADPs of shaped bodies, evaluation of different types of ADP variations, and their adaptation for dynamic problems are analyzed. The possibilities of deterministic and probabilistic approaches to the evaluation of ADP variations are considered, and the practical value of the probabilistic approach is demonstrated. Examples of extremal deterministic evaluations of ADP variations for a sphere and a sharp cone are given.

  5. Classifying Cognitive Profiles Using Machine Learning with Privileged Information in Mild Cognitive Impairment.

    PubMed

    Alahmadi, Hanin H; Shen, Yuan; Fouad, Shereen; Luft, Caroline Di B; Bentham, Peter; Kourtzi, Zoe; Tino, Peter

    2016-01-01

    Early diagnosis of dementia is critical for assessing disease progression and potential treatment. State-of-the-art machine learning techniques have been increasingly employed to take on this diagnostic task. In this study, we employed Generalized Matrix Learning Vector Quantization (GMLVQ) classifiers to discriminate patients with Mild Cognitive Impairment (MCI) from healthy controls based on their cognitive skills. Further, we adopted a "Learning with privileged information" approach to combine cognitive and fMRI data for the classification task. The resulting classifier operates solely on the cognitive data while it incorporates the fMRI data as privileged information (PI) during training. This novel classifier is of practical use as the collection of brain imaging data is not always possible with patients and older participants. MCI patients and healthy age-matched controls were trained to extract structure from temporal sequences. We ask whether machine learning classifiers can be used to discriminate patients from controls and whether differences between these groups relate to individual cognitive profiles. To this end, we tested participants in four cognitive tasks: working memory, cognitive inhibition, divided attention, and selective attention. We also collected fMRI data before and after training on a probabilistic sequence learning task and extracted fMRI responses and connectivity as features for machine learning classifiers. Our results show that the PI guided GMLVQ classifiers outperform the baseline classifier that only used the cognitive data. In addition, we found that for the baseline classifier, divided attention is the only relevant cognitive feature. When PI was incorporated, divided attention remained the most relevant feature while cognitive inhibition also became relevant for the task. 
Interestingly, this analysis for the fMRI GMLVQ classifier suggests that (1) when overall fMRI signal is used as inputs to the classifier, the post-training session is most relevant; and (2) when the graph feature reflecting underlying spatiotemporal fMRI pattern is used, the pre-training session is most relevant. Taken together these results suggest that brain connectivity before training and overall fMRI signal after training are both diagnostic of cognitive skills in MCI.

  6. Probabilistic inference under time pressure leads to a cortical-to-subcortical shift in decision evidence integration.

    PubMed

    Oh-Descher, Hanna; Beck, Jeffrey M; Ferrari, Silvia; Sommer, Marc A; Egner, Tobias

    2017-11-15

    Real-life decision-making often involves combining multiple probabilistic sources of information under finite time and cognitive resources. To mitigate these pressures, people "satisfice", foregoing a full evaluation of all available evidence to focus on a subset of cues that allow for fast and "good-enough" decisions. Although this form of decision-making likely mediates many of our everyday choices, very little is known about the way in which the neural encoding of cue information changes when we satisfice under time pressure. Here, we combined human functional magnetic resonance imaging (fMRI) with a probabilistic classification task to characterize neural substrates of multi-cue decision-making under low (1500 ms) and high (500 ms) time pressure. Using variational Bayesian inference, we analyzed participants' choices to track and quantify cue usage under each experimental condition, which was then applied to model the fMRI data. Under low time pressure, participants performed near-optimally, appropriately integrating all available cues to guide choices. Both cortical (prefrontal and parietal cortex) and subcortical (hippocampal and striatal) regions encoded individual cue weights, and activity linearly tracked trial-by-trial variations in the amount of evidence and decision uncertainty. Under increased time pressure, participants adaptively shifted to using a satisficing strategy by discounting the least informative cue in their decision process. This strategic change in decision-making was associated with an increased involvement of the dopaminergic midbrain, striatum, thalamus, and cerebellum in representing and integrating cue values. We conclude that satisficing the probabilistic inference process under time pressure leads to a cortical-to-subcortical shift in the neural drivers of decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
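
    As a rough sketch of the satisficing idea described above: with a weighted (logistic) integration of binary cue evidence, dropping the least informative cue changes the choice probability only slightly. The cue values and weights below are made up for illustration and are not the task's actual cue validities:

```python
import math

def choice_prob(cues, weights, use=None):
    # Logistic integration of signed binary cue evidence; `use` selects which
    # cues enter the decision (satisficing = omitting the weakest cue).
    use = range(len(cues)) if use is None else use
    logodds = sum(weights[i] * cues[i] for i in use)
    return 1 / (1 + math.exp(-logodds))

cues = [1, 1, -1, 1]
weights = [1.6, 1.1, 0.7, 0.2]                          # sorted by informativeness
full = choice_prob(cues, weights)                       # integrate all four cues
satisfice = choice_prob(cues, weights, use=[0, 1, 2])   # drop the weakest cue
print(abs(full - satisfice) < 0.05)  # True: near-optimal despite the dropped cue
```

    This is why discounting the least informative cue is a "good-enough" strategy: most of the decision evidence is preserved at a lower processing cost.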

  7. Games people play: How video games improve probabilistic learning.

    PubMed

    Schenk, Sabrina; Lech, Robert K; Suchan, Boris

    2017-09-29

    Recent research suggests that video game playing is associated with many cognitive benefits. However, little is known about the neural mechanisms mediating such effects, especially with regard to probabilistic categorization learning, which is a widely unexplored area in gaming research. Therefore, the present study aimed to investigate the neural correlates of probabilistic classification learning in video gamers in comparison to non-gamers. Subjects were scanned in a 3T magnetic resonance imaging (MRI) scanner while performing a modified version of the weather prediction task. Behavioral data yielded evidence for better categorization performance by video gamers, particularly under conditions characterized by stronger uncertainty. Furthermore, a post-experimental questionnaire showed that video gamers had acquired higher declarative knowledge about the card combinations and the related weather outcomes. Functional imaging data revealed stronger activation clusters for video gamers in the hippocampus, the precuneus, the cingulate gyrus and the middle temporal gyrus, as well as in occipital visual areas and in areas related to attentional processes. All these areas are connected with each other and represent critical nodes for semantic memory, visual imagery and cognitive control. Apart from this, and in line with previous studies, both groups showed activation in brain areas related to attention and executive functions, as well as in the basal ganglia and in memory-associated regions of the medial temporal lobe. These results suggest that playing video games might enhance the use of declarative knowledge and hippocampal involvement, improving overall learning performance during probabilistic learning. In contrast to non-gamers, video gamers showed better categorization performance independently of the uncertainty of the condition. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. The influence of trial order on learning from reward vs. punishment in a probabilistic categorization task: experimental and computational analyses

    PubMed Central

    Moustafa, Ahmed A.; Gluck, Mark A.; Herzallah, Mohammad M.; Myers, Catherine E.

    2015-01-01

    Previous research has shown that trial ordering affects cognitive performance, but this has not been tested using category-learning tasks that differentiate learning from reward and punishment. Here, we tested two groups of healthy young adults using a probabilistic category learning task of reward and punishment in which there are two types of trials (reward, punishment) and three possible outcomes: (1) positive feedback for correct responses in reward trials; (2) negative feedback for incorrect responses in punishment trials; and (3) no feedback for incorrect answers in reward trials and correct answers in punishment trials. Hence, trials without feedback are ambiguous, and may represent either successful avoidance of punishment or failure to obtain reward. In Experiment 1, the first group of subjects received an intermixed task in which reward and punishment trials were presented in the same block, as a standard baseline task. In Experiment 2, a second group completed the separated task, in which reward and punishment trials were presented in separate blocks. Additionally, in order to understand the mechanisms underlying performance in the experimental conditions, we fit individual data using a Q-learning model. Results from Experiment 1 show that subjects who completed the intermixed task paradoxically valued the no-feedback outcome as a reinforcer when it occurred on reinforcement-based trials, and as a punisher when it occurred on punishment-based trials. This is supported by patterns of empirical responding, where subjects showed more win-stay behavior following an explicit reward than following an omission of punishment, and more lose-shift behavior following an explicit punisher than following an omission of reward. In Experiment 2, results showed similar performance whether subjects received reward-based or punishment-based trials first. 
However, when the Q-learning model was applied to these data, there were differences between subjects in the reward-first and punishment-first conditions on the relative weighting of neutral feedback. Specifically, early training on reward-based trials led to omission of reward being treated as similar to punishment, but prior training on punishment-based trials led to omission of reward being treated more neutrally. This suggests that early training on one type of trials, specifically reward-based trials, can create a bias in how neutral feedback is processed, relative to those receiving early punishment-based training or training that mixes positive and negative outcomes. PMID:26257616
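
    The Q-learning account can be sketched in a few lines: a delta-rule value update with a softmax choice rule, where the subjective value assigned to the ambiguous no-feedback outcome is a free parameter. The learning rate, inverse temperature, and trial structure below are illustrative choices, not the paper's fitted values:

```python
import math, random

def softmax_choice(q, beta, rng):
    # Softmax (Boltzmann) action selection over Q-values.
    ps = [math.exp(beta * v) for v in q]
    r = rng.random() * sum(ps)
    acc = 0.0
    for a, p in enumerate(ps):
        acc += p
        if r <= acc:
            return a
    return len(q) - 1

def run_reward_trials(n_trials, p_reward, no_feedback_value, rng,
                      alpha=0.3, beta=3.0):
    # Two response options; option 0 is "correct" and yields +1 with
    # probability p_reward on reward trials, otherwise the ambiguous
    # no-feedback outcome, whose subjective value is a free parameter
    # (0 = truly neutral, negative = treated like a punisher).
    q = [0.0, 0.0]
    for _ in range(n_trials):
        a = softmax_choice(q, beta, rng)
        rewarded = (a == 0) and (rng.random() < p_reward)
        outcome = 1.0 if rewarded else no_feedback_value
        q[a] += alpha * (outcome - q[a])   # delta-rule update
    return q

q_neutral = run_reward_trials(500, 0.8, 0.0, random.Random(1))
q_punisher = run_reward_trials(500, 0.8, -1.0, random.Random(1))
print(q_neutral[0] > q_neutral[1])     # correct option valued higher either way
print(q_punisher[0] > q_punisher[1])
```

    Fitting `no_feedback_value` per subject is, in spirit, how a model comparison can reveal whether omission of reward is treated neutrally or like a punisher.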

  9. The Effects of Heuristics and Apophenia on Probabilistic Choice.

    PubMed

    Ellerby, Zack W; Tunney, Richard J

    2017-01-01

    Given a repeated choice between two or more options with independent and identically distributed reward probabilities, overall pay-offs can be maximized by the exclusive selection of the option with the greatest likelihood of reward. The tendency to match response proportions to reward contingencies is suboptimal. Nevertheless, this behaviour is well documented. A number of explanatory accounts have been proposed for probability matching. These include failed pattern matching, driven by apophenia, and a heuristic-driven response that can be overruled with sufficient deliberation. We report two experiments that were designed to test the relative effects on choice behaviour of both an intuitive versus strategic approach to the task and belief that there was a predictable pattern in the reward sequence, through a combination of both direct experimental manipulation and post-experimental self-report. Mediation analysis was used to model the pathways of effects. Neither of two attempted experimental manipulations of apophenia, nor self-reported levels of apophenia, had a significant effect on proportions of maximizing choices. However, the use of strategy over intuition proved a consistent predictor of maximizing, across all experimental conditions. A parallel analysis was conducted to assess the effect of controlling for individual variance in perceptions of reward contingencies. Although this analysis suggested that apophenia did increase probability matching in the standard task preparation, this effect was found to result from an unforeseen relationship between self-reported apophenia and perceived reward probabilities. A Win-Stay Lose-Shift (WSLS) analysis indicated no reliable relationship between WSLS and either intuition or strategy use.
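
    The suboptimality of matching is easy to make concrete: with independent trials and a 0.7 reward probability on the better option, matching yields 58% expected accuracy while exclusive maximizing yields 70%. A minimal sketch (the 0.7 contingency is illustrative; the experiments' actual schedules may differ):

```python
def expected_accuracy(choice_p, reward_p):
    # P(correct) when option A is chosen with probability choice_p
    # and A is the rewarded option with probability reward_p per trial.
    return choice_p * reward_p + (1 - choice_p) * (1 - reward_p)

p = 0.7
matching = expected_accuracy(p, p)       # choose A 70% of the time
maximizing = expected_accuracy(1.0, p)   # choose A always
print(round(matching, 2), round(maximizing, 2))  # 0.58 0.7
```

    In general, matching costs `2 * p * (1 - p) - (1 - p)` in accuracy relative to maximizing, which vanishes only at p = 0.5 or p = 1.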

  10. Default network connectivity reflects the level of consciousness in non-communicative brain-damaged patients

    PubMed Central

    Vanhaudenhuyse, Audrey; Noirhomme, Quentin; Tshibanda, Luaba J.-F.; Bruno, Marie-Aurelie; Boveroux, Pierre; Schnakers, Caroline; Soddu, Andrea; Perlbarg, Vincent; Ledoux, Didier; Brichant, Jean-François; Moonen, Gustave; Maquet, Pierre; Greicius, Michael D.

    2010-01-01

    The ‘default network’ is defined as a set of areas, encompassing posterior-cingulate/precuneus, anterior cingulate/mesiofrontal cortex and temporo-parietal junctions, that show more activity at rest than during attention-demanding tasks. Recent studies have shown that it is possible to reliably identify this network in the absence of any task, by resting state functional magnetic resonance imaging connectivity analyses in healthy volunteers. However, the functional significance of these spontaneous brain activity fluctuations remains unclear. The aim of this study was to test if the integrity of this resting-state connectivity pattern in the default network would differ in different pathological alterations of consciousness. Fourteen non-communicative brain-damaged patients and 14 healthy controls participated in the study. Connectivity was investigated using probabilistic independent component analysis, and an automated template-matching component selection approach. Connectivity in all default network areas was found to be negatively correlated with the degree of clinical consciousness impairment, ranging from healthy controls and locked-in syndrome to minimally conscious, vegetative then coma patients. Furthermore, precuneus connectivity was found to be significantly stronger in minimally conscious patients as compared with unconscious patients. Locked-in syndrome patients’ default network connectivity was not significantly different from controls. Our results show that default network connectivity is decreased in severely brain-damaged patients, in proportion to their degree of consciousness impairment. Future prospective studies in a larger patient population are needed in order to evaluate the prognostic value of the presented methodology. PMID:20034928

  11. Seismic probabilistic tsunami hazard: from regional to local analysis and use of geological and historical observations

    NASA Astrophysics Data System (ADS)

    Tonini, R.; Lorito, S.; Orefice, S.; Graziani, L.; Brizuela, B.; Smedile, A.; Volpe, M.; Romano, F.; De Martini, P. M.; Maramai, A.; Selva, J.; Piatanesi, A.; Pantosti, D.

    2016-12-01

    Site-specific probabilistic tsunami hazard analyses demand very high computational efforts that are often reduced by introducing approximations on tsunami sources and/or tsunami modeling. On one hand, the large variability of source parameters implies the definition of a huge number of potential tsunami scenarios, whose omission could easily lead to important bias in the analysis. On the other hand, detailed inundation maps computed by tsunami numerical simulations require very long running times. When tsunami effects are calculated at regional scale, a common practice is to propagate tsunami waves in deep waters (up to 50-100 m depth), neglecting non-linear effects and using coarse bathymetric meshes. Maximum wave heights on the coast are then empirically extrapolated, saving a significant amount of computational time. However, moving to local scale, such assumptions drop out and tsunami modeling requires much greater computational resources. In this work, we perform a local Seismic Probabilistic Tsunami Hazard Analysis (SPTHA) for the 50 km long coastal segment between Augusta and Siracusa, a touristic and commercial area located along the South-Eastern Sicily coast, Italy. The procedure consists of using the outcomes of a regional SPTHA as input for a two-step filtering method to select and substantially reduce the number of scenarios contributing to the specific target area. These selected scenarios are modeled using high resolution topo-bathymetry to produce detailed inundation maps. Results are presented as probabilistic hazard curves and maps, with the goal of analyzing, comparing and highlighting the different results provided by regional and local hazard assessments. Moreover, the analysis is enriched by the use of local observed tsunami data, both geological and historical. Indeed, the tsunami data-sets available for the selected target areas are particularly rich with respect to the scarce and heterogeneous data-sets usually available elsewhere. 
Therefore, they can represent valuable benchmarks for testing and strengthening the results of such kind of studies. The work is funded by the Italian Flagship Project RITMARE, the two EC FP7 projects ASTARTE (Grant agreement 603839) and STREST (Grant agreement 603389), and the INGV-DPC Agreement.

  12. Probabilistic Deviation Detection and Optimal Thresholds

    DTIC Science & Technology

    2014-01-01

    The videogame StarCraft: Brood War is used as the domain for the case-based planning research conducted in the DEEP project. StarCraft was selected for a number …

  13. Bayesian Action-Perception loop modeling: Application to trajectory generation and recognition using internal motor simulation

    NASA Astrophysics Data System (ADS)

    Gilet, Estelle; Diard, Julien; Palluel-Germain, Richard; Bessière, Pierre

    2011-03-01

    This paper is about modeling perception-action loops and, more precisely, the study of the influence of motor knowledge during perception tasks. We use the Bayesian Action-Perception (BAP) model, which deals with the sensorimotor loop involved in reading and writing cursive isolated letters and includes an internal movement-simulation loop. Using this probabilistic model, we simulate letter recognition both with and without internal motor simulation. Comparison of their performance yields an experimental prediction, which we set forth.

  14. Graphical Models for Recovering Probabilistic and Causal Queries from Missing Data

    DTIC Science & Technology

    2014-11-01

    … we can apply our results to problems of attrition, in which missingness is a severe obstacle to sound inferences. Related works are discussed in … (due to the collider path between Y and Ry). Deletion-based methods such as listwise deletion, which are easy to understand as well as …

  15. Exploring methodological frameworks for a mental task-based near-infrared spectroscopy brain-computer interface.

    PubMed

    Weyand, Sabine; Takehara-Nishiuchi, Kaori; Chau, Tom

    2015-10-30

    Near-infrared spectroscopy (NIRS) brain-computer interfaces (BCIs) enable users to interact with their environment using only cognitive activities. This paper presents the results of a comparison of four methodological frameworks used to select a pair of tasks to control a binary NIRS-BCI; specifically, three novel personalized task paradigms and the state-of-the-art prescribed task framework were explored. Three types of personalized task selection approaches were compared: user-selected mental tasks using weighted slope scores (WS-scores), user-selected mental tasks using pair-wise accuracy rankings (PWAR), and researcher-selected mental tasks using PWAR. These paradigms, along with the state-of-the-art prescribed mental task framework, in which mental tasks are selected based on the most commonly used tasks in the literature, were tested by ten able-bodied participants who took part in five NIRS-BCI sessions. The frameworks were compared in terms of their accuracy, perceived ease-of-use, computational time, user preference, and length of training. Most notably, researcher-selected personalized tasks resulted in significantly higher accuracies, while user-selected personalized tasks resulted in significantly higher perceived ease-of-use. It was also concluded that PWAR minimized the amount of data that needed to be collected, while WS-scores maximized user satisfaction and minimized computational time. In comparison to the state-of-the-art prescribed mental tasks, our findings show that, overall, personalized tasks appear to be superior to prescribed tasks with respect to accuracy and perceived ease-of-use. The deployment of personalized rather than prescribed mental tasks ought to be considered and further investigated in future NIRS-BCI studies. Copyright © 2015 Elsevier B.V. All rights reserved.
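
    A minimal sketch of a pair-wise accuracy ranking: score every candidate task pair by its (cross-validated) two-class accuracy and pick the top pair. The task names and accuracy values here are hypothetical, and the study's actual PWAR procedure may aggregate scores differently:

```python
def pairwise_accuracy_ranking(pair_acc):
    # Rank candidate mental-task pairs by classification accuracy, best first.
    # pair_acc maps (task_a, task_b) -> cross-validated accuracy.
    return sorted(pair_acc.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical accuracies for three candidate task pairings.
pair_acc = {("motor imagery", "mental arithmetic"): 0.81,
            ("motor imagery", "word generation"): 0.74,
            ("mental arithmetic", "word generation"): 0.68}
best_pair, best_acc = pairwise_accuracy_ranking(pair_acc)[0]
print(best_pair)  # ('motor imagery', 'mental arithmetic')
```

    The ranking can be computed from a short calibration session per user, which is consistent with the finding that PWAR minimizes the data that must be collected.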

  16. A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tie, S. S.; Martini, P.; Mudd, D.

    In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i - W1), and a mixture of Vista Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. Finally, the catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete up to this magnitude limit.
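
    The single-band ingredient of such a variability cut can be sketched as a χ² test of a light curve against its error-weighted mean; variable sources (like quasars) produce a χ² far above the constant-source expectation. The synthetic light curves below are illustrative; the DES analysis combines this statistic across the griz bands and converts it to a probability:

```python
import numpy as np

def constant_source_chi2(flux, err):
    # Chi-squared of a light curve against the best-fit constant model,
    # i.e. the inverse-variance weighted mean flux.
    flux = np.asarray(flux)
    w = 1.0 / np.asarray(err) ** 2
    mean = np.sum(w * flux) / np.sum(w)
    return np.sum(w * (flux - mean) ** 2)

rng = np.random.default_rng(42)
t = np.arange(40)
err = np.full(40, 0.05)
constant = 1.0 + rng.normal(0, 0.05, 40)                         # non-varying source
variable = 1.0 + 0.3 * np.sin(t / 5.0) + rng.normal(0, 0.05, 40) # varying source
chi2_const = constant_source_chi2(constant, err)
chi2_var = constant_source_chi2(variable, err)
print(chi2_var > chi2_const)  # True: variability strongly rejects the constant model
```

    For a truly constant source, χ² is of order the number of epochs minus one; a probability cut on the χ² distribution then flags the variable sources.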

  17. A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey

    DOE PAGES

    Tie, S. S.; Martini, P.; Mudd, D.; ...

    2017-02-15

    In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i - W1), and a mixture of Vista Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. Finally, the catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete to this magnitude limit.
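
    The variability criterion above, a multi-band χ²-test of the constant-flux hypothesis, can be sketched as follows. This is an illustrative single-band version, not the DES pipeline code; the function name and interface are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import chi2

def variability_prob(fluxes, errors):
    """Chi-square probability that a light curve is consistent with
    constant flux. `fluxes` and `errors` are per-epoch measurements
    for one band; for a multi-band test, sum the chi-square values
    and degrees of freedom across bands. Small returned p-values
    flag variable (quasar-like) sources.
    """
    fluxes = np.asarray(fluxes, dtype=float)
    errors = np.asarray(errors, dtype=float)
    # The best-fit constant model is the inverse-variance weighted mean.
    w = 1.0 / errors**2
    mean = np.sum(w * fluxes) / np.sum(w)
    chisq = np.sum(((fluxes - mean) / errors) ** 2)
    dof = fluxes.size - 1
    return chi2.sf(chisq, dof)
```

    A steady source returns a p-value near 1, while a strongly variable source returns a p-value near 0, so a cut such as p < 10⁻³ would select variable candidates.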

  18. HIV+ Men and Women Show Different Performance Patterns on Procedural Learning Tasks

    PubMed Central

    Martin, Eileen; Gonzalez, Raul; Vassileva, Jasmin; Maki, Pauline

    2010-01-01

    The literature suggests that nondeclarative, or nonconscious, learning might be impaired among HIV+ individuals compared with HIV− matched control groups, but these studies have included relatively few women. We administered measures of motor skill and probabilistic learning, tasks with a nondeclarative or procedural learning component that are dependent on the integrity of prefrontal-striatal systems, to well-matched groups of 148 men and 65 women with a history of substance dependence that included 45 men and 30 women seropositive for HIV. All participants were abstinent at testing. Compared to HIV− women, HIV+ women performed significantly more poorly on both tasks, but HIV+ men’s performance did not differ significantly from that of HIV− men on either task. These different patterns of performance indicate that features of HIV-associated neurocognitive disorder (HAND) cannot always be generalized from men to women. Additional studies are needed to address directly the possibility of sex differences in HAND and the possibility that women might be more vulnerable to the effects of HIV and substance dependence on some neurocognitive functions. PMID:20694870

  19. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    NASA Astrophysics Data System (ADS)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

    Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems depends mainly on the selection of the most relevant features, which becomes harder when the dataset contains missing values for the different features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with missing attribute values. This research presents a methodology that uses the results of medical tests as input, extracts a reduced-dimensional feature subset, and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts the projection vectors that contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel based Support Vector Machines (SVM). The RBF-based SVM classifies subjects into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets respectively.
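
    The component-selection step described above, retaining only projection vectors that survive Parallel Analysis, can be sketched as follows. This is a minimal sketch of Horn's parallel analysis over the sample covariance matrix; it assumes roughly unit-scale features, does not reproduce the paper's PPCA treatment of missing values, and the function name is an assumption of this sketch.

```python
import numpy as np

def parallel_analysis_components(X, n_iter=100, quantile=95, seed=0):
    """Number of principal components to retain via parallel analysis:
    keep components whose covariance eigenvalues exceed the chosen
    percentile of eigenvalues obtained from standard-normal random
    data of the same shape.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    # Data eigenvalues, sorted descending.
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
    # Reference eigenvalues from random data of the same shape.
    rand_eigs = np.empty((n_iter, X.shape[1]))
    for i in range(n_iter):
        R = rng.standard_normal(X.shape)
        rand_eigs[i] = np.linalg.eigvalsh(np.cov(R, rowvar=False))[::-1]
    threshold = np.percentile(rand_eigs, quantile, axis=0)
    return int(np.sum(eigvals > threshold))
```

    On data generated from a single strong latent factor plus small noise, only one component survives the comparison, as expected.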

  20. Bayesian-information-gap decision theory with an application to CO2 sequestration

    DOE PAGES

    O'Malley, D.; Vesselinov, V. V.

    2015-09-04

    Decisions related to subsurface engineering problems such as groundwater management, fossil fuel production, and geologic carbon sequestration are frequently challenging because of an overabundance of uncertainties (related to conceptualizations, parameters, observations, etc.). Because of the importance of these problems to agriculture, energy, and the climate (respectively), good decisions that are scientifically defensible must be made despite the uncertainties. We describe a general approach to making decisions for challenging problems such as these in the presence of severe uncertainties that combines probabilistic and non-probabilistic methods. The approach uses Bayesian sampling to assess parametric uncertainty and Information-Gap Decision Theory (IGDT) to address model inadequacy. The combined approach also resolves an issue that frequently arises when applying Bayesian methods to real-world engineering problems related to the enumeration of possible outcomes. In the case of zero non-probabilistic uncertainty, the method reduces to a Bayesian method. Lastly, to illustrate the approach, we apply it to a site-selection decision for geologic CO2 sequestration.

  1. Effects of lesions of the nucleus accumbens core on choice between small certain rewards and large uncertain rewards in rats

    PubMed Central

    Cardinal, Rudolf N; Howes, Nathan J

    2005-01-01

    Background Animals must frequently make choices between alternative courses of action, seeking to maximize the benefit obtained. They must therefore evaluate the magnitude and the likelihood of the available outcomes. Little is known of the neural basis of this process, or what might predispose individuals to be overly conservative or to take risks excessively (avoiding or preferring uncertainty, respectively). The nucleus accumbens core (AcbC) is known to contribute to rats' ability to choose large, delayed rewards over small, immediate rewards; AcbC lesions cause impulsive choice and an impairment in learning with delayed reinforcement. However, it is not known how the AcbC contributes to choice involving probabilistic reinforcement, such as between a large, uncertain reward and a small, certain reward. We examined the effects of excitotoxic lesions of the AcbC on probabilistic choice in rats. Results Rats chose between a single food pellet delivered with certainty (p = 1) and four food pellets delivered with varying degrees of uncertainty (p = 1, 0.5, 0.25, 0.125, and 0.0625) in a discrete-trial task, with the large-reinforcer probability decreasing or increasing across the session. Subjects were trained on this task and then received excitotoxic or sham lesions of the AcbC before being retested. After a transient period during which AcbC-lesioned rats exhibited relative indifference between the two alternatives compared to controls, AcbC-lesioned rats came to exhibit risk-averse choice, choosing the large reinforcer less often than controls when it was uncertain, to the extent that they obtained less food as a result. Rats behaved as if indifferent between a single certain pellet and four pellets at p = 0.32 (sham-operated) or at p = 0.70 (AcbC-lesioned) by the end of testing. 
When the probabilities did not vary across the session, AcbC-lesioned rats and controls strongly preferred the large reinforcer when it was certain, and strongly preferred the small reinforcer when the large reinforcer was very unlikely (p = 0.0625), with no differences between AcbC-lesioned and sham-operated groups. Conclusion These results support the view that the AcbC contributes to action selection by promoting the choice of uncertain, as well as delayed, reinforcement. PMID:15921529

  2. Bayesian networks and information theory for audio-visual perception modeling.

    PubMed

    Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis

    2010-09-01

    Thanks to their different senses, human observers acquire multiple streams of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method by using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
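
    The mutual-information analysis mentioned above, where near-zero mutual information is a cue for (marginal) independence between variables during network elicitation, can be sketched as follows. A minimal sketch for discrete variables; the function name and interface are assumptions, not the authors' code.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete
    variables. Values near zero suggest marginal independence, one
    of the cues used when eliciting Bayesian-network structure.
    """
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # log2( p(x,y) / (p(x) p(y)) ), with counts kept exact.
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi
```

    Two identical balanced binary variables share 1 bit of information, while two independent ones share none.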

  3. Probabilistic peak detection for first-order chromatographic data.

    PubMed

    Lopatka, M; Vivó-Truyols, G; Sjerps, M J

    2014-03-19

    We present a novel algorithm for probabilistic peak detection in first-order chromatographic data. Unlike conventional methods that deliver a binary answer pertaining to the expected presence or absence of a chromatographic peak, our method calculates the probability of a point being affected by such a peak. The algorithm makes use of chromatographic information (i.e. the expected width of a single peak and the standard deviation of baseline noise). As prior information of the existence of a peak in a chromatographic run, we make use of the statistical overlap theory. We formulate an exhaustive set of mutually exclusive hypotheses concerning presence or absence of different peak configurations. These models are evaluated by fitting a segment of chromatographic data by least-squares. The evaluation of these competing hypotheses can be performed as a Bayesian inferential task. We outline the potential advantages of adopting this approach for peak detection and provide several examples of both improved performance and increased flexibility afforded by our approach. Copyright © 2014 Elsevier B.V. All rights reserved.
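
    The hypothesis comparison described above, least-squares fits of competing peak/no-peak models evaluated as a Bayesian inference, can be illustrated with a two-hypothesis version. The function name, the flat-baseline model, and the uniform prior are assumptions of this sketch, not the authors' exact formulation, which compares an exhaustive set of peak configurations.

```python
import numpy as np

def peak_probability(y, x, sigma, width, prior_peak=0.5):
    """Posterior probability that a data segment contains a Gaussian
    peak of known `width`, versus a flat-baseline-only model, assuming
    Gaussian baseline noise of known standard deviation `sigma`.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # H0: baseline only (best-fit constant).
    rss0 = np.sum((y - y.mean()) ** 2)
    # H1: baseline plus a Gaussian peak; keep the best least-squares
    # fit over candidate peak centres.
    rss1 = rss0
    for mu in x:
        g = np.exp(-0.5 * ((x - mu) / width) ** 2)
        A = np.vstack([np.ones_like(x), g]).T
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss1 = min(rss1, np.sum((y - A @ coef) ** 2))
    # Gaussian likelihoods -> posterior log-odds (numerically stable).
    log_odds = np.log(prior_peak / (1 - prior_peak)) + (rss0 - rss1) / (2 * sigma**2)
    return float(1.0 / (1.0 + np.exp(-log_odds)))
```

    A flat segment returns the prior probability (no evidence either way), whereas a segment containing a clear peak returns a probability near 1.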

  4. A probabilistic, distributed, recursive mechanism for decision-making in the brain

    PubMed Central

    Gurney, Kevin N.

    2018-01-01

    Decision formation recruits many brain regions, but the procedure they jointly execute is unknown. Here we characterize its essential composition, using as a framework a novel recursive Bayesian algorithm that makes decisions based on spike-trains with the statistics of those in sensory cortex (MT). Using it to simulate the random-dot-motion task, we demonstrate it quantitatively replicates the choice behaviour of monkeys, whilst predicting losses of otherwise usable information from MT. Its architecture maps to the recurrent cortico-basal-ganglia-thalamo-cortical loops, whose components are all implicated in decision-making. We show that the dynamics of its mapped computations match those of neural activity in the sensorimotor cortex and striatum during decisions, and forecast those of basal ganglia output and thalamus. This also predicts which aspects of neural dynamics are and are not part of inference. Our single-equation algorithm is probabilistic, distributed, recursive, and parallel. Its success at capturing anatomy, behaviour, and electrophysiology suggests that the mechanism implemented by the brain has these same characteristics. PMID:29614077

  5. On the distribution of saliency.

    PubMed

    Berengolts, Alexander; Lindenbaum, Michael

    2006-12-01

    Detecting salient structures is a basic task in perceptual organization. Saliency algorithms typically mark edge-points with some saliency measure, which grows with the length and smoothness of the curve on which these edge-points lie. Here, we propose a modified saliency estimation mechanism that is based on probabilistically specified grouping cues and on curve length distributions. In this framework, the Shashua and Ullman saliency mechanism may be interpreted as a process for detecting the curve with maximal expected length. Generalized types of saliency naturally follow. We propose several specific generalizations (e.g., gray-level-based saliency) and rigorously derive the limitations on generalized saliency types. We then carry out a probabilistic analysis of expected length saliencies. Using ergodicity and asymptotic analysis, we derive the saliency distributions associated with the main curves and with the rest of the image. We then extend this analysis to finite-length curves. Using the derived distributions, we derive the optimal threshold on the saliency for discriminating between figure and background and bound the saliency-based figure-from-ground performance.

  6. Data Analysis with Graphical Models: Software Tools

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1994-01-01

    Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses plates, an extension to these models introduced by Spiegelhalter and Gilks to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  7. Probabilistic risk analysis and terrorism risk.

    PubMed

    Ezell, Barry Charles; Bennett, Steven P; von Winterfeldt, Detlof; Sokolowski, John; Collins, Andrew J

    2010-04-01

    Since the terrorist attacks of September 11, 2001, and the subsequent establishment of the U.S. Department of Homeland Security (DHS), considerable efforts have been made to estimate the risks of terrorism and the cost effectiveness of security policies to reduce these risks. DHS, industry, and the academic risk analysis communities have all invested heavily in the development of tools and approaches that can assist decisionmakers in effectively allocating limited resources across the vast array of potential investments that could mitigate risks from terrorism and other threats to the homeland. Decisionmakers demand models, analyses, and decision support that are useful for this task and based on the state of the art. Since terrorism risk analysis is new, no single method is likely to meet this challenge. In this article we explore a number of existing and potential approaches for terrorism risk analysis, focusing particularly on recent discussions regarding the applicability of probabilistic and decision analytic approaches to bioterrorism risks and the Bioterrorism Risk Assessment methodology used by the DHS and criticized by the National Academies and others.

  8. Cognitive Inflexibility in Gamblers is Primarily Present in Reward-Related Decision Making

    PubMed Central

    Boog, Michiel; Höppener, Paul; v. d. Wetering, Ben J. M.; Goudriaan, Anna E.; Boog, Matthijs C.; Franken, Ingmar H. A.

    2014-01-01

    One hallmark of gambling disorder (GD) is the observation that gamblers have problems stopping their gambling behavior once it is initiated. On a neuropsychological level, it has been hypothesized that this is the result of a cognitive inflexibility. The present study investigated cognitive inflexibility in patients with GD using a task involving cognitive inflexibility with a reward element (i.e., reversal learning) and a task measuring general cognitive inflexibility without such a component (i.e., response perseveration). For this purpose, scores of a reward-based reversal learning task (probabilistic reversal learning task) and the Wisconsin card sorting task were compared between a group of treatment seeking patients with GD and a gender and age matched control group. The results show that pathological gamblers have impaired performance on the neurocognitive task measuring reward-based cognitive inflexibility. However, no difference between the groups is observed regarding non-reward-based cognitive inflexibility. This suggests that cognitive inflexibility in GD is the result of an aberrant reward-based learning, and not based on a more general problem with cognitive flexibility. The pattern of observed problems is suggestive of a dysfunction of the orbitofrontal cortex, the ventrolateral prefrontal cortex, and the ventral regions of the striatum in gamblers. Relevance for the neurocognition of problematic gambling is discussed. PMID:25165438

  9. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm-based technique, which is inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed in which, instead of greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability of being accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance the intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinct characteristics are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature.
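
    The acceptance rule described above, always accept improvements but accept a worse candidate with a probability that decays nonlinearly over the run, can be sketched as follows. The parameter names and the particular polynomial decay form are assumptions of this sketch, not the paper's exact schedule.

```python
import random

def accept(new_cost, old_cost, iteration, max_iter, p0=0.3, power=2.0):
    """Solution acceptance rule: a better (lower-cost) candidate is
    always accepted; a worse one is accepted with probability
    p0 * (1 - iteration/max_iter)**power, which decays nonlinearly
    to zero as the search progresses.
    """
    if new_cost <= old_cost:
        return True
    p_accept = p0 * (1.0 - iteration / max_iter) ** power
    return random.random() < p_accept
```

    Early in the run this behaves somewhat like simulated-annealing acceptance, encouraging diversification; by the final iteration the rule reduces to greedy selection.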

  10. Momentary Conscious Pairing Eliminates Unconscious-Stimulus Influences on Task Selection

    PubMed Central

    Zhou, Fanzhi Anita; Davis, Greg

    2012-01-01

    Task selection, previously thought to operate only under conscious, voluntary control, can be activated by unconsciously-perceived stimuli. In most cases, such activation is observed for unconscious stimuli that closely resemble other conscious, task-relevant stimuli and hence may simply reflect perceptual activation of consciously established stimulus-task associations. However, other studies have reported ‘direct’ unconscious-stimulus influences on task selection in the absence of any conscious, voluntary association between that stimulus and task (e.g., Zhou and Davis, 2012). In new experiments, described here, these latter influences on cued- and free-choice task selection appear robust and long-lived, yet, paradoxically, are suppressed to undetectable levels following momentary conscious prime-task pairing. Assessing, and rejecting, three intuitive explanations for such suppressive effects, we conclude that conscious prime-task pairing minimizes non-strategic influences of unconscious stimuli on task selection, insulating endogenous choice mechanisms from maladaptive external control. PMID:23050012

  11. Concerning Dice and Divinity

    NASA Astrophysics Data System (ADS)

    Appleby, D. M.

    2007-02-01

    Einstein initially objected to the probabilistic aspect of quantum mechanics—the idea that God is playing at dice. Later he changed his ground, and focussed instead on the point that the Copenhagen Interpretation leads to what Einstein saw as the abandonment of physical realism. We argue here that Einstein's initial intuition was perfectly sound, and that it is precisely the fact that quantum mechanics is a fundamentally probabilistic theory which is at the root of all the controversies regarding its interpretation. Probability is an intrinsically logical concept. This means that the quantum state has an essentially logical significance. It is extremely difficult to reconcile that fact with Einstein's belief, that it is the task of physics to give us a vision of the world apprehended sub specie aeternitatis. Quantum mechanics thus presents us with a simple choice: either to follow Einstein in looking for a theory which is not probabilistic at the fundamental level, or else to accept that physics does not in fact put us in the position of God looking down on things from above. There is a widespread fear that the latter alternative must inevitably lead to a greatly impoverished, positivistic view of physical theory. It appears to us, however, that the truth is just the opposite. The Einsteinian vision is much less attractive than it seems at first sight. In particular, it is closely connected with philosophical reductionism.

  12. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    PubMed

    Chung, Michael Jae-Yoon; Friesen, Abram L; Fox, Dieter; Meltzoff, Andrew N; Rao, Rajesh P N

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.

  14. A probabilistic multi-criteria decision making technique for conceptual and preliminary aerospace systems design

    NASA Astrophysics Data System (ADS)

    Bandte, Oliver

    It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model on the other hand is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS.
The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
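
    The Empirical Distribution Function estimate of POS described above, counting how often a sampled criterion vector satisfies every goal, can be sketched as follows. The function name, the sampling interface, and the "at most" form of the targets are assumptions of this sketch.

```python
import random

def probability_of_success(sample_fn, targets, n=10000, seed=42):
    """Estimate Probability of Success with the empirical distribution
    function: draw `n` criterion vectors from the joint distribution
    via `sample_fn(rng)` and count how often every criterion meets
    its target (here, criterion <= target for all criteria).
    """
    rng = random.Random(seed)
    hits = sum(
        all(c <= t for c, t in zip(sample_fn(rng), targets))
        for _ in range(n)
    )
    return hits / n
```

    For two independent uniform criteria with targets at their medians, the estimate converges to 0.25; correlation between criteria, as captured by the Joint Probability Model, would shift this value, which is exactly the information JPDM exploits.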

  15. Task frequency influences stimulus-driven effects on task selection during voluntary task switching.

    PubMed

    Arrington, Catherine M; Reiman, Kaitlin M

    2015-08-01

    Task selection during voluntary task switching involves both top-down (goal-directed) and bottom-up (stimulus-driven) mechanisms. The factors that shift the balance between these two mechanisms are not well characterized. In the present research, we studied the role that task frequency plays in determining the extent of stimulus-driven task selection. In two experiments, we used the basic paradigm adapted from Arrington (Memory & Cognition, 38, 991-997, 2008), in which the effect of stimulus availability serves as a marker of stimulus-driven task selection. A number and letter appeared on each trial with varying stimulus onset asynchronies, and participants performed either a consonant/vowel or an even/odd judgment. In Experiment 1, participants were instructed as to the relative frequency with which each task was to be performed (i.e., 50/50, 60/40, or 75/25) and were further instructed to make their transitions between tasks unpredictable. In Experiment 2, participants were given no instructions about how to select tasks, resulting in naturally occurring variation in task frequency. With both instructed (Exp. 1) and naturally occurring (Exp. 2) relative task frequencies, the less frequently performed task showed a greater effect of stimulus availability on task selection, suggestive of a larger influence of stimulus-driven mechanisms during task performance for the less frequent task. When goal-directed mechanisms of task choice are engaged less frequently, the relative influence of the stimulus environment increases.

  16. Use of Multichannel Near Infrared Spectroscopy to Study Relationships Between Brain Regions and Neurocognitive Tasks of Selective/Divided Attention and 2-Back Working Memory.

    PubMed

    Tomita, Nozomi; Imai, Shoji; Kanayama, Yusuke; Kawashima, Issaku; Kumano, Hiroaki

    2017-06-01

    While dichotic listening (DL) was originally intended to measure bottom-up selective attention, it has also become a tool for measuring top-down selective attention. This study investigated the brain regions related to top-down selective and divided attention DL tasks and a 2-back task using alphanumeric and Japanese numeric sounds. Thirty-six healthy participants underwent near-infrared spectroscopy scanning while performing a top-down selective attentional DL task, a top-down divided attentional DL task, and a 2-back task. Pearson's correlations were calculated to show relationships between oxy-Hb concentration in each brain region and the score of each cognitive task. Different brain regions were activated during the DL and 2-back tasks. Brain regions activated in the top-down selective attention DL task were the left inferior prefrontal gyrus and left pars opercularis. The left temporopolar area was activated in the top-down divided attention DL task, and the left frontopolar area and left dorsolateral prefrontal cortex were activated in the 2-back task. As further evidence for the finding that each task measured different cognitive and brain area functions, neither the percentages of correct answers for the three tasks nor the response times for the selective attentional task and the divided attentional task were correlated to one another. Thus, the DL and 2-back tasks used in this study can assess multiple areas of cognitive, brain-related dysfunction to explore their relationship to different psychiatric and neurodevelopmental disorders.

  17. Classification image analysis: estimation and statistical inference for two-alternative forced-choice experiments

    NASA Technical Reports Server (NTRS)

    Abbey, Craig K.; Eckstein, Miguel P.

    2002-01-01

    We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
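    The estimation step described here can be illustrated with a small simulation. The sketch below is a hedged, hypothetical version (not the authors' code): a linear observer with an assumed Gaussian template performs 2AFC detection in pixel noise, and the classification image is formed by averaging the noise-field differences signed by the observer's responses; it recovers the shape of the template.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_trials = 64, 20000

# Hypothetical "true" template of a linear observer (invented for the example)
w_true = np.exp(-0.5 * ((np.arange(n_pix) - 32) / 6.0) ** 2)
w_true /= np.linalg.norm(w_true)

signal = 0.5 * w_true              # target profile shown in one interval
cls_img = np.zeros(n_pix)
for _ in range(n_trials):
    n1, n2 = rng.normal(0, 1, (2, n_pix))   # noise fields, two intervals
    # Linear observer: choose interval 1 if its filtered response is larger
    resp1 = w_true @ (signal + n1) > w_true @ n2
    # Accumulate the noise difference, sign-flipped by the response
    cls_img += (n1 - n2) if resp1 else (n2 - n1)
cls_img /= n_trials

# The classification image should correlate with the observer's template
corr = np.corrcoef(cls_img, w_true)[0, 1]
```

    In the general linear filter models the paper discusses, this average is (up to scale) an estimate of the filter weights, which is what the statistical tests then operate on.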

  18. General functioning predicts reward and punishment learning in schizophrenia.

    PubMed

    Somlai, Zsuzsanna; Moustafa, Ahmed A; Kéri, Szabolcs; Myers, Catherine E; Gluck, Mark A

    2011-04-01

    Previous studies investigating feedback-driven reinforcement learning in patients with schizophrenia have provided mixed results. In this study, we explored the clinical predictors of reward and punishment learning using a probabilistic classification learning task. Patients with schizophrenia (n=40) performed similarly to healthy controls (n=30) on the classification learning task. However, more severe negative and general symptoms were associated with lower reward-learning performance, whereas poorer general psychosocial functioning was correlated with both lower reward- and punishment-learning performances. Multiple linear regression analyses indicated that general psychosocial functioning was the only significant predictor of reinforcement learning performance when education, antipsychotic dose, and positive, negative and general symptoms were included in the analysis. These results suggest a close relationship between reinforcement learning and general psychosocial functioning in schizophrenia. Published by Elsevier B.V.

  19. What top-down task sets do for us: an ERP study on the benefits of advance preparation in visual search.

    PubMed

    Eimer, Martin; Kiss, Monika; Nicholas, Susan

    2011-12-01

    When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features. Visual search arrays contained two different color singleton digits, and participants had to select one of these as target and report its parity. Target color was either known in advance (fixed color task) or had to be selected anew on each trial (free color-choice task). ERP correlates of spatially selective attentional target selection (N2pc) and working memory processing (SPCN) demonstrated rapid target selection and efficient exclusion of color singleton distractors from focal attention and working memory in the fixed color task. In the free color-choice task, spatially selective processing also emerged rapidly, but selection efficiency was reduced, with nontarget singleton digits capturing attention and gaining access to working memory. Results demonstrate the benefits of top-down task sets: Feature-specific advance preparation accelerates target selection, rapidly resolves attentional competition, and prevents irrelevant events from attracting attention and entering working memory.

  20. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection.

    PubMed

    DeWeber, Jefferson T; Wagner, Tyler

    2018-06-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as 0.2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our study demonstrates that even relatively small differences in the definitions of climate metrics can result in very different projections and reveal high uncertainty in predicted climate change effects. © 2018 John Wiley & Sons Ltd.


  2. Habit learning and the genetics of the dopamine D3 receptor: evidence from patients with schizophrenia and healthy controls.

    PubMed

    Kéri, Szabolcs; Juhász, Anna; Rimanóczy, Agnes; Szekeres, György; Kelemen, Oguz; Cimmer, Csongor; Szendi, István; Benedek, György; Janka, Zoltán

    2005-06-01

    In this study, the authors investigated the relationship between the Ser9Gly (SG) polymorphism of the dopamine D3 receptor (DRD3) and striatal habit learning in healthy controls and patients with schizophrenia. Participants were given the weather prediction task, during which probabilistic cue-response associations were learned for tarot cards and weather outcomes (rain or sunshine). In both healthy controls and patients with schizophrenia, participants with the Ser9Ser (SS) genotype did not learn during the early phase of the task (trials 1-50), whereas participants with the SG genotype did. During the late phase of the task (trials 51-100), participants with both the SS and SG genotypes exhibited significant learning. Learning rate was normal in patients with schizophrenia. These results suggest that the DRD3 variant containing glycine is associated with more efficient striatal habit learning in healthy controls and patients with schizophrenia. (c) 2005 APA, all rights reserved.

  3. Bayesian Action–Perception Computational Model: Interaction of Production and Recognition of Cursive Letters

    PubMed Central

    Gilet, Estelle; Diard, Julien; Bessière, Pierre

    2011-01-01

    In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. Being a model of both perception and action processes, the purpose of this model is to study the interaction of these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments. PMID:21674043
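    The Bayesian inference underlying these tasks can be sketched in a few lines. The letters, prior, and likelihood values below are invented for illustration; the point is only the form of the computation, including how an internal motor simulation can contribute a second likelihood term during recognition (task vi):

```python
import numpy as np

letters = ["a", "c", "e"]
prior = np.full(3, 1 / 3)                 # uniform prior over letters
p_visual = np.array([0.40, 0.35, 0.25])   # assumed visual likelihoods of the trace
p_motor  = np.array([0.70, 0.20, 0.10])   # assumed likelihoods from simulated movement

# Recognition from perception alone: P(letter | visual) ∝ P(visual | letter) P(letter)
post_vis = prior * p_visual
post_vis /= post_vis.sum()

# Recognition aided by internal motor simulation: a second likelihood factor
post_both = prior * p_visual * p_motor
post_both /= post_both.sum()
```

    Combining the motor likelihood sharpens the posterior on the winning letter, which is the qualitative effect the feedback loop in the BAP model is meant to capture.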

  4. Optimal Symmetric Multimodal Templates and Concatenated Random Forests for Supervised Brain Tumor Segmentation (Simplified) with ANTsR.

    PubMed

    Tustison, Nicholas J; Shrinidhi, K L; Wintermark, Max; Durst, Christopher R; Kandel, Benjamin M; Gee, James C; Grossman, Murray C; Avants, Brian B

    2015-04-01

    Segmenting and quantifying gliomas from MRI is an important task for diagnosis, planning intervention, and for tracking tumor changes over time. However, this task is complicated by the lack of prior knowledge concerning tumor location, spatial extent, shape, possible displacement of normal tissue, and intensity signature. To accommodate such complications, we introduce a framework for supervised segmentation based on multiple modality intensity, geometry, and asymmetry feature sets. These features drive a supervised whole-brain and tumor segmentation approach based on random forest-derived probabilities. The asymmetry-related features (based on optimal symmetric multimodal templates) demonstrate excellent discriminative properties within this framework. We also gain performance by generating probability maps from random forest models and using these maps for a refining Markov random field regularized probabilistic segmentation. This strategy allows us to interface the supervised learning capabilities of the random forest model with regularized probabilistic segmentation using the recently developed ANTsR package, a comprehensive statistical and visualization interface between the popular Advanced Normalization Tools (ANTs) and the R statistical project. The reported algorithmic framework was the top-performing entry in the MICCAI 2013 Multimodal Brain Tumor Segmentation challenge. The challenge data varied widely, consisting of four-modality MRI of both high-grade and low-grade gliomas from five different institutions. Average Dice overlap measures for the final algorithmic assessment were 0.87, 0.78, and 0.74 for "complete", "core", and "enhanced" tumor components, respectively.
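    The core step of turning random-forest outputs into voxelwise probability maps can be sketched with synthetic data. This is a toy stand-in, not the authors' ANTsR pipeline: the feature values and class separation below are invented, and the resulting probabilities would, in the paper's approach, seed an MRF-regularized segmentation rather than be thresholded directly.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in for voxelwise features (e.g., intensity, geometry, asymmetry)
n_vox = 2000
X_tumor  = rng.normal([2.0, 1.5, 1.0], 0.5, (n_vox, 3))
X_normal = rng.normal([0.0, 0.0, 0.0], 0.5, (n_vox, 3))
X = np.vstack([X_tumor, X_normal])
y = np.r_[np.ones(n_vox), np.zeros(n_vox)]

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Probability "map": P(tumor | features) for every voxel
proba = rf.predict_proba(X)[:, 1]
```

    The probabilities are averages over the forest's trees, so they vary smoothly with the features; that smoothness is what makes them useful as unary terms in a subsequent Markov random field regularization.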

  5. Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    NASA Astrophysics Data System (ADS)

    Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro

    2017-10-01

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation on the quality of the generative unsupervised machine-learning models that can be constructed. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.

  6. Stochastic model for fatigue crack size and cost effective design decisions [for aerospace structures]

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1975-01-01

    This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.
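    A minimal sketch of the kind of model described, assuming a pure-birth Markov process over discrete crack-length states in continuous time (the state count, growth rates, and inspection time below are all invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical pure-birth Markov model: discrete crack-length states,
# continuous time, with growth rate increasing with current crack size.
states = np.arange(10)            # crack-length index 0..9 (9 = repair threshold)
rates = 0.05 * (1 + states)       # rate of growing to the next state (1/flight hour)

def time_to_threshold(rng):
    """Sample the time for the crack to traverse all states."""
    t = 0.0
    for s in states[:-1]:
        t += rng.exponential(1.0 / rates[s])   # exponential holding time per state
    return t

ttf = np.array([time_to_threshold(rng) for _ in range(20_000)])

# Input to an inspection-interval decision: probability the crack reaches
# the repair-threshold state before an inspection at, say, t = 40 hours
p_fail_by_40 = np.mean(ttf < 40.0)
```

    Quantities like this exceedance probability, combined with inspection and repair costs, are the ingredients of the minimum-expected-cost decisions the paper develops.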

  7. Probabilistic Analysis of Solid Oxide Fuel Cell Based Hybrid Gas Turbine System

    NASA Technical Reports Server (NTRS)

    Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.

    2003-01-01

    The emergence of fuel cell systems and hybrid fuel cell systems requires the evolution of analysis strategies for evaluating thermodynamic performance. A gas turbine thermodynamic cycle integrated with a fuel cell was computationally simulated and probabilistically evaluated in view of the several uncertainties in the thermodynamic performance parameters. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the uncertainties in the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design and make it cost effective. The analysis leads to the selection of criteria for gas turbine performance.
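    The probabilistic evaluation described, computing cumulative distribution functions and sensitivity factors from uncertain thermodynamic inputs, can be sketched with a toy Monte Carlo model. The cycle model and parameter values below are invented for illustration, not NASA's simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Hypothetical uncertain performance parameters
fc_eff   = rng.normal(0.50, 0.02, N)   # fuel cell efficiency
gt_eff   = rng.normal(0.30, 0.03, N)   # gas turbine efficiency
fc_share = rng.normal(0.70, 0.01, N)   # fraction of fuel energy routed to the fuel cell

# Toy hybrid-cycle thermal efficiency
eta = fc_share * fc_eff + (1 - fc_share) * gt_eff

# Empirical CDF evaluated at a target: P(overall efficiency < 0.45)
p_below_045 = np.mean(eta < 0.45)

# Crude sensitivity factors: correlation of each input with the output
sens = {name: abs(np.corrcoef(x, eta)[0, 1])
        for name, x in [("fc_eff", fc_eff), ("gt_eff", gt_eff), ("fc_share", fc_share)]}
```

    Ranking the sensitivity factors identifies which uncertain variables dominate the spread in efficiency, which is how such an analysis points to the most critical design variables.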

  8. Limits in decision making arise from limits in memory retrieval.

    PubMed

    Giguère, Gyslain; Love, Bradley C

    2013-05-07

    Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people's memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people's test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
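    The central claim, that limited stochastic retrieval injects irreducible noise which idealized training removes, can be checked with a small simulation. The outcome rate and retrieval limit below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
p_win, k, n_trials = 0.7, 3, 200_000   # assumed outcome rate and retrieval limit

outcomes = rng.random(n_trials) < p_win

# Limited-retrieval decider: sample k memories from the actual outcome
# distribution and predict the majority of the sample.
samples = rng.random((n_trials, k)) < p_win
pred_sampled = samples.sum(axis=1) > k / 2

# Same decider trained on an *idealized* distribution in which every stored
# memory favors the majority outcome: retrieval noise can no longer flip it.
pred_idealized = np.ones(n_trials, dtype=bool)

acc_sampled   = np.mean(pred_sampled == outcomes)
acc_idealized = np.mean(pred_idealized == outcomes)
```

    No amount of extra training fixes the sampled decider, because the noise comes from the retrieval bottleneck itself; idealizing the training distribution sidesteps it, matching the paper's prediction for human learners.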


  10. Development of Maximum Considered Earthquake Ground Motion Maps

    USGS Publications Warehouse

    Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.

    2000-01-01

    The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgement. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.

  11. Selective impairment of auditory selective attention under concurrent cognitive load.

    PubMed

    Dittrich, Kerstin; Stahl, Christoph

    2012-06-01

    Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that demonstrate such selective load effects for auditory selective attention. The effect of two different cognitive load tasks on two different auditory Stroop tasks was examined, and selective load effects were observed: Interference in a nonverbal-auditory Stroop task was increased under concurrent nonverbal-auditory cognitive load (compared with a no-load condition), but not under concurrent verbal-auditory cognitive load. By contrast, interference in a verbal-auditory Stroop task was increased under concurrent verbal-auditory cognitive load but not under nonverbal-auditory cognitive load. This double-dissociation pattern suggests the existence of different and separable verbal and nonverbal processing resources in the auditory domain.

  12. Oculomotor selection underlies feature retention in visual working memory.

    PubMed

    Hanning, Nina M; Jonikaitis, Donatas; Deubel, Heiner; Szinte, Martin

    2016-02-01

    Oculomotor selection, spatial task relevance, and visual working memory (WM) are described as three processes highly intertwined and sustained by similar cortical structures. However, because task-relevant locations always constitute potential saccade targets, no study so far has been able to distinguish between oculomotor selection and spatial task relevance. We designed an experiment that allowed us to dissociate in humans the contribution of task relevance, oculomotor selection, and oculomotor execution to the retention of feature representations in WM. We report that task relevance and oculomotor selection lead to dissociable effects on feature WM maintenance. In a first task, in which an object's location was encoded as a saccade target, its feature representations were successfully maintained in WM, whereas they declined at nonsaccade target locations. Likewise, we observed a similar WM benefit at the target of saccades that were prepared but never executed. In a second task, when an object's location was marked as task relevant but constituted a nonsaccade target (a location to avoid), feature representations maintained at that location did not benefit. Combined, our results demonstrate that oculomotor selection is consistently associated with WM, whereas task relevance is not. This provides evidence for an overlapping circuitry serving saccade target selection and feature-based WM that can be dissociated from processes encoding task-relevant locations. Copyright © 2016 the American Physiological Society.

  13. Analog-Based Postprocessing of Navigation-Related Hydrological Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Klein, B.

    2017-11-01

    Inland waterway transport benefits from probabilistic forecasts of water levels because they allow ship operators to optimize the load and, hence, minimize transport costs. Probabilistic state-of-the-art hydrologic ensemble forecasts inherit biases and dispersion errors from the atmospheric ensemble forecasts they are driven with. The use of statistical postprocessing techniques like ensemble model output statistics (EMOS) allows for a reduction of these systematic errors by fitting a statistical model to training data. In this study, training periods for EMOS are selected based on forecast analogs, i.e., historical forecasts that are similar to the forecast to be verified. Due to the strong autocorrelation of water levels, forecast analogs have to be selected based on entire forecast hydrographs in order to guarantee similar hydrograph shapes. Custom-tailored measures of similarity for forecast hydrographs comprise hydrological series distance (SD), the hydrological matching algorithm (HMA), and dynamic time warping (DTW). Verification against observations reveals that EMOS forecasts for water level at three gauges along the river Rhine with training periods selected based on SD, HMA, and DTW compare favorably with reference EMOS forecasts, which are based either on seasonal training periods or on training periods obtained by dividing the hydrological forecast trajectories into runoff regimes.
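    Of the similarity measures mentioned, dynamic time warping is the easiest to sketch. A minimal implementation, with invented hydrograph shapes, shows how an analog forecast with a slightly shifted peak scores as closer than a dissimilar flat hydrograph:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 50)
forecast   = np.exp(-((t - 0.5) / 0.1) ** 2)    # hydrograph to be verified
analog     = np.exp(-((t - 0.55) / 0.1) ** 2)   # similar shape, slightly shifted peak
dissimilar = np.full_like(t, 0.2)               # flat, low-flow hydrograph

# An analog-based training set keeps the historical forecasts with the
# smallest distance to the current forecast hydrograph.
d_analog, d_flat = dtw_distance(forecast, analog), dtw_distance(forecast, dissimilar)
```

    Because the warping path can absorb small timing shifts, DTW rates the shifted-peak hydrograph as similar, which is exactly the property needed when selecting analogs for autocorrelated water-level series.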

  14. Training self-assessment and task-selection skills to foster self-regulated learning: Do trained skills transfer across domains?

    PubMed

    Raaijmakers, Steven F; Baars, Martine; Paas, Fred; van Merriënboer, Jeroen J G; van Gog, Tamara

    2018-01-01

    Students' ability to accurately self-assess their performance and select a suitable subsequent learning task in response is imperative for effective self-regulated learning. Video modeling examples have proven effective for training self-assessment and task-selection skills, and, importantly, such training fostered self-regulated learning outcomes. It is unclear, however, whether trained skills would transfer across domains. We investigated whether skills acquired from training with either a specific, algorithmic task-selection rule or a more general heuristic task-selection rule in biology would transfer to self-regulated learning in math. A manipulation check performed after the training confirmed that both algorithmic and heuristic training improved task-selection skills on the biology problems compared with the control condition. However, we found no evidence that students subsequently applied the acquired skills during self-regulated learning in math. Future research should investigate how to support transfer of task-selection skills across domains.

  15. Multi-task feature selection in microarray data by binary integer programming.

    PubMed

    Lan, Liang; Vucetic, Slobodan

    2013-12-20

    A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
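    A hedged sketch of the relevance-versus-redundancy idea: the paper solves a relaxed quadratic program, but a greedy surrogate over correlation-based relevance and redundancy scores (with an assumed redundancy weight of 0.5, and synthetic data invented for the example) captures the same trade-off:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, k = 200, 30, 3          # samples, features, number of features to keep

# Synthetic data: features 0 and 1 carry independent signal; feature 2 is a
# near-duplicate of feature 0 (highly redundant).
y = rng.standard_normal(n)
X = rng.standard_normal((n, p))
X[:, 0] += 2 * y
X[:, 1] += 2 * y
X[:, 2] = X[:, 0] + 0.05 * rng.standard_normal(n)

rel = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])  # discriminative power
R = np.abs(np.corrcoef(X, rowvar=False))                          # pairwise redundancy

# Greedy surrogate for the relaxed quadratic objective: repeatedly pick the
# feature maximizing relevance minus weighted average redundancy with the
# features already picked.
picked = [int(np.argmax(rel))]
while len(picked) < k:
    scores = [rel[j] - 0.5 * R[j, picked].mean() if j not in picked else -np.inf
              for j in range(p)]
    picked.append(int(np.argmax(scores)))
```

    The redundancy penalty is what distinguishes this from a plain univariate filter: after one copy of the duplicated signal is picked, the other copy is down-weighted in favor of the independent informative feature.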

  16. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    NASA Astrophysics Data System (ADS)

    Chapman, Martin Colby

    1998-12-01

    The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_Ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_Ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_Ea is uniformly less than that of PSV, indicating that V_Ea can be predicted with slightly less uncertainty as a function of magnitude, distance, and site classification. The effects of site class are important at frequencies less than a few hertz. The regression modeling does not resolve significant effects due to site class at frequencies greater than approximately 5 Hz. Disaggregation of general seismic hazard models using V_Ea indicates that the modal magnitudes for the higher frequency oscillators tend to be larger, and vary less with oscillator frequency, than those derived using PSV. Insofar as the elastic input energy may be a better parameter for quantifying the damage potential of ground motion, its use in probabilistic seismic hazard analysis could provide an improved means for selecting earthquake scenarios and establishing design earthquakes for many types of engineering analyses.

  17. Exploratory study on a statistical method to analyse time resolved data obtained during nanomaterial exposure measurements

    NASA Astrophysics Data System (ADS)

    Clerc, F.; Njiki-Menga, G.-H.; Witschger, O.

    2013-04-01

    Most of the measurement strategies suggested at the international level to assess workplace exposure to nanomaterials rely on devices that measure airborne particle concentrations in real time (according to different metrics). Since no aerosol-measuring instrument can distinguish a particle of interest from the background aerosol, the statistical analysis of time resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature, ranging from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the search for an appropriate and robust method is ongoing. In this context, this exploratory study investigates a statistical method to analyse time resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, particle number concentration data have been used from a workplace study that investigated the potential for exposure via inhalation from cleanout operations by sandpapering of a reactor producing nanocomposite thin films. In this workplace study, the background issue has been addressed through near-field and far-field approaches, and several size integrated and time resolved devices have been used. The analysis presented here focuses only on data obtained with two handheld condensation particle counters. While one was measuring at the source of the released particles, the other was measuring far-field in parallel. The Bayesian probabilistic approach allows probabilistic modelling of the data series, and the observed task is modelled in the form of probability distributions. 
The probability distributions derived from the time resolved data obtained at the source can be compared with those derived from the time resolved data obtained far-field, leading to a quantitative estimate of the airborne particles released at the source while the task is performed. Beyond the results obtained, this exploratory study indicates that analysing such data requires specific expertise in statistics.
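
    As a minimal, self-contained illustration of the idea (not the authors' full Bayesian model), the near-field and far-field series can each be treated as an empirical probability distribution and the source contribution estimated from their difference; all concentrations and distribution parameters below are invented:

```python
import random
import statistics

random.seed(42)

# Hypothetical particle number concentrations (particles/cm^3), modelled as
# lognormal: the far-field counter sees background only, the near-field
# counter sees background plus the released particles.
def lognormal_samples(mu, sigma, n):
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

far_field = lognormal_samples(8.0, 0.3, 5000)    # background only
near_field = lognormal_samples(8.7, 0.4, 5000)   # background + source

# Crude stand-in for the Bayesian comparison: the distribution of paired
# differences estimates the concentration released at the source.
released = [max(nf - ff, 0.0) for nf, ff in zip(near_field, far_field)]

print(f"median released concentration: "
      f"{statistics.median(released):.0f} particles/cm^3")
```

In the actual study the two series are modelled probabilistically rather than differenced sample-by-sample, but the output has the same interpretation: a distribution over the source contribution.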

  18. Uncertainty characterization approaches for risk assessment of DBPs in drinking water: a review.

    PubMed

    Chowdhury, Shakhawat; Champagne, Pascale; McLellan, P James

    2009-04-01

    The management of risk from disinfection by-products (DBPs) in drinking water has become a critical issue over the last three decades. The areas of concern for risk management studies include (i) human health risk from DBPs, (ii) disinfection performance, (iii) technical feasibility (maintenance, management and operation) of treatment and disinfection approaches, and (iv) cost. Human health risk assessment is typically considered to be the most important phase of the risk-based decision-making or risk management studies. The factors associated with health risk assessment and other attributes are generally prone to considerable uncertainty. Probabilistic and non-probabilistic approaches have both been employed to characterize uncertainties associated with risk assessment. The probabilistic approaches include sampling-based methods (typically Monte Carlo simulation and stratified sampling) and asymptotic (approximate) reliability analysis (first- and second-order reliability methods). Non-probabilistic approaches include interval analysis, fuzzy set theory and possibility theory. However, it is generally accepted that no single method is suitable for the entire spectrum of problems encountered in uncertainty analyses for risk assessment. Each method has its own set of advantages and limitations. In this paper, the feasibility and limitations of different uncertainty analysis approaches are outlined for risk management studies of drinking water supply systems. The findings assist in the selection of suitable approaches for uncertainty analysis in risk management studies associated with DBPs and human health risk.
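
    The sampling-based branch of the approaches reviewed above can be sketched as a Monte Carlo propagation of input uncertainty through a lifetime risk model; every distribution and constant below (concentration, intake rate, body weight, slope factor) is an illustrative placeholder, not data from the review:

```python
import random

random.seed(0)

# Monte Carlo propagation of uncertainty through a simple lifetime
# cancer-risk model: dose = C * IR / BW, risk = dose * SF.
def sample_risk():
    C = random.lognormvariate(3.0, 0.5)    # DBP concentration, ug/L (assumed)
    IR = random.triangular(1.0, 3.0, 2.0)  # water intake, L/day (assumed)
    BW = random.gauss(70, 10)              # body weight, kg (assumed)
    SF = 6.2e-3                            # slope factor, (mg/kg-day)^-1 (assumed)
    dose = C * 1e-3 * IR / BW              # mg/kg-day
    return dose * SF

risks = sorted(sample_risk() for _ in range(10000))
p95 = risks[int(0.95 * len(risks))]
print(f"95th-percentile risk: {p95:.2e}")
```

Interval analysis or fuzzy-set methods would instead propagate bounds or membership functions through the same model, which is why no single method suits every problem in the review.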

  19. Probabilistic Neighborhood-Based Data Collection Algorithms for 3D Underwater Acoustic Sensor Networks.

    PubMed

    Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo

    2017-02-08

    Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, three-dimensional underwater acoustic sensor networks (3D UASNs) provide a novel strategy to acquire marine environment information conveniently, efficiently and accurately. However, the propagation characteristics of the acoustic communication channel mean that the probability of successful information delivery decreases with increasing distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic one. An autonomous underwater vehicle (AUV) traverses a designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, the network is partitioned into grids and the AUV visits the central location of each grid for data collection. For 3D UASNs whose deployment is known in advance, the AUV only needs to visit several selected locations, obtained by constructing a minimum probabilistic neighborhood covering set, which reduces data latency. Furthermore, by increasing the number of transmission rounds, the proposed algorithms provide a tradeoff between data collection latency and information gain. These algorithms are compared with a basic nearest-neighbor heuristic algorithm via simulations, which show that the proposed algorithms efficiently reduce the average data collection completion time and hence data latency.
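
    The abstract does not specify how the minimum probabilistic neighborhood covering set is built, so the sketch below uses a standard greedy set-cover heuristic as an assumed stand-in; the exponential channel model and the parameters ALPHA and P_MIN are likewise invented:

```python
import math
import random

random.seed(1)

# Illustrative probabilistic channel: delivery success decays with distance.
ALPHA, P_MIN = 0.05, 0.9

def delivery_prob(a, b):
    return math.exp(-ALPHA * math.dist(a, b))

# 30 sensor nodes in a 50 x 50 x 50 m volume; for simplicity the candidate
# AUV stop locations are the node positions themselves.
nodes = [tuple(random.uniform(0, 50) for _ in range(3)) for _ in range(30)]
stops = list(nodes)

# Greedy covering set: repeatedly choose the stop that covers the most
# still-uncovered nodes, a node being covered when its delivery probability
# to the stop is at least P_MIN.
uncovered, chosen = set(range(len(nodes))), []
while uncovered:
    best = max(range(len(stops)),
               key=lambda s: sum(delivery_prob(stops[s], nodes[i]) >= P_MIN
                                 for i in uncovered))
    chosen.append(stops[best])
    uncovered -= {i for i in uncovered
                  if delivery_prob(stops[best], nodes[i]) >= P_MIN}

print(len(chosen), "AUV stops cover all", len(nodes), "nodes")
```

The AUV then only visits the chosen stops instead of every grid cell, which is the latency saving the paper describes.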

  20. Probabilistic bias analysis in pharmacoepidemiology and comparative effectiveness research: a systematic review.

    PubMed

    Hunnicutt, Jacob N; Ulbricht, Christine M; Chrysanthopoulou, Stavroula A; Lapane, Kate L

    2016-12-01

    We systematically reviewed pharmacoepidemiologic and comparative effectiveness studies that use probabilistic bias analysis to quantify the effects of systematic error including confounding, misclassification, and selection bias on study results. We found articles published between 2010 and October 2015 through a citation search using Web of Science and Google Scholar and a keyword search using PubMed and Scopus. Eligibility of studies was assessed by one reviewer. Three reviewers independently abstracted data from eligible studies. Fifteen studies used probabilistic bias analysis and were eligible for data abstraction-nine simulated an unmeasured confounder and six simulated misclassification. The majority of studies simulating an unmeasured confounder did not specify the range of plausible estimates for the bias parameters. Studies simulating misclassification were in general clearer when reporting the plausible distribution of bias parameters. Regardless of the bias simulated, the probability distributions assigned to bias parameters, number of simulated iterations, sensitivity analyses, and diagnostics were not discussed in the majority of studies. Despite the prevalence and concern of bias in pharmacoepidemiologic and comparative effectiveness studies, probabilistic bias analysis to quantitatively model the effect of bias was not widely used. The quality of reporting and use of this technique varied and was often unclear. Further discussion and dissemination of the technique are warranted. Copyright © 2016 John Wiley & Sons, Ltd.
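
    The unmeasured-confounder variant of probabilistic bias analysis can be sketched with the simple external-adjustment formula: draw the bias parameters from explicit distributions, compute the bias factor for each draw, and divide it out of the observed estimate. The observed risk ratio and all parameter ranges below are illustrative, chosen only to show the mechanics (and the explicit reporting of parameter distributions the review found lacking):

```python
import random

random.seed(7)

rr_observed = 1.8  # hypothetical observed (confounded) risk ratio

# Draw bias parameters from stated distributions and compute the
# confounding-adjusted RR for each draw.
adjusted = []
for _ in range(10000):
    rr_cd = random.uniform(1.5, 3.0)  # confounder-outcome risk ratio
    p1 = random.uniform(0.4, 0.7)     # confounder prevalence among exposed
    p0 = random.uniform(0.1, 0.3)     # confounder prevalence among unexposed
    bias = (rr_cd * p1 + (1 - p1)) / (rr_cd * p0 + (1 - p0))
    adjusted.append(rr_observed / bias)

adjusted.sort()
lo, med, hi = (adjusted[int(q * len(adjusted))] for q in (0.025, 0.5, 0.975))
print(f"adjusted RR {med:.2f} (95% simulation interval {lo:.2f}-{hi:.2f})")
```

Reporting the parameter distributions, iteration count, and the resulting simulation interval is exactly the documentation the review recommends.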

  1. Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; Shah, Ashwin R.

    1997-01-01

    The properties of ceramic matrix composites (CMCs) are known to display a considerable amount of scatter due to variations in fiber/matrix properties, interphase properties, interphase bonding, amount of matrix voids, and many geometry- or fabrication-related parameters, such as ply thickness and ply orientation. This paper summarizes preliminary studies in which formal probabilistic descriptions of the material-behavior- and fabrication-related parameters were incorporated into micromechanics and macromechanics for CMCs. In this process two existing methodologies, namely CMC micromechanics and macromechanics analysis and a fast probability integration (FPI) technique, are synergistically coupled to obtain the probabilistic composite behavior or response. Preliminary results in the form of cumulative probability distributions and information on the probability sensitivities of the response to primitive variables for a unidirectional silicon carbide/reaction-bonded silicon nitride (SiC/RBSN) CMC are presented. The cumulative distribution functions are computed for composite moduli, thermal expansion coefficients, thermal conductivities, and longitudinal tensile strength at room temperature. The variations in the constituent properties that directly affect these composite properties are accounted for via assumed probabilistic distributions. Collectively, the results show that the present technique provides valuable information about the composite properties and sensitivity factors, which is useful to design or test engineers. Furthermore, the present methodology is computationally more efficient than a standard Monte Carlo simulation technique, and the agreement between the two solutions is excellent, as shown via select examples.
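
    The Monte Carlo baseline against which FPI is compared can be sketched by propagating assumed constituent scatter through a rule-of-mixtures model for the longitudinal modulus; the nominal values and standard deviations below are invented for illustration and are not the paper's data:

```python
import random
import statistics

random.seed(3)

# Propagate constituent-level scatter to a composite property.
# Rule-of-mixtures longitudinal modulus, with voids degrading the matrix.
def composite_modulus():
    Ef = random.gauss(390, 20)     # fiber modulus, GPa (SiC, assumed scatter)
    Em = random.gauss(110, 10)     # matrix modulus, GPa (RBSN, assumed)
    Vf = random.gauss(0.30, 0.02)  # fiber volume fraction (assumed)
    Vv = random.gauss(0.05, 0.01)  # matrix void volume fraction (assumed)
    return Ef * Vf + Em * (1 - Vf) * (1 - Vv)

E = sorted(composite_modulus() for _ in range(20000))
print(f"median E1 = {statistics.median(E):.0f} GPa, "
      f"5th percentile = {E[len(E) // 20]:.0f} GPa")
```

FPI approximates the same cumulative distribution function from far fewer limit-state evaluations, which is the efficiency advantage the paper reports.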

  2. Assessment of food intake input distributions for use in probabilistic exposure assessments of food additives.

    PubMed

    Gilsenan, M B; Lambe, J; Gibney, M J

    2003-11-01

    A key component of a food chemical exposure assessment using probabilistic analysis is the selection of the most appropriate input distribution to represent exposure variables. The study explored the type of parametric distribution that could be used to model variability in food consumption data likely to be included in a probabilistic exposure assessment of food additives. The goodness-of-fit of a range of continuous distributions to observed data of 22 food categories expressed as average daily intakes among consumers from the North-South Ireland Food Consumption Survey was assessed using the BestFit distribution fitting program. The lognormal distribution was most commonly accepted as a plausible parametric distribution to represent food consumption data when food intakes were expressed as absolute intakes (16/22 foods) and as intakes per kg body weight (18/22 foods). Results from goodness-of-fit tests were accompanied by lognormal probability plots for a number of food categories. The influence on food additive intake of using a lognormal distribution to model food consumption input data was assessed by comparing modelled intake estimates with observed intakes. Results from the present study advise some level of caution about the use of a lognormal distribution as a mode of input for food consumption data in probabilistic food additive exposure assessments and the results highlight the need for further research in this area.
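
    The goodness-of-fit screening can be reproduced without the BestFit program: fit a lognormal by the method of moments on log-intakes and compute a one-sample Kolmogorov-Smirnov statistic by hand. The synthetic "intakes" below merely stand in for survey data:

```python
import math
import random
import statistics

random.seed(11)

# Synthetic average daily intakes (g/day) standing in for survey data.
intakes = [random.lognormvariate(4.0, 0.6) for _ in range(500)]

# Fit a lognormal via the moments of the log-transformed data.
logs = [math.log(x) for x in intakes]
mu, sigma = statistics.fmean(logs), statistics.stdev(logs)

def lognorm_cdf(x):
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

# One-sample Kolmogorov-Smirnov statistic against the fitted lognormal.
xs = sorted(intakes)
n = len(xs)
ks = max(max(abs((i + 1) / n - lognorm_cdf(x)), abs(i / n - lognorm_cdf(x)))
         for i, x in enumerate(xs))
print(f"KS statistic: {ks:.3f}")  # compare with ~1.36/sqrt(n) at the 5% level
```

A large KS statistic for a real food category would be the kind of evidence behind the paper's caution about defaulting to a lognormal input.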

  3. DYT1 dystonia increases risk taking in humans

    PubMed Central

    Arkadir, David; Radulescu, Angela; Raymond, Deborah; Lubarr, Naomi; Bressman, Susan B; Mazzoni, Pietro; Niv, Yael

    2016-01-01

    It has been difficult to link synaptic modification to overt behavioral changes. Rodent models of DYT1 dystonia, a motor disorder caused by a single gene mutation, demonstrate increased long-term potentiation and decreased long-term depression in corticostriatal synapses. Computationally, such asymmetric learning predicts risk taking in probabilistic tasks. Here we demonstrate abnormal risk taking in DYT1 dystonia patients, which is correlated with disease severity, thereby supporting striatal plasticity in shaping choice behavior in humans. DOI: http://dx.doi.org/10.7554/eLife.14155.001 PMID:27249418

  4. Wind power forecasting: IEA Wind Task 36 & future research issues

    NASA Astrophysics Data System (ADS)

    Giebel, G.; Cline, J.; Frank, H.; Shaw, W.; Pinson, P.; Hodge, B.-M.; Kariniotakis, G.; Madsen, J.; Möhrlen, C.

    2016-09-01

    This paper presents the new International Energy Agency (IEA) Wind Task 36 on Forecasting and invites collaboration within the group. Wind power forecasts have been used operationally for over 20 years. Despite this, there are still several possibilities to improve the forecasts, both on the weather prediction side and in the usage of the forecasts. The new IEA Task on Forecasting for Wind Energy organises international collaboration among national meteorological centres with an interest in, and/or large projects on, wind forecast improvements (NOAA, DWD, MetOffice, met.no, DMI, ...), operational forecasters and forecast users. The Task is divided into three work packages. Firstly, collaboration on improving the scientific basis for the wind predictions themselves; this includes numerical weather prediction model physics, but also widely distributed information on accessible datasets. Secondly, we aim at an international pre-standard (an IEA Recommended Practice) on benchmarking and comparing wind power forecasts, including probabilistic forecasts; this work package will also organise benchmarks, in cooperation with the IEA Task WakeBench. Thirdly, we engage end users, aiming at dissemination of best practice in the usage of wind power predictions. As a first result, an overview of current research issues in short-term forecasting of wind power is presented.

  5. Concurrent deployment of visual attention and response selection bottleneck in a dual-task: Electrophysiological and behavioural evidence.

    PubMed

    Reimer, Christina B; Strobach, Tilo; Schubert, Torsten

    2017-12-01

    Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process whose duration increases with the number of items in the search display (i.e., the set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) component that reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.

  6. More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel.

    PubMed

    Reimer, Christina B; Schubert, Torsten

    2017-09-15

    Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. 
Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
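
    The locus-of-slack logic in these experiments reduces to comparing set-size slopes across SOAs; with fabricated mean RTs (not the experiments' data) the underadditive pattern looks like this:

```python
# (SOA in ms, set size) -> mean conjunction-search RT2 in ms (fabricated).
rt = {
    (50, 4): 950, (50, 8): 1010,
    (800, 4): 620, (800, 8): 740,
}

def slope(soa):
    # Set-size effect: additional search time per display item.
    return (rt[(soa, 8)] - rt[(soa, 4)]) / (8 - 4)

# Underadditive interaction: a smaller set-size effect at short SOA means
# part of the search ran in parallel with Task 1 response selection.
underadditive = slope(50) < slope(800)
print(f"short-SOA slope {slope(50):.0f} ms/item, "
      f"long-SOA slope {slope(800):.0f} ms/item, underadditive: {underadditive}")
```

Additive effects (equal slopes at both SOAs) would instead have indicated that search waited for the response-selection bottleneck.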

  7. Stimulus-response compatibility and psychological refractory period effects: implications for response selection

    NASA Technical Reports Server (NTRS)

    Lien, Mei-Ching; Proctor, Robert W.

    2002-01-01

    The purpose of this paper was to provide insight into the nature of response selection by reviewing the literature on stimulus-response compatibility (SRC) effects and the psychological refractory period (PRP) effect individually and jointly. The empirical findings and theoretical explanations of SRC effects that have been studied within a single-task context suggest that there are two response-selection routes-automatic activation and intentional translation. In contrast, all major PRP models reviewed in this paper have treated response selection as a single processing stage. In particular, the response-selection bottleneck (RSB) model assumes that the processing of Task 1 and Task 2 comprises two separate streams and that the PRP effect is due to a bottleneck located at response selection. Yet, considerable evidence from studies of SRC in the PRP paradigm shows that the processing of the two tasks is more interactive than is suggested by the RSB model and by most other models of the PRP effect. The major implication drawn from the studies of SRC effects in the PRP context is that response activation is a distinct process from final response selection. Response activation is based on both long-term and short-term task-defined S-R associations and occurs automatically and in parallel for the two tasks. The final response selection is an intentional act required even for highly compatible and practiced tasks and is restricted to processing one task at a time. Investigations of SRC effects and response-selection variables in dual-task contexts should be conducted more systematically because they provide significant insight into the nature of response-selection mechanisms.

  8. Design Of An Intelligent Robotic System Organizer Via Expert System Techniques

    NASA Astrophysics Data System (ADS)

    Yuan, Peter H.; Valavanis, Kimon P.

    1989-02-01

    Intelligent Robotic Systems are a special type of Intelligent Machines. When modeled based on the theory of Intelligent Controls, they are composed of three interactive levels, namely: organization, coordination, and execution, ordered according to the Principle of Increasing Intelligence with Decreasing Precision. Expert System techniques are used to design an Intelligent Robotic System Organizer with a dynamic Knowledge Base and an interactive Inference Engine. Task plans are formulated using either or both of a Probabilistic Approach and a Forward Chaining Methodology, depending on pertinent information associated with a specific requested job. The Intelligent Robotic System Organizer is implemented and tested on a prototype system operating in an uncertain environment. An evaluation of the performance of the prototype system is conducted based upon the probability of generating a successful task sequence versus the number of trials taken by the organizer.

  9. Optimal Power Flow for Distribution Systems under Uncertain Forecasts: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    2016-12-01

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
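
    One standard way to get distribution-free conservative bounds of this kind (the paper's specific convex approximation may differ) is the one-sided Chebyshev (Cantelli) inequality: enforcing mean + k·sigma below the limit, with k = sqrt((1 - eps)/eps), guarantees Pr(V <= V_max) >= 1 - eps for any forecast-error distribution with that mean and standard deviation. The voltage numbers below are illustrative:

```python
import math

eps = 0.05                        # allowed violation probability
k = math.sqrt((1 - eps) / eps)    # distribution-free multiplier (~4.36),
                                  # vs ~1.645 under a Gaussian assumption

v_max = 1.05                      # p.u. upper voltage limit (illustrative)
sigma = 0.004                     # p.u. voltage std. dev. from forecast errors

# Tightened deterministic limit on the mean voltage that certifies the
# chance constraint for arbitrary error distributions.
v_mean_limit = v_max - k * sigma
print(f"k = {k:.2f}, mean-voltage limit tightened to {v_mean_limit:.4f} p.u.")
```

The price of holding for arbitrary distributions is conservatism: the distribution-free k is much larger than the Gaussian quantile, so the feasible operating region shrinks accordingly.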

  10. New ShakeMaps for Georgia Resulting from Collaboration with EMME

    NASA Astrophysics Data System (ADS)

    Kvavadze, N.; Tsereteli, N. S.; Varazanashvili, O.; Alania, V.

    2015-12-01

    Correct assessment of probabilistic seismic hazard and risk maps is the first step in advance planning and action to reduce seismic risk. Seismic hazard maps for Georgia were calculated based on a modern approach developed in the frame of the EMME (Earthquake Model of the Middle East) project. EMME was one of GEM's successful endeavors at the regional level. With EMME and GEM assistance, regional models were analyzed to identify the information and additional work needed for the preparation of national hazard models. A probabilistic seismic hazard (PSH) map provides the critical basis for improved building codes and construction. The most serious deficiency in PSH assessment for the territory of Georgia is the lack of high-quality ground motion data. Because of this, an initial hybrid empirical ground motion model was developed for PGA and SA at selected periods. These ground motion models have been applied in probabilistic seismic hazard assessment. The resulting seismic hazard maps show that there were gaps in previous seismic hazard assessments and that the present normative seismic hazard map needs careful recalculation.

  11. Software for Probabilistic Risk Reduction

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto

    2004-01-01

    A computer program implements a methodology, denoted probabilistic risk reduction, that is intended to aid in planning the development of complex software and/or hardware systems. This methodology integrates two complementary prior methodologies: (1) that of probabilistic risk assessment and (2) a risk-based planning methodology, implemented in a prior computer program known as Defect Detection and Prevention (DDP), in which multiple requirements and the beneficial effects of risk-mitigation actions are taken into account. The present methodology and the software are able to accommodate both process knowledge (notably of the efficacy of development practices) and product knowledge (notably of the logical structure of a system, the development of which one seeks to plan). Estimates of the costs and benefits of a planned development can be derived. Functional and non-functional aspects of software can be taken into account, and trades made among them. It becomes possible to optimize the planning process in the sense that it becomes possible to select the best suite of process steps and design choices to maximize the expectation of success while remaining within budget.

  12. Three-dimensional Probabilistic Earthquake Location Applied to 2002-2003 Mt. Etna Eruption

    NASA Astrophysics Data System (ADS)

    Mostaccio, A.; Tuve', T.; Zuccarello, L.; Patane', D.; Saccorotti, G.; D'Agostino, M.

    2005-12-01

    Seismicity recorded at Mt. Etna volcano during the 2002-2003 eruption has been relocated using a probabilistic, non-linear earthquake location approach. We used the software package NonLinLoc (Lomax et al., 2000), adopting the 3D velocity model obtained by Cocina et al., 2005. We processed the data with three different algorithms: (1) grid search; (2) Metropolis-Gibbs sampling; and (3) Oct-tree search. The Oct-tree algorithm gives efficient, fast and accurate mapping of the PDF (Probability Density Function) of the earthquake location problem. More than 300 seismic events were analyzed in order to compare the non-linear location results with those obtained using a traditional linearized earthquake location algorithm (Hypoellipse) and a 3D linearized inversion (Thurber, 1983). Moreover, we compare 38 focal mechanisms, chosen following strict selection criteria, with those obtained from the 3D and 1D results. Although the approach presented here is essentially a relocation application, probabilistic earthquake location could also be used in routine surveys.
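
    A toy version of the grid-search mode illustrates how a location PDF is built from arrival-time misfits (the real NonLinLoc works in 3D with travel-time grids; the uniform velocity, station geometry, and noise scale here are invented):

```python
import math

V = 3.5                                   # km/s, uniform velocity (assumed)
stations = [(0, 0), (40, 0), (0, 40), (40, 40)]  # km, invented geometry
true_src, t0 = (12.0, 27.0), 0.0
obs = [t0 + math.dist(true_src, s) / V for s in stations]  # noise-free picks

# Evaluate an L2 arrival-time misfit at every node of a 1-km grid and turn
# exp(-misfit/2) into an (unnormalized) location PDF.
best, total, pdf = None, 0.0, {}
for ix in range(41):
    for iy in range(41):
        calc = [math.dist((float(ix), float(iy)), s) / V for s in stations]
        misfit = sum((o - c) ** 2 for o, c in zip(obs, calc)) / 0.01  # sigma=0.1 s
        p = math.exp(-0.5 * misfit)
        pdf[(ix, iy)] = p
        total += p
        if best is None or p > pdf[best]:
            best = (ix, iy)

print("maximum-likelihood grid node:", best,
      "posterior mass at node:", round(pdf[best] / total, 3))
```

The Oct-tree algorithm reaches the same PDF by recursively subdividing only high-probability cells, which is why it is faster than this exhaustive scan.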

  13. Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)

    NASA Astrophysics Data System (ADS)

    Peters, Christina; Malz, Alex; Hlozek, Renée

    2018-01-01

    The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, where it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host galaxy photometry, which includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves and over redshift from simulated host galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with other methods, namely: a spectroscopic subset; a subset of high-probability photometrically classified supernovae; and reducing the photometric redshift probability to a single measurement and error bar.

  14. Effects of task-irrelevant grouping on visual selection in partial report.

    PubMed

    Lunau, Rasmus; Habekost, Thomas

    2017-07-01

    Perceptual grouping modulates performance in attention tasks such as partial report and change detection. Specifically, grouping of search items according to a task-relevant feature improves the efficiency of visual selection. However, the role of task-irrelevant feature grouping is not clearly understood. In the present study, we investigated whether grouping of targets by a task-irrelevant feature influences performance in a partial-report task. In this task, participants must report as many target letters as possible from a briefly presented circular display. The crucial manipulation concerned the color of the elements in these trials. In the sorted-color condition, the color of the display elements was arranged according to the selection criterion, and in the unsorted-color condition, colors were randomly assigned. The distractor cost was inferred by subtracting performance in partial-report trials from performance in a control condition that had no distractors in the display. Across five experiments, we manipulated trial order, selection criterion, and exposure duration, and found that attentional selectivity was improved in sorted-color trials when the exposure duration was 200 ms and the selection criterion was luminance. This effect was accompanied by impaired selectivity in unsorted-color trials. Overall, the results suggest that the benefit of task-irrelevant color grouping of targets is contingent on the processing locus of the selection criterion.

  15. DEVELOPMENT OF AN INDEX OF BIOTIC INTEGRITY FOR THE MID-ATLANTIC HIGHLANDS REGION

    EPA Science Inventory

    From 1993 to 1996, fish assemblage data were collected from 309 wadeable streams in the U.S. Mid-Atlantic Highlands region as part of the U.S. Environmental Protection Agency's Environmental Monitoring and Assessment Program. Stream sites were selected with a probabilistic sampl...

  16. Scaling in the Donangelo-Sneppen model for evolution of money

    NASA Astrophysics Data System (ADS)

    Stauffer, Dietrich; Radomski, Jan P.

    2001-03-01

    The evolution of money from unsuccessful barter attempts, as modeled by Donangelo and Sneppen, is modified by a deterministic instead of a probabilistic selection of the most desired product as money. We check in particular the characteristic times of the model as a function of system size.

  17. Expert Design Advisor

    DTIC Science & Technology

    1990-10-01

    to economic, technological, spatial or logistic concerns, or involve training, man-machine interfaces, or integration into existing systems. Once the...probabilistic reasoning, mixed analysis- and simulation-oriented, mixed computation- and communication-oriented, nonpreemptive static priority...scheduling base, nonrandomized, preemptive static priority scheduling base, randomized, simulation-oriented, and static scheduling base. The selection of both

  18. Reasoning about Independence in Probabilistic Models of Relational Data (Author’s Manuscript)

    DTIC Science & Technology

    2014-01-06

    for relational variables from A’s perspective, and this result is also applicable to one-to-many data.) To illustrate this fact more concretely ...separators. Technical Report R-254, UCLA Computer Science Department, February 1998. Robert Tibshirani. Regression shrinkage and selection via the lasso

  19. Computational Methods for Probabilistic Target Tracking Problems

    DTIC Science & Technology

    2007-09-01

    he is working with the Aegis Ballistic Missile Defense System (ABMD) in the Command and Decision (C&D) section. He has recently been selected from a...employed by Progress Energy as an Auxiliary Operator at the Brunswick Nuclear Plant, in Southport, NC. He is studying to qualify as an NRC licensed nuclear

  20. Automatic motor task selection via a bandit algorithm for a brain-controlled button

    NASA Astrophysics Data System (ADS)

    Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
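    The selection step can be sketched with the classic UCB1 index from stochastic bandit theory; this is a generic sketch, not the authors' UCB-classif algorithm, and the per-task detection rates are invented for illustration:

```python
import math
import random

def ucb1_select(counts, successes, t):
    """Pick the task (arm) maximizing the UCB1 index:
    empirical mean + sqrt(2 ln t / n). Untried tasks go first."""
    for task, n in counts.items():
        if n == 0:
            return task
    return max(counts, key=lambda a: successes[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))

# Toy training session: assumed per-task detection rates, for illustration only
true_rate = {"right_hand": 0.75, "left_hand": 0.60, "feet": 0.55}
counts = {a: 0 for a in true_rate}
successes = {a: 0 for a in true_rate}
rng = random.Random(0)
for t in range(1, 301):
    task = ucb1_select(counts, successes, t)
    counts[task] += 1
    successes[task] += rng.random() < true_rate[task]  # simulated trial outcome

print(max(counts, key=counts.get))  # the algorithm concentrates trials on the best task
```

    Because low-performing tasks receive few trials once their confidence bounds drop, most of the session budget goes to the most promising task, which is the time saving reported above.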

  1. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper limit concepts first, turning out to be of limited applicability due to highly overestimated migration results. Probabilistic migration modelling makes it possible to account for uncertainty in the mass-transfer parameters as well as in other model inputs. With respect to a functional barrier, the most important parameters among others are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and is capable of applying Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (i.e., diffusion coefficient and layer thickness), predicts migration results with related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented through three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic migration modelling and related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, allowing associated migration risks and potential safety concerns to be identified at an early stage of packaging development. Furthermore, dedicated material selection exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
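    The Monte Carlo workflow can be sketched as follows; the migration equation here is a crude short-time Fickian surrogate, and every distribution and parameter value is invented, so this only illustrates the sampling-and-percentile pattern, not a validated migration model:

```python
import math
import random

def migration_estimate(D, thickness, c0=100.0, t=10 * 24 * 3600):
    """Very simplified short-time Fickian migration surrogate:
    transfer scales with sqrt(D*t) and is attenuated by barrier
    thickness. Illustrative only, not a validated equation."""
    return c0 * math.sqrt(D * t / math.pi) / thickness

def monte_carlo(n=10_000, seed=1):
    """Sample diffusion coefficient and layer thickness from assumed
    input distributions; return median and 95th-percentile migration."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        D = rng.lognormvariate(mu=math.log(1e-14), sigma=0.5)  # m^2/s, assumed
        L = rng.gauss(50e-6, 5e-6)                             # barrier thickness, m
        if L > 0:
            results.append(migration_estimate(D, L))
    results.sort()
    return results[int(0.5 * len(results))], results[int(0.95 * len(results))]

median, p95 = monte_carlo()
print(median, p95)   # central estimate plus an upper confidence bound
```

    Reporting a high percentile alongside the median is what lets the approach replace the older worst-case upper-limit concepts with a quantified confidence statement.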

  2. The Effect of Hierarchical Task Representations on Task Selection in Voluntary Task Switching

    ERIC Educational Resources Information Center

    Weaver, Starla M.; Arrington, Catherine M.

    2013-01-01

    The current study explored the potential for hierarchical representations to influence action selection during voluntary task switching. Participants switched between 4 individual task elements. In Experiment 1, participants were encouraged to represent the task elements as grouped within a hierarchy based on experimental manipulations of varying…

  3. Connections between Secondary Mathematics Teachers' Beliefs and Their Selection of Tasks for English Language Learners

    ERIC Educational Resources Information Center

    de Araujo, Zandra

    2017-01-01

    The tasks teachers select impact students' opportunities to learn mathematics and teachers' beliefs influence their choice of tasks. Through the qualitative analysis of surveys, interviews and classroom artefacts from three secondary mathematics teachers, this study examined teachers' selection of mathematics tasks for English language learners…

  4. Behavioral and Electrophysiological Alterations for Reinforcement Learning in Manic and Euthymic Patients with Bipolar Disorder.

    PubMed

    Ryu, Vin; Ha, Ra Yeon; Lee, Su Jin; Ha, Kyooseob; Cho, Hyun-Sang

    2017-03-01

    Bipolar disorder is characterized by behavioral changes such as risk-taking and increasing goal-directed activities, which may result from altered reward processing. Patients with bipolar disorder show impaired reward learning in situations that require the integration of reinforced feedback over time. In this study, we examined the behavioral and electrophysiological characteristics of reward learning in manic and euthymic patients with bipolar disorder using a probabilistic reward task. Twenty-four manic and 20 euthymic patients with bipolar I disorder and 24 healthy control subjects performed the probabilistic reward task. We assessed response bias (RB) as a preference for the stimulus paired with the more frequent reward and feedback-related negativity (FRN) to correct identification of the rich stimulus. Both manic and euthymic patients showed significantly lower RB scores in the early learning stage (block 1) in comparison with the late learning stage (block 2 or block 3) of the task, as well as significantly lower RB scores in the early stage compared to healthy subjects. The FRN amplitude is relatively more negative when an expected reward is withheld than when expected feedback is presented. The FRN became significantly more negative from the early (block 1) to the later stages (blocks 2 and 3) in both manic and euthymic patients, but not in healthy subjects. Changes in RB scores and FRN amplitudes between blocks 2 and 3 and block 1 correlated positively in healthy controls, but correlated negatively in manic and euthymic patients. The severity of manic symptoms correlated positively with reward learning scores and negatively with the FRN. These findings suggest that patients with bipolar disorder during euthymic or manic states have behavioral and electrophysiological alterations in reward learning compared to healthy subjects. This dysfunctional reward processing may be related to the abnormal decision-making or altered goal-directed activities frequently seen in patients with bipolar disorder. © 2017 John Wiley & Sons Ltd.
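    Response bias in this family of probabilistic reward tasks is commonly computed as a signal-detection log b; a minimal sketch follows (the 0.5 cell correction is one common convention and is an assumption here, not necessarily this study's exact formula):

```python
import math

def response_bias(rich_hit, rich_miss, lean_hit, lean_miss):
    """Signal-detection response bias, log b, in one common formulation
    for this task family (0.5 added to each cell to avoid log(0))."""
    num = (rich_hit + 0.5) * (lean_miss + 0.5)
    den = (rich_miss + 0.5) * (lean_hit + 0.5)
    return 0.5 * math.log10(num / den)

# A participant who favours the frequently rewarded ("rich") stimulus:
print(response_bias(rich_hit=80, rich_miss=20, lean_hit=55, lean_miss=45))
```

    Positive values indicate a preference for the rich stimulus; a bias that fails to grow in the early blocks is the pattern reported above for the patient groups.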

  5. A further assessment of decision-making in anorexia nervosa.

    PubMed

    Adoue, C; Jaussent, I; Olié, E; Beziat, S; Van den Eynde, F; Courtet, P; Guillaume, S

    2015-01-01

    Anorexia nervosa (AN) may be associated with impaired decision-making. Cognitive processes underlying this impairment remain unclear, mainly because previous assessments of this complex cognitive function were completed with a single test. Furthermore, clinical features such as mood status may impact this association. We aim to further explore the hypothesis of altered decision-making in AN. Sixty-three adult women with AN and 49 female controls completed a clinical assessment and were assessed by three tasks related to decision-making [Iowa Gambling Task (IGT), Balloon Analogue Risk Task (BART), Probabilistic Reversal Learning Task (PRLT)]. People with AN had poorer performance on the IGT and made less risky choices on the BART, whereas performances were not different on the PRLT. Notably, AN patients with a current major depressive disorder showed similar performance to those with no current major depressive disorder. These results tend to confirm an impaired decision-making process in people with AN and suggest that various cognitive processes such as inhibition to risk-taking or intolerance of uncertainty may underlie this condition. Furthermore, these impairments seem unrelated to a potentially co-occurrent major depressive disorder. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  6. A neurocomputational account of cognitive deficits in Parkinson’s disease

    PubMed Central

    Hélie, Sébastien; Paul, Erick J.; Ashby, F. Gregory

    2014-01-01

    Parkinson’s disease (PD) is caused by the accelerated death of dopamine (DA) producing neurons. Numerous studies documenting cognitive deficits of PD patients have revealed impairments in a variety of tasks related to memory, learning, visuospatial skills, and attention. While there have been several studies documenting cognitive deficits of PD patients, very few computational models have been proposed. In this article, we use the COVIS model of category learning to simulate DA depletion and show that the model suffers from cognitive symptoms similar to those of human participants affected by PD. Specifically, DA depletion in COVIS produced deficits in rule-based categorization, non-linear information-integration categorization, probabilistic classification, rule maintenance, and rule switching. These were observed by simulating results from younger controls, older controls, PD patients, and severe PD patients in five well-known tasks. Differential performance among the different age groups and clinical populations was modeled simply by changing the amount of DA available in the model. This suggests that COVIS may not only be an adequate model of the simulated tasks and phenomena but also more generally of the role of DA in these tasks and phenomena. PMID:22683450

  7. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data

    PubMed Central

    Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel

    2014-01-01

    This work is in line with an on-going effort tending toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work exposes a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198

  8. Universal statistics of selected values

    NASA Astrophysics Data System (ADS)

    Smerlak, Matteo; Youssef, Ahmed

    2017-03-01

    Selection, the tendency of some traits to become more frequent than others under the influence of some (natural or artificial) agency, is a key component of Darwinian evolution and countless other natural and social phenomena. Yet a general theory of selection, analogous to the Fisher-Tippett-Gnedenko theory of extreme events, is lacking. Here we introduce a probabilistic definition of selection and show that selected values are attracted to a universal family of limiting distributions which generalize the log-normal distribution. The universality classes and scaling exponents are determined by the tail thickness of the random variable under selection. Our results provide a possible explanation for skewed distributions observed in diverse contexts where selection plays a key role, from molecular biology to agriculture and sport.

  9. A systematic approach to selecting task relevant neurons.

    PubMed

    Kahn, Kevin; Saxena, Shreya; Eskandar, Emad; Thakor, Nitish; Schieber, Marc; Gale, John T; Averbeck, Bruno; Eden, Uri; Sarma, Sridevi V

    2015-04-30

    Since task-related neurons cannot be specifically targeted during surgery, a critical decision to make is to select which neurons are task-related when performing data analysis. Including neurons unrelated to the task degrades decoding accuracy and confounds neurophysiological results. Traditionally, task-related neurons are selected as those with significant changes in firing rate when a stimulus is applied. However, this assumes that neurons' encoding of stimuli is dominated by their firing rate with little regard to temporal dynamics. This paper proposes a systematic approach for neuron selection, which uses a likelihood ratio test to capture the contribution of stimulus to spiking activity while taking into account task-irrelevant intrinsic dynamics that affect firing rates. This approach is denoted as the model deterioration excluding stimulus (MDES) test. MDES is compared to firing-rate selection in four case studies: a simulation, a decoding example, and two neurophysiology examples. The MDES rankings in the simulation match closely with ideal rankings, while firing-rate rankings are skewed by task-irrelevant parameters. For decoding, 95% accuracy is achieved using the top 8 MDES-ranked neurons, while the top 12 firing-rate-ranked neurons are needed. In the neurophysiological examples, MDES matches published results when firing rates do encode salient stimulus information, and uncovers oscillatory modulations in task-related neurons that are not captured when neurons are selected using firing rates. These case studies illustrate the importance of accounting for intrinsic dynamics when selecting task-related neurons, which the MDES approach accomplishes. MDES selects neurons that encode task-related information irrespective of these intrinsic dynamics, which can bias firing-rate-based selection. Copyright © 2015 Elsevier B.V. All rights reserved.
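    The likelihood-ratio idea can be sketched with a toy Poisson spiking model; this stand-in omits the intrinsic-dynamics terms of the actual MDES test, and the spike counts and nested-model structure are invented for illustration:

```python
import math

def poisson_ll(counts, rate):
    """Log-likelihood of spike counts under a constant-rate Poisson model
    (the log k! term cancels in the likelihood ratio and is omitted)."""
    return sum(k * math.log(rate) - rate for k in counts)

def stimulus_lr_stat(pre, post):
    """2 * (LL of a two-rate model - LL of a one-rate model): does adding
    a stimulus-dependent rate improve the fit beyond a rate-only model?"""
    all_counts = pre + post
    null_rate = max(sum(all_counts) / len(all_counts), 1e-9)
    r_pre = max(sum(pre) / len(pre), 1e-9)
    r_post = max(sum(post) / len(post), 1e-9)
    ll_null = poisson_ll(all_counts, null_rate)
    ll_alt = poisson_ll(pre, r_pre) + poisson_ll(post, r_post)
    return 2.0 * (ll_alt - ll_null)

CHI2_1DF_95 = 3.841  # chi-square critical value for 1 extra parameter
baseline = [2, 1, 2, 1, 2, 2]   # pre-stimulus spike counts per bin
driven   = [6, 7, 5, 8, 6, 7]   # post-stimulus spike counts per bin
print(stimulus_lr_stat(baseline, driven) > CHI2_1DF_95)  # stimulus-driven neuron passes
```

    Neurons whose statistic clears the threshold are kept; a neuron whose rate never changes yields a statistic near zero and is excluded.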

  10. Multimodel Ensemble Methods for Prediction of Wake-Vortex Transport and Decay Originating NASA

    NASA Technical Reports Server (NTRS)

    Korner, Stephan; Ahmad, Nashat N.; Holzapfel, Frank; VanValkenburg, Randal L.

    2017-01-01

    Several multimodel ensemble methods are selected and further developed to improve the deterministic and probabilistic prediction skills of individual wake-vortex transport and decay models. The different multimodel ensemble methods are introduced, and their suitability for wake applications is demonstrated. The selected methods include direct ensemble averaging, Bayesian model averaging, and Monte Carlo simulation. The different methodologies are evaluated employing data from wake-vortex field measurement campaigns conducted in the United States and Germany.
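    The two simplest combination rules can be sketched as follows; the predictions and weights are invented, and "Bayesian model averaging" is reduced here to a fixed-weight average for illustration:

```python
def direct_average(predictions):
    """Direct ensemble average: equal weight for every model."""
    return sum(predictions) / len(predictions)

def weighted_average(predictions, weights):
    """Weighted ensemble; the weights stand in for posterior model
    probabilities estimated from past performance (assumed here)."""
    s = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / s

# Three hypothetical vortex-decay predictions (circulation, arbitrary units)
preds = [0.82, 0.74, 0.91]
skill = [0.5, 0.3, 0.2]   # assumed posterior weights from past skill
print(direct_average(preds), weighted_average(preds, skill))
```

    Weighting by past skill pulls the ensemble toward historically better models, which is the basic mechanism by which multimodel methods can beat each individual member.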

  11. Age-related differences in idea generation and selection for propositional language.

    PubMed

    Madden, Daniel L; Sale, Martin V; Robinson, Gail A

    2018-05-21

    Conceptual preparation mechanisms such as novel idea generation and selection from amongst competing alternatives are critical for language production and may contribute to age-related language deficits. This study investigated whether older adults show diminished idea generation and selection abilities, compared to younger adults. Twenty younger (18-35 years) and 20 older (60-80 years) adults completed two novel experimental tasks, an idea generation task and a selection task. Older participants were slower than younger participants overall on both tasks. Importantly, this difference was more pronounced for task conditions with greater demands on generation and selection. Older adults also performed significantly worse on a semantic, but not phonemic, word fluency task. Overall, the older group showed evidence of age-related decline specific to idea generation and selection ability. This has implications for the message formulation stage of propositional language decline in normal aging.

  12. A probabilistic union model with automatic order selection for noisy speech recognition.

    PubMed

    Jancovic, P; Ming, J

    2001-09-01

    A critical issue in exploiting the potential of the sub-band-based approach to robust speech recognition is the method of combining the sub-band observations, for selecting the bands unaffected by noise. A new method for this purpose, i.e., the probabilistic union model, was recently introduced. This model has been shown to be capable of dealing with band-limited corruption, requiring no knowledge about the band position and statistical distribution of the noise. A parameter within the model, which we call its order, gives the best results when it equals the number of noisy bands. Since this information may not be available in practice, in this paper we introduce an automatic algorithm for selecting the order, based on the state duration pattern generated by the hidden Markov model (HMM). The algorithm has been tested on the TIDIGITS database corrupted by various types of additive band-limited noise with unknown noisy bands. The results have shown that the union model equipped with the new algorithm can achieve a recognition performance similar to that achieved when the number of noisy bands is known. The results show a very significant improvement over the traditional full-band model, without requiring prior information on either the position or the number of noisy bands. The principle of the algorithm for selecting the order based on state duration may also be applied to other sub-band combination methods.

  13. A Chain-Retrieval Model for Voluntary Task Switching

    ERIC Educational Resources Information Center

    Vandierendonck, Andre; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick

    2012-01-01

    To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved…

  14. A Probabilistic Approach to Model Update

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.

    2001-01-01

    Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.

  15. Learning Probabilistic Features for Robotic Navigation Using Laser Sensors

    PubMed Central

    Aznar, Fidel; Pujol, Francisco A.; Pujol, Mar; Rizo, Ramón; Pujol, María-José

    2014-01-01

    SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system able to be used in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used. PMID:25415377

  17. A Bayesian Attractor Model for Perceptual Decision Making

    PubMed Central

    Bitzer, Sebastian; Bruineberg, Jelle; Kiebel, Stefan J.

    2015-01-01

    Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel approach of how perceptual decisions are made by combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that models decision making as Bayesian inference. We show that the new model can explain decision making behaviour by fitting it to experimental data. In addition, the new model combines for the first time three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks. PMID:26267143
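    The Bayesian-inference side can be sketched as a two-alternative sequential update with a small switch probability; this is a generic hidden-Markov-style sketch, not the authors' attractor model, and the Gaussian noise model and switch rate are assumptions:

```python
import math
import random

def update(posterior, likelihoods, switch_p=0.05):
    """One Bayesian update over two alternatives. The switch probability
    lets the posterior track a change in the underlying stimulus."""
    prior = [posterior[0] * (1 - switch_p) + posterior[1] * switch_p,
             posterior[1] * (1 - switch_p) + posterior[0] * switch_p]
    post = [prior[i] * likelihoods[i] for i in (0, 1)]
    z = sum(post)
    return [p / z for p in post]

rng = random.Random(3)
posterior = [0.5, 0.5]
for step in range(60):
    true_alt = 0 if step < 30 else 1           # the stimulus switches mid-trial
    x = (1.0 if true_alt == 0 else -1.0) + rng.gauss(0.0, 1.0)
    lik = [math.exp(-0.5 * (x - m) ** 2) for m in (1.0, -1.0)]  # Gaussian likelihoods
    posterior = update(posterior, lik)

confidence = max(posterior)                    # explicit confidence read-out
print(posterior.index(confidence), round(confidence, 2))
```

    The same posterior that drives the decision also supplies the confidence measure, which is the feature the model above relates to experimental confidence reports.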

  18. Impairment of probabilistic reward-based learning in schizophrenia.

    PubMed

    Weiler, Julia A; Bellebaum, Christian; Brüne, Martin; Juckel, Georg; Daum, Irene

    2009-09-01

    Recent models assume that some symptoms of schizophrenia originate from defective reward processing mechanisms. Understanding the precise nature of reward-based learning impairments might thus make an important contribution to the understanding of schizophrenia and the development of treatment strategies. The present study investigated several features of probabilistic reward-based stimulus association learning, namely the acquisition of initial contingencies, reversal learning, generalization abilities, and the effects of reward magnitude. Compared to healthy controls, individuals with schizophrenia exhibited attenuated overall performance during acquisition, whereas learning rates across blocks were similar to the rates of controls. On the group level, persons with schizophrenia were, however, unable to learn the reversal of the initial reward contingencies. Exploratory analysis of only the subgroup of individuals with schizophrenia who showed significant learning during acquisition yielded deficits in reversal learning with low reward magnitudes only. There was further evidence of a mild generalization impairment of the persons with schizophrenia in an acquired equivalence task. In summary, although there was evidence of intact basic processing of reward magnitudes, individuals with schizophrenia were impaired at using this feedback for the adaptive guidance of behavior.

  19. Probabilistic Learning in Junior High School: Investigation of Student Probabilistic Thinking Levels

    NASA Astrophysics Data System (ADS)

    Kurniasih, R.; Sujadi, I.

    2017-09-01

    This paper investigated students' levels of probabilistic thinking, that is, thinking about uncertainty in probability material. The subjects were 8th-grade junior high school students. The main instrument was the researcher, supported by a probabilistic thinking skills test and interview guidelines. Data were analyzed using a triangulation method. The results showed that before instruction the students' probabilistic thinking was at the subjective and transitional levels, and that it changed after learning; some 8th-grade students reached the numerical level, the highest of the levels. Students' levels of probabilistic thinking can serve as a reference for designing learning materials and strategies.

  20. Standing balance in individuals with Parkinson's disease during single and dual-task conditions.

    PubMed

    Fernandes, Ângela; Coelho, Tiago; Vitória, Ana; Ferreira, Augusto; Santos, Rubim; Rocha, Nuno; Fernandes, Lia; Tavares, João Manuel R S

    2015-09-01

    This study aimed to examine the differences in standing balance between individuals with Parkinson's disease (PD) and subjects without PD (control group), under single and dual-task conditions. A cross-sectional study was designed using a non-probabilistic sample of 110 individuals (50 participants with PD and 60 controls) aged 50 years and over. The individuals with PD were in the early or middle stages of the disease (characterized by Hoehn and Yahr as stages 1-3). The standing balance was assessed by measuring the centre of pressure (CoP) displacement in single-task (eyes-open/eyes-closed) and dual-task (while performing two different verbal fluency tasks) conditions. No significant differences were found between the groups regarding sociodemographic variables. In general, the standing balance of the individuals with PD was worse than that of the controls, as the CoP displacement across tasks was significantly higher for the individuals with PD (p<0.01), both in the anteroposterior and mediolateral directions. Moreover, there were significant differences in the CoP-displacement-based parameters between the conditions, mainly between the eyes-open condition and the remaining conditions. However, there was no significant interaction found between group and condition, which suggests that changes in the CoP displacement between tasks were not influenced by having PD. In conclusion, this study shows that, although individuals with PD had a worse overall standing balance than individuals without the disease, the impact of performing an additional task on the CoP displacement is similar for both groups. Copyright © 2015 Elsevier B.V. All rights reserved.
