Sample records for probabilistic mixture model

  1. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magidson,…
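
    As a minimal illustration of the kind of comparison such evaluations involve, the sketch below fits scikit-learn's GaussianMixture and KMeans to synthetic data with unequal cluster spreads and scores recovery of the true partition with the adjusted Rand index; the data and settings are illustrative, not those used in the article.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import adjusted_rand_score
    from sklearn.mixture import GaussianMixture

    # Synthetic clusters with unequal spreads, a setting where a full-covariance
    # mixture model can recover the partition better than spherical k-means.
    X, y_true = make_blobs(n_samples=600, centers=3,
                           cluster_std=[0.5, 1.5, 3.0], random_state=0)

    km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    gmm_labels = GaussianMixture(n_components=3, covariance_type="full",
                                 random_state=0).fit(X).predict(X)

    print("k-means ARI:", adjusted_rand_score(y_true, km_labels))
    print("mixture ARI:", adjusted_rand_score(y_true, gmm_labels))
    ```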

  2. Probabilistic modeling of bifurcations in single-cell gene expression data using a Bayesian mixture of factor analyzers.

    PubMed

    Campbell, Kieran R; Yau, Christopher

    2017-03-15

    Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov-Chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of genes driving the bifurcation process. We additionally propose an Empirical Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.

  3. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided higher signal-to-noise ratio (SNR) and those reconstructed from the estimated log-spectra produced lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139

  4. Mixture Modeling for Background and Sources Separation in x-ray Astronomical Images

    NASA Astrophysics Data System (ADS)

    Guglielmetti, Fabrizia; Fischer, Rainer; Dose, Volker

    2004-11-01

    A probabilistic technique for the joint estimation of background and sources in high-energy astrophysics is described. Bayesian probability theory is applied to gain insight into the coexistence of background and sources through a probabilistic two-component mixture model, which provides consistent uncertainties of background and sources. The present analysis is applied to ROSAT PSPC data (0.1-2.4 keV) in Survey Mode. A background map is modelled using a Thin-Plate spline. Source probability maps are obtained for each pixel (45 arcsec) independently and for larger correlation lengths, revealing faint and extended sources. We will demonstrate that the described probabilistic method allows for detection improvement of faint extended celestial sources compared to the Standard Analysis Software System (SASS) used for the production of the ROSAT All-Sky Survey (RASS) catalogues.

  5. PACE: Probabilistic Assessment for Contributor Estimation - A machine learning-based assessment of the number of contributors in DNA mixtures.

    PubMed

    Marciano, Michael A; Adelman, Jonathan D

    2017-03-01

    The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state of the art model versus seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Modular analysis of the probabilistic genetic interaction network.

    PubMed

    Hou, Lin; Wang, Lin; Qian, Minping; Li, Dong; Tang, Chao; Zhu, Yunping; Deng, Minghua; Li, Fangting

    2011-03-15

    Epistatic Miniarray Profiles (EMAP) has enabled the mapping of large-scale genetic interaction networks; however, the quantitative information gained from EMAP cannot be fully exploited since the data are usually interpreted as a discrete network based on an arbitrary hard threshold. To address such limitations, we adopted a mixture modeling procedure to construct a probabilistic genetic interaction network and then implemented a Bayesian approach to identify densely interacting modules in the probabilistic network. Mixture modeling has been demonstrated as an effective soft-threshold technique of EMAP measures. The Bayesian approach was applied to an EMAP dataset studying the early secretory pathway in Saccharomyces cerevisiae. Twenty-seven modules were identified, and 14 of those were enriched by gold standard functional gene sets. We also conducted a detailed comparison with state-of-the-art algorithms, hierarchical clustering and Markov clustering. The experimental results show that the Bayesian approach outperforms others in efficiently recovering biologically significant modules.
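
    A minimal sketch of the soft-threshold idea described here: fit a two-component Gaussian mixture to interaction scores and use posterior membership probabilities as probabilistic edge weights instead of a hard cutoff. The scores below are synthetic, and the setup is far simpler than the actual EMAP analysis.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Synthetic EMAP-like scores: a broad "background" component plus a
    # negative "interacting" tail.
    scores = np.concatenate([rng.normal(0.0, 1.0, 9000),
                             rng.normal(-3.0, 1.0, 1000)]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
    interacting = int(np.argmin(gmm.means_.ravel()))   # more negative mean = interaction
    post = gmm.predict_proba(scores)[:, interacting]   # soft (probabilistic) edge weights

    print("mean posterior interaction probability:", post.mean())
    ```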

  7. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows solving any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
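
    The sketch below illustrates the two performance criteria named above for a single predicted redshift PDF expressed as a Gaussian mixture: the probability integral transform (PIT) is the mixture CDF evaluated at the true value, and the CRPS is integrated numerically from its definition. The mixture parameters and true redshift are made up for illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    def mixture_cdf(x, weights, means, stds):
        """CDF of a one-dimensional Gaussian mixture evaluated at x."""
        x = np.asarray(x)[..., None]
        return np.sum(weights * norm.cdf(x, loc=means, scale=stds), axis=-1)

    def crps(weights, means, stds, y, grid=np.linspace(-5.0, 5.0, 2001)):
        """Continuous ranked probability score, integrated numerically on a grid."""
        F = mixture_cdf(grid, weights, means, stds)
        H = (grid >= y).astype(float)           # step function at the observed value
        return np.sum((F - H) ** 2) * (grid[1] - grid[0])

    # Illustrative predictive PDF for one object: a two-component Gaussian mixture
    w, mu, sd = np.array([0.7, 0.3]), np.array([0.8, 1.6]), np.array([0.1, 0.2])
    z_true = 0.9

    print("PIT :", mixture_cdf(z_true, w, mu, sd))  # ~Uniform(0,1) over many objects
    print("CRPS:", crps(w, mu, sd, z_true))
    ```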

  8. A Mixtures-of-Trees Framework for Multi-Label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the abilities of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods. PMID:25927011

  9. Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.

    PubMed

    Li, Haoxiang; Hua, Gang

    2018-04-01

    Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP-model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Faces in the Wild (LFW) dataset, the YouTube video face database, and the CMU MultiPIE dataset.
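
    A rough sketch of the part-learning step described above, assuming random stand-ins for the local descriptors: a spherical-covariance Gaussian mixture is fit to location-augmented descriptors, and the representation keeps, for each component, the augmented descriptor with the highest likelihood under that component. This is only the skeleton of the PEP construction, with scikit-learn and SciPy used for the mixture and the densities.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Stand-ins for dense local descriptors (e.g. 128-d SIFT) from one face image,
    # each augmented with its normalized (x, y) patch location.
    n_patches, d = 500, 128
    augmented = np.hstack([rng.normal(size=(n_patches, d)),
                           rng.uniform(size=(n_patches, 2))])

    # Spherical components balance the appearance and location terms; each
    # component plays the role of one face "part".
    gmm = GaussianMixture(n_components=16, covariance_type="spherical",
                          random_state=0).fit(augmented)

    # For each part, keep the augmented descriptor with the highest likelihood
    # under that component, then concatenate to form the representation.
    log_like = np.column_stack([
        multivariate_normal.logpdf(augmented, mean=gmm.means_[k],
                                   cov=gmm.covariances_[k])
        for k in range(gmm.n_components)
    ])
    pep = augmented[np.argmax(log_like, axis=0)].ravel()
    print("PEP representation length:", pep.shape[0])
    ```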

  10. Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution

    NASA Astrophysics Data System (ADS)

    Baldacchino, Tara; Worden, Keith; Rowson, Jennifer

    2017-02-01

    A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.

  11. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    PubMed

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system, and an ideal joint pdf is presented emphasising how the uncertainty can be systematically incorporated in the absence of reliable systems models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs) where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations to the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Thermal conductivity of heterogeneous mixtures and lunar soils

    NASA Technical Reports Server (NTRS)

    Vachon, R. I.; Prakouras, A. G.; Crane, R.; Khader, M. S.

    1973-01-01

    The theoretical evaluation of the effective thermal conductivity of granular materials is discussed with emphasis upon the heat transport properties of lunar soil. The following types of models are compared: probabilistic, parallel isotherm, stochastic, lunar, and a model based on nonlinear heat flow system synthesis.

  13. Quantification and Segmentation of Brain Tissues from MR Images: A Probabilistic Neural Network Approach

    PubMed Central

    Wang, Yue; Adalı, Tülay; Kung, Sun-Yuan; Szabo, Zsolt

    2007-01-01

    This paper presents a probabilistic neural network based technique for unsupervised quantification and segmentation of brain tissues from magnetic resonance images. It is shown that this problem can be solved by distribution learning and relaxation labeling, resulting in an efficient method that may be particularly useful in quantifying and segmenting abnormal brain tissues where the number of tissue types is unknown and the distributions of tissue types heavily overlap. The new technique uses suitable statistical models for both the pixel and context images and formulates the problem in terms of model-histogram fitting and global consistency labeling. The quantification is achieved by probabilistic self-organizing mixtures and the segmentation by a probabilistic constraint relaxation network. The experimental results show the efficient and robust performance of the new algorithm and that it outperforms the conventional classification based approaches. PMID:18172510

  14. Computation of marginal distributions of peak-heights in electropherograms for analysing single source and mixture STR DNA samples.

    PubMed

    Cowell, Robert G

    2018-05-04

    Current models for single source and mixture samples, and probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
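
    The computational idea, evaluating a probability generating function at roots of unity and inverting with a discrete Fourier transform, can be sketched with a toy compound model (binomially sampled template molecules, each yielding a geometric number of amplicons). The PGFs and parameter values below are purely illustrative and are not the paper's model of sample collection and PCR.

    ```python
    import numpy as np

    # Toy compound model: the number of template molecules collected is
    # Binomial(n, p_collect); each template then yields a Geometric(q) number of
    # amplicons (support 1, 2, ...).  All parameter values here are illustrative.
    n, p_collect, q = 30, 0.4, 0.05

    def pgf(z):
        h = q * z / (1.0 - (1.0 - q) * z)               # PGF of per-template amplicon count
        return (1.0 - p_collect + p_collect * h) ** n   # PGF of the total (compound binomial)

    # Evaluate the PGF at N-th roots of unity and invert with a discrete Fourier
    # transform to recover the marginal distribution of the total amplicon count.
    N = 4096                                            # must exceed the effective support
    z = np.exp(-2j * np.pi * np.arange(N) / N)
    probs = np.clip(np.fft.ifft(pgf(z)).real, 0.0, None)

    k = np.arange(N)
    print("mean amplicon count:", float(np.sum(k * probs)))
    print("P(total = 0):", probs[0])
    ```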

  15. Probabilistic n/γ discrimination with robustness against outliers for use in neutron profile monitors

    NASA Astrophysics Data System (ADS)

    Uchida, Y.; Takada, E.; Fujisaki, A.; Kikuchi, T.; Ogawa, K.; Isobe, M.

    2017-08-01

    A method to stochastically discriminate neutron and γ-ray signals measured with a stilbene organic scintillator is proposed. Each pulse signal was stochastically categorized into two groups: neutron and γ-ray. In previous work, the Expectation Maximization (EM) algorithm was used with the assumption that the measured data followed a Gaussian mixture distribution. It was shown that probabilistic discrimination between these groups is possible. Moreover, by setting the initial parameters for the Gaussian mixture distribution with a k-means algorithm, the possibility of automatic discrimination was demonstrated. In this study, the Student's t-mixture distribution was used as a probabilistic distribution with the EM algorithm to improve the robustness against the effect of outliers caused by pileup of the signals. To validate the proposed method, the figures of merit (FOMs) were compared for the EM algorithm assuming a t-mixture distribution and a Gaussian mixture distribution. The t-mixture distribution resulted in an improvement of the FOMs compared with the Gaussian mixture distribution. The proposed data processing technique is a promising tool not only for neutron and γ-ray discrimination in fusion experiments but also in other fields, for example, homeland security, cancer therapy with high energy particles, nuclear reactor decommissioning, pattern recognition, and so on.
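
    A compact sketch of the Gaussian-mixture variant of this discrimination on a synthetic one-dimensional pulse-shape parameter, using scikit-learn's EM implementation (k-means initialization by default) and the usual figure of merit, peak separation divided by the sum of the two FWHMs. Replacing the Gaussian mixture with a Student's t-mixture, as the paper does, requires a dedicated EM routine not shown here.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    # Synthetic pulse-shape discrimination parameter: gamma-ray and neutron groups
    psd = np.concatenate([rng.normal(0.18, 0.02, 8000),    # gamma-like
                          rng.normal(0.30, 0.03, 2000)])   # neutron-like

    gmm = GaussianMixture(n_components=2, random_state=0).fit(psd.reshape(-1, 1))
    mu = gmm.means_.ravel()
    fwhm = 2.355 * np.sqrt(gmm.covariances_.ravel())        # FWHM of each fitted Gaussian

    # Figure of merit: peak separation over the sum of the two FWHMs
    fom = abs(mu[0] - mu[1]) / (fwhm[0] + fwhm[1])
    print("FOM:", fom)

    # Probabilistic (soft) assignment of each pulse to the two groups
    post = gmm.predict_proba(psd.reshape(-1, 1))
    frac_neutron = np.mean(np.argmax(post, axis=1) == np.argmax(mu))
    print("fraction assigned to the neutron-like component:", frac_neutron)
    ```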

  16. A Bayesian framework based on a Gaussian mixture model and radial-basis-function Fisher discriminant analysis (BayGmmKda V1.1) for spatial prediction of floods

    NASA Astrophysics Data System (ADS)

    Tien Bui, Dieu; Hoang, Nhat-Duc

    2017-09-01

    In this study, a probabilistic model, named as BayGmmKda, is proposed for flood susceptibility assessment in a study area in central Vietnam. The new model is a Bayesian framework constructed by a combination of a Gaussian mixture model (GMM), radial-basis-function Fisher discriminant analysis (RBFDA), and a geographic information system (GIS) database. In the Bayesian framework, GMM is used for modeling the data distribution of flood-influencing factors in the GIS database, whereas RBFDA is utilized to construct a latent variable that aims at enhancing the model performance. As a result, the posterior probabilistic output of the BayGmmKda model is used as flood susceptibility index. Experiment results showed that the proposed hybrid framework is superior to other benchmark models, including the adaptive neuro-fuzzy inference system and the support vector machine. To facilitate the model implementation, a software program of BayGmmKda has been developed in MATLAB. The BayGmmKda program can accurately establish a flood susceptibility map for the study region. Accordingly, local authorities can overlay this susceptibility map onto various land-use maps for the purpose of land-use planning or management.

  17. Design of Probabilistic Random Forests with Applications to Anticancer Drug Sensitivity Prediction

    PubMed Central

    Rahman, Raziur; Haider, Saad; Ghosh, Souparno; Pal, Ranadip

    2015-01-01

    Random forests consisting of an ensemble of regression trees with equal weights are frequently used for design of predictive models. In this article, we consider an extension of the methodology by representing the regression trees in the form of probabilistic trees and analyzing the nature of heteroscedasticity. The probabilistic tree representation allows for analytical computation of confidence intervals (CIs), and the tree weight optimization is expected to provide stricter CIs with comparable performance in mean error. We approached the ensemble of probabilistic trees’ prediction from the perspectives of a mixture distribution and as a weighted sum of correlated random variables. We applied our methodology to the drug sensitivity prediction problem on synthetic and Cancer Cell Line Encyclopedia datasets and illustrated that tree weights can be selected to reduce the average length of the CI without an increase in mean error. PMID:27081304
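
    The mixture-distribution perspective mentioned above can be written down directly: if each tree contributes a Gaussian predictive component, the weighted ensemble is a Gaussian mixture whose mean and variance follow from the standard mixture-moment identities, and a confidence interval can be read off those moments. The per-tree values and equal weights below are illustrative.

    ```python
    import numpy as np

    # Per-tree predictive means and variances for one test point (illustrative values),
    # e.g. leaf-node sample means and variances from a regression forest.
    mu = np.array([2.1, 1.8, 2.4, 2.0, 1.9])
    var = np.array([0.20, 0.35, 0.15, 0.25, 0.30])
    w = np.full(5, 1.0 / 5)             # tree weights; the paper optimizes these

    # Moments of the mixture sum_i w_i * N(mu_i, var_i)
    m = np.sum(w * mu)
    v = np.sum(w * (var + mu**2)) - m**2

    # Normal-approximation confidence interval from the mixture moments
    ci = (m - 1.96 * np.sqrt(v), m + 1.96 * np.sqrt(v))
    print("ensemble mean:", m, " 95% CI:", ci)
    ```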

  18. Use of the LUS in sequence allele designations to facilitate probabilistic genotyping of NGS-based STR typing results.

    PubMed

    Just, Rebecca S; Irwin, Jodi A

    2018-05-01

    Some of the expected advantages of next generation sequencing (NGS) for short tandem repeat (STR) typing include enhanced mixture detection and genotype resolution via sequence variation among non-homologous alleles of the same length. However, at the same time that NGS methods for forensic DNA typing have advanced in recent years, many caseworking laboratories have implemented or are transitioning to probabilistic genotyping to assist the interpretation of complex autosomal STR typing results. Current probabilistic software programs are designed for length-based data, and were not intended to accommodate sequence strings as the product input. Yet to leverage the benefits of NGS for enhanced genotyping and mixture deconvolution, the sequence variation among same-length products must be utilized in some form. Here, we propose use of the longest uninterrupted stretch (LUS) in allele designations as a simple method to represent sequence variation within the STR repeat regions and facilitate - in the near term - probabilistic interpretation of NGS-based typing results. An examination of published population data indicated that a reference LUS region is straightforward to define for most autosomal STR loci, and that using repeat unit plus LUS length as the allele designator can represent greater than 80% of the alleles detected by sequencing. A proof of concept study performed using a freely available probabilistic software demonstrated that the LUS length can be used in allele designations when a program does not require alleles to be integers, and that utilizing sequence information improves interpretation of both single-source and mixed contributor STR typing results as compared to using repeat unit information alone. The LUS concept for allele designation maintains the repeat-based allele nomenclature that will permit backward compatibility to extant STR databases, and the LUS lengths themselves will be concordant regardless of the NGS assay or analysis tools employed. Further, these biologically based, easy-to-derive designations uphold clear relationships between parent alleles and their stutter products, enabling analysis in fully continuous probabilistic programs that model stutter while avoiding the algorithmic complexities that come with string-based searches. Though using repeat unit plus LUS length as the allele designator does not capture variation that occurs outside of the core repeat regions, this straightforward approach would permit the large majority of known STR sequence variation to be used for mixture deconvolution and, in turn, result in more informative mixture statistics in the near term. Ultimately, the method could bridge the gap from current length-based probabilistic systems to facilitate broader adoption of NGS by forensic DNA testing laboratories. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
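
    A small sketch of deriving a repeat-unit-plus-LUS designation from a sequence string, assuming a simplified definition of the LUS as the longest run of consecutive copies of the reference motif; the example allele and motif are hypothetical.

    ```python
    import re

    def lus_length(sequence, motif):
        """Longest uninterrupted stretch: the maximum number of consecutive copies
        of `motif` anywhere in `sequence` (simplified definition for illustration)."""
        runs = re.findall(r"(?:%s)+" % re.escape(motif), sequence)
        return max((len(r) // len(motif) for r in runs), default=0)

    # Hypothetical compound STR allele: [TCTA]x4 TCTG [TCTA]x6 -> 11 repeat units, LUS 6
    seq = "TCTA" * 4 + "TCTG" + "TCTA" * 6
    repeat_units = 11                       # length-based designation (illustrative)
    designation = (repeat_units, lus_length(seq, "TCTA"))
    print("repeat-unit + LUS designation:", designation)
    ```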

  19. Effects of additional data on Bayesian clustering.

    PubMed

    Yamazaki, Keisuke

    2017-10-01

    Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. A Generalized Mixture Framework for Multi-label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069

  1. A Bootstrap Algorithm for Mixture Models and Interval Data in Inter-Comparisons

    DTIC Science & Technology

    2001-07-01

    parametric bootstrap. The present algorithm will be applied to a thermometric inter-comparison, where data cannot be assumed to be normally distributed. ... experimental methods, used in each laboratory) often imply that the statistical assumptions are not satisfied, as for example in several thermometric ... triangular). Indeed, in thermometric experiments these three probabilistic models can represent several common stochastic variabilities for

  2. The Manhattan Frame Model-Manhattan World Inference in the Space of Surface Normals.

    PubMed

    Straub, Julian; Freifeld, Oren; Rosman, Guy; Leonard, John J; Fisher, John W

    2018-01-01

    Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches utilize these regularities via the restrictive, and rather local, Manhattan World (MW) assumption which posits that every plane is perpendicular to one of the axes of a single coordinate system. The aforementioned regularities are especially evident in the surface normal distribution of a scene where they manifest as orthogonally-coupled clusters. This motivates the introduction of the Manhattan-Frame (MF) model which captures the notion of an MW in the surface normals space, the unit sphere, and two probabilistic MF models over this space. First, for a single MF we propose novel real-time MAP inference algorithms, evaluate their performance and their use in drift-free rotation estimation. Second, to capture the complexity of real-world scenes at a global scale, we extend the MF model to a probabilistic mixture of Manhattan Frames (MMF). For MMF inference we propose a simple MAP inference algorithm and an adaptive Markov-Chain Monte-Carlo sampling algorithm with Metropolis-Hastings split/merge moves that let us infer the unknown number of mixture components. We demonstrate the versatility of the MMF model and inference algorithm across several scales of man-made environments.

  3. A generative probabilistic model and discriminative extensions for brain lesion segmentation – with application to tumor and stroke

    PubMed Central

    Menze, Bjoern H.; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-André; Székely, Gabor; Ayache, Nicholas; Golland, Polina

    2016-01-01

    We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as “tumor core” or “fluid-filled structure”, but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation. PMID:26599702

  4. A Generative Probabilistic Model and Discriminative Extensions for Brain Lesion Segmentation--With Application to Tumor and Stroke.

    PubMed

    Menze, Bjoern H; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-Andre; Szekely, Gabor; Ayache, Nicholas; Golland, Polina

    2016-04-01

    We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the extended generative-discriminative model to be one of the top ranking methods in the BRATS evaluation.

  5. A probabilistic cellular automata model for the dynamics of a population driven by logistic growth and weak Allee effect

    NASA Astrophysics Data System (ADS)

    Mendonça, J. R. G.

    2018-04-01

    We propose and investigate a one-parameter probabilistic mixture of one-dimensional elementary cellular automata under the guise of a model for the dynamics of a single-species unstructured population with nonoverlapping generations in which individuals have smaller probability of reproducing and surviving in a crowded neighbourhood but also suffer from isolation and dispersal. Remarkably, the first-order mean field approximation to the dynamics of the model yields a cubic map containing terms representing both logistic and weak Allee effects. The model has a single absorbing state devoid of individuals, but depending on the reproduction and survival probabilities can achieve a stable population. We determine the critical probability separating these two phases and find that the phase transition between them is in the directed percolation universality class of critical behaviour.
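
    The notion of a one-parameter probabilistic mixture of elementary cellular automata can be sketched as follows: at every site and time step, the update is drawn from rule A with probability p and from rule B otherwise. The two rules and the parameter value below are illustrative and are not the pair studied in the paper.

    ```python
    import numpy as np

    def rule_table(rule_number):
        """8-entry lookup table for an elementary CA rule (neighborhoods 000..111)."""
        return np.array([(rule_number >> i) & 1 for i in range(8)], dtype=np.uint8)

    def step(state, table_a, table_b, p, rng):
        """One synchronous update; each cell uses rule A with probability p, else rule B."""
        left, right = np.roll(state, 1), np.roll(state, -1)
        idx = 4 * left + 2 * state + right            # neighborhood -> table index
        use_a = rng.random(state.size) < p
        return np.where(use_a, table_a[idx], table_b[idx])

    rng = np.random.default_rng(0)
    # Illustrative mixture of rule 254 (spreading) and rule 232 (local majority);
    # p is the single mixture parameter.
    a, b = rule_table(254), rule_table(232)
    state = (rng.random(1000) < 0.3).astype(np.uint8)

    for _ in range(200):
        state = step(state, a, b, p=0.7, rng=rng)
    print("stationary density estimate:", state.mean())
    ```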

  6. Photometric redshift estimation via deep learning. Generalized and pre-classification-less, image based, fully probabilistic redshifts

    NASA Astrophysics Data System (ADS)

    D'Isanto, A.; Polsterer, K. L.

    2018-01-01

    Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We have adopted a feature based random forest and a plain mixture density network to compare performances on experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently from the type of source, for example galaxies, quasars or stars. Thereby the prediction performance is better than both presented reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.

  7. Internal validation of STRmix™ for the interpretation of single source and mixed DNA profiles.

    PubMed

    Moretti, Tamyra R; Just, Rebecca S; Kehl, Susannah C; Willis, Leah E; Buckleton, John S; Bright, Jo-Anne; Taylor, Duncan A; Onorato, Anthony J

    2017-07-01

    The interpretation of DNA evidence can entail analysis of challenging STR typing results. Genotypes inferred from low quality or quantity specimens, or mixed DNA samples originating from multiple contributors, can result in weak or inconclusive match probabilities when a binary interpretation method and necessary thresholds (such as a stochastic threshold) are employed. Probabilistic genotyping approaches, such as fully continuous methods that incorporate empirically determined biological parameter models, enable usage of more of the profile information and reduce subjectivity in interpretation. As a result, software-based probabilistic analyses tend to produce more consistent and more informative results regarding potential contributors to DNA evidence. Studies to assess and internally validate the probabilistic genotyping software STRmix™ for casework usage at the Federal Bureau of Investigation Laboratory were conducted using lab-specific parameters and more than 300 single-source and mixed contributor profiles. Simulated forensic specimens, including constructed mixtures that included DNA from two to five donors across a broad range of template amounts and contributor proportions, were used to examine the sensitivity and specificity of the system via more than 60,000 tests comparing hundreds of known contributors and non-contributors to the specimens. Conditioned analyses, concurrent interpretation of amplification replicates, and application of an incorrect contributor number were also performed to further investigate software performance and probe the limitations of the system. In addition, the results from manual and probabilistic interpretation of both prepared and evidentiary mixtures were compared. The findings support that STRmix™ is sufficiently robust for implementation in forensic laboratories, offering numerous advantages over historical methods of DNA profile analysis and greater statistical power for the estimation of evidentiary weight, and can be used reliably in human identification testing. With few exceptions, likelihood ratio results reflected intuitively correct estimates of the weight of the genotype possibilities and known contributor genotypes. This comprehensive evaluation provides a model in accordance with SWGDAM recommendations for internal validation of a probabilistic genotyping system for DNA evidence interpretation. Copyright © 2017. Published by Elsevier B.V.

  8. N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models

    USGS Publications Warehouse

    Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.

    2018-01-01

    Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids that simultaneously estimates detection probability. Our formulation has the added benefits of accounting for demographic variation and can generate probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, a weaker effect of water depth on resource selection is estimated than that reported by previous studies not accounting for detection probability. N-mixture models show great promise for applications to riverine resource selection.
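
    The core of an N-mixture model can be written in a few lines: latent site abundance N follows, say, a Poisson(lambda) prior, each repeat count is Binomial(N, p), and the marginal likelihood sums the product over the latent abundance (truncated at some K). The counts and parameter values below are illustrative, not Trinity River data.

    ```python
    import numpy as np
    from scipy.stats import binom, poisson

    def site_log_likelihood(counts, lam, p, K=200):
        """Marginal log-likelihood of repeated counts at one site under an N-mixture
        model: N ~ Poisson(lam), each count ~ Binomial(N, p), summing N up to K."""
        N = np.arange(max(counts), K + 1)
        log_terms = poisson.logpmf(N, lam)
        for y in counts:
            log_terms = log_terms + binom.logpmf(y, N, p)
        m = log_terms.max()
        return m + np.log(np.sum(np.exp(log_terms - m)))   # log-sum-exp over latent N

    # Illustrative: three repeat surveys of one habitat unit
    print(site_log_likelihood(counts=[3, 5, 4], lam=8.0, p=0.5))
    ```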

  9. A model-based test for treatment effects with probabilistic classifications.

    PubMed

    Cavagnaro, Daniel R; Davis-Stober, Clintin P

    2018-05-21

    Within modern psychology, computational and statistical models play an important role in describing a wide variety of human behavior. Model selection analyses are typically used to classify individuals according to the model(s) that best describe their behavior. These classifications are inherently probabilistic, which presents challenges for performing group-level analyses, such as quantifying the effect of an experimental manipulation. We answer this challenge by presenting a method for quantifying treatment effects in terms of distributional changes in model-based (i.e., probabilistic) classifications across treatment conditions. The method uses hierarchical Bayesian mixture modeling to incorporate classification uncertainty at the individual level into the test for a treatment effect at the group level. We illustrate the method with several worked examples, including a reanalysis of the data from Kellen, Mata, and Davis-Stober (2017), and analyze its performance more generally through simulation studies. Our simulations show that the method is both more powerful and less prone to type-1 errors than Fisher's exact test when classifications are uncertain. In the special case where classifications are deterministic, we find a near-perfect power-law relationship between the Bayes factor, derived from our method, and the p value obtained from Fisher's exact test. We provide code in an online supplement that allows researchers to apply the method to their own data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. A Copula-Based Conditional Probabilistic Forecast Model for Wind Power Ramps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Krishnan, Venkat K; Zhang, Jie

    Efficient management of wind ramping characteristics can significantly reduce wind integration costs for balancing authorities. By considering the stochastic dependence of wind power ramp (WPR) features, this paper develops a conditional probabilistic wind power ramp forecast (cp-WPRF) model based on Copula theory. The WPRs dataset is constructed by extracting ramps from a large dataset of historical wind power. Each WPR feature (e.g., rate, magnitude, duration, and start-time) is separately forecasted by considering the coupling effects among different ramp features. To accurately model the marginal distributions with a copula, a Gaussian mixture model (GMM) is adopted to characterize the WPR uncertainty and features. The Canonical Maximum Likelihood (CML) method is used to estimate parameters of the multivariable copula. The optimal copula model is chosen based on the Bayesian information criterion (BIC) from each copula family. Finally, the best conditions based cp-WPRF model is determined by predictive interval (PI) based evaluation metrics. Numerical simulations on publicly available wind power data show that the developed copula-based cp-WPRF model can predict WPRs with a high level of reliability and sharpness.

  11. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start-time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.

  12. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    2017-08-31

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
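
    The error-scenario step described here can be sketched compactly: fit a Gaussian mixture to historical forecasting errors, evaluate the mixture CDF on a grid, and map Monte Carlo uniform draws through a numerical inverse of that CDF. The data are synthetic, and the ramp extraction with the swinging door algorithm is not shown.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Stand-in for historical (normalized) wind power forecasting errors
    errors = np.concatenate([rng.normal(0.0, 0.05, 4000), rng.normal(0.1, 0.1, 1000)])

    gmm = GaussianMixture(n_components=3, random_state=0).fit(errors.reshape(-1, 1))
    w, mu, sd = gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel())

    # Mixture CDF on a grid, then inverse-transform sampling of error scenarios
    grid = np.linspace(errors.min() - 0.2, errors.max() + 0.2, 5000)
    cdf = np.sum(w * norm.cdf(grid[:, None], loc=mu, scale=sd), axis=1)
    u = rng.random(10000)                           # Monte Carlo uniforms
    scenarios = np.interp(u, cdf, grid)             # numerical inverse of the CDF

    print("scenario mean / std:", scenarios.mean(), scenarios.std())
    ```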

  13. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GPs modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
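
    The covariance family referred to above, a mixture of complex exponentials, reduces in the simplest case to an exponentially damped cosine of the time lag. The sketch below simulates an irregularly sampled series from such a kernel and evaluates the GP log-likelihood naively in O(n^3); the paper's contribution is evaluating exactly this kind of likelihood in O(n), which is not reproduced here. Parameter values are illustrative.

    ```python
    import numpy as np

    def damped_cosine_kernel(tau, a=1.0, c=0.5, d=2.0):
        """One mixture term: an exponentially damped cosine in the time lag."""
        return a * np.exp(-c * np.abs(tau)) * np.cos(d * tau)

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 20, 150))             # irregularly sampled times
    yerr = 0.1
    K = damped_cosine_kernel(t[:, None] - t[None, :]) + yerr**2 * np.eye(t.size)
    y = rng.multivariate_normal(np.zeros(t.size), K)  # simulated stochastic signal

    # Naive GP log-likelihood (O(n^3)); the scalable method evaluates this in O(n)
    _, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y)
    loglike = -0.5 * (y @ alpha + logdet + t.size * np.log(2 * np.pi))
    print("GP log-likelihood:", loglike)
    ```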

  14. Analyzing gene expression time-courses based on multi-resolution shape mixture model.

    PubMed

    Li, Ying; He, Ye; Zhang, Yu

    2016-11-01

    Biological processes are dynamic molecular processes over time. Time course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for the study of development and disease progression. Analysis of gene expression time-course profiles has not been fully exploited so far; it remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change in gene expression over time at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles compared with several popular clustering methods for gene expression profiles. The grouped genes identified by different methods are evaluated by enrichment analysis of biological pathways and known protein-protein interactions from experimental evidence. The grouped genes identified by our proposed algorithm have stronger biological significance. A novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed. Our proposed model provides new horizons and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Structure-aware depth super-resolution using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon

    2015-03-01

    This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution under the assumption that the pixels with similar color likely belong to similar depth. This assumption might induce a texture transferring from the color image into the depth map and an edge blurring artifact to the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model in which an estimated depth map is considered to a feature for computing affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.

  16. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    NASA Astrophysics Data System (ADS)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparing with the size-dependent mobilities predicted by the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a modified critical particle size of incipient motion by accounting for the mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.

  17. Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.

    PubMed

    Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L

    2012-01-01

    In this paper, we present a method that enables both homology-based and composition-based approaches to further study the functional core (i.e., the microbial core and gene core, respectively). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that the generative topic model can be used to model the taxon abundance information obtained by homology-based approach and study the microbial core. The model considers each sample as a “document,” which has a mixture of functional groups, while each functional group (also known as a “latent topic”) is a weighted mixture of species. Therefore, estimating the generative topic model for taxon abundance data will uncover the distribution over latent functions (latent topic) in each sample. Second, we show that the generative topic model can also be used to study the genome-level composition of “N-mer” features (DNA subreads obtained by composition-based approaches). The model considers each genome as a mixture of latent genetic patterns (latent topics), while each functional pattern is a weighted mixture of the “N-mer” features; thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.
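
    The "samples as documents, functional groups as latent topics, taxa as words" setup can be sketched with scikit-learn's LatentDirichletAllocation on a toy abundance table; the generating profiles and counts below are synthetic and only illustrate the data layout, not the paper's corpora.

    ```python
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)
    # Toy taxon abundance table: 20 samples ("documents") x 50 taxa ("words"),
    # generated from two latent community profiles purely for illustration.
    profiles = rng.dirichlet(np.ones(50), size=2)
    mix = rng.dirichlet(np.ones(2), size=20)
    counts = np.vstack([rng.multinomial(2000, mix[i] @ profiles) for i in range(20)])

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    sample_topics = lda.transform(counts)       # per-sample distribution over latent groups
    taxon_weights = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

    print("sample 0 functional-group mixture:", sample_topics[0])
    ```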

  18. Probabilistic human health risk assessment of degradation-related chemical mixtures in heterogeneous aquifers: Risk statistics, hot spots, and preferential channels

    NASA Astrophysics Data System (ADS)

    Henri, Christopher V.; Fernàndez-Garcia, Daniel; de Barros, Felipe P. J.

    2015-06-01

    The increasing presence of toxic chemicals released in the subsurface has led to a rapid growth of social concerns and the need to develop and employ models that can predict the impact of groundwater contamination on human health risk under uncertainty. Monitored natural attenuation is a common remediation action in many contamination cases. However, natural attenuation can lead to the production of daughter species of distinct toxicity that may pose challenges in pollution management strategies. The actual threat that these contaminants pose to human health depends on the interplay between the complex structure of the geological media and the toxicity of each pollutant byproduct. This work addresses human health risk for chemical mixtures resulting from the sequential degradation of a contaminant (such as a chlorinated solvent) under uncertainty through high-resolution three-dimensional numerical simulations. We systematically investigate the interaction between aquifer heterogeneity, flow connectivity, contaminant injection model, and chemical toxicity in the probabilistic characterization of health risk. We illustrate how chemical-specific travel times control the regime of the expected risk and its corresponding uncertainties. Results indicate conditions where preferential flow paths can favor the reduction of the overall risk of the chemical mixture. The overall human risk response to aquifer connectivity is shown to be nontrivial for multispecies transport. This nontriviality is a result of the interaction between aquifer heterogeneity and chemical toxicity. To quantify the joint effect of connectivity and toxicity in health risk, we propose a toxicity-based Damköhler number. Furthermore, we provide a statistical characterization in terms of low-order moments and the probability density function of the individual and total risks.

  19. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration*

    PubMed Central

    Wang, Fei; Syeda-Mahmood, Tanveer; Vemuri, Baba C.; Beymer, David; Rangarajan, Anand

    2010-01-01

    In this paper, we propose a generalized group-wise non-rigid registration strategy for multiple unlabeled point-sets of unequal cardinality, with no bias toward any of the given point-sets. To quantify the divergence between the probability distributions – specifically Mixture of Gaussians – estimated from the given point sets, we use a recently developed information-theoretic measure called Jensen-Renyi (JR) divergence. We evaluate a closed-form JR divergence between multiple probabilistic representations for the general case where the mixture models differ in variance and the number of components. We derive the analytic gradient of the divergence measure with respect to the non-rigid registration parameters, and apply it to numerical optimization of the group-wise registration, leading to a computationally efficient and accurate algorithm. We validate our approach on synthetic data, and evaluate it on 3D cardiac shapes. PMID:20426043
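
    A short sketch of why a closed form exists for the quadratic (order-2) Renyi entropy of a Gaussian mixture, using the Gaussian product integral ∫ N(x; m1, S1) N(x; m2, S2) dx = N(m1; m2, S1 + S2), and of a Jensen-Renyi divergence assembled from it. This is an independent illustration under equal mixture weights, not the paper's registration code.

```python
# Sketch: closed-form quadratic Renyi entropy H2(p) = -log ∫ p(x)^2 dx for a
# Gaussian mixture, and the Jensen-Renyi divergence between two mixtures
# pooled with equal weight.
import numpy as np
from scipy.stats import multivariate_normal

def gmm_quadratic_renyi_entropy(weights, means, covs):
    w, m, S = np.asarray(weights), np.asarray(means), np.asarray(covs)
    overlap = 0.0
    for i in range(len(w)):
        for j in range(len(w)):
            overlap += w[i] * w[j] * multivariate_normal.pdf(m[i], mean=m[j], cov=S[i] + S[j])
    return -np.log(overlap)

def jensen_renyi_divergence(gmm_a, gmm_b):
    # JR = H2(pooled mixture) - average of the individual H2's (equal pooling weights).
    wa, ma, Sa = gmm_a
    wb, mb, Sb = gmm_b
    pooled = (np.concatenate([0.5 * np.asarray(wa), 0.5 * np.asarray(wb)]),
              np.concatenate([ma, mb]),
              np.concatenate([Sa, Sb]))
    return gmm_quadratic_renyi_entropy(*pooled) - 0.5 * (
        gmm_quadratic_renyi_entropy(wa, ma, Sa) + gmm_quadratic_renyi_entropy(wb, mb, Sb))

gmm1 = ([0.5, 0.5], [[0.0, 0.0], [2.0, 0.0]], [np.eye(2), np.eye(2)])
gmm2 = ([1.0], [[1.0, 1.0]], [np.eye(2)])
print(round(jensen_renyi_divergence(gmm1, gmm2), 4))
```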

  20. Closed-form Jensen-Renyi divergence for mixture of Gaussians and applications to group-wise shape registration.

    PubMed

    Wang, Fei; Syeda-Mahmood, Tanveer; Vemuri, Baba C; Beymer, David; Rangarajan, Anand

    2009-01-01

    In this paper, we propose a generalized group-wise non-rigid registration strategy for multiple unlabeled point-sets of unequal cardinality, with no bias toward any of the given point-sets. To quantify the divergence between the probability distributions--specifically Mixture of Gaussians--estimated from the given point sets, we use a recently developed information-theoretic measure called Jensen-Renyi (JR) divergence. We evaluate a closed-form JR divergence between multiple probabilistic representations for the general case where the mixture models differ in variance and the number of components. We derive the analytic gradient of the divergence measure with respect to the non-rigid registration parameters, and apply it to numerical optimization of the group-wise registration, leading to a computationally efficient and accurate algorithm. We validate our approach on synthetic data, and evaluate it on 3D cardiac shapes.

  1. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
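
    A toy generative sketch of a Gaussian scale mixture with probabilistic mixer assignment, intended only to show how a discrete set of mixers multiplying local Gaussian variables yields heavy-tailed filter-response statistics; the mixer values and assignment probabilities are invented for the example and are not the authors' model.

```python
# Toy Gaussian scale mixture: each "filter response" is a local Gaussian
# variable multiplied by one of several shared mixer values, with the mixer
# assigned probabilistically per patch.
import numpy as np

rng = np.random.default_rng(1)
n_patches, n_filters, n_mixers = 1000, 4, 3

mixer_values = np.array([0.5, 1.0, 3.0])   # possible mixer magnitudes (hypothetical)
assign_probs = np.array([0.5, 0.3, 0.2])   # probabilistic mixer assignment

z = rng.choice(n_mixers, size=n_patches, p=assign_probs)
g = rng.standard_normal((n_patches, n_filters))
x = mixer_values[z][:, None] * g           # scale the local Gaussians by the mixer

# The scale mixture produces heavy-tailed marginals: kurtosis exceeds the Gaussian value of 3.
kurt = ((x[:, 0] - x[:, 0].mean())**4).mean() / x[:, 0].var()**2
print("marginal kurtosis of filter 0:", round(float(kurt), 2))
```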

  2. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    PubMed

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step models, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and to uncertainty or error accumulation. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each horizon as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented into various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in their component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.

  3. Robust group-wise rigid registration of point sets using t-mixture model

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.

    2016-03-01

    We propose a probabilistic framework for robust, group-wise rigid alignment of point-sets using a mixture of Student's t-distributions, designed especially for cases in which the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations and consequently point-sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes to use a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework in comparison to state-of-the-art algorithms. Significant reduction in alignment errors is achieved in the presence of outliers, using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise methods using GMMs, using both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.

  4. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
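
    A batch, non-incremental stand-in for Dirichlet-process-style clustering using scikit-learn's truncated variational BayesianGaussianMixture; it illustrates how the effective number of clusters is read off the posterior weights, but it is not the incremental DPMM or the tDPMM of the paper, and the toy trajectory features are synthetic.

```python
# Sketch of Dirichlet-process-style clustering with a truncated variational
# approximation; components with negligible posterior weight are pruned,
# so the number of clusters is effectively inferred from the data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
# Toy 2-D "trajectory features" drawn from three underlying motion patterns.
X = np.vstack([rng.normal([0, 0], 0.3, (100, 2)),
               rng.normal([3, 3], 0.3, (100, 2)),
               rng.normal([0, 4], 0.3, (100, 2))])

dpgmm = BayesianGaussianMixture(n_components=10,
                                weight_concentration_prior_type="dirichlet_process",
                                weight_concentration_prior=0.1,
                                random_state=0).fit(X)

# The truncation level (10) is only an upper bound on the number of clusters.
active = np.sum(dpgmm.weights_ > 0.01)
print("effective number of clusters:", int(active))
```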

  5. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers an increased flexibility while avoiding covariate selection problems. © 2016 The Authors Environmetrics Published by John Wiley & Sons Ltd.
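
    A sketch of evaluating the mixture EMOS predictive density as a weighted combination of a zero-truncated normal and a log-normal with scipy.stats; in the paper the weight and parameters are linked to the ensemble and estimated by optimizing proper scoring rules, whereas here they are fixed placeholder values.

```python
# Sketch of the TN/LN mixture EMOS predictive density for wind speed.
# Location/scale values and the weight are placeholders standing in for the
# link to the ensemble forecasts.
import numpy as np
from scipy.stats import truncnorm, lognorm

def mixture_emos_pdf(y, w, mu_tn, sigma_tn, mu_ln, sigma_ln):
    # TN component truncated to [0, inf).
    a = (0.0 - mu_tn) / sigma_tn
    tn = truncnorm.pdf(y, a, np.inf, loc=mu_tn, scale=sigma_tn)
    # LN component parameterized by the log-scale mean and standard deviation.
    ln = lognorm.pdf(y, s=sigma_ln, scale=np.exp(mu_ln))
    return w * tn + (1.0 - w) * ln

y = np.linspace(0.0, 20.0, 5)
print(np.round(mixture_emos_pdf(y, w=0.6, mu_tn=5.0, sigma_tn=2.0,
                                mu_ln=1.5, sigma_ln=0.4), 4))
```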

  6. A probabilistic approach to remote compositional analysis of planetary surfaces

    USGS Publications Warehouse

    Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.

    2017-01-01

    Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.

  7. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    NASA Astrophysics Data System (ADS)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
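
    A compact sketch of Gaussian Mixture Regression: fit a joint GMM over covariates and predictand, then condition on the covariates to obtain a predictive mixture. The toy one-dimensional covariate stands in for the antecedent-flow predictors; this is not the HMM-GMR pipeline itself.

```python
# Sketch of GMR: the conditional p(y | x) of a joint Gaussian mixture is again
# a Gaussian mixture, with weights given by the responsibilities of x and
# component means/variances given by the usual conditional-Gaussian formulas.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
x = rng.normal(size=(500, 1))
y = 2.0 * x[:, 0] + 0.3 * rng.normal(size=500)
Z = np.column_stack([x, y])                       # joint samples (x, y)

gmm = GaussianMixture(n_components=3, random_state=0).fit(Z)

def gmr_conditional(x_star):
    """Return weights, means, variances of the mixture p(y | x = x_star)."""
    w, mu, cov = gmm.weights_, gmm.means_, gmm.covariances_
    resp = np.array([w[k] * multivariate_normal.pdf(x_star, mu[k][:1], cov[k][:1, :1])
                     for k in range(len(w))])
    resp /= resp.sum()
    cond_mean = np.array([mu[k][1] + cov[k][1, 0] / cov[k][0, 0] * (x_star - mu[k][0])
                          for k in range(len(w))])
    cond_var = np.array([cov[k][1, 1] - cov[k][1, 0]**2 / cov[k][0, 0]
                         for k in range(len(w))])
    return resp, cond_mean, cond_var

w, m, v = gmr_conditional(1.0)
print("conditional mean:", round(float(w @ m), 3))
```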

  8. Identification of Gustatory–Olfactory Flavor Mixtures: Effects of Linguistic Labeling

    PubMed Central

    2013-01-01

    Two experiments, using different ranges and numbers of stimuli, examined how linguistic labels affect the identification of flavor mixtures containing different proportions of sucrose (gustatory flavorant) and citral (olfactory flavorant). Both experiments asked subjects to identify each stimulus as having either “mostly sugar” or “mostly citrus.” In one condition, no labels preceded the flavor stimuli. In another condition, each flavor stimulus followed a label, either SUGAR or CITRUS, which, the subjects were informed, usually though not always named the stronger flavor component; that is, the labels were probabilistically valid. The results of both experiments showed that the labels systematically modified the identification responses: Subjects responded “sugar” or “citrus” more often when the flavor stimulus followed the corresponding label, SUGAR or CITRUS. But the labels hardly affected overall accuracy of identification. Accuracy was possibly limited, however, by both the confusability of the flavor stimuli per se and the way that confusability could limit the opportunity to discern the probabilistic associations between labels and individual flavor stimuli. We describe the results in terms of a decision-theoretic model, in which labels induce shifts in response criteria governing the identification responses, or possibly effect changes in the sensory representations of the flavorants themselves. PMID:23329730

  9. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    PubMed

    Loxley, P N

    2017-10-01

    The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
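
    For reference, a standard two-dimensional Gabor function (a Gaussian envelope times a sinusoidal carrier) of the kind whose size, spatial-frequency, and aspect-ratio parameters are discussed above; the parameter values in the example are arbitrary.

```python
# Standard 2D Gabor patch: rotated coordinates, Gaussian envelope controlled by
# sigma and aspect_ratio, and a cosine carrier controlled by wavelength and phase.
import numpy as np

def gabor_2d(size, sigma, wavelength, theta, aspect_ratio=1.0, phase=0.0):
    """Return a size x size Gabor patch."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates by theta
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (aspect_ratio * y_r)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_r / wavelength + phase)
    return envelope * carrier

patch = gabor_2d(size=31, sigma=5.0, wavelength=8.0, theta=np.pi / 4)
print(patch.shape, round(float(patch.max()), 3))
```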

  10. Applying a probabilistic seismic-petrophysical inversion and two different rock-physics models for reservoir characterization in offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia

    2018-01-01

    We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated, reservoir located in the offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In both the synthetic and the field data tests, the very minor differences between the results obtained by employing the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two different RPMs for reservoir characterization in the investigated area.

  11. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  12. Community detection for networks with unipartite and bipartite structure

    NASA Astrophysics Data System (ADS)

    Chang, Chang; Tang, Chao

    2014-09-01

    Finding community structures in networks is important in network science, technology, and applications. To date, most algorithms that aim to find community structures only focus either on unipartite or bipartite networks. A unipartite network consists of one set of nodes and a bipartite network consists of two nonoverlapping sets of nodes with only links joining the nodes in different sets. However, a third type of network exists, defined here as the mixture network. Just like a bipartite network, a mixture network also consists of two sets of nodes, but some nodes may simultaneously belong to two sets, which breaks the nonoverlapping restriction of a bipartite network. The mixture network can be considered as a general case, with unipartite and bipartite networks viewed as its limiting cases. A mixture network can represent not only all the unipartite and bipartite networks, but also a wide range of real-world networks that cannot be properly represented as either unipartite or bipartite networks in fields such as biology and social science. Based on this observation, we first propose a probabilistic model that can find modules in unipartite, bipartite, and mixture networks in a unified framework based on the link community model for a unipartite undirected network [B Ball et al (2011 Phys. Rev. E 84 036103)]. We test our algorithm on synthetic networks (both overlapping and nonoverlapping communities) and apply it to two real-world networks: a southern women bipartite network and a human transcriptional regulatory mixture network. The results suggest that our model performs well for all three types of networks, is competitive with other algorithms for unipartite or bipartite networks, and is applicable to real-world networks.

  13. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  14. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.

  15. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks that output the parameters of a Gaussian mixture model. By combining multiple networks as 'committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 Mw 7.2 El Mayor-Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location, centroid depth, and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
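
    A framework-free sketch of the mixture-density-network idea: a raw network output vector is mapped to mixture weights, means, and standard deviations of a Gaussian mixture approximating p(m|d). The raw output values below are placeholders for what a trained committee member would produce, and the one-dimensional target is only illustrative.

```python
# Sketch of an MDN output head in plain NumPy: split the raw output into
# component logits, means and log standard deviations, then evaluate the
# resulting Gaussian mixture density at a candidate source parameter value.
import numpy as np

def mdn_head(raw):
    """Split a raw output vector into mixture weights, means and std devs."""
    logits, means, log_sigmas = np.split(raw, 3)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                        # softmax -> mixture weights
    sigmas = np.exp(log_sigmas)                     # positive scales
    return weights, means, sigmas

def mdn_pdf(m, weights, means, sigmas):
    comps = np.exp(-0.5 * ((m - means) / sigmas)**2) / (sigmas * np.sqrt(2 * np.pi))
    return float(weights @ comps)

raw_output = np.array([0.2, -1.0, 0.5,     # component logits
                       6.9, 7.2, 7.0,      # component means (e.g. magnitude)
                       -2.0, -1.5, -2.3])  # component log standard deviations
w, mu, sd = mdn_head(raw_output)
print("p(m = 7.1 | d):", round(mdn_pdf(7.1, w, mu, sd), 3))
```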

  16. Automatic iterative segmentation of multiple sclerosis lesions using Student's t mixture models and probabilistic anatomical atlases in FLAIR images.

    PubMed

    Freire, Paulo G L; Ferrari, Ricardo J

    2016-06-01

    Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has a good spatial and volumetric agreement with raters' manual delineations. Also, a comparison between our proposal and the state-of-the-art shows that our technique is comparable and, in some cases, better than some approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf’s) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.

  18. Continuous Probabilistic Modeling of Tracer Stone Dispersal in Upper Regime

    NASA Astrophysics Data System (ADS)

    Hernandez Moreira, R. R.; Viparelli, E.

    2017-12-01

    Morphodynamic models that specifically account for the non-uniformity of the bed material are generally based on some form of the active layer approximation. These models have proven to be useful tools in the study of transport, erosion and deposition of non-uniform bed material in the case of channel bed aggradation and degradation. However, when local spatial effects over time scales that are short compared to those characterizing the changes in mean bed elevation dominate the vertical sediment fluxes, as in the presence of bedforms, active layer models cannot capture key details of the sediment transport process. To overcome the limitations of active-layer-based models, Parker, Paola and Leclair (PPL) proposed a continuous probabilistic modeling framework in which the sediment exchange between the bedload transport and the mobile bed is described in terms of probability density functions of bed elevation, entrainment and deposition. Here we present the implementation of a modified version of the PPL modeling framework for the study of tracer stone dispersal in upper regime bedload transport conditions (i.e. upper regime plane bed at the transition between dunes and antidunes, downstream migrating antidunes and upper regime plane bed with bedload transport in sheet flow mode) in which the probability functions are based on measured time series of bed elevation fluctuations. The extension to the more general case of mixtures of sediments differing in size is a future development of the proposed work.

  19. Correlation of foodstuffs with ethanol-water mixtures with regard to the solubility of migrants from food contact materials.

    PubMed

    Seiler, Annika; Bach, Aurélie; Driffield, Malcolm; Paseiro Losada, Perfecto; Mercea, Peter; Tosa, Valer; Franz, Roland

    2014-01-01

    Today most foods are available in a packed form. During storage, the migration of chemical substances from food packaging materials into food may occur and may therefore be a potential source of consumer exposure. To protect the consumer, standard migration tests are laid down in Regulation (EU) No. 10/2011. When using those migration tests and applying additional conservative conventions, estimated exposure is linked with large uncertainties including a certain margin of safety. Thus the research project FACET was initiated within the 7th Framework Programme of the European Commission with the aim of developing a probabilistic migration modelling framework which allows one (1) to calculate migration into foods under real conditions of use; and (2) to deliver realistic concentration estimates for consumer exposure modelling for complex packaging materials (including multi-material multilayer structures). The aim was to carry out within the framework of the FACET project a comprehensive systematic study on the solubility behaviour of foodstuffs for potentially migrating organic chemicals. Therefore a rapid and convenient method was established to obtain partition coefficients between polymer and food, KP/F. With this method approximately 700 time-dependent kinetic experiments from spiked polyethylene films were performed using model migrants, foods and ethanol-water mixtures. The partition coefficients of migrants between polymer and food (KP/F) were compared with those obtained using ethanol-water mixtures (KP/F's) to investigate whether an allocation of food groups with common migration behaviour to certain ethanol-water mixtures could be made. These studies have confirmed that the solubility of a migrant is mainly dependent on the fat content in the food and on the ethanol concentration of ethanol-water mixtures. Therefore dissolution properties of generic food groups for migrants can be assigned to those of ethanol-water mixtures. All foodstuffs (including dry foods) when allocated to FACET model food group codes can be classified into a reduced number of food categories each represented by a corresponding ethanol-water equivalency.

  20. Simulating of the measurement-device independent quantum key distribution with phase randomized general sources

    PubMed Central

    Wang, Qin; Wang, Xiang-Bin

    2014-01-01

    We present a model for simulating measurement-device-independent quantum key distribution (MDI-QKD) with phase-randomized general sources. It can be used to predict experimental observations of an MDI-QKD setup with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and the final key rates. Our model is applicable to MDI-QKD with an arbitrary probabilistic mixture of different photon states or with any coding scheme. Therefore, it is useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying quantum key distribution. PMID:24728000

  1. A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tie, S. S.; Martini, P.; Mudd, D.

    In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i-W1), and a mixture of Vista Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. Finally, the catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete up to this magnitude limit.

  2. A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey

    DOE PAGES

    Tie, S. S.; Martini, P.; Mudd, D.; ...

    2017-02-15

    In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i-W1), and a mixture of Vista Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. Finally, the catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete up to this magnitude limit.
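
    A sketch of the multi-band chi-square constancy test behind this style of variability selection: light curves in each band are compared with their inverse-variance-weighted means, the chi-square statistics are accumulated, and a small survival probability flags a likely variable source. The magnitudes, errors, and threshold below are synthetic.

```python
# Sketch of a chi-square "constant source" test across several photometric bands.
import numpy as np
from scipy.stats import chi2

def constancy_probability(light_curves):
    """light_curves: dict band -> (mags, mag_errs). Returns p(constant)."""
    chisq, dof = 0.0, 0
    for mags, errs in light_curves.values():
        mags, errs = np.asarray(mags), np.asarray(errs)
        wmean = np.average(mags, weights=1.0 / errs**2)
        chisq += float(np.sum(((mags - wmean) / errs)**2))
        dof += len(mags) - 1
    return chi2.sf(chisq, dof)

rng = np.random.default_rng(4)
lc = {b: (20.0 + 0.3 * rng.standard_normal(12), np.full(12, 0.05)) for b in "griz"}
p_const = constancy_probability(lc)
print("variable candidate" if p_const < 1e-3 else "consistent with constant", p_const)
```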

  3. Leveraging probabilistic peak detection to estimate baseline drift in complex chromatographic samples.

    PubMed

    Lopatka, Martin; Barcaru, Andrei; Sjerps, Marjan J; Vivó-Truyols, Gabriel

    2016-01-29

    Accurate analysis of chromatographic data often requires the removal of baseline drift. A frequently employed strategy strives to determine asymmetric weights in order to fit a baseline model by regression. Unfortunately, chromatograms characterized by a very high peak saturation pose a significant challenge to such algorithms. In addition, a low signal-to-noise ratio (i.e. s/n<40) also adversely affects accurate baseline correction by asymmetrically weighted regression. We present a baseline estimation method that leverages a probabilistic peak detection algorithm. A posterior probability of being affected by a peak is computed for each point in the chromatogram, leading to a set of weights that allow non-iterative calculation of a baseline estimate. For extremely saturated chromatograms, the peak weighted (PW) method demonstrates notable improvement compared to the other methods examined. However, in chromatograms characterized by low-noise and well-resolved peaks, the asymmetric least squares (ALS) and the more sophisticated Mixture Model (MM) approaches achieve superior results in significantly less time. We evaluate the performance of these three baseline correction methods over a range of chromatographic conditions to demonstrate the cases in which each method is most appropriate. Copyright © 2016 Elsevier B.V. All rights reserved.
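
    A schematic of the peak-weighted idea: given per-point posterior probabilities of being affected by a peak, down-weight those points and estimate the baseline in a single, non-iterative weighted fit. The peak probabilities and the linear baseline model below are stand-ins, not the paper's detection algorithm.

```python
# Sketch of peak-probability-weighted baseline estimation on a synthetic
# chromatogram: points likely to belong to a peak get near-zero weight in a
# single weighted least-squares fit of the baseline.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 400)
baseline_true = 2.0 + 1.5 * t                     # slow drift
peaks = 5.0 * np.exp(-0.5 * ((t - 0.5) / 0.01)**2)
signal = baseline_true + peaks + 0.05 * rng.standard_normal(t.size)

# Stand-in for the posterior peak probabilities (high near the peak).
p_peak = np.clip(peaks / peaks.max(), 0.0, 1.0)
weights = 1.0 - p_peak + 1e-3                     # baseline weights, kept positive

coef = np.polyfit(t, signal, deg=1, w=weights)    # non-iterative weighted fit
baseline_est = np.polyval(coef, t)
print("max baseline error:", round(float(np.max(np.abs(baseline_est - baseline_true))), 3))
```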

  4. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks

    PubMed Central

    Vakanski, A; Ferguson, JM; Lee, S

    2016-01-01

    Objective: The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient’s exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient’s physician with recommendations for improvement. Methods: The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results: The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion: The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons. PMID:28111643

  5. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks.

    PubMed

    Vakanski, A; Ferguson, J M; Lee, S

    2016-12-01

    The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons.

  6. Chained Bell Inequality Experiment with High-Efficiency Measurements

    NASA Astrophysics Data System (ADS)

    Tan, T. R.; Wan, Y.; Erickson, S.; Bierhorst, P.; Kienzler, D.; Glancy, S.; Knill, E.; Leibfried, D.; Wineland, D. J.

    2017-03-01

    We report correlation measurements on two 9Be+ ions that violate a chained Bell inequality obeyed by any local-realistic theory. The correlations can be modeled as derived from a mixture of a local-realistic probabilistic distribution and a distribution that violates the inequality. A statistical framework is formulated to quantify the local-realistic fraction allowable in the observed distribution without the fair-sampling or independent-and-identical-distributions assumptions. We exclude models of our experiment whose local-realistic fraction is above 0.327 at the 95% confidence level. This bound is significantly lower than 0.586, the minimum fraction derived from a perfect Clauser-Horne-Shimony-Holt inequality experiment. Furthermore, our data provide a device-independent certification of the deterministically created Bell states.

  7. TITAN: inference of copy number architectures in clonal cell populations from tumor whole-genome sequence data

    PubMed Central

    Roth, Andrew; Khattra, Jaswinder; Ho, Julie; Yap, Damian; Prentice, Leah M.; Melnyk, Nataliya; McPherson, Andrew; Bashashati, Ali; Laks, Emma; Biele, Justina; Ding, Jiarui; Le, Alan; Rosner, Jamie; Shumansky, Karey; Marra, Marco A.; Gilks, C. Blake; Huntsman, David G.; McAlpine, Jessica N.; Aparicio, Samuel

    2014-01-01

    The evolution of cancer genomes within a single tumor creates mixed cell populations with divergent somatic mutational landscapes. Inference of tumor subpopulations has been disproportionately focused on the assessment of somatic point mutations, whereas computational methods targeting evolutionary dynamics of copy number alterations (CNA) and loss of heterozygosity (LOH) in whole-genome sequencing data remain underdeveloped. We present a novel probabilistic model, TITAN, to infer CNA and LOH events while accounting for mixtures of cell populations, thereby estimating the proportion of cells harboring each event. We evaluate TITAN on idealized mixtures, simulating clonal populations from whole-genome sequences taken from genomically heterogeneous ovarian tumor sites collected from the same patient. In addition, we show in 23 whole genomes of breast tumors that the inference of CNA and LOH using TITAN critically informs population structure and the nature of the evolving cancer genome. Finally, we experimentally validated subclonal predictions using fluorescence in situ hybridization (FISH) and single-cell sequencing from an ovarian cancer patient sample, thereby recapitulating the key modeling assumptions of TITAN. PMID:25060187

  8. Directional statistics-based reflectance model for isotropic bidirectional reflectance distribution functions.

    PubMed

    Nishino, Ko; Lombardi, Stephen

    2011-01-01

    We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.

  9. p-adic stochastic hidden variable model

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrew

    1998-03-01

    We propose a stochastic hidden variable model in which the hidden variables have a p-adic probability distribution ρ(λ), while at the same time the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretic axiomatics. A frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability. p-adic frequency probability is defined as the limit of relative frequencies ν_n, but in the p-adic metric. We study a model with p-adic stochastics on the level of the hidden variables description. But, of course, responses of macroapparatuses have to be described by ordinary stochastics. Thus our model describes a mixture of p-adic stochastics of the microworld and ordinary stochastics of macroapparatuses. In this model, probabilities for physical observables are the ordinary probabilities. At the same time Bell's inequality is violated.

  10. Learning Probabilistic Logic Models from Probabilistic Examples

    PubMed Central

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2009-01-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples. PMID:19888348

  11. Learning Probabilistic Logic Models from Probabilistic Examples.

    PubMed

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2008-10-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples.

  12. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
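
    A generic sketch of the optimization-under-uncertainty pattern described above: propagate uncertain inputs and uncertain response-surface coefficients through a quadratic response surface by Monte Carlo sampling to obtain a dispersed regression-rate distribution. All coefficients, input ranges, and uncertainty levels are hypothetical, not values from the study.

```python
# Sketch: Monte Carlo propagation through a quadratic response surface with
# single and two-factor interaction terms; inputs and coefficients are both
# treated as uncertain (hypothetical values).
import numpy as np

rng = np.random.default_rng(6)

def regression_rate(x1, x2, coeffs):
    b0, b1, b2, b11, b22, b12 = coeffs
    return b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2

nominal = np.array([0.8, 0.5, 0.3, -0.2, -0.1, 0.15])    # hypothetical surface fit
n_mc = 10_000
coeff_samples = nominal + 0.02 * rng.standard_normal((n_mc, 6))
x1 = rng.normal(0.6, 0.02, n_mc)                          # uncertain mixture fraction
x2 = rng.normal(0.3, 0.02, n_mc)                          # uncertain operating condition

rates = np.array([regression_rate(a, b, c) for a, b, c in zip(x1, x2, coeff_samples)])
print("mean rate:", round(rates.mean(), 3),
      " 95% interval:", np.round(np.percentile(rates, [2.5, 97.5]), 3))
```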

  13. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work, Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well-calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
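    The core of the univariate approach can be illustrated with a short sketch: observations and ensemble members are Box-Cox transformed, and BMA weights and a common spread are fitted by EM to a mixture of normal distributions. The synthetic data, the omission of member bias correction, and the single shared variance are simplifying assumptions, not details of the cited study.

      # Minimal sketch of univariate BMA postprocessing at a single lead time.
      import numpy as np
      from scipy import special, stats

      rng = np.random.default_rng(1)

      # Synthetic training data: 3 ensemble members, 200 past forecast cases.
      truth = rng.gamma(shape=2.0, scale=5.0, size=200)             # skewed "runoff"
      members = truth[:, None] * rng.lognormal(0.0, 0.2, (200, 3))  # noisy members

      # Box-Cox transform toward normality (lambda fitted on the observations).
      y, lam = stats.boxcox(truth)
      f = special.boxcox(members, lam)

      # EM for BMA weights w_k and a common spread sigma:
      # p(y_t) = sum_k w_k * Normal(y_t; f_tk, sigma^2)
      w, sigma = np.full(3, 1/3), y.std()
      for _ in range(200):
          dens = w * stats.norm.pdf(y[:, None], loc=f, scale=sigma)   # (T, K)
          z = dens / dens.sum(axis=1, keepdims=True)                  # responsibilities
          w = z.mean(axis=0)
          sigma = np.sqrt(np.sum(z * (y[:, None] - f) ** 2) / len(y))

      print("BMA weights:", np.round(w, 3), " sigma:", round(float(sigma), 3))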

  14. A Proposed Probabilistic Extension of the Halpern and Pearl Definition of ‘Actual Cause’

    PubMed Central

    2017-01-01

    ABSTRACT Joseph Halpern and Judea Pearl ([2005]) draw upon structural equation models to develop an attractive analysis of ‘actual cause’. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation. Outline: 1 Introduction; 2 Preemption; 3 Structural Equation Models; 4 The Halpern and Pearl Definition of ‘Actual Cause’; 5 Preemption Again; 6 The Probabilistic Case; 7 Probabilistic Causal Models; 8 A Proposed Probabilistic Extension of Halpern and Pearl’s Definition; 9 Twardy and Korb’s Account; 10 Probabilistic Fizzling; 11 Conclusion. PMID:29593362

  15. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains.

    PubMed

    Bricq, S; Collet, Ch; Armspach, J P

    2008-12-01

    In the frame of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes into account neighborhood information using a Hidden Markov Chain (HMC) model. Because of the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors were incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images, for which the ground truth is available. Comparison with other often-used techniques demonstrates the accuracy and the robustness of this new Markovian segmentation scheme.

  16. ASSESSING THE QUANTITY AND QUALITY OF DEPRESSIONAL WETLANDS IN THE REDWOOD RIVER WATERSHED UTILIZING A PROBABILISTIC SURVEY DESIGN

    EPA Science Inventory

    Located in southwestern Minnesota, the Redwood River watershed lies within an area of the Prairie Pothole Region where land use is predominantly a mixture of row crop and livestock agriculture. In 2003 the condition of emergent depressional wetlands in this watershed was assesse...

  17. Identifying common donors in DNA mixtures, with applications to database searches.

    PubMed

    Slooten, K

    2017-01-01

    Several methods exist to compute the likelihood ratio LR(M, g) evaluating the possible contribution of a person of interest with genotype g to a mixed trace M. In this paper we generalize this LR to a likelihood ratio LR(M1, M2) involving two possibly mixed traces M1 and M2, where the question is whether there is a donor in common to both traces. In case one of the traces is in fact a single genotype, then this likelihood ratio reduces to the usual LR(M, g). We explain how our method conceptually is a logical consequence of the fact that LR calculations of the form LR(M, g) can be equivalently regarded as a probabilistic deconvolution of the mixture. Based on simulated data, and using a semi-continuous mixture evaluation model, we derive ROC curves of our method applied to various types of mixtures. From these data we conclude that searches for a common donor are often feasible in the sense that a very small false positive rate can be combined with a high probability to detect a common donor if there is one. We also show how database searches comparing all traces to each other can be carried out efficiently, as illustrated by the application of the method to the mixed traces in the Dutch DNA database. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Global/local methods for probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Wu, Y.-T.

    1993-01-01

    A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations with a local more refined model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.

  19. Global/local methods for probabilistic structural analysis

    NASA Astrophysics Data System (ADS)

    Millwater, H. R.; Wu, Y.-T.

    1993-04-01

    A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations with a local more refined model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.

  20. Gaussian mixture modeling of acoustic emissions for structural health monitoring of reinforced concrete structures

    NASA Astrophysics Data System (ADS)

    Farhidzadeh, Alireza; Dehghan-Niri, Ehsan; Salamone, Salvatore

    2013-04-01

    Reinforced Concrete (RC) has been widely used in construction of infrastructures for many decades. The cracking behavior in concrete is crucial due to the harmful effects on structural performance such as serviceability and durability requirements. In general, when such structures are loaded to failure, tensile cracks develop at the initial stages of loading, while shear cracks dominate later. Therefore, monitoring the cracking modes is of paramount importance, as it can lead to the prediction of the structural performance. In the past two decades, significant efforts have been made toward the development of automated structural health monitoring (SHM) systems. Among them, a technique that shows promise for monitoring RC structures is acoustic emission (AE). This paper introduces a novel probabilistic approach based on Gaussian Mixture Modeling (GMM) to classify AE signals related to each crack mode. The system provides an early warning by recognizing nucleation of numerous critical shear cracks. The algorithm is validated through an experimental study on a full-scale reinforced concrete shear wall subjected to a reversed cyclic loading. A modified conventional classification scheme and a new criterion for crack classification are also proposed.
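    A minimal sketch of the classification idea, assuming synthetic per-hit features (e.g., average frequency and an RA-like rise-time feature): an unsupervised Gaussian mixture is fitted to the feature cloud, and its posterior memberships separate tensile-like from shear-like hits. Feature choices and numbers are illustrative, not those of the paper.

      # Minimal sketch: unsupervised GMM clustering of acoustic-emission features.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      # Two synthetic AE populations standing in for tensile- and shear-type hits.
      tensile = rng.multivariate_normal([250.0, 0.5], [[400, 0], [0, 0.02]], 300)
      shear = rng.multivariate_normal([80.0, 4.0], [[300, 0], [0, 0.5]], 120)
      X = np.vstack([tensile, shear])

      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
      post = gmm.predict_proba(X)          # per-hit membership probabilities
      labels = gmm.predict(X)

      # Early-warning style summary: fraction of hits assigned to the cluster whose
      # mean looks "shear-like" (lower average frequency in this toy feature space).
      shear_cluster = int(np.argmin(gmm.means_[:, 0]))
      print("shear-like fraction:", round(float((labels == shear_cluster).mean()), 3))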

  1. TITAN: inference of copy number architectures in clonal cell populations from tumor whole-genome sequence data.

    PubMed

    Ha, Gavin; Roth, Andrew; Khattra, Jaswinder; Ho, Julie; Yap, Damian; Prentice, Leah M; Melnyk, Nataliya; McPherson, Andrew; Bashashati, Ali; Laks, Emma; Biele, Justina; Ding, Jiarui; Le, Alan; Rosner, Jamie; Shumansky, Karey; Marra, Marco A; Gilks, C Blake; Huntsman, David G; McAlpine, Jessica N; Aparicio, Samuel; Shah, Sohrab P

    2014-11-01

    The evolution of cancer genomes within a single tumor creates mixed cell populations with divergent somatic mutational landscapes. Inference of tumor subpopulations has been disproportionately focused on the assessment of somatic point mutations, whereas computational methods targeting evolutionary dynamics of copy number alterations (CNA) and loss of heterozygosity (LOH) in whole-genome sequencing data remain underdeveloped. We present a novel probabilistic model, TITAN, to infer CNA and LOH events while accounting for mixtures of cell populations, thereby estimating the proportion of cells harboring each event. We evaluate TITAN on idealized mixtures, simulating clonal populations from whole-genome sequences taken from genomically heterogeneous ovarian tumor sites collected from the same patient. In addition, we show in 23 whole genomes of breast tumors that the inference of CNA and LOH using TITAN critically informs population structure and the nature of the evolving cancer genome. Finally, we experimentally validated subclonal predictions using fluorescence in situ hybridization (FISH) and single-cell sequencing from an ovarian cancer patient sample, thereby recapitulating the key modeling assumptions of TITAN. © 2014 Ha et al.; Published by Cold Spring Harbor Laboratory Press.

  2. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    ERIC Educational Resources Information Center

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains, such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  3. Mixture models for protein structure ensembles.

    PubMed

    Hirsch, Michael; Habeck, Michael

    2008-10-01

    Protein structure ensembles provide important insight into the dynamics and function of a protein and contain information that is not captured with a single static structure. However, it is not clear a priori to what extent the variability within an ensemble is caused by internal structural changes. Additional variability results from overall translations and rotations of the molecule. Moreover, most experimental data do not provide information to relate the structures to a common reference frame. To report meaningful values of intrinsic dynamics, structural precision, conformational entropy, etc., it is therefore important to disentangle local from global conformational heterogeneity. We consider the task of disentangling local from global heterogeneity as an inference problem. We use probabilistic methods to infer from the protein ensemble missing information on reference frames and stable conformational sub-states. To this end, we model a protein ensemble as a mixture of Gaussian probability distributions of either entire conformations or structural segments. We learn these models from a protein ensemble using the expectation-maximization algorithm. Our first model can be used to find multiple conformers in a structure ensemble. The second model partitions the protein chain into locally stable structural segments or core elements and less structured regions typically found in loops. Both models are simple to implement and contain only a single free parameter: the number of conformers or structural segments. Our models can be used to analyse experimental ensembles, molecular dynamics trajectories and conformational change in proteins. The Python source code for protein ensemble analysis is available from the authors upon request.
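    The "multiple conformers" variant can be sketched as follows, assuming an already-aligned toy ensemble (the paper's joint inference of reference frames is omitted): vectorized conformations are clustered with an EM-fitted Gaussian mixture whose component count is the single free parameter.

      # Minimal sketch of finding conformers in a toy, pre-aligned ensemble.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      n_atoms = 20
      state_a = rng.normal(0.0, 1.0, (n_atoms, 3))
      state_b = state_a + np.array([1.5, 0.0, 0.0]) * (np.arange(n_atoms) > 10)[:, None]

      # 100 structures drawn around two conformational sub-states, then flattened.
      ensemble = np.array([
          (state_a if rng.random() < 0.6 else state_b) + rng.normal(0, 0.1, (n_atoms, 3))
          for _ in range(100)
      ]).reshape(100, -1)

      gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
      gmm.fit(ensemble)
      print("conformer weights:", np.round(gmm.weights_, 2))
      print("assigned conformer of first structure:", int(gmm.predict(ensemble[:1])[0]))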

  4. Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.

    PubMed

    Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A

    2017-12-01

    Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.

  5. Probabilistic material degradation model for aerospace materials subjected to high temperature, mechanical and thermal fatigue, and creep

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1992-01-01

    A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.

  6. Reasoning in Reference Games: Individual- vs. Population-Level Probabilistic Modeling

    PubMed Central

    Franke, Michael; Degen, Judith

    2016-01-01

    Recent advances in probabilistic pragmatics have achieved considerable success in modeling speakers’ and listeners’ pragmatic reasoning as probabilistic inference. However, these models are usually applied to population-level data, and so implicitly suggest a homogeneous population without individual differences. Here we investigate potential individual differences in Theory-of-Mind related depth of pragmatic reasoning in so-called reference games that require drawing ad hoc Quantity implicatures of varying complexity. We show by Bayesian model comparison that a model that assumes a heterogenous population is a better predictor of our data, especially for comprehension. We discuss the implications for the treatment of individual differences in probabilistic models of language use. PMID:27149675

  7. Probabilistic graphs as a conceptual and computational tool in hydrology and water management

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit

    2014-05-01

    Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
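    The three components can be illustrated on a toy discrete model (rainfall driving high or low runoff): local factors are tables, their product is the joint distribution, and conditioning on an observed runoff state yields the posterior over rainfall. Variable names and probabilities are invented for illustration.

      # Minimal sketch of a discrete probabilistic graph: factors, joint, posterior.
      import numpy as np

      # Factor 1: prior over rain amount (none, light, heavy).
      p_rain = np.array([0.6, 0.3, 0.1])
      # Factor 2: conditional table p(high_runoff | rain).
      p_runoff_given_rain = np.array([0.05, 0.3, 0.9])

      # Joint over (rain, runoff) as the product of the local factors.
      joint = p_rain[:, None] * np.stack([1 - p_runoff_given_rain,
                                          p_runoff_given_rain], axis=1)

      # Assimilate an observation (high runoff observed) via Bayes' rule.
      posterior_rain = joint[:, 1] / joint[:, 1].sum()
      print("P(rain | high runoff):", np.round(posterior_rain, 3))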

  8. Perception of Risk and Terrorism-Related Behavior Change: Dual Influences of Probabilistic Reasoning and Reality Testing.

    PubMed

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2017-01-01

    The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models.

  9. Perception of Risk and Terrorism-Related Behavior Change: Dual Influences of Probabilistic Reasoning and Reality Testing

    PubMed Central

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2017-01-01

    The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models. PMID:29062288

  10. Internal validation of STRmix™ - A multi laboratory response to PCAST.

    PubMed

    Bright, Jo-Anne; Richards, Rebecca; Kruijver, Maarten; Kelly, Hannah; McGovern, Catherine; Magee, Alan; McWhorter, Andrew; Ciecko, Anne; Peck, Brian; Baumgartner, Chase; Buettner, Christina; McWilliams, Scott; McKenna, Claire; Gallacher, Colin; Mallinder, Ben; Wright, Darren; Johnson, Deven; Catella, Dorothy; Lien, Eugene; O'Connor, Craig; Duncan, George; Bundy, Jason; Echard, Jillian; Lowe, John; Stewart, Joshua; Corrado, Kathleen; Gentile, Sheila; Kaplan, Marla; Hassler, Michelle; McDonald, Naomi; Hulme, Paul; Oefelein, Rachel H; Montpetit, Shawn; Strong, Melissa; Noël, Sarah; Malsom, Simon; Myers, Steven; Welti, Susan; Moretti, Tamyra; McMahon, Teresa; Grill, Thomas; Kalafut, Tim; Greer-Ritzheimer, MaryMargaret; Beamer, Vickie; Taylor, Duncan A; Buckleton, John S

    2018-05-01

    We report a large compilation of the internal validations of the probabilistic genotyping software STRmix™. Thirty-one laboratories contributed data, resulting in 2825 mixtures comprising three to six donors and a wide range of multiplexes, equipment, mixture proportions and templates. Previously reported trends in the LR were confirmed including less discriminatory LRs occurring both for donors and non-donors at low template (for the donor in question) and at high contributor number. We were unable to isolate an effect of allelic sharing. Any apparent effect appears to be largely confounded with increased contributor number. Copyright © 2018. Published by Elsevier B.V.

  11. Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.

    PubMed

    Mørk, Søren; Holmes, Ian

    2012-03-01

    Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures is the best performing in terms of statistical information criteria or prediction performance, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts are freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
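    As a rough stand-in for the PRISM programs (written here in Python rather than a probabilistic Prolog), the following sketch scores a nucleotide sequence under a toy two-state coding/non-coding HMM with the forward algorithm; all parameters are invented for illustration.

      # Minimal sketch: toy two-state HMM over nucleotides, forward-algorithm score.
      import numpy as np

      trans = np.array([[0.95, 0.05],
                        [0.10, 0.90]])            # p(state_t | state_{t-1})
      start = np.array([0.5, 0.5])
      symbols = {"A": 0, "C": 1, "G": 2, "T": 3}
      emit = np.array([[0.30, 0.20, 0.20, 0.30],   # non-coding: roughly uniform
                       [0.20, 0.30, 0.30, 0.20]])  # coding: GC-rich

      def log_likelihood(seq):
          """Forward algorithm with per-step normalization for numerical stability."""
          obs = [symbols[c] for c in seq]
          alpha = start * emit[:, obs[0]]
          ll = np.log(alpha.sum())
          alpha /= alpha.sum()
          for o in obs[1:]:
              alpha = (alpha @ trans) * emit[:, o]
              ll += np.log(alpha.sum())
              alpha /= alpha.sum()
          return ll

      print("log-likelihood:", round(log_likelihood("ATGGCGCATGCGTAA"), 3))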

  12. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetiner, Mustafa Sacit; none,; Flanagan, George F.

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  13. Probabilistic structural analysis methods of hot engine structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Hopkins, D. A.

    1989-01-01

    Development of probabilistic structural analysis methods for hot engine structures is a major activity at Lewis Research Center. Recent activities have focused on extending the methods to include the combined uncertainties in several factors on structural response. This paper briefly describes recent progress on composite load spectra models, probabilistic finite element structural analysis, and probabilistic strength degradation modeling. Progress is described in terms of fundamental concepts, computer code development, and representative numerical results.

  14. Characterizing heterogeneous cellular responses to perturbations.

    PubMed

    Slack, Michael D; Martinez, Elisabeth D; Wu, Lani F; Altschuler, Steven J

    2008-12-09

    Cellular populations have been widely observed to respond heterogeneously to perturbation. However, interpreting the observed heterogeneity is an extremely challenging problem because of the complexity of possible cellular phenotypes, the large dimension of potential perturbations, and the lack of methods for separating meaningful biological information from noise. Here, we develop an image-based approach to characterize cellular phenotypes based on patterns of signaling marker colocalization. Heterogeneous cellular populations are characterized as mixtures of phenotypically distinct subpopulations, and responses to perturbations are summarized succinctly as probabilistic redistributions of these mixtures. We apply our method to characterize the heterogeneous responses of cancer cells to a panel of drugs. We find that cells treated with drugs of (dis-)similar mechanism exhibit (dis-)similar patterns of heterogeneity. Despite the observed phenotypic diversity of cells observed within our data, low-complexity models of heterogeneity were sufficient to distinguish most classes of drug mechanism. Our approach offers a computational framework for assessing the complexity of cellular heterogeneity, investigating the degree to which perturbations induce redistributions of a limited, but nontrivial, repertoire of underlying states and revealing functional significance contained within distinct patterns of heterogeneous responses.
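    A minimal sketch of the subpopulation summary, using synthetic two-dimensional features: a reference Gaussian mixture is fitted to control cells, and each perturbed population is described by the average posterior membership of its cells over those subpopulations, i.e., a probabilistic redistribution of the mixture. The features and drug effects below are synthetic placeholders.

      # Minimal sketch of the "mixture of subpopulations" summary of a perturbation.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      control = np.vstack([rng.normal([0, 0], 0.5, (400, 2)),
                           rng.normal([3, 1], 0.5, (200, 2))])   # two phenotypes
      drug_a = np.vstack([rng.normal([0, 0], 0.5, (100, 2)),
                          rng.normal([3, 1], 0.5, (500, 2))])    # shifts the mixture

      reference = GaussianMixture(n_components=2, random_state=0).fit(control)

      def subpopulation_profile(cells):
          # Probabilistic redistribution: mean posterior membership per subpopulation.
          return reference.predict_proba(cells).mean(axis=0)

      print("control profile:", np.round(subpopulation_profile(control), 2))
      print("drug A profile: ", np.round(subpopulation_profile(drug_a), 2))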

  15. Application of Probabilistic Analysis to Aircraft Impact Dynamics

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.

    2003-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.

  16. Age group classification and gender detection based on forced expiratory spirometry.

    PubMed

    Cosgun, Sema; Ozbek, I Yucel

    2015-08-01

    This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models and detection. In the first stage, some features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and a support vector machine (SVM) algorithm. In the final stage, the gender (or age group) of the test subject is estimated by using the trained GMM (or SVM) model. Experiments have been evaluated on a large database of 4571 subjects. The experimental results show that the average correct classification rate performance of both GMM and SVM methods based on the FES test is more than 99.3% and 96.8% for gender and age group classification, respectively.

  17. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
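    The distinction between error structures can be sketched for a one-tracer, two-source model: a "residual error" version with constant variance versus a "process error" version whose variance is inherited from the sources through the mixing proportions. The data, source values, and grid-search inference are invented simplifications, not the unified parameterization introduced in the paper.

      # Minimal sketch: one tracer, two sources, two mixing-model error structures.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      mu = np.array([-24.0, -14.0])      # source means (e.g., d13C)
      sd = np.array([1.0, 1.5])          # source standard deviations
      y = rng.normal(0.7 * mu[0] + 0.3 * mu[1], 1.0, size=30)   # consumer tracer values

      p_grid = np.linspace(0.0, 1.0, 201)   # candidate proportions of source 1

      def log_lik(p, process_error):
          mean = p * mu[0] + (1 - p) * mu[1]
          var = (p**2 * sd[0]**2 + (1 - p)**2 * sd[1]**2) if process_error else 1.0**2
          return stats.norm.logpdf(y, mean, np.sqrt(var)).sum()

      for name, pe in [("residual error", False), ("process error", True)]:
          ll = np.array([log_lik(p, pe) for p in p_grid])
          print(f"{name}: best p(source 1) = {p_grid[ll.argmax()]:.2f}")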

  18. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system structural components

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.

    1987-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  19. Probabilistic Structural Analysis Methods for select space propulsion system structural components (PSAM)

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.

    1988-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  20. Probabilistic machine learning and artificial intelligence.

    PubMed

    Ghahramani, Zoubin

    2015-05-28

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  1. Probabilistic machine learning and artificial intelligence

    NASA Astrophysics Data System (ADS)

    Ghahramani, Zoubin

    2015-05-01

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  2. Process for computing geometric perturbations for probabilistic analysis

    DOEpatents

    Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  3. Probabilistic modeling of discourse-aware sentence processing.

    PubMed

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2013-07-01

    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.

  4. E-Area LLWF Vadose Zone Model: Probabilistic Model for Estimating Subsided-Area Infiltration Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyer, J.; Flach, G.

    A probabilistic model employing a Monte Carlo sampling technique was developed in Python to generate statistical distributions of the upslope-intact-area to subsided-area ratio (Area_UA,i / Area_SA,i) for closure cap subsidence scenarios that differ in assumed percent subsidence and the total number of intact plus subsided compartments. The plan is to use this model as a component in the probabilistic system model for the E-Area Performance Assessment (PA), contributing uncertainty in infiltration estimates.
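    One plausible reading of the sampling scheme can be sketched as follows: compartments along a slope are randomly marked subsided at an assumed percent subsidence, and each subsided compartment is paired with the contiguous intact compartments immediately upslope of it. The geometric rule, compartment count, and percent subsidence here are illustrative assumptions, not the documented E-Area model.

      # Minimal Monte Carlo sketch of an intact-to-subsided area ratio distribution.
      import numpy as np

      rng = np.random.default_rng(6)

      def ratio_samples(n_compartments=20, pct_subsidence=0.25, n_trials=5000):
          ratios = []
          for _ in range(n_trials):
              subsided = rng.random(n_compartments) < pct_subsidence
              run = 0                      # length of the current intact run, walking downslope
              for s in subsided:
                  if s:
                      ratios.append(run)   # intact-upslope area : subsided area (unit compartments)
                      run = 0
                  else:
                      run += 1
          return np.array(ratios)

      r = ratio_samples()
      print("mean ratio:", round(r.mean(), 2),
            " 95th percentile:", round(np.percentile(r, 95), 2))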

  5. Spark Ignition of Monodisperse Fuel Sprays. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Danis, Allen M.; Cernansky, Nicholas P.; Namer, Izak

    1987-01-01

    A study of spark ignition energy requirements was conducted with a monodisperse spray system allowing independent control of droplet size, equivalence ratio, and fuel type. Minimum ignition energies were measured for n-heptane and methanol sprays characterized at the spark gap in terms of droplet diameter, equivalence ratio (number density) and extent of prevaporization. In addition to sprays, minimum ignition energies were measured for completely prevaporized mixtures of the same fuels over a range of equivalence ratios to provide data at the lower limit of droplet size. Results showed that spray ignition was enhanced with decreasing droplet size and increasing equivalence ratio over the ranges of the parameters studied. By comparing spray and prevaporized ignition results, the existence of an optimum droplet size for ignition was indicated for both fuels. Fuel volatility was seen to be a critical factor in spray ignition. The spray ignition results were analyzed using two different empirical ignition models for quiescent mixtures. Both models accurately predicted the experimental ignition energies for the majority of the spray conditions. Spray ignition was observed to be probabilistic in nature, and ignition was quantified in terms of an ignition frequency for a given spark energy. A model was developed to predict ignition frequencies based on the variation in spark energy and equivalence ratio in the spark gap. The resulting ignition frequency simulations were nearly identical to the experimentally observed values.

  6. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review

    PubMed Central

    McClelland, James L.

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
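    The claim that logistic or softmax units can compute exact posteriors is easy to verify numerically. In the sketch below, a naive-Bayes generative model with binary features is encoded into a softmax unit whose weights are per-feature log odds and whose biases are log priors plus the log probabilities of the features being absent; the unit's output matches the directly computed posterior. The numbers are arbitrary.

      # Minimal sketch: a softmax unit that reproduces a naive-Bayes posterior exactly.
      import numpy as np

      prior = np.array([0.6, 0.4])                    # P(class)
      theta = np.array([[0.9, 0.2, 0.7],              # P(feature_i = 1 | class 0)
                        [0.3, 0.8, 0.4]])             # P(feature_i = 1 | class 1)
      x = np.array([1, 0, 1])                         # an observed feature vector

      # Direct Bayes: P(c | x) proportional to P(c) * prod_i theta^x * (1-theta)^(1-x)
      lik = (theta**x * (1 - theta)**(1 - x)).prod(axis=1)
      posterior = prior * lik / (prior * lik).sum()

      # Softmax unit: weights = per-feature log odds, bias = log prior + sum log(1-theta)
      W = np.log(theta / (1 - theta))
      b = np.log(prior) + np.log(1 - theta).sum(axis=1)
      logits = W @ x + b
      softmax = np.exp(logits - logits.max())
      softmax /= softmax.sum()

      print("direct posterior:", np.round(posterior, 4))
      print("softmax output:  ", np.round(softmax, 4))   # identical, as claimed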

  7. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    PubMed

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.

  8. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.

  9. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more

    PubMed Central

    Rivas, Elena; Lang, Raymond; Eddy, Sean R.

    2012-01-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308

  10. A probabilistic NF2 relational algebra for integrated information retrieval and database systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuhr, N.; Roelleke, T.

    The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document are represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relations such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers like in most IR models. This effect also can be used for typical database queries involving imprecise attribute values as well as for combinations of database and IR queries.
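    A drastically flattened sketch of the idea (plain weighted tuples instead of NF2 subrelations, and an explicit independence assumption when combining weights): selection keeps tuple weights, a join multiplies them, and ordering by the resulting probabilities yields an IR-style ranking.

      # Minimal sketch of ranking with probabilistically weighted tuples.
      from collections import defaultdict

      # (doc, term) -> probability that the term belongs to the document's index
      index = {("d1", "mixture"): 0.9, ("d1", "model"): 0.7,
               ("d2", "mixture"): 0.4, ("d2", "bayes"): 0.8}
      # (doc, author) -> probability of an imprecise attribute value
      authors = {("d1", "smith"): 1.0, ("d2", "smith"): 0.6}

      def select(relation, position, value):
          return {k: w for k, w in relation.items() if k[position] == value}

      def join_on_doc(r1, r2):
          out = defaultdict(float)
          for (doc1, a), w1 in r1.items():
              for (doc2, b), w2 in r2.items():
                  if doc1 == doc2:
                      out[(doc1, a, b)] = w1 * w2   # independence assumption
          return dict(out)

      # Query: documents indexed with "mixture" and written by "smith", ranked.
      result = join_on_doc(select(index, 1, "mixture"), select(authors, 1, "smith"))
      for tup, w in sorted(result.items(), key=lambda kv: -kv[1]):
          print(tup, round(w, 2))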

  11. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. Noting that the concepts presented are general and can be applied to any measurement scenario, the idea will be illustrated using a short-arc, angles-only observation scenario.
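    The particle-cloud-to-GMM step can be sketched as follows, with a schematic energy criterion standing in for the full astrodynamic formulation: candidate (range, range-rate) particles are kept when a toy negative-energy condition holds, and the surviving cloud is summarized by an EM-fitted Gaussian mixture that could seed a Bayesian filter. The constants and the admissibility expression are placeholders.

      # Minimal sketch: admissible-region particle cloud summarized by a GMM.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(8)
      mu_eff = 50.0                                   # hypothetical constant (scaled units)

      # Sample candidate particles over a broad range / range-rate box.
      rho = rng.uniform(1.0, 20.0, 50000)             # range
      rho_dot = rng.uniform(-5.0, 5.0, 50000)         # range-rate
      energy = 0.5 * rho_dot**2 - mu_eff / rho        # schematic orbital energy
      particles = np.column_stack([rho, rho_dot])[energy < 0]

      # GMM summary of the probabilistic admissible region, usable to seed a filter.
      par_gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
      par_gmm.fit(particles)
      print("kept particles:", len(particles))
      print("component weights:", np.round(par_gmm.weights_, 2))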

  12. Learning coefficient of generalization error in Bayesian estimation and Vandermonde matrix-type singularity.

    PubMed

    Aoyagi, Miki; Nagata, Kenji

    2012-06-01

    The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log-canonical threshold in algebraic geometry. Such thresholds correspond to the main term of the generalization error in Bayesian estimation, which is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient serves to measure the learning efficiencies in hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities, by using a new approach: focusing on the generators of the ideal, which defines singularities. We give tight new bound values of learning coefficients for the Vandermonde matrix-type singularities and the explicit values with certain conditions. By applying our results, we can show the learning coefficients of three-layered neural networks and normal mixture models.

  13. Modeling semantic aspects for cross-media image indexing.

    PubMed

    Monay, Florent; Gatica-Perez, Daniel

    2007-10-01

    To go beyond the query-by-example paradigm in image retrieval, there is a need for semantic indexing of large image collections for intuitive text-based image search. Different models have been proposed to learn the dependencies between the visual content of an image set and the associated text captions, allowing semantic indices to be created automatically for unannotated images. The task, however, remains unsolved. In this paper, we present three alternatives for learning a Probabilistic Latent Semantic Analysis (PLSA) model for annotated images, and evaluate their respective performance for automatic image indexing. Under the PLSA assumptions, an image is modeled as a mixture of latent aspects that generates both image features and text captions, and we investigate three ways to learn this mixture of aspects. We also propose a more discriminative image representation than the traditional blob histogram, concatenating quantized local color information and quantized local texture descriptors. The first learning procedure of a PLSA model for annotated images is a standard EM algorithm, which implicitly assumes that the visual and textual modalities can be treated equivalently. The other two models are based on an asymmetric PLSA learning scheme that constrains the definition of the latent space on either the visual or the textual modality. We demonstrate that the textual modality is more appropriate for learning a semantically meaningful latent space, which translates into improved annotation performance. A comparison of our learning algorithms with recent methods on a standard dataset is presented, and a detailed evaluation of the performance shows the validity of our framework.
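
    For orientation only, the following is a minimal numpy EM for the standard (symmetric) PLSA variant mentioned above, run on a hypothetical count matrix of tokens per image; it is a sketch under those assumptions, not the record's asymmetric learning procedures.

```python
import numpy as np

def plsa_em(counts, n_aspects, n_iter=50, seed=0):
    """Symmetric PLSA: P(w, d) = sum_z P(z) P(w|z) P(d|z), fitted by EM.
    counts[d, w] is the number of occurrences of token w in image/document d."""
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    p_z = np.full(n_aspects, 1.0 / n_aspects)
    p_w_z = rng.dirichlet(np.ones(W), size=n_aspects)      # P(w|z), shape (Z, W)
    p_d_z = rng.dirichlet(np.ones(D), size=n_aspects)      # P(d|z), shape (Z, D)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d, w) for every (d, w) cell
        joint = p_z[:, None, None] * p_d_z[:, :, None] * p_w_z[:, None, :]   # (Z, D, W)
        resp = joint / np.maximum(joint.sum(axis=0, keepdims=True), 1e-12)
        # M-step: re-estimate the multinomials from expected counts
        expected = resp * counts[None, :, :]                                  # (Z, D, W)
        nz = expected.sum(axis=(1, 2))
        p_z = nz / nz.sum()
        p_w_z = expected.sum(axis=1) / np.maximum(nz[:, None], 1e-12)
        p_d_z = expected.sum(axis=2) / np.maximum(nz[:, None], 1e-12)
    return p_z, p_w_z, p_d_z

# Hypothetical co-occurrence matrix: 6 images x 8 quantized visual/textual tokens
counts = np.random.default_rng(1).integers(0, 5, size=(6, 8)).astype(float)
p_z, p_w_z, p_d_z = plsa_em(counts, n_aspects=3)
```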

  14. A probabilistic and continuous model of protein conformational space for template-free modeling.

    PubMed

    Zhao, Feng; Peng, Jian; Debartolo, Joe; Freed, Karl F; Sosnick, Tobin R; Xu, Jinbo

    2010-06-01

    One of the major challenges with protein template-free modeling is an efficient sampling algorithm that can explore a huge conformation space quickly. The popular fragment assembly method constructs a conformation by stringing together short fragments extracted from the Protein Data Bank (PDB). The discrete nature of this method may limit generated conformations to a subspace in which the native fold does not belong. Another concern is that a protein with a genuinely new fold may contain fragments not present in the PDB. This article presents a probabilistic model of protein conformational space to overcome the above two limitations. This probabilistic model employs directional statistics to model the distribution of backbone angles and second-order Conditional Random Fields (CRFs) to describe the sequence-angle relationship. Using this probabilistic model, we can sample protein conformations in a continuous space, as opposed to the widely used fragment assembly and lattice model methods that work in a discrete space. We show that when coupled with a simple energy function, this probabilistic method compares favorably with the fragment assembly method in the blind CASP8 evaluation, especially on alpha or small beta proteins. To our knowledge, this is the first probabilistic method that can search conformations in a continuous space and achieve favorable performance. Our method also generated three-dimensional (3D) models better than template-based methods for a couple of CASP8 hard targets. The method described in this article can also be applied to protein loop modeling, model refinement, and even RNA tertiary structure prediction.

  15. A Probabilistic Model of Local Sequence Alignment That Simplifies Statistical Significance Estimation

    PubMed Central

    Eddy, Sean R.

    2008-01-01

    Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
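
    As a small illustration of the conjectured distributions above, the sketch below converts a bit score into a P-value and an E-value using a Gumbel tail with the constant lambda = log 2 for Viterbi scores and an exponential tail with the same constant for Forward scores; the location/offset parameters and database size are placeholders, not values from the record.

```python
import math

LAMBDA = math.log(2)   # conjectured constant for bit scores in the record above

def pvalue_viterbi(score, mu):
    """Gumbel tail probability P(S >= score) with lambda = log 2 and location mu."""
    return 1.0 - math.exp(-math.exp(-LAMBDA * (score - mu)))

def pvalue_forward(score, tau):
    """Exponential tail P(S >= score) for Forward bit scores above an offset tau."""
    return math.exp(-LAMBDA * (score - tau)) if score > tau else 1.0

def evalue(pvalue, n_comparisons):
    """Expected number of hits at least this good in a database search."""
    return pvalue * n_comparisons

# Hypothetical numbers: a 25-bit hit, per-profile location/offset, and a search
# against one million target sequences.
print(evalue(pvalue_viterbi(25.0, mu=-3.0), 1_000_000))
print(evalue(pvalue_forward(25.0, tau=0.0), 1_000_000))
```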

  16. Fuzzy-probabilistic model for risk assessment of radioactive material railway transportation.

    PubMed

    Avramenko, M; Bolyatko, V; Kosterev, V

    2005-01-01

    Transportation of radioactive materials is obviously accompanied by a certain risk. A model for risk assessment of emergency situations and terrorist attacks may be useful for choosing possible routes and for comparing the various defence strategies. In particular, risk assessment is crucial for safe transportation of excess weapons-grade plutonium arising from the removal of plutonium from military employment. A fuzzy-probabilistic model for risk assessment of railway transportation has been developed taking into account the different natures of risk-affecting parameters (probabilistic and not probabilistic but fuzzy). Fuzzy set theory methods as well as standard methods of probability theory have been used for quantitative risk assessment. Information-preserving transformations are applied to realise the correct aggregation of probabilistic and fuzzy parameters. Estimations have also been made of the inhalation doses resulting from possible accidents during plutonium transportation. The obtained data show the scale of possible consequences that may arise from plutonium transportation accidents.

  17. What do we gain with Probabilistic Flood Loss Models?

    NASA Astrophysics Data System (ADS)

    Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.

    2015-12-01

    The reliability of flood loss models is a prerequisite for their practical usefulness. Traditional univariate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. Innovative multivariate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved and thus for improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, with traditional stage-damage functions cast in a probabilistic framework. For model evaluation we use empirical damage data from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, represented by the proportion of observations that fall within the 5% and 95% quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
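
    The following sketch illustrates the reliability measure described above: the fraction of observed relative damages falling inside the 5%-95% predictive interval of an ensemble model. A bagged regression-tree ensemble on synthetic data stands in for the probabilistic loss model; the predictors and damage values are placeholders, not the survey data.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Hypothetical damage records: predictors (water depth, duration, ...) and
# observed relative building damage in [0, 1].
X = rng.normal(size=(400, 3))
y = np.clip(0.3 + 0.1 * X[:, 0] + 0.05 * rng.normal(size=400), 0, 1)
X_fit, X_val, y_fit, y_val = X[:300], X[300:], y[:300], y[300:]

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200, random_state=0)
model.fit(X_fit, y_fit)

# Per-tree predictions give an empirical predictive distribution per building.
member_preds = np.stack([est.predict(X_val) for est in model.estimators_])
lo, hi = np.percentile(member_preds, [5, 95], axis=0)

mean_pred = member_preds.mean(axis=0)
bias = np.mean(mean_pred - y_val)                       # systematic deviation
mae = np.mean(np.abs(mean_pred - y_val))                # precision
coverage = np.mean((y_val >= lo) & (y_val <= hi))       # reliability of the interval
print(bias, mae, coverage)
```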

  18. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

    An oil pipeline network is one of the most important facilities for energy transportation, but accidents in such a network may result in serious disasters. Analysis models for these accidents have been established mainly with three methods: event trees, accident simulation, and Bayesian networks. Among these, the Bayesian network is well suited to probabilistic analysis, but not all of the important influencing factors have been considered, and no deployment rule for the factors has been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including the key environmental conditions and the emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used for probabilistic analysis and sensitivity analysis of oil pipeline network accidents.

  19. A comparison of machine learning and Bayesian modelling for molecular serotyping.

    PubMed

    Newton, Richard; Wernisch, Lorenz

    2017-08-11

    Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, due to the large number of possible combinations. Most of the available training data comprise samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single-serotype arrays. With the enhanced training set the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data the best-performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological insights, which we illustrate with an example.
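
    The sketch below illustrates the general idea of building artificial mixed-serotype training examples from single-serotype arrays and training a gradient boosting classifier per serotype. The data, the elementwise-maximum combination rule, and the one-vs-rest setup are assumptions made for illustration; they are not the record's exact procedure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_probes, n_serotypes = 100, 5

# Hypothetical single-serotype array profiles: singles[s] holds arrays for serotype s.
singles = {s: rng.normal(loc=(rng.random(n_probes) > 0.8) * 2.0, scale=0.3,
                         size=(20, n_probes)) for s in range(n_serotypes)}

def make_mixture(s_a, s_b):
    """Artificial two-serotype mixture: combine raw signals of two single arrays.
    The elementwise max is one plausible combination rule, assumed here."""
    a = singles[s_a][rng.integers(20)]
    b = singles[s_b][rng.integers(20)]
    return np.maximum(a, b)

X, Y = [], []
for s_a in range(n_serotypes):
    for s_b in range(n_serotypes):
        for _ in range(10):
            X.append(make_mixture(s_a, s_b))
            Y.append({s_a, s_b})
X = np.array(X)

# One-vs-rest gradient boosting: one binary classifier per serotype.
models = {}
for s in range(n_serotypes):
    y_s = np.array([s in labels for labels in Y])
    models[s] = GradientBoostingClassifier().fit(X, y_s)

# Predicted serotype content of a new (here artificial) mixed sample.
sample = make_mixture(0, 3)
print({s: m.predict_proba(sample.reshape(1, -1))[0, 1] for s, m in models.items()})
```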

  20. Probabilistic Fatigue Damage Prognosis Using a Surrogate Model Trained Via 3D Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo

    2015-01-01

    Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.

  1. Model of Mixing Layer With Multicomponent Evaporating Drops

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Le Clercq, Patrick

    2004-01-01

    A mathematical model of a three-dimensional mixing layer laden with evaporating fuel drops composed of many chemical species has been derived. The study is motivated by the fact that typical real petroleum fuels contain hundreds of chemical species. Previously, for the sake of computational efficiency, spray studies were performed using either models based on a single representative species or models based on surrogate fuels of at most 15 species. The present multicomponent model makes it possible to perform more realistic simulations by accounting for hundreds of chemical species in a computationally efficient manner. The model is used to perform Direct Numerical Simulations in continuing studies directed toward understanding the behavior of liquid petroleum fuel sprays. The model includes governing equations formulated in an Eulerian and a Lagrangian reference frame for the gas and the drops, respectively. This representation is consistent with the expected volumetrically small loading of the drops in the gas (of the order of 10^-3), although the mass loading can be substantial because of the high ratio (of the order of 10^3) between the densities of liquid and gas. The drops are treated as point sources of mass, momentum, and energy; this representation is consistent with the drop size being smaller than the Kolmogorov scale. Unsteady drag, added-mass effects, Basset history forces, and collisions between the drops are neglected, and the gas is assumed calorically perfect. The model incorporates the concept of continuous thermodynamics, according to which the chemical composition of a fuel is described probabilistically, by use of a distribution function. Distribution functions generally depend on many parameters. However, for mixtures of homologous species, the distribution can be approximated with acceptable accuracy as a sole function of the molecular weight. The mixing layer is initially laden with drops in its lower stream, and the drops are colder than the gas. Drop evaporation leads to a change in the gas-phase composition, which, like the composition of the drops, is described in a probabilistic manner.

  2. Probabilistic models of cognition: conceptual foundations.

    PubMed

    Chater, Nick; Tenenbaum, Joshua B; Yuille, Alan

    2006-07-01

    Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, 'sophisticated' probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the approach, explore how the approach relates to studies of explicit probabilistic reasoning, and give a brief overview of the field as it stands today.

  3. Development of Probabilistic Flood Inundation Mapping For Flooding Induced by Dam Failure

    NASA Astrophysics Data System (ADS)

    Tsai, C.; Yeh, J. J. J.

    2017-12-01

    A primary function of flood inundation mapping is to forecast flood hazards and assess potential losses. However, uncertainties limit the reliability of inundation hazard assessments. Major sources of uncertainty should be taken into consideration by an optimal flood management strategy. This study focuses on the 20-km reach downstream of the Shihmen Reservoir in Taiwan. A dam-failure-induced flood provides the upstream boundary conditions for flood routing. The two major sources of uncertainty considered in the hydraulic model and the flood inundation mapping are the uncertainty in the dam break model and the uncertainty in the roughness coefficient. The perturbance moment method is applied to the dam break model and the hydro-system model to develop probabilistic flood inundation maps. Various numbers of uncertain variables can be considered in these models and the variability of the outputs can be quantified. Probabilistic flood inundation maps for dam-break-induced floods can be developed, with consideration of output variability, using the commonly used HEC-RAS model. Different probabilistic flood inundation maps are discussed and compared. Probabilistic flood inundation maps are expected to provide new physical insights in support of the evaluation of reservoir flooded areas of concern.

  4. Automated segmentation of the prostate in 3D MR images using a probabilistic atlas and a spatially constrained deformable model.

    PubMed

    Martin, Sébastien; Troccaz, Jocelyne; Daanen, Vincent

    2010-04-01

    The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires the use of an anatomical atlas which is built by computing transformation fields mapping a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to get a probabilistic map on the atlas. The segmentation is then realized through a two-stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information coming from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation introduces a spatial constraint that prevents the deformable surface from leaking into an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint is useful to increase the robustness of the deformable model compared with a deformable surface that is driven only by an image appearance model.

  5. Probabilistic load simulation: Code development status

    NASA Astrophysics Data System (ADS)

    Newell, J. F.; Ho, H.

    1991-05-01

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.

  6. Generation of a mixture model ground-motion prediction equation for Northern Chile

    NASA Astrophysics Data System (ADS)

    Haendel, A.; Kuehn, N. M.; Scherbaum, F.

    2012-12-01

    In probabilistic seismic hazard analysis (PSHA), empirically derived ground motion prediction equations (GMPEs) are usually applied to estimate the ground motion at a site of interest as a function of source-, path- and site-related predictor variables. Because GMPEs are derived from limited datasets they are not expected to give entirely accurate estimates or to reflect the whole range of possible future ground motion, thus giving rise to epistemic uncertainty in the hazard estimates. This is especially true for regions without an indigenous GMPE where foreign models have to be applied. The choice of appropriate GMPEs can then dominate the overall uncertainty in hazard assessments. In order to quantify this uncertainty, the set of ground motion models used in a modern PSHA has to capture (in SSHAC language) the center, body, and range of the possible ground motion at the site of interest. This was traditionally done within a logic tree framework in which existing (or only slightly modified) GMPEs occupy the branches of the tree and the branch weights describe the degree of belief of the analyst in their applicability. This approach invites the problem of combining GMPEs of very different quality and hence of potentially overestimating epistemic uncertainty. Some recent hazard analyses have therefore resorted to using a small number of high-quality GMPEs as backbone models, from which the full distribution of GMPEs for the logic tree (to capture the full range of possible ground motion uncertainty) is subsequently generated by scaling (in a general sense). In the present study, a new approach is proposed to determine an optimized backbone model as weighted components of a mixture model. In doing so, each GMPE is assumed to reflect the generation mechanism (e.g., in terms of stress drop, propagation properties, etc.) for at least a fraction of possible ground motions in the area of interest. Combining different models into a mixture model (which is learned from observed ground motion data in the region of interest) then transfers information from other regions to the region where the observations were produced in a data-driven way. The backbone model is learned by comparing the model predictions to observations of the target region. For each observation and each model, the likelihood of the observation given a certain GMPE is calculated. Mixture weights can then be assigned using the expectation maximization (EM) algorithm or Bayesian inference. The new method is used to generate a backbone reference model for Northern Chile, an area for which no dedicated GMPE exists. Strong motion recordings from the target area are used to learn the backbone model from a set of 10 GMPEs developed for different subduction zones of the world. The formation of mixture models is done individually for interface and intraslab type events. The ability of the resulting backbone models to describe ground motions in Northern Chile is then compared to the predictive performance of their constituent models.
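
    As a sketch of the weight-estimation step described above, the code below runs EM for the weights of a fixed-component mixture, assuming the per-observation likelihoods under each candidate GMPE are already tabulated (here they are random placeholders rather than real Chilean recordings).

```python
import numpy as np

def em_mixture_weights(likelihoods, n_iter=200, tol=1e-10):
    """EM for the weights of a fixed-component mixture.
    likelihoods[i, k] = p(observation i | GMPE k); only the weights are learned."""
    n_obs, n_models = likelihoods.shape
    w = np.full(n_models, 1.0 / n_models)
    for _ in range(n_iter):
        # E-step: responsibility of model k for observation i
        weighted = likelihoods * w
        resp = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step: new weights are the average responsibilities
        w_new = resp.mean(axis=0)
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

# Hypothetical likelihood table: 500 recordings under 10 candidate GMPEs.
rng = np.random.default_rng(0)
lik = rng.gamma(shape=2.0, scale=0.1, size=(500, 10))
print(em_mixture_weights(lik))
```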

  7. Application of a stochastic snowmelt model for probabilistic decisionmaking

    NASA Technical Reports Server (NTRS)

    Mccuen, R. H.

    1983-01-01

    A stochastic form of the snowmelt runoff model that can be used for probabilistic decision-making was developed. The use of probabilistic streamflow predictions instead of single valued deterministic predictions leads to greater accuracy in decisions. While the accuracy of the output function is important in decisionmaking, it is also important to understand the relative importance of the coefficients. Therefore, a sensitivity analysis was made for each of the coefficients.

  8. Vagueness as Probabilistic Linguistic Knowledge

    NASA Astrophysics Data System (ADS)

    Lassiter, Daniel

    Consideration of the metalinguistic effects of utterances involving vague terms has led Barker [1] to treat vagueness using a modified Stalnakerian model of assertion. I present a sorites-like puzzle for factual beliefs in the standard Stalnakerian model [28] and show that it can be resolved by enriching the model to make use of probabilistic belief spaces. An analogous problem arises for metalinguistic information in Barker's model, and I suggest that a similar enrichment is needed here as well. The result is a probabilistic theory of linguistic representation that retains a classical metalanguage but avoids the undesirable divorce between meaning and use inherent in the epistemic theory [34]. I also show that the probabilistic approach provides a plausible account of the sorites paradox and higher-order vagueness and that it fares well empirically and conceptually in comparison to leading competitors.

  9. Topics in Computational Bayesian Statistics With Applications to Hierarchical Models in Astronomy and Sociology

    NASA Astrophysics Data System (ADS)

    Sahai, Swupnil

    This thesis includes three parts. The overarching theme is how to analyze structured hierarchical data, with applications to astronomy and sociology. The first part discusses how expectation propagation can be used to parallelize the computation when fitting large hierarchical Bayesian models. This methodology is then used to fit a novel, nonlinear mixture model to ultraviolet radiation from various regions of the observable universe. The second part discusses how the Stan probabilistic programming language can be used to numerically integrate terms in a hierarchical Bayesian model. This technique is demonstrated on supernovae data to significantly speed up convergence to the posterior distribution compared to a previous study that used a Gibbs-type sampler. The third part builds a formal latent kernel representation for aggregate relational data as a way to more robustly estimate the mixing characteristics of agents in a network. In particular, the framework is applied to sociology surveys to estimate, as a function of ego age, the age and sex composition of the personal networks of individuals in the United States.

  10. Probabilistic Models for Solar Particle Events

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Xapsos, Michael

    2009-01-01

    Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to describe the radiation environment that can be expected at a specified confidence level. The task of the designer is then to choose a design that will operate in the model radiation environment. Probabilistic models have already been developed for solar proton events that describe the peak flux, event-integrated fluence and mission-integrated fluence. In addition, a probabilistic model has been developed that describes the mission-integrated fluence for the Z>2 elemental spectra. This talk will focus on completing this suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements.

  11. EFFECTS OF CORRELATED PROBABILISTIC EXPOSURE MODEL INPUTS ON SIMULATED RESULTS

    EPA Science Inventory

    In recent years, more probabilistic models have been developed to quantify aggregate human exposures to environmental pollutants. The impact of correlation among inputs in these models is an important issue, which has not been resolved. Obtaining correlated data and implementi...

  12. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of using state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the nonlinear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
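
    The hardware cores themselves cannot be reproduced here, but the Gaussian-mixture background model they implement can be run in software through OpenCV's standard Python binding, as in the sketch below; the video path and parameter values are placeholders.

```python
import cv2

# Software reference for the GMM background/foreground segmentation that the
# record above accelerates in hardware. "input.mp4" is a placeholder path.
cap = cv2.VideoCapture("input.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # A per-pixel mixture of Gaussians decides background vs. moving foreground.
    fg_mask = subtractor.apply(frame)
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```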

  13. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is a naturally nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra using a matrix inverse operation directly, the introduced solution estimates stain spectra and stain depths via probabilistic reasoning individually. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimum decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
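
    The sketch below is a minimal EM fit of a von Mises mixture to hue angles, illustrating the kind of circular mixture modeling described above. It is not the record's full pipeline: the hue data are synthetic, and the concentration (kappa) update uses a standard closed-form approximation rather than an exact maximum-likelihood step.

```python
import numpy as np
from scipy.stats import vonmises

def vm_mixture_em(theta, k=2, n_iter=100, seed=0):
    """EM for a mixture of von Mises densities on hue angles theta (radians)."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)
    mu = rng.uniform(-np.pi, np.pi, k)
    kappa = np.full(k, 2.0)
    for _ in range(n_iter):
        # E-step: responsibilities of each circular component for each pixel hue
        dens = np.stack([w[j] * vonmises.pdf(theta, kappa[j], loc=mu[j]) for j in range(k)])
        resp = dens / np.maximum(dens.sum(axis=0, keepdims=True), 1e-300)
        # M-step: weighted circular mean and an approximate concentration update
        for j in range(k):
            r = resp[j]
            c, s = np.sum(r * np.cos(theta)), np.sum(r * np.sin(theta))
            mu[j] = np.arctan2(s, c)
            rbar = np.sqrt(c * c + s * s) / np.maximum(r.sum(), 1e-300)
            kappa[j] = rbar * (2.0 - rbar ** 2) / max(1.0 - rbar ** 2, 1e-6)  # approximation
            w[j] = r.mean()
    return w, mu, kappa

# Synthetic hue angles from two "stains" (placeholders for real pathology pixels).
rng = np.random.default_rng(1)
theta = np.concatenate([rng.vonmises(0.5, 8.0, 3000), rng.vonmises(-2.0, 6.0, 2000)])
print(vm_mixture_em(theta, k=2))
```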

  14. Use of limited data to construct Bayesian networks for probabilistic risk assessment.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, Katrina M.; Swiler, Laura Painton

    2013-03-01

    Probabilistic Risk Assessment (PRA) is a fundamental part of safety/quality assurance for nuclear power and nuclear weapons. Traditional PRA very effectively models complex hardware system risks using binary probabilistic models. However, traditional PRA models are not flexible enough to accommodate non-binary soft-causal factors, such as digital instrumentation & control, passive components, aging, common cause failure, and human errors. Bayesian Networks offer the opportunity to incorporate these risks into the PRA framework. This report describes the results of an early career LDRD project titled "Use of Limited Data to Construct Bayesian Networks for Probabilistic Risk Assessment". The goal of the work was to establish the capability to develop Bayesian Networks from sparse data, and to demonstrate this capability by producing a data-informed Bayesian Network for use in Human Reliability Analysis (HRA) as part of nuclear power plant Probabilistic Risk Assessment (PRA). This report summarizes the research goal and major products of the research.

  15. Probabilistic drug connectivity mapping

    PubMed Central

    2014-01-01

    Background The aim of connectivity mapping is to match drugs using drug-treatment gene expression profiles from multiple cell lines. This can be viewed as an information retrieval task, with the goal of finding the most relevant profiles for a given query drug. We infer the relevance for retrieval by data-driven probabilistic modeling of the drug responses, resulting in probabilistic connectivity mapping, and further consider the available cell lines as different data sources. We use a special type of probabilistic model to separate what is shared and specific between the sources, in contrast to earlier connectivity mapping methods that have intentionally aggregated all available data, neglecting information about the differences between the cell lines. Results We show that the probabilistic multi-source connectivity mapping method is superior to alternatives in finding functionally and chemically similar drugs from the Connectivity Map data set. We also demonstrate that an extension of the method is capable of retrieving combinations of drugs that match different relevant parts of the query drug response profile. Conclusions The probabilistic modeling-based connectivity mapping method provides a promising alternative to earlier methods. Principled integration of data from different cell lines helps to identify relevant responses for specific drug repositioning applications. PMID:24742351

  16. Topological mappings of video and audio data.

    PubMed

    Fyfe, Colin; Barbakh, Wesam; Ooi, Wei Chuan; Ko, Hanseok

    2008-12-01

    We review a new form of self-organizing map which is based on a nonlinear projection of latent points into data space, identical to that performed in the Generative Topographic Mapping (GTM) [1]. But whereas the GTM is an extension of a mixture of experts, this model is an extension of a product of experts [2]. We show visualisation and clustering results on a data set composed of video data of lips uttering 5 Korean vowels. Finally we note that we may dispense with the probabilistic underpinnings of the product of experts and derive the same algorithm as a minimisation of mean squared error between the prototypes and the data. This leads us to suggest a new algorithm which incorporates local and global information in the clustering. Both of the new algorithms achieve better results than the standard Self-Organizing Map.

  17. Probabilistic Approach to Conditional Probability of Release of Hazardous Materials from Railroad Tank Cars during Accidents

    DOT National Transportation Integrated Search

    2009-10-13

    This paper describes a probabilistic approach to estimate the conditional probability of release of hazardous materials from railroad tank cars during train accidents. Monte Carlo methods are used in developing a probabilistic model to simulate head ...

  18. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distributed collaborative response surface method with a support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea of DCSRM is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is accomplished to verify the proposed DCSRM. The analysis results reveal that an optimal static blade-tip clearance of the HPT is obtained for the design of the BTRRC, improving the performance and reliability of the aeroengine. The comparison of methods shows that DCSRM has high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.
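
    For illustration only, the sketch below shows the generic surrogate idea underlying such methods: train a support vector regression model on a small number of "expensive" analyses and then run Monte Carlo through the cheap surrogate to estimate a clearance reliability. The response function, variable ranges, and input distributions are all hypothetical placeholders, not the record's model.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def expensive_clearance_analysis(x):
    """Placeholder for a coupled thermal/structural clearance analysis; returns
    clearance in mm for inputs x = (speed, temperature, load)."""
    return 1.2 - 0.002 * x[:, 0] - 0.0005 * x[:, 1] + 0.01 * x[:, 2]

# Small design of experiments where the "expensive" model is actually evaluated.
X_doe = rng.uniform([0, 0, 0], [100, 800, 10], size=(80, 3))
y_doe = expensive_clearance_analysis(X_doe)

surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X_doe, y_doe)

# Monte Carlo through the cheap surrogate: probability that clearance stays positive.
X_mc = np.column_stack([rng.normal(50, 10, 100_000),
                        rng.normal(400, 60, 100_000),
                        rng.normal(5, 1, 100_000)])
clearance = surrogate.predict(X_mc)
print("P(clearance > 0):", np.mean(clearance > 0.0))
```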

  19. Superposition-Based Analysis of First-Order Probabilistic Timed Automata

    NASA Astrophysics Data System (ADS)

    Fietzke, Arnaud; Hermanns, Holger; Weidenbach, Christoph

    This paper discusses the analysis of first-order probabilistic timed automata (FPTA) by a combination of hierarchic first-order superposition-based theorem proving and probabilistic model checking. We develop the overall semantics of FPTAs and prove soundness and completeness of our method for reachability properties. Basically, we decompose FPTAs into their time plus first-order logic aspects on the one hand, and their probabilistic aspects on the other hand. Then we exploit the time plus first-order behavior by hierarchic superposition over linear arithmetic. The result of this analysis is the basis for the construction of a reachability equivalent (to the original FPTA) probabilistic timed automaton to which probabilistic model checking is finally applied. The hierarchic superposition calculus required for the analysis is sound and complete on the first-order formulas generated from FPTAs. It even works well in practice. We illustrate the potential behind it with a real-life DHCP protocol example, which we analyze by means of tool chain support.

  20. Accuracy of latent-variable estimation in Bayesian semi-supervised learning.

    PubMed

    Yamazaki, Keisuke

    2015-09-01

    Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one of the concerns is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Solving probability reasoning based on DNA strand displacement and probability modules.

    PubMed

    Zhang, Qiang; Wang, Xiaobiao; Wang, Xiaojun; Zhou, Changjun

    2017-12-01

    In computational biology, DNA strand displacement technology is used to simulate the computation process and has shown strong computing ability. Most researchers use it to solve logic problems, but it is only rarely used in probabilistic reasoning. To carry out probabilistic reasoning, a conditional probability derivation model and a total probability model based on DNA strand displacement are established in this paper. The models were assessed through the game "read your mind." They have been shown to enable the application of probabilistic reasoning in genetic diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Effect of time dependence on probabilistic seismic-hazard maps and deaggregation for the central Apennines, Italy

    USGS Publications Warehouse

    Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.

    2009-01-01

    We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized using a Brownian passage time recurrence model. Using aperiodicity parameters α of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum magnitude model for the fault sources together with a uniform probability of rupture along the fault (floating fault model) to model fictitious faults to account for earthquakes that cannot be correlated with known geologic structural segmentation.
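
    As a small illustration of time-dependent recurrence, the sketch below evaluates the conditional probability of rupture in a future time window under a Brownian passage time model, which is an inverse Gaussian distribution with mean recurrence time and aperiodicity α. The fault numbers are placeholders, not values from the study; only the α values 0.3, 0.5, 0.7 echo the record.

```python
from scipy.stats import invgauss

def bpt_conditional_probability(t_elapsed, dt, mean_recurrence, alpha):
    """P(rupture in (t, t + dt] | no rupture up to t) under a Brownian passage
    time model. BPT(mu, alpha) is an inverse Gaussian with mean mu and shape
    mu / alpha**2; in scipy's parametrization this is
    invgauss(mu=alpha**2, scale=mean_recurrence / alpha**2)."""
    dist = invgauss(alpha ** 2, scale=mean_recurrence / alpha ** 2)
    survive = dist.sf(t_elapsed)
    return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / survive

# Placeholder fault: 1400-year mean recurrence, 600 years elapsed, 50-year window.
for alpha in (0.3, 0.5, 0.7):
    print(alpha, bpt_conditional_probability(600.0, 50.0, 1400.0, alpha))
```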

  3. Error Discounting in Probabilistic Category Learning

    PubMed Central

    Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.

    2011-01-01

    Some current theories of probabilistic categorization assume that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report two probabilistic-categorization experiments that investigated error discounting by shifting feedback probabilities to new values after different amounts of training. In both experiments, responding gradually became less responsive to errors, and learning was slowed for some time after the feedback shift. Both results are indicative of error discounting. Quantitative modeling of the data revealed that adding a mechanism for error discounting significantly improved the fits of an exemplar-based and a rule-based associative learning model, as well as of a recency-based model of categorization. We conclude that error discounting is an important component of probabilistic learning. PMID:21355666

  4. Probabilistic interpretation of Peelle's pertinent puzzle and its resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Kenneth M.; Kawano, T.; Talou, P.

    2004-01-01

    Peelle's Pertinent Puzzle (PPP) states a seemingly plausible set of measurements with their covariance matrix, which produce an implausible answer. To answer the PPP question, we describe a reasonable experimental situation that is consistent with the PPP solution. The confusion surrounding the PPP arises in part from its imprecise statement, which permits a variety of interpretations and resulting answers, some of which seem implausible. We emphasize the importance of basing the analysis on an unambiguous probabilistic model that reflects the experimental situation. We present several different models of how the measurements quoted in the PPP problem could be obtained, and interpret their solution in terms of a detailed probabilistic analysis. We suggest a probabilistic approach to handling uncertainties about which model to use.
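
    For readers unfamiliar with the puzzle, the sketch below reproduces the counterintuitive effect using the numbers commonly quoted in the literature (assumed here, not taken from the records): two measurements of the same quantity, 1.5 and 1.0, each with 10% independent uncertainty plus a fully correlated 20% normalization uncertainty, whose generalized least-squares combination falls below both values.

```python
import numpy as np

# Commonly quoted form of Peelle's Pertinent Puzzle (values are assumptions).
y = np.array([1.5, 1.0])
stat = (0.10 * y) ** 2                       # independent variances
norm = 0.20 ** 2 * np.outer(y, y)            # fully correlated normalization part
V = np.diag(stat) + norm

# Generalized least-squares combination x_hat = (1' V^-1 y) / (1' V^-1 1)
ones = np.ones(2)
Vinv = np.linalg.inv(V)
x_hat = ones @ Vinv @ y / (ones @ Vinv @ ones)
var = 1.0 / (ones @ Vinv @ ones)
print(x_hat, np.sqrt(var))   # roughly 0.88, below both measurements
```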

  5. Probabilistic Interpretation of Peelle's Pertinent Puzzle and its Resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Kenneth M.; Kawano, Toshihiko; Talou, Patrick

    2005-05-24

    Peelle's Pertinent Puzzle (PPP) states a seemingly plausible set of measurements with their covariance matrix, which produce an implausible answer. To answer the PPP question, we describe a reasonable experimental situation that is consistent with the PPP solution. The confusion surrounding the PPP arises in part from its imprecise statement, which permits a variety of interpretations and resulting answers, some of which seem implausible. We emphasize the importance of basing the analysis on an unambiguous probabilistic model that reflects the experimental situation. We present several different models of how the measurements quoted in the PPP problem could be obtained, and interpret their solution in terms of a detailed probabilistic analysis. We suggest a probabilistic approach to handling uncertainties about which model to use.

  6. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is twofold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking, and we compared deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but one case, where they were equal. The failed-region patterns of the two methods are similar; however, differences arise because of the stress reduction produced by element elimination, which causes probabilistic failed regions to continue to accumulate after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.

  7. The analysis of the possibility of using 10-minute rainfall series to determine the maximum rainfall amount with 5 minutes duration

    NASA Astrophysics Data System (ADS)

    Kaźmierczak, Bartosz; Wartalska, Katarzyna; Wdowikowski, Marcin; Kotowski, Andrzej

    2017-11-01

    Modern scientific research in the area of heavy rainfall analysis for sewerage design indicates the need to develop and use probabilistic rain models. One of the issues that remains to be resolved is the shortest rainfall duration to be analyzed. It is commonly believed that the best resolution is 5 minutes, while the shortest rainfall duration recorded by national services is often 10 or even 15 minutes. The main aim of this paper is to present the differences between the results of probabilistic rainfall models derived from rainfall time series that include and exclude the 5-minute rainfall duration. The analysis was made for the long period 1961-2010 at the Polish meteorological station Legnica. Four probability distributions were used to develop the probabilistic model best fitted to the rainfall measurements. The results clearly indicate that models including the 5-minute rainfall duration remain more appropriate to use.
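
    The sketch below illustrates the general workflow of fitting several candidate probability distributions to maximum rainfall depths and comparing the fits. The annual maxima are synthetic placeholders, and the four candidate distributions are common choices assumed for illustration, not necessarily the four used in the study.

```python
import numpy as np
from scipy import stats

# Synthetic annual maxima of short-duration rainfall depth [mm] (placeholder
# for the 1961-2010 Legnica series used in the study).
maxima = stats.gumbel_r.rvs(loc=8.0, scale=2.5, size=50, random_state=0)

# Candidate distributions (assumed set for this sketch).
candidates = {
    "gumbel": stats.gumbel_r,
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
}
for name, dist in candidates.items():
    params = dist.fit(maxima)
    ks = stats.kstest(maxima, dist.cdf, args=params)
    # Design quantile, e.g. the depth exceeded on average once in 10 years.
    q10 = dist.ppf(1 - 1.0 / 10, *params)
    print(f"{name:10s}  KS p-value = {ks.pvalue:.3f}  10-year depth = {q10:.1f} mm")
```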

  8. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
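
    As a generic illustration of the approach described above, the sketch below runs a Monte Carlo probabilistic sensitivity analysis in which the model parameters are drawn by bootstrap resampling of patient-level data rather than from assumed theoretical distributions. The cost and effectiveness data, strategy names, and sample sizes are hypothetical placeholders, not the H. pylori trial data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patient-level data for two strategies: per-patient cost and
# eradication success (1/0).
cost_a, eff_a = rng.normal(300, 80, 200), rng.random(200) < 0.85
cost_b, eff_b = rng.normal(220, 60, 200), rng.random(200) < 0.75

n_sims = 5000
icers = np.empty(n_sims)
for i in range(n_sims):
    # Bootstrap the observed data instead of sampling parametric distributions.
    ia = rng.integers(0, 200, 200)
    ib = rng.integers(0, 200, 200)
    d_cost = cost_a[ia].mean() - cost_b[ib].mean()
    d_eff = eff_a[ia].mean() - eff_b[ib].mean()
    icers[i] = d_cost / d_eff if d_eff != 0 else np.nan

# Distribution of the incremental cost-effectiveness ratio across simulations.
print(np.nanpercentile(icers, [2.5, 50, 97.5]))
```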

  9. A Re-Unification of Two Competing Models for Document Retrieval.

    ERIC Educational Resources Information Center

    Bodoff, David

    1999-01-01

    Examines query-oriented versus document-oriented information retrieval and feedback learning. Highlights include a reunification of the two approaches for probabilistic document retrieval and for vector space model (VSM) retrieval; learning in VSM and in probabilistic models; multi-dimensional scaling; and ongoing field studies. (LRW)

  10. A PROBABILISTIC POPULATION EXPOSURE MODEL FOR PM10 AND PM 2.5

    EPA Science Inventory

    A first generation probabilistic population exposure model for Particulate Matter (PM), specifically for predicting PM10, and PM2.5, exposures of an urban, population has been developed. This model is intended to be used to predict exposure (magnitude, frequency, and duration) ...

  11. Generalized species sampling priors with latent Beta reinforcements

    PubMed Central

    Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele

    2014-01-01

    Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet Process and the two parameters Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, differently from existing work. We then propose the use of such process as prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov Chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet Processes mixtures and Hidden Markov Models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462

  12. Subsea release of oil from a riser: an ecological risk assessment.

    PubMed

    Nazir, Muddassir; Khan, Faisal; Amyotte, Paul; Sadiq, Rehan

    2008-10-01

    This study illustrates a newly developed methodology, as a part of the U.S. EPA ecological risk assessment (ERA) framework, to predict exposure concentrations in a marine environment due to underwater release of oil and gas. It combines the hydrodynamics of underwater blowout, weathering algorithms, and multimedia fate and transport to measure the exposure concentration. Naphthalene and methane are used as surrogate compounds for oil and gas, respectively. Uncertainties are accounted for in multimedia input parameters in the analysis. The 95th percentile of the exposure concentration (EC(95%)) is taken as the representative exposure concentration for the risk estimation. A bootstrapping method is utilized to characterize EC(95%) and associated uncertainty. The toxicity data of 19 species available in the literature are used to calculate the 5th percentile of the predicted no observed effect concentration (PNEC(5%)) by employing the bootstrapping method. The risk is characterized by transforming the risk quotient (RQ), which is the ratio of EC(95%) to PNEC(5%), into a cumulative risk distribution. This article describes a probabilistic basis for the ERA, which is essential from risk management and decision-making viewpoints. Two case studies of underwater oil and gas mixture release, and oil release with no gaseous mixture are used to show the systematic implementation of the methodology, elements of ERA, and the probabilistic method in assessing and characterizing the risk.
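
    The sketch below illustrates only the bootstrap and risk-quotient steps described above: characterizing the 95th percentile exposure concentration and the 5th percentile of species no-effect concentrations by resampling, and forming the risk quotient from them. All concentrations are synthetic placeholders, not the study's modeled naphthalene data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic placeholders: simulated exposure concentrations (ug/L) from a
# fate-and-transport model, and no-effect concentrations for 19 species.
exposure = rng.lognormal(mean=1.0, sigma=0.8, size=2000)
noec = rng.lognormal(mean=4.0, sigma=0.7, size=19)

def bootstrap_percentile(data, q, n_boot=5000):
    """Bootstrap distribution of the q-th percentile of `data`."""
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    return np.percentile(data[idx], q, axis=1)

ec95 = bootstrap_percentile(exposure, 95)     # EC(95%) with uncertainty
pnec5 = bootstrap_percentile(noec, 5)         # PNEC(5%) with uncertainty

# Risk quotient distribution; P(RQ > 1) summarizes the characterized risk.
rq = ec95 / pnec5
print("median RQ:", np.median(rq), "  P(RQ > 1):", np.mean(rq > 1.0))
```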

  13. The probabilistic nature of preferential choice.

    PubMed

    Rieskamp, Jörg

    2008-11-01

    Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes different choices in nearly identical situations, or why the magnitude of these inconsistencies varies in different situations. To illustrate the advantage of probabilistic theories, three probabilistic theories of decision making under risk are compared with their deterministic counterparts. The probabilistic theories are (a) a probabilistic version of a simple choice heuristic, (b) a probabilistic version of cumulative prospect theory, and (c) decision field theory. By testing the theories with the data from three experimental studies, the superiority of the probabilistic models over their deterministic counterparts in predicting people's decisions under risk become evident. When testing the probabilistic theories against each other, decision field theory provides the best account of the observed behavior.

  14. Probabilistic Prediction of Lifetimes of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.

    2006-01-01

    ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.

  15. Probabilistic structural analysis of aerospace components using NESSUS

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.

    1988-01-01

    Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis are conducted assuming different failure models.

  16. Cognitive Development Effects of Teaching Probabilistic Decision Making to Middle School Students

    ERIC Educational Resources Information Center

    Mjelde, James W.; Litzenberg, Kerry K.; Lindner, James R.

    2011-01-01

    This study investigated the comprehension and effectiveness of teaching formal, probabilistic decision-making skills to middle school students. Two specific objectives were to determine (1) if middle school students can comprehend a probabilistic decision-making approach, and (2) if exposure to the modeling approaches improves middle school…

  17. The Terrestrial Investigation Model: A probabilistic risk assessment model for birds exposed to pesticides

    EPA Science Inventory

    One of the major recommendations of the National Academy of Science to the USEPA, NMFS and USFWS was to utilize probabilistic methods when assessing the risks of pesticides to federally listed endangered and threatened species. The Terrestrial Investigation Model (TIM, version 3....

  18. Exploring Term Dependences in Probabilistic Information Retrieval Model.

    ERIC Educational Resources Information Center

    Cho, Bong-Hyun; Lee, Changki; Lee, Gary Geunbae

    2003-01-01

    Describes a theoretic process to apply Bahadur-Lazarsfeld expansion (BLE) to general probabilistic models and the state-of-the-art 2-Poisson model. Through experiments on two standard document collections, one in Korean and one in English, it is demonstrated that incorporation of term dependences using BLE significantly contributes to performance…

  19. A Probabilistic Model of Phonological Relationships from Contrast to Allophony

    ERIC Educational Resources Information Center

    Hall, Kathleen Currie

    2009-01-01

    This dissertation proposes a model of phonological relationships, the Probabilistic Phonological Relationship Model (PPRM), that quantifies how predictably distributed two sounds in a relationship are. It builds on a core premise of traditional phonological analysis, that the ability to define phonological relationships such as contrast and…

  20. PubMed related articles: a probabilistic topic-based model for content similarity

    PubMed Central

    Lin, Jimmy; Wilbur, W John

    2007-01-01

    Background We present a probabilistic topic-based model for content similarity called pmra that underlies the related article search feature in PubMed. Whether or not a document is about a particular topic is computed from term frequencies, modeled as Poisson distributions. Unlike previous probabilistic retrieval models, we do not attempt to estimate relevance, but rather our focus is "relatedness", the probability that a user would want to examine a particular document given known interest in another. We also describe a novel technique for estimating parameters that does not require human relevance judgments; instead, the process is based on the existence of MeSH® in MEDLINE®. Results The pmra retrieval model was compared against bm25, a competitive probabilistic model that shares theoretical similarities. Experiments using the test collection from the TREC 2005 genomics track show a small but statistically significant improvement of pmra over bm25 in terms of precision. Conclusion Our experiments suggest that the pmra model provides an effective ranking algorithm for related article search. PMID:17971238

  1. Toward a Probabilistic Phenological Model for Wheat Growing Degree Days (GDD)

    NASA Astrophysics Data System (ADS)

    Rahmani, E.; Hense, A.

    2017-12-01

    Are there deterministic relations between phenological and climate parameters? The answer is surely 'No'. This answer motivated us to solve the problem through probabilistic theories. Thus, we developed a probabilistic phenological model which has the advantage of giving additional information in terms of uncertainty. To that aim, we turned to a statistical approach named survival analysis. Survival analysis deals with death in biological organisms and failure in mechanical systems. In the survival analysis literature, death or failure is considered as an event. By event, in this research we mean the ripening date of wheat. We will assume only one event in this special case. By time, we mean the growing duration from sowing to ripening, the lifetime of the wheat, which is a function of GDD. To be more precise, we will try to perform a probabilistic forecast of wheat ripening. The probability value will change between 0 and 1. Here, the survivor function gives the probability that the not-yet-ripened wheat survives longer than a specific time, or will survive to the end of its lifetime as a ripened crop. The survival function at each station is determined by fitting a normal distribution to the GDD as a function of growth duration. Verification of the models obtained is done using the CRPS skill score (CRPSS). The positive values of CRPSS indicate the large superiority of the probabilistic phenological survival model over the deterministic models. These results demonstrate that considering uncertainties in modeling is beneficial, meaningful and necessary. We believe that probabilistic phenological models have the potential to help reduce the vulnerability of agricultural production systems to climate change, thereby increasing food security.
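
    As a rough illustration of the survival-model idea described above, the sketch below fits a normal distribution to hypothetical GDD-at-ripening values, evaluates the survivor function, and scores a probabilistic forecast with the closed-form CRPS for a normal distribution; the station values are invented and are not data from the study.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical growing-degree-day (GDD) sums at ripening for one station.
    gdd_ripening = np.array([1480., 1510., 1495., 1530., 1470., 1505., 1520., 1490.])

    # Fit a normal distribution to GDD at ripening, as the abstract describes.
    mu, sigma = stats.norm.fit(gdd_ripening)

    def survivor(gdd, mu, sigma):
        """Probability that the crop is not yet ripe at the given accumulated GDD."""
        return stats.norm.sf(gdd, loc=mu, scale=sigma)

    def crps_normal(y, mu, sigma):
        """Closed-form CRPS of a normal forecast N(mu, sigma) against observation y."""
        z = (y - mu) / sigma
        return sigma * (z * (2 * stats.norm.cdf(z) - 1)
                        + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

    print(survivor(1500., mu, sigma))     # chance the wheat has not ripened by 1500 GDD
    print(crps_normal(1515., mu, sigma))  # probabilistic forecast score for one case
    ```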

  2. A probabilistic seismic model for the European Arctic

    NASA Astrophysics Data System (ADS)

    Hauser, Juerg; Dyer, Kathleen M.; Pasyanos, Michael E.; Bungum, Hilmar; Faleide, Jan I.; Clark, Stephen A.; Schweitzer, Johannes

    2011-01-01

    The development of three-dimensional seismic models for the crust and upper mantle has traditionally focused on finding one model that provides the best fit to the data while observing some regularization constraints. In contrast to this, the inversion employed here fits the data in a probabilistic sense and thus provides a quantitative measure of model uncertainty. Our probabilistic model is based on two sources of information: (1) prior information, which is independent from the data, and (2) different geophysical data sets, including thickness constraints, velocity profiles, gravity data, surface wave group velocities, and regional body wave traveltimes. We use a Markov chain Monte Carlo (MCMC) algorithm to sample models from the prior distribution, the set of plausible models, and test them against the data to generate the posterior distribution, the ensemble of models that fit the data with assigned uncertainties. While being computationally more expensive, such a probabilistic inversion provides a more complete picture of solution space and allows us to combine various data sets. The complex geology of the European Arctic, encompassing oceanic crust, continental shelf regions, rift basins and old cratonic crust, as well as the nonuniform coverage of the region by data with varying degrees of uncertainty, makes it a challenging setting for any imaging technique and, therefore, an ideal environment for demonstrating the practical advantages of a probabilistic approach. Maps of depth to basement and depth to Moho derived from the posterior distribution are in good agreement with previously published maps and interpretations of the regional tectonic setting. The predicted uncertainties, which are as important as the absolute values, correlate well with the variations in data coverage and quality in the region. A practical advantage of our probabilistic model is that it can provide estimates for the uncertainties of observables due to model uncertainties. We will demonstrate how this can be used for the formulation of earthquake location algorithms that take model uncertainties into account when estimating location uncertainties.
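
    The prior-sampling-and-testing logic of such a probabilistic inversion can be illustrated with a toy Metropolis sampler; the single "model parameter" and synthetic observations below are stand-ins for illustration only, not the multi-data-set inversion used for the Arctic model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "data": noisy observations of a quantity controlled by one model
    # parameter (a stand-in for, e.g., a crustal thickness constrained by traveltimes).
    true_param = 32.0
    data = true_param + rng.normal(0.0, 2.0, size=20)

    def log_prior(m):
        # Plausible range from prior information, independent of the data.
        return 0.0 if 10.0 < m < 60.0 else -np.inf

    def log_likelihood(m):
        return -0.5 * np.sum((data - m) ** 2) / 2.0 ** 2   # data noise sigma = 2

    def log_posterior(m):
        return log_prior(m) + log_likelihood(m)

    # Simple Metropolis sampler: propose candidate models, accept or reject.
    samples, m = [], 30.0
    for _ in range(20000):
        cand = m + rng.normal(0.0, 1.0)
        if np.log(rng.uniform()) < log_posterior(cand) - log_posterior(m):
            m = cand
        samples.append(m)

    post = np.array(samples[5000:])
    print(post.mean(), post.std())   # posterior mean and uncertainty of the parameter
    ```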

  3. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity.

    PubMed

    Pecevski, Dejan; Maass, Wolfgang

    2016-01-01

    Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.

  4. Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity

    PubMed Central

    Pecevski, Dejan

    2016-01-01

    Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference. PMID:27419214

  5. The Gain-Loss Model: A Probabilistic Skill Multimap Model for Assessing Learning Processes

    ERIC Educational Resources Information Center

    Robusto, Egidio; Stefanutti, Luca; Anselmi, Pasquale

    2010-01-01

    Within the theoretical framework of knowledge space theory, a probabilistic skill multimap model for assessing learning processes is proposed. The learning process of a student is modeled as a function of the student's knowledge and of an educational intervention on the attainment of specific skills required to solve problems in a knowledge…

  6. A PROBABILISTIC MODELING FRAMEWORK FOR PREDICTING POPULATION EXPOSURES TO BENZENE

    EPA Science Inventory

    The US Environmental Protection Agency (EPA) is modifying their probabilistic Stochastic Human Exposure Dose Simulation (SHEDS) model to assess aggregate exposures to air toxics. Air toxics include urban Hazardous Air Pollutants (HAPS) such as benzene from mobile sources, part...

  7. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
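
    To make the general idea of building kernels from probabilistic models concrete, here is a minimal Fisher-kernel sketch for a one-dimensional Gaussian generative model (this is not the TOP kernel itself, and the Fisher information weighting is omitted, a common practical simplification); all numbers are illustrative.

    ```python
    import numpy as np

    # Toy generative model: a single Gaussian with parameters (mu, sigma),
    # e.g. fitted on unlabeled data; the Fisher kernel compares examples through
    # the gradient of the log-likelihood with respect to these parameters.
    mu, sigma = 0.0, 1.0

    def fisher_score(x, mu=mu, sigma=sigma):
        """Gradient of log N(x | mu, sigma) with respect to (mu, sigma)."""
        d_mu = (x - mu) / sigma**2
        d_sigma = ((x - mu) ** 2 - sigma**2) / sigma**3
        return np.array([d_mu, d_sigma])

    def fisher_kernel(x1, x2):
        """Inner product of Fisher scores (information-matrix weighting dropped)."""
        return float(fisher_score(x1) @ fisher_score(x2))

    print(fisher_kernel(0.5, 0.8))   # kernel value usable by an SVM
    ```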

  8. From cyclone tracks to the costs of European winter storms: A probabilistic loss assessment model

    NASA Astrophysics Data System (ADS)

    Renggli, Dominik; Corti, Thierry; Reese, Stefan; Wueest, Marc; Viktor, Elisabeth; Zimmerli, Peter

    2014-05-01

    The quantitative assessment of the potential losses of European winter storms is essential for the economic viability of a global reinsurance company. For this purpose, reinsurance companies generally use probabilistic loss assessment models. This work presents an innovative approach to develop physically meaningful probabilistic events for Swiss Re's new European winter storm loss model. The meteorological hazard component of the new model is based on cyclone and windstorm tracks identified in the 20th Century Reanalysis data. The knowledge of the evolution of winter storms both in time and space allows the physically meaningful perturbation of properties of historical events (e.g. track, intensity). The perturbation includes a random element but also takes the local climatology and the evolution of the historical event into account. The low-resolution wind footprints taken from 20th Century Reanalysis are processed by a statistical-dynamical downscaling to generate high-resolution footprints of the historical and probabilistic winter storm events. Downscaling transfer functions are generated using ENSEMBLES regional climate model data. The result is a set of reliable probabilistic events representing thousands of years. The event set is then combined with country- and risk-specific vulnerability functions and detailed market- or client-specific exposure information to compute (re-)insurance risk premiums.

  9. Probabilistic Usage of the Multi-Factor Interaction Model

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A Multi-Factor Interaction Model (MFIM) is used to predict the insulating foam mass expulsion during the ascent of a space vehicle. The exponents in the MFIM are evaluated by an available approach which consists of least squares and an optimization algorithm. These results were subsequently used to probabilistically evaluate the effects of the uncertainties in each participating factor on the mass expulsion. The probabilistic results show that the surface temperature dominates at high probabilities and the pressure, which causes the mass expulsion, dominates at low probabilities.

  10. Integrated presentation of ecological risk from multiple stressors

    NASA Astrophysics Data System (ADS)

    Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman

    2016-10-01

    Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic.

  11. Integrated presentation of ecological risk from multiple stressors.

    PubMed

    Goussen, Benoit; Price, Oliver R; Rendal, Cecilie; Ashauer, Roman

    2016-10-26

    Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic.

  12. The impact of personalized probabilistic wall thickness models on peak wall stress in abdominal aortic aneurysms.

    PubMed

    Biehler, J; Wall, W A

    2018-02-01

    If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory Theory

    ERIC Educational Resources Information Center

    Gopnik, Alison; Wellman, Henry M.

    2012-01-01

    We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework…

  14. QUANTIFYING AGGREGATE CHLORPYRIFOS EXPOSURE AND DOSE TO CHILDREN USING A PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL

    EPA Science Inventory

    To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...

  15. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  16. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    NASA Astrophysics Data System (ADS)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance-measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
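
    A minimal sketch of the membership step in such a finite mixture of first-order Markov chains is shown below; the transition matrices, mixture weights, and example sequence are hypothetical, and a full analysis would estimate these quantities via MCMC as described above rather than fix them.

    ```python
    import numpy as np

    # Two hypothetical groups, each with its own transition matrix over three
    # labour-market states (e.g. employed / unemployed / out of the labour force).
    P = np.array([
        [[0.90, 0.05, 0.05],
         [0.30, 0.60, 0.10],
         [0.20, 0.10, 0.70]],
        [[0.60, 0.30, 0.10],
         [0.10, 0.80, 0.10],
         [0.05, 0.15, 0.80]],
    ])
    weights = np.array([0.5, 0.5])          # prior group probabilities

    def log_lik(seq, trans):
        """Log-likelihood of a categorical time series under one Markov chain."""
        return sum(np.log(trans[i, j]) for i, j in zip(seq[:-1], seq[1:]))

    def group_posterior(seq):
        """Probability of each latent group given the observed sequence."""
        ll = np.array([log_lik(seq, P[k]) for k in range(len(weights))])
        unnorm = weights * np.exp(ll - ll.max())
        return unnorm / unnorm.sum()

    seq = [0, 0, 0, 1, 0, 0, 2, 0]          # one individual's state sequence
    print(group_posterior(seq))              # soft cluster membership
    ```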

  17. Building a high-resolution T2-weighted MR-based probabilistic model of tumor occurrence in the prostate.

    PubMed

    Nagarajan, Mahesh B; Raman, Steven S; Lo, Pechin; Lin, Wei-Chan; Khoshnoodi, Pooria; Sayre, James W; Ramakrishna, Bharath; Ahuja, Preeti; Huang, Jiaoti; Margolis, Daniel J A; Lu, David S K; Reiter, Robert E; Goldin, Jonathan G; Brown, Matthew S; Enzmann, Dieter R

    2018-02-19

    We present a method for generating a T2 MR-based probabilistic model of tumor occurrence in the prostate to guide the selection of anatomical sites for targeted biopsies and serve as a diagnostic tool to aid radiological evaluation of prostate cancer. In our study, the prostate and any radiological findings within were segmented retrospectively on 3D T2-weighted MR images of 266 subjects who underwent radical prostatectomy. Subsequent histopathological analysis determined both the ground truth and the Gleason grade of the tumors. A randomly chosen subset of 19 subjects was used to generate a multi-subject-derived prostate template. Subsequently, a cascading registration algorithm involving both affine and non-rigid B-spline transforms was used to register the prostate of every subject to the template. Corresponding transformation of radiological findings yielded a population-based probabilistic model of tumor occurrence. The quality of our probabilistic model building approach was statistically evaluated by measuring the proportion of correct placements of tumors in the prostate template, i.e., the number of tumors that maintained their anatomical location within the prostate after their transformation into the prostate template space. The probabilistic model built with tumors deemed clinically significant demonstrated a heterogeneous distribution of tumors, with higher likelihood of tumor occurrence at the mid-gland anterior transition zone and the base-to-mid-gland posterior peripheral zones. Of 250 MR lesions analyzed, 248 maintained their original anatomical location with respect to the prostate zones after transformation to the prostate template. We present a robust method for generating a probabilistic model of tumor occurrence in the prostate that could aid clinical decision making, such as selection of anatomical sites for MR-guided prostate biopsies.

  18. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Treesearch

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  19. Satellite Based Probabilistic Snow Cover Extent Mapping (SCE) at Hydro-Québec

    NASA Astrophysics Data System (ADS)

    Teasdale, Mylène; De Sève, Danielle; Angers, Jean-François; Perreault, Luc

    2016-04-01

    Over 40% of Canada's water resources are in Quebec, and Hydro-Québec has developed the potential to become one of the largest producers of hydroelectricity in the world, with a total installed capacity of 36,643 MW. The Hydro-Québec fleet includes 27 large reservoirs with a combined storage capacity of 176 TWh, as well as 668 dams and 98 controls. Over 98% of all electricity used to supply the domestic market comes from water resources, and the excess output is sold on the wholesale markets. In this context, efficient management of water resources is needed, and it is based primarily on good river flow estimation and appropriate hydrological data. Snow on the ground is one of the significant variables, representing 30% to 40% of the annual energy reserve. More specifically, information on snow cover extent (SCE) and snow water equivalent (SWE) is crucial for hydrological forecasting, particularly in northern regions, since the snowmelt provides the water that fills the reservoirs and is subsequently used for hydropower generation. For several years, Hydro-Québec's research institute (IREQ) has developed several algorithms to map SCE and SWE. So far, all of these methods have been deterministic. However, given the need to maximize the efficient use of all resources while ensuring reliability, electrical systems must now be managed taking all risks into account. Since snow cover estimation is based on limited spatial information, it is important to quantify and handle its uncertainty in the hydrological forecasting system. This paper presents the first results of a probabilistic algorithm for mapping SCE by combining a Bayesian mixture of probability distributions with multiple logistic regression models applied to passive microwave data. This approach assigns, for each grid point, probabilities to the mutually exclusive discrete outcomes "snow" and "no snow". Its performance was evaluated using the Brier score, since it is particularly appropriate for measuring the accuracy of probabilistic discrete predictions. The scores were computed by comparing the snow probabilities produced by our models with Hydro-Québec's snow ground data.
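
    The Brier score used for verification is simply the mean squared difference between forecast probabilities and binary outcomes; a minimal sketch with invented grid-point values follows.

    ```python
    import numpy as np

    def brier_score(prob_snow, observed_snow):
        """Mean squared difference between forecast probabilities of 'snow'
        and binary observations (1 = snow on ground, 0 = no snow)."""
        prob_snow = np.asarray(prob_snow, dtype=float)
        observed_snow = np.asarray(observed_snow, dtype=float)
        return np.mean((prob_snow - observed_snow) ** 2)

    # Hypothetical grid points: model probabilities vs. ground snow observations.
    p = [0.9, 0.8, 0.2, 0.6, 0.1]
    o = [1,   1,   0,   0,   0]
    print(brier_score(p, o))   # 0 is a perfect score, 1 is the worst
    ```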

  20. Modelling default and likelihood reasoning as probabilistic reasoning

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. Likely and by default are in fact treated as duals in the same sense as possibility and necessity. To model these four forms probabilistically, a qualitative default probabilistic (QDP) logic and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequent results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlight their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  1. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    EPA Science Inventory

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  2. From information processing to decisions: Formalizing and comparing psychologically plausible choice models.

    PubMed

    Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten

    2017-08-01

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks.

    PubMed

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and in the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty using the closest probabilistic analog to KM called Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to during real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses.
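
    For readers unfamiliar with Fuzzy C-means, the sketch below is a minimal NumPy implementation of the standard FCM updates applied to a random stand-in for GFP-peak topographies; it is illustrative only, with arbitrary cluster count and fuzzifier, and is not the authors' analysis pipeline.

    ```python
    import numpy as np

    def fuzzy_c_means(X, n_clusters=4, m=2.0, n_iter=100, seed=0):
        """Minimal Fuzzy C-means: returns soft memberships (n_samples x n_clusters)
        and cluster centres. X has shape (n_samples, n_features)."""
        rng = np.random.default_rng(seed)
        u = rng.random((X.shape[0], n_clusters))
        u /= u.sum(axis=1, keepdims=True)                 # membership rows sum to 1
        for _ in range(n_iter):
            um = u ** m
            centres = (um.T @ X) / um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
            u = 1.0 / (d ** (2.0 / (m - 1.0)))            # standard FCM membership update
            u /= u.sum(axis=1, keepdims=True)
        return u, centres

    # Hypothetical stand-in for GFP-peak topographies (samples x channels).
    X = np.random.default_rng(1).normal(size=(200, 8))
    memberships, maps = fuzzy_c_means(X)
    print(memberships[0])   # soft microstate assignment for the first GFP peak
    ```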

  4. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks

    PubMed Central

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and in the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty using the closest probabilistic analog to KM called Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to during real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses. PMID:29163110

  5. Identification of hybrid node and link communities in complex networks

    PubMed Central

    He, Dongxiao; Jin, Di; Chen, Zheng; Zhang, Weixiong

    2015-01-01

    Identifying communities in complex networks is an effective means for analyzing complex systems, with applications in diverse areas such as social science, engineering, biology and medicine. Finding communities of nodes and finding communities of links are two popular schemes for network analysis. These schemes, however, have inherent drawbacks and are inadequate to capture complex organizational structures in real networks. We introduce a new scheme and an effective approach for identifying complex mixture structures of node and link communities, called hybrid node-link communities. A central piece of our approach is a probabilistic model that accommodates node, link and hybrid node-link communities. Our extensive experiments on various real-world networks, including a large protein-protein interaction network and a large network of semantically associated words, illustrated that the scheme for hybrid communities is superior in revealing network characteristics. Moreover, the new approach outperformed the existing methods for finding node or link communities separately. PMID:25728010

  6. Identification of hybrid node and link communities in complex networks.

    PubMed

    He, Dongxiao; Jin, Di; Chen, Zheng; Zhang, Weixiong

    2015-03-02

    Identifying communities in complex networks is an effective means for analyzing complex systems, with applications in diverse areas such as social science, engineering, biology and medicine. Finding communities of nodes and finding communities of links are two popular schemes for network analysis. These schemes, however, have inherent drawbacks and are inadequate to capture complex organizational structures in real networks. We introduce a new scheme and an effective approach for identifying complex mixture structures of node and link communities, called hybrid node-link communities. A central piece of our approach is a probabilistic model that accommodates node, link and hybrid node-link communities. Our extensive experiments on various real-world networks, including a large protein-protein interaction network and a large network of semantically associated words, illustrated that the scheme for hybrid communities is superior in revealing network characteristics. Moreover, the new approach outperformed the existing methods for finding node or link communities separately.

  7. Identification of hybrid node and link communities in complex networks

    NASA Astrophysics Data System (ADS)

    He, Dongxiao; Jin, Di; Chen, Zheng; Zhang, Weixiong

    2015-03-01

    Identifying communities in complex networks is an effective means for analyzing complex systems, with applications in diverse areas such as social science, engineering, biology and medicine. Finding communities of nodes and finding communities of links are two popular schemes for network analysis. These schemes, however, have inherent drawbacks and are inadequate to capture complex organizational structures in real networks. We introduce a new scheme and an effective approach for identifying complex mixture structures of node and link communities, called hybrid node-link communities. A central piece of our approach is a probabilistic model that accommodates node, link and hybrid node-link communities. Our extensive experiments on various real-world networks, including a large protein-protein interaction network and a large network of semantically associated words, illustrated that the scheme for hybrid communities is superior in revealing network characteristics. Moreover, the new approach outperformed the existing methods for finding node or link communities separately.

  8. A generative, probabilistic model of local protein structure.

    PubMed

    Boomsma, Wouter; Mardia, Kanti V; Taylor, Charles C; Ferkinghoff-Borg, Jesper; Krogh, Anders; Hamelryck, Thomas

    2008-07-01

    Despite significant progress in recent years, protein structure prediction maintains its status as one of the prime unsolved problems in computational biology. One of the key remaining challenges is an efficient probabilistic exploration of the structural space that correctly reflects the relative conformational stabilities. Here, we present a fully probabilistic, continuous model of local protein structure in atomic detail. The generative model makes efficient conformational sampling possible and provides a framework for the rigorous analysis of local sequence-structure correlations in the native state. Our method represents a significant theoretical and practical improvement over the widely used fragment assembly technique by avoiding the drawbacks associated with a discrete and nonprobabilistic approach.

  9. Decomposing biodiversity data using the Latent Dirichlet Allocation model, a probabilistic multivariate statistical method

    Treesearch

    Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave

    2014-01-01

    We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
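
    A minimal sketch of applying LDA to a sites-by-species abundance matrix (treated like the documents-by-words matrix that LDA normally takes) is shown below using scikit-learn; the abundance counts and the choice of three component communities are hypothetical and not taken from the study.

    ```python
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)

    # Hypothetical sites-by-species abundance matrix (counts).
    abundance = rng.poisson(lam=3.0, size=(30, 12))

    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    site_mixtures = lda.fit_transform(abundance)   # each site as a mixture of communities

    print(site_mixtures[0])        # community proportions for the first site
    print(lda.components_.shape)   # (3 communities x 12 species) composition weights
    ```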

  10. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
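
    The inverted S-shaped probability weighting referred to above can be illustrated with the Tversky-Kahneman (1992) one-parameter form; the gamma values below simply illustrate how individual differences in distortion might look and are not estimates from the study.

    ```python
    import numpy as np

    def weight_tk(p, gamma):
        """Inverted S-shaped probability weighting (Tversky & Kahneman, 1992):
        overweights small probabilities and underweights large ones for gamma < 1."""
        p = np.asarray(p, dtype=float)
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
    for gamma in (0.4, 0.7, 1.0):          # hypothetical individual differences
        print(gamma, np.round(weight_tk(p, gamma), 3))
    ```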

  11. Model fitting data from syllogistic reasoning experiments.

    PubMed

    Hattori, Masasi

    2016-12-01

    The data presented in this article are related to the research article entitled "Probabilistic representation in syllogistic reasoning: A theory to integrate mental models and heuristics" (M. Hattori, 2016) [1]. This article presents data predicted by three signature probabilistic models of syllogistic reasoning and model-fitting results for each of 12 experiments (N = 404) from the literature. Models are implemented in R, and their source code is also provided.

  12. The Probability Heuristics Model of Syllogistic Reasoning.

    ERIC Educational Resources Information Center

    Chater, Nick; Oaksford, Mike

    1999-01-01

    Proposes a probability heuristic model for syllogistic reasoning and confirms the rationality of this heuristic by an analysis of the probabilistic validity of syllogistic reasoning that treats logical inference as a limiting case of probabilistic inference. Meta-analysis and two experiments involving 40 adult participants and using generalized…

  13. Modeling Array Stations in SIG-VISA

    NASA Astrophysics Data System (ADS)

    Ding, N.; Moore, D.; Russell, S.

    2013-12-01

    We add support for array stations to SIG-VISA, a system for nuclear monitoring using probabilistic inference on seismic signals. Array stations comprise a large portion of the IMS network; they can provide increased sensitivity and more accurate directional information compared to single-component stations. Our existing model assumed that signals were independent at each station, which is false when many stations are close together, as in an array. The new model removes that assumption by jointly modeling signals across array elements. This is done by extending our existing Gaussian process (GP) regression models, also known as kriging, from a 3-dimensional single-component space of events to a 6-dimensional space of station-event pairs. For each array and each event attribute (including coda decay, coda height, amplitude transfer and travel time), we model the joint distribution across array elements using a Gaussian process that learns the correlation lengthscale across the array, thereby incorporating information from array stations into the probabilistic inference framework. To evaluate the effectiveness of our model, we perform 'probabilistic beamforming' on new events using our GP model, i.e., we compute the event azimuth having highest posterior probability under the model, conditioned on the signals at array elements. We compare the results from our probabilistic inference model to the beamforming currently performed by IMS station processing.
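
    The kriging-style interpolation across array elements can be sketched with a basic Gaussian process posterior under a squared-exponential covariance; the element coordinates, residual values, and fixed lengthscale below are invented, whereas the actual system learns the lengthscale from data.

    ```python
    import numpy as np

    def rbf(X1, X2, lengthscale=5.0, variance=1.0):
        """Squared-exponential covariance between sets of station coordinates."""
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * d2 / lengthscale**2)

    # Hypothetical array-element coordinates (km) and an observed event attribute
    # (e.g. a travel-time residual) at a subset of elements.
    X_obs = np.array([[0., 0.], [3., 1.], [1., 4.], [5., 5.]])
    y_obs = np.array([0.12, 0.10, 0.18, 0.05])
    X_new = np.array([[2., 2.]])                     # element to predict

    noise = 1e-3
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    k_star = rbf(X_new, X_obs)

    alpha = np.linalg.solve(K, y_obs)
    mean = k_star @ alpha                            # GP posterior mean
    cov = rbf(X_new, X_new) - k_star @ np.linalg.solve(K, k_star.T)
    print(float(mean[0]), float(np.sqrt(cov[0, 0]))) # prediction and its uncertainty
    ```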

  14. Comparison of Control Approaches in Genetic Regulatory Networks by Using Stochastic Master Equation Models, Probabilistic Boolean Network Models and Differential Equation Models and Estimated Error Analyzes

    NASA Astrophysics Data System (ADS)

    Caglar, Mehmet Umut; Pal, Ranadip

    2011-03-01

    The central dogma of molecular biology states that "information cannot be transferred back from protein to either protein or nucleic acid". However, this assumption is not exactly correct in most cases. There are many feedback loops and interactions between different levels of systems. These types of interactions are hard to analyze due to the lack of cell-level data and the probabilistic, nonlinear nature of the interactions. Several types of models are widely used to analyze and simulate these nonlinear interactions. Stochastic Master Equation (SME) models capture the probabilistic nature of the interactions in a detailed manner, with a high calculation cost. On the other hand, Probabilistic Boolean Network (PBN) models give a coarse-scale picture of the stochastic processes, with a lower calculation cost. Differential Equation (DE) models give the time evolution of the mean values of processes in a highly cost-effective way. Understanding the relations between the predictions of these models is important for assessing the reliability of simulations of genetic regulatory networks. In this work, the success of the mapping between SME, PBN and DE models is analyzed, and the accuracy and effectiveness of the control policies generated by using PBN and DE models are compared.
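
    The contrast drawn above between detailed stochastic descriptions and mean-value differential equations can be illustrated with a toy birth-death gene expression model: a Gillespie simulation (a master-equation-level description) versus the deterministic rate equation for the mean. The rate constants are arbitrary, and PBN models are not illustrated here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    k_prod, k_deg, x0, t_end = 10.0, 0.5, 0, 20.0    # toy birth-death parameters

    def gillespie(n_runs=200):
        """Stochastic (master-equation level) simulation of molecule counts."""
        finals = []
        for _ in range(n_runs):
            t, x = 0.0, x0
            while True:
                rates = np.array([k_prod, k_deg * x])
                total = rates.sum()
                t += rng.exponential(1.0 / total)
                if t >= t_end:
                    break
                x += 1 if rng.uniform() < rates[0] / total else -1
            finals.append(x)
        return np.array(finals)

    def ode_mean(dt=0.01):
        """Deterministic rate equation for the mean: dx/dt = k_prod - k_deg * x."""
        x = float(x0)
        for _ in range(int(t_end / dt)):
            x += dt * (k_prod - k_deg * x)
        return x

    samples = gillespie()
    print("stochastic mean/std:", samples.mean(), samples.std())   # ~20 with spread
    print("ODE mean:", ode_mean())                                  # ~20, no spread
    ```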

  15. Development of probabilistic regional climate scenario in East Asia

    NASA Astrophysics Data System (ADS)

    Dairaku, K.; Ueno, G.; Ishizaki, N. N.

    2015-12-01

    Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. In order to develop probabilistic regional climate information that represents the uncertainty in climate scenario experiments in East Asia (CORDEX-EA and Japan), the probability distribution of 2 m air temperature was estimated using a regression model developed for this purpose. The method is easily applicable to other regions and other physical quantities, and can be used to downscale to finer scales depending on the availability of observational datasets. Probabilistic climate information for the present (1969-1998) and future (2069-2098) climate was developed using 21 CMIP3 SRES A1b scenario models and observational data (CRU_TS3.22 and University of Delaware in CORDEX-EA, NIAES AMeDAS mesh data in Japan). The prototype of probabilistic information for CORDEX-EA and Japan represents the quantified structural uncertainties of multi-model ensemble experiments of climate change scenarios. Appropriate combinations of statistical methods and the optimization of ensemble downscaling experiments using multiple general circulation models (GCMs) and regional climate models (RCMs) are investigated.

  16. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses.

    PubMed

    Fuller, Robert William; Wong, Tony E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.

  17. a Probabilistic Embedding Clustering Method for Urban Structure Detection

    NASA Astrophysics Data System (ADS)

    Lin, X.; Li, H.; Zhang, Y.; Gao, L.; Zhao, L.; Deng, M.

    2017-09-01

    Urban structure detection is a basic task in urban geography. Clustering is a core technology for detecting patterns of urban spatial structure, urban functional regions, and so on. In the big data era, diverse urban sensing datasets recording information such as human behaviour and human social activity suffer from complexity in the form of high dimensionality and high noise. Unfortunately, state-of-the-art clustering methods do not handle high dimensionality and high noise concurrently. In this paper, a probabilistic embedding clustering method is proposed. Firstly, we propose a Probabilistic Embedding Model (PEM) to find latent features in high-dimensional urban sensing data by learning via a probabilistic model. With latent features, we can capture essential features hidden in high-dimensional data, known as patterns; with the probabilistic model, we can also reduce the uncertainty caused by high noise. Secondly, by tuning the parameters, our model can discover two kinds of urban structure, homophily and structural equivalence, that is, communities with intensive interaction or communities playing the same roles in the urban structure. We evaluated the performance of our model by conducting experiments on real-world data, and experiments with real data from Shanghai (China) showed that our method can discover both kinds of urban structure.

  18. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H. W.; Kurth, R. E.

    1991-01-01

    The work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods is described. Three methods were implemented for the engine system influence model. RASCAL was chosen as the principal method, as most component load models were implemented with it. Validation of RASCAL was performed; high accuracy, comparable to that of the Monte Carlo method, can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, either a real flight or a test, has three mission phases: the engine start transient phase, the steady-state phase, and the engine cutoff transient phase. Power level and engine operating inlet conditions change during a mission. The load calculation module provides the steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option. The quasi-steady-state procedure is for engine transient phase calculations. In addition, a few generic probabilistic load models were also developed for specific conditions. These include the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads with specific conditions. For SSME components, turbine blades, transfer ducts, the LOX post, and the high pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static pressure loads and dynamic pressure loads for all four components, centrifugal force for the turbine blade, temperatures for thermal loads for all four components, and structural vibration loads for the ducts and LOX posts.

  19. Three-body system metaphor for the two-slit experiment and Escherichia coli lactose-glucose metabolism.

    PubMed

    Asano, Masanari; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro

    2016-05-28

    We compare the contextual probabilistic structures of the seminal two-slit experiment (quantum interference experiment), the system of three interacting bodies and Escherichia coli lactose-glucose metabolism. We show that they have the same non-Kolmogorov probabilistic structure resulting from multi-contextuality. There are plenty of statistical data with non-Kolmogorov features; in particular, the probabilistic behaviour of neither quantum nor biological systems can be described classically. Biological systems (even cells and proteins) are macroscopic systems and one may try to present a more detailed model of interactions in such systems that lead to quantum-like probabilistic behaviour. The system of interactions between three bodies is one of the simplest metaphoric examples for such interactions. By proceeding further in this way (by playing with n-body systems) we shall be able to find metaphoric mechanical models for complex bio-interactions, e.g. signalling between cells, leading to non-Kolmogorov probabilistic data. © 2016 The Author(s).

  20. Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit.

    PubMed

    Frommholz, Ingo; Roelleke, Thomas

    2016-01-01

    Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and a nice conceptual idea to model Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules, and the semantics associated with the evaluation of the programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague match of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed, and therefore, HySpirit provides probabilistic relational indexes and parallel and distributed processing. The main contribution of this paper is a consolidated view on the methods of the HySpirit system to make PDatalog applicable in real-scale applications that involve a wide range of requirements typical for data (information) management and analysis.

  1. Three-body system metaphor for the two-slit experiment and Escherichia coli lactose–glucose metabolism

    PubMed Central

    Asano, Masanari; Ohya, Masanori; Yamato, Ichiro

    2016-01-01

    We compare the contextual probabilistic structures of the seminal two-slit experiment (quantum interference experiment), the system of three interacting bodies and Escherichia coli lactose–glucose metabolism. We show that they have the same non-Kolmogorov probabilistic structure resulting from multi-contextuality. There are plenty of statistical data with non-Kolmogorov features; in particular, the probabilistic behaviour of neither quantum nor biological systems can be described classically. Biological systems (even cells and proteins) are macroscopic systems and one may try to present a more detailed model of interactions in such systems that lead to quantum-like probabilistic behaviour. The system of interactions between three bodies is one of the simplest metaphoric examples for such interactions. By proceeding further in this way (by playing with n-body systems) we shall be able to find metaphoric mechanical models for complex bio-interactions, e.g. signalling between cells, leading to non-Kolmogorov probabilistic data. PMID:27091163

  2. Trait-Dependent Biogeography: (Re)Integrating Biology into Probabilistic Historical Biogeographical Models.

    PubMed

    Sukumaran, Jeet; Knowles, L Lacey

    2018-06-01

    The development of process-based probabilistic models for historical biogeography has transformed the field by grounding it in modern statistical hypothesis testing. However, most of these models abstract away biological differences, reducing species to interchangeable lineages. We present here the case for reintegration of biology into probabilistic historical biogeographical models, allowing a broader range of questions about biogeographical processes beyond ancestral range estimation or simple correlation between a trait and a distribution pattern, as well as allowing us to assess how inferences about ancestral ranges themselves might be impacted by differential biological traits. We show how new approaches to inference might cope with the computational challenges resulting from the increased complexity of these trait-based historical biogeographical models. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Comparison of Four Probabilistic Models (CARES, Calendex, ConsEspo, SHEDS) to Estimate Aggregate Residential Exposures to Pesticides

    EPA Science Inventory

    Two deterministic models (US EPA’s Office of Pesticide Programs Residential Standard Operating Procedures (OPP Residential SOPs) and Draft Protocol for Measuring Children’s Non-Occupational Exposure to Pesticides by all Relevant Pathways (Draft Protocol)) and four probabilistic mo...

  4. EXPERIENCES WITH USING PROBABILISTIC EXPOSURE ANALYSIS METHODS IN THE U.S. EPA

    EPA Science Inventory

    Over the past decade various Offices and Programs within the U.S. EPA have either initiated or increased the development and application of probabilistic exposure analysis models. These models have been applied to a broad range of research or regulatory problems in EPA, such as e...

  5. Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    PubMed Central

    Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610
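
    The paper's point is that simple neural learning rules approximate maximum-likelihood learning of a normalized mixture model with Poisson noise. As a baseline reference only, the hedged sketch below runs plain EM for a two-component Poisson mixture on synthetic counts; it illustrates the maximum-likelihood target, not the neural circuit, and all numbers are made up.

    # Minimal EM for a Poisson mixture (the maximum-likelihood baseline that the
    # paper's neural learning rules are shown to approximate).
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.poisson(3, 300), rng.poisson(12, 200)])  # synthetic counts

    K = 2
    pi = np.full(K, 1.0 / K)                     # mixing weights
    lam = np.array([2.0, 10.0])                  # initial Poisson rates

    for _ in range(100):
        # E-step: responsibilities r[n, k] proportional to pi_k * Poisson(x_n | lam_k)
        logp = np.log(pi) + poisson.logpmf(x[:, None], lam[None, :])
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights and rates
        Nk = r.sum(axis=0)
        pi = Nk / len(x)
        lam = (r * x[:, None]).sum(axis=0) / Nk

    print(pi, lam)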

  6. Multi-atlas label fusion using hybrid of discriminative and generative classifiers for segmentation of cardiac MR images.

    PubMed

    Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang

    2015-08-01

    Multi-atlas segmentation first registers each atlas image to the target image and transfers the label of each atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic Random Forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian Mixture Model for each label, providing the likelihood of the voxel belonging to that label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 when segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.
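
    As a hedged, toy-scale sketch of the fusion rule described above (a discriminative prior multiplied by a generative likelihood under Bayes' rule), the snippet below combines a Random Forest's class probabilities with per-label Gaussian-mixture log-likelihoods. The data are synthetic stand-ins for voxel patch features, not cardiac MR, and all settings are placeholders.

    # Hybrid discriminative/generative label fusion, in miniature.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    n, d = 1000, 16                               # "voxels" and patch-feature dimension
    y = rng.integers(0, 2, size=n)                # two labels, e.g. myocardium vs background
    X = rng.normal(size=(n, d)) + y[:, None]      # features weakly separated by label

    rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    prior = rf.predict_proba(X)                   # discriminative P(label | features)

    # one GMM per label over patch features -> log P(patch | label)
    loglik = np.column_stack([
        GaussianMixture(n_components=2, random_state=0).fit(X[y == k]).score_samples(X)
        for k in (0, 1)
    ])

    log_post = np.log(prior + 1e-12) + loglik     # Bayes' rule, up to normalisation
    fused = log_post.argmax(axis=1)
    print("agreement with the prior alone:", (fused == prior.argmax(axis=1)).mean())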

  7. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    PubMed

    Keck, Christian; Savin, Cristina; Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.

  8. Structural reliability assessment capability in NESSUS

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.

    1992-01-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  9. Structural reliability assessment capability in NESSUS

    NASA Astrophysics Data System (ADS)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  10. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design approach based on DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated with the proposed method, considering fluid-structure interaction. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby enriches the theory and methods of mechanical reliability design.

  11. A novel probabilistic framework for event-based speech recognition

    NASA Astrophysics Data System (ADS)

    Juneja, Amit; Espy-Wilson, Carol

    2003-10-01

    One of the reasons for unsatisfactory performance of the state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR based on the idea of the representation of speech sounds by bundles of binary valued articulatory phonetic features is proposed. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of manner phonetic features (syllabic, sonorant and continuant) and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.

  12. Suitability of energy cone for probabilistic volcanic hazard assessment: validation tests at Somma-Vesuvius and Campi Flegrei (Italy)

    NASA Astrophysics Data System (ADS)

    Tierz, Pablo; Sandri, Laura; Costa, Antonio; Zaccarelli, Lucia; Di Vito, Mauro Antonio; Sulpizio, Roberto; Marzocchi, Warner

    2016-11-01

    Pyroclastic density currents (PDCs) are gravity-driven hot mixtures of gas and volcanic particles which can propagate at high speed and cover distances up to several tens of kilometers around a given volcano. Therefore, they pose a severe hazard to the surroundings of explosive volcanoes able to produce such phenomena. Despite this threat, probabilistic volcanic hazard assessment (PVHA) of PDCs is still in an early stage of development. PVHA is rooted in the quantification of the large uncertainties (aleatory and epistemic) which characterize volcanic hazard analyses. This quantification typically requires a big dataset of hazard footprints obtained from numerical simulations of the physical process. For PDCs, numerical models range from very sophisticated (not useful for PVHA because of their very long runtimes) to very simple models (criticized because of their highly simplified physics). We present here a systematic and robust validation testing of a simple PDC model, the energy cone (EC), to unravel whether it can be applied to PVHA of PDCs. Using past PDC deposits at Somma-Vesuvius and Campi Flegrei (Italy), we assess the ability of EC to capture the values and variability in some relevant variables for hazard assessment, i.e., area of PDC invasion and maximum runout. In terms of area of invasion, the highest Jaccard coefficients range from 0.33 to 0.86 which indicates an equal or better performance compared to other volcanic mass-flow models. The p values for the observed maximum runouts vary from 0.003 to 0.44. Finally, the frequencies of PDC arrival computed from the EC are similar to those determined from the spatial distribution of past PDC deposits, with high PDC-arrival frequencies over an approximately 8 km radius from the crater area at Somma-Vesuvius and around the Astroni crater at Campi Flegrei. The insights derived from our validation tests seem to indicate that the EC is a suitable candidate to compute PVHA of PDCs.
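
    For reference, the Jaccard coefficient used above to compare simulated and observed invasion areas is simply intersection over union of the two footprints. The sketch below computes it for two made-up footprints on a grid; the footprints are placeholders, not energy-cone output.

    # Jaccard coefficient |A ∩ B| / |A ∪ B| between two boolean invasion footprints.
    import numpy as np

    def jaccard(a: np.ndarray, b: np.ndarray) -> float:
        a, b = a.astype(bool), b.astype(bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0

    observed = np.zeros((100, 100), dtype=bool)
    observed[30:70, 30:70] = True                  # placeholder deposit footprint
    simulated = np.zeros_like(observed)
    simulated[35:80, 25:65] = True                 # placeholder simulated footprint
    print(round(jaccard(simulated, observed), 2))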

  13. Probabilistic Seismic Risk Model for Western Balkans

    NASA Astrophysics Data System (ADS)

    Stejskal, Vladimir; Lorenzo, Francisco; Pousse, Guillaume; Radovanovic, Slavica; Pekevski, Lazo; Dojcinovski, Dragi; Lokin, Petar; Petronijevic, Mira; Sipka, Vesna

    2010-05-01

    A probabilistic seismic risk model for insurance and reinsurance purposes is presented for an area of the Western Balkans covering former Yugoslavia and Albania. This territory experienced many severe earthquakes during past centuries, producing significant damage to many population centres in the region. The highest hazard is related to the external Dinarides, namely to the collision zone of the Adriatic plate. The model is based on a unified catalogue for the region and a seismic source model consisting of more than 30 zones covering all three main structural units: the Southern Alps, the Dinarides and the south-western margin of the Pannonian Basin. A probabilistic methodology using Monte Carlo simulation was applied to generate the hazard component of the model. A unique set of damage functions, based on both loss experience and engineering assessments, is used to convert the modelled ground motion severity into the monetary loss.

  14. Probabilistic risk models for multiple disturbances: an example of forest insects and wildfires

    Treesearch

    Haiganoush K. Preisler; Alan A. Ager; Jane L. Hayes

    2010-01-01

    Building probabilistic risk models for highly random forest disturbances like wildfire and forest insect outbreaks is challenging. Modeling the interactions among natural disturbances is even more difficult. In the case of wildfire and forest insects, we looked at the probability of a large fire given an insect outbreak and also the incidence of insect outbreaks...

  15. Mechanisms of Hierarchical Reinforcement Learning in Corticostriatal Circuits 1: Computational Analysis

    PubMed Central

    Badre, David

    2012-01-01

    Growing evidence suggests that the prefrontal cortex (PFC) is organized hierarchically, with more anterior regions having increasingly abstract representations. How does this organization support hierarchical cognitive control and the rapid discovery of abstract action rules? We present computational models at different levels of description. A neural circuit model simulates interacting corticostriatal circuits organized hierarchically. In each circuit, the basal ganglia gate frontal actions, with some striatal units gating the inputs to PFC and others gating the outputs to influence response selection. Learning at all of these levels is accomplished via dopaminergic reward prediction error signals in each corticostriatal circuit. This functionality allows the system to exhibit conditional if–then hypothesis testing and to learn rapidly in environments with hierarchical structure. We also develop a hybrid Bayesian-reinforcement learning mixture of experts (MoE) model, which can estimate the most likely hypothesis state of individual participants based on their observed sequence of choices and rewards. This model yields accurate probabilistic estimates about which hypotheses are attended by manipulating attentional states in the generative neural model and recovering them with the MoE model. This 2-pronged modeling approach leads to multiple quantitative predictions that are tested with functional magnetic resonance imaging in the companion paper. PMID:21693490

  16. Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainty or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
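
    PSAM couples probabilistic algorithms with finite element models; as a hedged, much-simplified illustration of the underlying idea (propagating random material and load inputs through a structural response function to obtain a response distribution), the sketch below uses Monte Carlo sampling with a closed-form cantilever tip deflection in place of a finite element model. All distributions and dimensions are invented.

    # Monte Carlo propagation of input uncertainty through a structural response.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    E = rng.normal(200e9, 10e9, n)        # modulus of elasticity [Pa], assumed scatter
    P = rng.normal(5e3, 0.5e3, n)         # tip load [N], assumed scatter
    L, I = 1.0, 8e-7                      # beam length [m], second moment of area [m^4]

    delta = P * L**3 / (3 * E * I)        # deterministic response function (tip deflection)
    print("mean deflection  %.4f m" % delta.mean())
    print("99th percentile  %.4f m" % np.percentile(delta, 99))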

  17. Simulation investigation of multipactor in metal components for space application with an improved secondary emission model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yun, E-mail: genliyun@126.com; Cui, Wan-Zhao, E-mail: cuiwanzhao@126.com; Wang, Hong-Guang

    2015-05-15

    Effects of the secondary electron emission (SEE) phenomenon of a metal surface on the multipactor analysis of microwave components are investigated numerically and experimentally in this paper. Both the secondary electron yield (SEY) and the emitted energy spectrum measurements are performed on silver plated samples for accurate description of the SEE phenomenon. A phenomenological probabilistic model based on SEE physics is utilized and fitted accurately to the measured SEY and emitted energy spectrum of the conditioned surface material of microwave components. Specifically, the phenomenological probabilistic model is extended mathematically to the low primary energy end, below 20 eV, since no accurate measurement data can be obtained there. Embedding the phenomenological probabilistic model into the Electromagnetic Particle-In-Cell (EM-PIC) method, the electronic resonant multipacting in microwave components can be tracked and hence the multipactor threshold can be predicted. The threshold prediction error of the transformer and the coaxial filter is 0.12 dB and 1.5 dB, respectively. Simulation results demonstrate that the discharge threshold is strongly dependent on the SEYs and the energy spectrum in the low energy end (below 50 eV). Multipacting simulation results agree quite well with experiments in practical components, while the phenomenological probabilistic model fits both the SEY and the emission energy spectrum better than the traditionally used model and distribution. The EM-PIC simulation method with the phenomenological probabilistic model for the surface collision simulation has been demonstrated for predicting the multipactor threshold in metal components for space application.

  18. Probabilistic choice models in health-state valuation research: background, theories, assumptions and applications.

    PubMed

    Arons, Alexander M M; Krabbe, Paul F M

    2013-02-01

    Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the area of health evaluation. They increasingly use discrete choice models based on random utility theory to derive values for healthcare goods or services. Recent attempts have been made to use discrete choice models as an alternative method to derive values for health states. In this article, various probabilistic choice models are described according to their underlying theory. A historical overview traces their development and applications in diverse fields. The discussion highlights some theoretical and technical aspects of the choice models and their similarity and dissimilarity. The objective of the article is to elucidate the position of each model and their applications for health-state valuation.

  19. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
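
    To make the contrast concrete, the hypothetical sketch below scores sites with a small position weight matrix (independent positions, as in the probabilistic model discussed above) and then maps the same scores through a saturating, concentration-dependent occupancy, one simple way the non-linear affinity-to-probability relationship can arise. The matrix, the mapping and all numbers are illustrative and are not taken from the paper.

    # PWM site probability vs a saturating, biophysical-style occupancy (illustrative).
    import numpy as np

    pwm = np.array([[0.7, 0.1, 0.1, 0.1],     # P(base | position), columns = A, C, G, T
                    [0.1, 0.8, 0.05, 0.05],
                    [0.25, 0.25, 0.25, 0.25]])
    base_index = {"A": 0, "C": 1, "G": 2, "T": 3}

    def pwm_probability(site: str) -> float:
        # product of independent per-position probabilities
        return float(np.prod([pwm[i, base_index[b]] for i, b in enumerate(site)]))

    def occupancy(site: str, concentration: float) -> float:
        # hypothetical mapping: relative PWM probability acts as a relative affinity,
        # occupancy saturates at high affinity and high protein concentration
        affinity = pwm_probability(site) / pwm_probability("ACA")
        return concentration * affinity / (1.0 + concentration * affinity)

    for s in ("ACA", "TCA", "TTT"):
        print(s, round(pwm_probability(s), 5), round(occupancy(s, concentration=10.0), 3))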

  20. Probabilistic Learning by Rodent Grid Cells

    PubMed Central

    Cheung, Allen

    2016-01-01

    Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition but their diverse response properties still defy explanation. No plausible model exists which explains stable grids in darkness for twenty minutes or longer, even though stable grids in darkness were among the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties such as attractor dynamics and grid anisotropy seem to be at odds with one another unless additional properties are assumed such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments is reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single cell level with attractor dynamics at the cell ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning. These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population readout of a set of probabilistic spatial computations. PMID:27792723

  1. A Probabilistic Model for Diagnosing Misconceptions by a Pattern Classification Approach.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi K.

    A probabilistic approach is introduced to classify and diagnose erroneous rules of operation resulting from a variety of misconceptions ("bugs") in a procedural domain of arithmetic. The model is contrasted with the deterministic approach which has commonly been used in the field of artificial intelligence, and the advantage of treating the…

  2. Nonlinear probabilistic finite element models of laminated composite shells

    NASA Technical Reports Server (NTRS)

    Engelstad, S. P.; Reddy, J. N.

    1993-01-01

    A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed and results are presented in the form of mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation with the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data is compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.

  3. A unified probabilistic approach to improve spelling in an event-related potential-based brain-computer interface.

    PubMed

    Kindermans, Pieter-Jan; Verschore, Hannes; Schrauwen, Benjamin

    2013-10-01

    In recent years, in an attempt to maximize performance, machine learning approaches for event-related potential (ERP) spelling have become more and more complex. In this paper, we have taken a step back as we wanted to improve the performance without building an overly complex model, that cannot be used by the community. Our research resulted in a unified probabilistic model for ERP spelling, which is based on only three assumptions and incorporates language information. On top of that, the probabilistic nature of our classifier yields a natural dynamic stopping strategy. Furthermore, our method uses the same parameters across 25 subjects from three different datasets. We show that our classifier, when enhanced with language models and dynamic stopping, improves the spelling speed and accuracy drastically. Additionally, we would like to point out that as our model is entirely probabilistic, it can easily be used as the foundation for complex systems in future work. All our experiments are executed on publicly available datasets to allow for future comparison with similar techniques.
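
    The paper's exact model is not reproduced here; as a minimal, hypothetical sketch of the general mechanism it describes (a posterior over candidate letters updated trial by trial, a language-model prior, and dynamic stopping once one candidate is confident enough), the snippet below uses made-up classifier scores, priors and thresholds.

    # Toy probabilistic speller: language-model prior + per-trial evidence + dynamic stopping.
    import numpy as np

    letters = list("ABCDE")
    lm_prior = np.array([0.4, 0.25, 0.2, 0.1, 0.05])   # hypothetical language-model prior
    posterior = lm_prior.copy()

    rng = np.random.default_rng(4)
    true_letter = 0                                     # "A" is the attended letter

    for trial in range(1, 21):
        # hypothetical classifier output: higher score for the attended letter, plus noise
        scores = rng.normal(0.0, 1.0, len(letters))
        scores[true_letter] += 1.0
        likelihood = np.exp(scores - scores.max())      # softmax-style pseudo-likelihood
        posterior = posterior * likelihood
        posterior /= posterior.sum()
        if posterior.max() > 0.95:                      # dynamic stopping threshold
            break

    print(f"decided '{letters[int(posterior.argmax())]}' after {trial} trials")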

  4. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
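
    For readers unfamiliar with "explaining away", the toy sketch below estimates it by plain Monte Carlo sampling in a three-variable network with converging arrows; the network and its probabilities are generic textbook choices, and ordinary sampling stands in for the paper's neural sampling dynamics.

    # Explaining away in a tiny Bayesian network (two causes, one common effect).
    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000
    rain = rng.random(n) < 0.2
    sprinkler = rng.random(n) < 0.3
    p_wet = 0.05 + 0.9 * (rain | sprinkler)          # noisy-OR-style common effect
    wet = rng.random(n) < p_wet

    cond = wet                                        # observe wet grass
    print("P(rain | wet)            =", round(rain[cond].mean(), 3))
    print("P(rain | wet, sprinkler) =", round(rain[cond & sprinkler].mean(), 3))
    # the second probability is lower: the sprinkler "explains away" the rain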

  5. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets.

    PubMed

    Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-05-01

    Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10^-20) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., "critical care," "pneumonia," "neurologic evaluation"). Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
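
    As a hedged sketch of the orders-as-words analogy, the snippet below fits scikit-learn's LatentDirichletAllocation to a synthetic admission-by-order count matrix; the two synthetic "care themes" and all counts are placeholders for real EHR data.

    # Latent Dirichlet allocation over clinical-order "documents" (synthetic data).
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(6)
    n_patients, n_orders = 200, 50
    # two synthetic "care themes" drawing on different halves of the order vocabulary
    theme_a = rng.poisson(3, size=(n_patients // 2, n_orders)) * (np.arange(n_orders) < 25)
    theme_b = rng.poisson(3, size=(n_patients // 2, n_orders)) * (np.arange(n_orders) >= 25)
    counts = np.vstack([theme_a, theme_b])

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    admission_topics = lda.transform(counts)         # topic mixture per admission
    top_orders = lda.components_.argsort(axis=1)[:, -5:]
    print(top_orders)                                # most probable order indices per topic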

  6. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets

    PubMed Central

    Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-01-01

    Objective: Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. Materials and Methods: The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Results: Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10^−20) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., “critical care,” “pneumonia,” “neurologic evaluation”). Discussion: Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Conclusion: Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. PMID:27655861

  7. A Guide to the Literature on Learning Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Friedland, Peter (Technical Monitor)

    1994-01-01

    This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and more generally, learning probabilistic graphical models. Because many problems in artificial intelligence, statistics and neural networks can be represented as a probabilistic graphical model, this area provides a unifying perspective on learning. This paper organizes the research in this area along methodological lines of increasing complexity.

  8. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses

    PubMed Central

    Wong, Tony E.; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095

  9. Application of the probabilistic approximate analysis method to a turbopump blade analysis. [for Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.

    1990-01-01

    An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.

  10. Probabilistic Models for Solar Particle Events

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.

    2009-01-01

    Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the data base and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst case spectrum as a function of confidence level. The spectral representation that best fits these worst case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.

  11. Probabilistic evaluation of uncertainties and risks in aerospace components

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.

    1992-01-01

    A methodology is presented for the computational simulation of primitive variable uncertainties, and attention is given to the simulation of specific aerospace components. Specific examples treated encompass a probabilistic material behavior model, as well as static, dynamic, and fatigue/damage analyses of a turbine blade in a mistuned bladed rotor in the SSME turbopumps. An account is given of the use of the NESSUS probabilistic FEM/CFD analysis code.

  12. Probabilistic Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Strack, William C.; Nagpal, Vinod K.

    2010-01-01

    PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine the effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using software packages under development, such as System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.

  13. Psychological Plausibility of the Theory of Probabilistic Mental Models and the Fast and Frugal Heuristics

    ERIC Educational Resources Information Center

    Dougherty, Michael R.; Franco-Watkins, Ana M.; Thomas, Rick

    2008-01-01

    The theory of probabilistic mental models (PMM; G. Gigerenzer, U. Hoffrage, & H. Kleinbolting, 1991) has had a major influence on the field of judgment and decision making, with the most recent important modifications to PMM theory being the identification of several fast and frugal heuristics (G. Gigerenzer & D. G. Goldstein, 1996). These…

  14. Developing probabilistic models to predict amphibian site occupancy in a patchy landscape

    Treesearch

    R. A. Knapp; K.R. Matthews; H. K. Preisler; R. Jellison

    2003-01-01

    Human-caused fragmentation of habitats is threatening an increasing number of animal and plant species, making an understanding of the factors influencing patch occupancy ever more important. The overall goal of the current study was to develop probabilistic models of patch occupancy for the mountain yellow-legged frog (Rana muscosa). This once-common species...

  15. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts, which turned out to be of limited applicability because they strongly overestimate migration. Probabilistic migration modelling makes it possible to consider the uncertainty of the mass-transfer parameters as well as of other model inputs. With respect to a functional barrier, the most important parameters among others are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and is capable of applying Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (i.e., diffusion coefficient and layer thickness), predicts migration results with related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in view of three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic migration modelling and related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, and associated migration risks and potential safety concerns can be identified at an early stage of packaging development. Furthermore, dedicated material selection exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
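
    A minimal Monte Carlo sketch of this idea is shown below: sample an uncertain diffusion coefficient, push it through a simple short-time diffusion estimate of migration, and report percentiles rather than a single worst-case value. The formula is a textbook semi-infinite approximation used only as a stand-in for a full migration model (the barrier-thickness input is omitted for brevity), and every number is illustrative rather than taken from the paper.

    # Monte Carlo over an uncertain diffusion coefficient -> migration percentiles.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000
    c0, rho = 500.0, 1.0e3                 # initial concentration [mg/kg], polymer density [kg/m^3]
    t = 10 * 24 * 3600.0                   # contact time [s]
    D = 10 ** rng.normal(-14.0, 0.5, n)    # diffusion coefficient [m^2/s], lognormal uncertainty

    # short-time (semi-infinite) approximation of migrated mass per contact area [mg/m^2]
    m = 2.0 * c0 * rho * np.sqrt(D * t / np.pi)

    for q in (50, 95, 99):
        print(f"P{q}: {np.percentile(m, q):.3g} mg/m^2")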

  16. Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data

    NASA Astrophysics Data System (ADS)

    Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei

    2009-03-01

    We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consists of some conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model matches both the experimental data and is consistent with classical probability theory.

  17. Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo

    NASA Astrophysics Data System (ADS)

    Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik

    2018-05-01

    Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods, the particle Metropolis-Hastings algorithm, which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods, including particle Metropolis-Hastings, to a large group of users without requiring them to know all the underlying mathematical details.
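
    The tutorial itself uses a dedicated modeling language; as a rough plain-Python sketch of the same mechanism under simplifying assumptions, the snippet below runs a bootstrap particle filter to estimate the likelihood of a toy nonlinear state-space model and uses that estimate inside a random-walk Metropolis-Hastings loop over a single parameter. The model, the flat prior and all tuning constants are made up for illustration.

    # Particle Metropolis-Hastings for x_t = a*tanh(x_{t-1}) + v_t, y_t = x_t + e_t.
    import numpy as np

    rng = np.random.default_rng(8)
    T, a_true, q, r = 100, 0.9, 0.5, 0.5
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = a_true * np.tanh(x[t - 1]) + rng.normal(0, q)
    y = x + rng.normal(0, r, T)                       # simulated observations

    def log_lik_estimate(a, n_particles=100):
        """Bootstrap particle filter estimate of log p(y | a) (up to a constant)."""
        particles = np.zeros(n_particles)
        ll = 0.0
        for t in range(T):
            particles = a * np.tanh(particles) + rng.normal(0, q, n_particles)
            logw = -0.5 * ((y[t] - particles) / r) ** 2
            ll += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
            w = np.exp(logw - logw.max()); w /= w.sum()
            particles = particles[rng.choice(n_particles, n_particles, p=w)]
        return ll

    a, ll = 0.5, log_lik_estimate(0.5)
    samples = []
    for _ in range(300):                              # random-walk Metropolis-Hastings over a
        a_prop = a + rng.normal(0, 0.1)
        ll_prop = log_lik_estimate(a_prop)
        if np.log(rng.random()) < ll_prop - ll:       # flat prior, symmetric proposal
            a, ll = a_prop, ll_prop
        samples.append(a)
    print("posterior mean of a ≈", round(np.mean(samples[50:]), 2))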

  18. Asking better questions: How presentation formats influence information search.

    PubMed

    Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D

    2017-08-01

    While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation.

    PubMed

    Pearce, Marcus T

    2018-05-11

    Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception (expectation, emotion, memory, similarity, segmentation, and meter) can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.

  20. A note on probabilistic models over strings: the linear algebra approach.

    PubMed

    Bouchard-Côté, Alexandre

    2013-12-01

    Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proving method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low rank matrix approximation methods, to string-valued inference problems.
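
    To give a flavour of the linear-algebra view, the toy sketch below computes the total weight of all strings generated by a small two-state weighted automaton as a matrix geometric series, Z = start^T (I - M)^(-1) stop, where M is the sum of the per-symbol transition matrices, and checks it against a truncated sum over string lengths. The automaton is invented purely for illustration and is unrelated to the TKF91 model.

    # Normalization of a toy weighted automaton via a matrix geometric series.
    import numpy as np

    M_a = np.array([[0.2, 0.1],      # transition weights when emitting symbol 'a'
                    [0.0, 0.3]])
    M_b = np.array([[0.1, 0.2],      # transition weights when emitting symbol 'b'
                    [0.1, 0.1]])
    M = M_a + M_b                    # summed over the alphabet; spectral radius < 1 here
    start = np.array([1.0, 0.0])
    stop = np.array([0.1, 0.3])      # stopping weights per state

    Z_closed = start @ np.linalg.inv(np.eye(2) - M) @ stop

    # check: sum over string lengths L, since M**L accumulates all length-L strings
    Z_trunc = sum(start @ np.linalg.matrix_power(M, L) @ stop for L in range(50))
    print(round(float(Z_closed), 6), round(float(Z_trunc), 6))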

  1. A probabilistic framework for microarray data analysis: fundamental probability models and statistical inference.

    PubMed

    Ogunnaike, Babatunde A; Gelmi, Claudio A; Edwards, Jeremy S

    2010-05-21

    Gene expression studies generate large quantities of data with the defining characteristic that the number of genes (whose expression profiles are to be determined) exceeds the number of available replicates by several orders of magnitude. Standard spot-by-spot analysis still seeks to extract useful information for each gene on the basis of the number of available replicates, and thus plays to the weakness of microarrays. On the other hand, because of the data volume, treating the entire data set as an ensemble, and developing theoretical distributions for these ensembles, provides a framework that plays instead to the strength of microarrays. We present theoretical results showing that, under reasonable assumptions, the distribution of microarray intensities follows the Gamma model, with the biological interpretations of the model parameters emerging naturally. We subsequently establish that for each microarray data set, the fractional intensities can be represented as a mixture of Beta densities, and develop a procedure for using these results to draw statistical inference regarding differential gene expression. We illustrate the results with experimental data from gene expression studies on Deinococcus radiodurans following DNA damage using cDNA microarrays. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
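
    As a small, hedged illustration of this ensemble view, the snippet below fits a Gamma model to pooled (synthetic) intensities and a single Beta density to the corresponding fractional intensities using SciPy maximum-likelihood fits. The paper uses a mixture of Beta densities and real cDNA data, so this is only the simplest version of the idea, with made-up parameters.

    # Ensemble fits: Gamma for intensities, Beta for fractional intensities (synthetic data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    intensities = rng.gamma(shape=2.0, scale=300.0, size=20_000)   # stand-in for one array

    shape, loc, scale = stats.gamma.fit(intensities, floc=0)       # maximum-likelihood fit
    print(f"fitted Gamma: shape={shape:.2f}, scale={scale:.1f}")

    # fractional intensities x / (x + y), modelled in the paper as a Beta mixture;
    # a single Beta component is fitted here for simplicity
    y = rng.gamma(shape=2.0, scale=300.0, size=20_000)
    frac = intensities / (intensities + y)
    a, b, _, _ = stats.beta.fit(frac, floc=0, fscale=1)
    print(f"fitted Beta: a={a:.2f}, b={b:.2f}")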

  2. Statistical modeling, detection, and segmentation of stains in digitized fabric images

    NASA Astrophysics Data System (ADS)

    Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.

    2007-02-01

    This paper will describe a novel and automated system based on a computer vision approach, for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision theoretic problem, where the null hypothesis corresponds to absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first order GMM and a second order GMM respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, ragu sauce, Revlon makeup and grape juice. The decision theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on this set of images.
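
    A hedged, toy-scale sketch of the detection step is given below: fit one- and two-component Gaussian mixtures to pixel values and accept the stain hypothesis only when an information-criterion comparison favours the second component. Scikit-learn's BIC is used here as a convenient stand-in for the paper's MDL statistic, and the pixel data are synthetic placeholders.

    # Stain detection as model-order selection between first- and second-order GMMs.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(10)
    clean = rng.normal(0.6, 0.05, size=(5000, 3))            # fabric-colour pixels
    stained = np.vstack([clean, rng.normal(0.35, 0.05, size=(500, 3))])

    def stain_detected(pixels):
        gmm1 = GaussianMixture(n_components=1, random_state=0).fit(pixels)
        gmm2 = GaussianMixture(n_components=2, random_state=0).fit(pixels)
        return gmm2.bic(pixels) < gmm1.bic(pixels)            # reject the one-component null

    print("clean image   ->", stain_detected(clean))          # expected: False
    print("stained image ->", stain_detected(stained))        # expected: True
    # segmentation would then threshold gmm2.predict_proba() for the stain component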

  3. Integrated presentation of ecological risk from multiple stressors

    PubMed Central

    Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman

    2016-01-01

    Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic. PMID:27782171

  4. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    PubMed Central

    Vesperini, Fabio; Schuller, Björn

    2017-01-01

    In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No previous study has compared these efforts to automatically recognize novel events from audio signals or given a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and to provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% in average F-measure over the three databases. PMID:28182121
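    A minimal sketch of the reconstruction-error idea (not the authors' architecture or feature set): an LSTM autoencoder reconstructs each frame of spectral features, and frames with unusually high reconstruction error are flagged as novel. Network sizes, the input shape and the threshold rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Encode a sequence of spectral frames and reconstruct it frame by frame."""
    def __init__(self, n_features=26, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, time, n_features)
        z, _ = self.encoder(x)
        y, _ = self.decoder(z)
        return self.out(y)

model = LSTMAutoencoder()
criterion = nn.MSELoss(reduction="none")

# Hypothetical batch of auditory spectral frames (e.g., Mel bands over time).
x = torch.randn(8, 100, 26)
recon = model(x)

# Per-frame reconstruction error is the activation signal for novelty detection.
frame_error = criterion(recon, x).mean(dim=-1)           # (batch, time)
threshold = frame_error.mean() + 2 * frame_error.std()   # simple illustrative threshold
novel_frames = frame_error > threshold
print("flagged frames:", novel_frames.sum().item())
```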

  5. An analytical probabilistic model of the quality efficiency of a sewer tank

    NASA Astrophysics Data System (ADS)

    Balistrocchi, Matteo; Grossi, Giovanna; Bacchi, Baldassare

    2009-12-01

    The assessment of the efficiency of a storm water storage facility devoted to sewer overflow control in urban areas strictly depends on the ability to model the main features of the rainfall-runoff routing process and the related wet-weather pollution delivery. In this paper, the analytical probabilistic approach is shown to be applicable to the development of a tank design method whose potential is comparable to that of continuous simulation. Quality issues of such devices were implemented in the model derivation. The formulation is based on a Weibull probabilistic model of the main characteristics of the rainfall process and on a power law describing the relationship between the dimensionless storm water cumulative runoff volume and the dimensionless cumulative pollutograph. Following this approach, efficiency indexes were established. The proposed model was verified by comparing its results to those obtained by continuous simulations; satisfactory agreement is shown for the proposed efficiency indexes.

  6. PROBABILISTIC MODELING FOR ADVANCED HUMAN EXPOSURE ASSESSMENT

    EPA Science Inventory

    Human exposures to environmental pollutants widely vary depending on the emission patterns that result in microenvironmental pollutant concentrations, as well as behavioral factors that determine the extent of an individual's contact with these pollutants. Probabilistic human exp...

  7. Probabilistic boundary element method

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Raveendra, S. T.

    1989-01-01

    The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM) which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results to a set of random variables. As such, PBEM performs analogously to other structural analysis codes, such as finite element codes, in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible and therefore volume modeling is generally required.

  8. A probabilistic approach to radiative energy loss calculations for optically thick atmospheres - Hydrogen lines and continua

    NASA Technical Reports Server (NTRS)

    Canfield, R. C.; Ricchiazzi, P. J.

    1980-01-01

    An approximate probabilistic radiative transfer equation and the statistical equilibrium equations are simultaneously solved for a model hydrogen atom consisting of three bound levels and an ionization continuum. The transfer equation for L-alpha, L-beta, H-alpha, and the Lyman continuum is explicitly solved assuming complete redistribution. The accuracy of this approach is tested by comparing source functions and radiative loss rates to values obtained with a method that solves the exact transfer equation. Two recent model solar-flare chromospheres are used for this test. It is shown that for the test atmospheres the probabilistic method gives values of the radiative loss rate that are characteristically good to a factor of 2. The advantage of this probabilistic approach is that it retains a description of the dominant physical processes of radiative transfer in the complete redistribution case, yet it achieves a major reduction in computational requirements.

  9. Time Alignment as a Necessary Step in the Analysis of Sleep Probabilistic Curves

    NASA Astrophysics Data System (ADS)

    Rošt'áková, Zuzana; Rosipal, Roman

    2018-02-01

    Sleep can be characterised as a dynamic process that passes through a finite set of sleep stages during the night. The standard Rechtschaffen and Kales sleep model produces a discrete representation of sleep and does not take into account its dynamic structure. In contrast, the continuous sleep representation provided by the probabilistic sleep model accounts for the dynamics of the sleep process. However, analysis of the sleep probabilistic curves is problematic when time misalignment is present. In this study, we highlight the necessity of curve synchronisation before further analysis. The original and time-aligned sleep probabilistic curves were transformed into a finite-dimensional vector space, and their ability to predict subjects' age or daily measures was evaluated. We conclude that curve alignment significantly improves the prediction of the daily measures, especially in the case of the S2-related sleep states or slow wave sleep.

  10. Leveraging Past and Current Measurements to Probabilistically Nowcast Low Visibility Procedures at an Airport

    NASA Astrophysics Data System (ADS)

    Mayr, G. J.; Kneringer, P.; Dietz, S. J.; Zeileis, A.

    2016-12-01

    Low visibility or a low cloud ceiling reduces the capacity of airports by requiring special low visibility procedures (LVP) for incoming and departing aircraft. Probabilistic forecasts of when such procedures will become necessary help to mitigate delays and economic losses. We compare the performance of probabilistic nowcasts from two statistical methods: ordered logistic regression, and trees and random forests. These models harness historic and current meteorological measurements in the vicinity of the airport and LVP states, and incorporate diurnal and seasonal climatological information via generalized additive models (GAM). The methods are applied at Vienna International Airport (Austria). The performance is benchmarked against climatology, persistence and human forecasters.
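    A minimal sketch of the ordered-logit component of such a nowcast: a linear predictor built from recent measurements is mapped through ordered thresholds to probabilities of the ordered LVP categories. The coefficients, thresholds and predictor values below are made up for illustration, not parameters of the study's model.

```python
import numpy as np

def ordered_logit_probs(eta, thresholds):
    """P(category k) for ordered categories defined by increasing thresholds."""
    cdf = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds) - eta)))   # P(Y <= k)
    cdf = np.concatenate([cdf, [1.0]])
    return np.diff(np.concatenate([[0.0], cdf]))

# Hypothetical predictors: standardized visibility and ceiling one hour ago.
beta = np.array([-1.2, -0.8])
x_now = np.array([-1.5, -0.7])            # low visibility, low ceiling
eta = beta @ x_now

probs = ordered_logit_probs(eta, thresholds=[-1.0, 0.5, 2.0])   # four ordered LVP states
print(dict(zip(["no LVP", "LVP1", "LVP2", "LVP3"], probs.round(3))))
```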

  11. Composite Load Spectra for Select Space Propulsion Structural Components

    NASA Technical Reports Server (NTRS)

    Ho, Hing W.; Newell, James F.

    1994-01-01

    Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for the simulation of composite loads (dynamic, acoustic, high-pressure, high-rotational-speed, etc.) with statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components, in conjunction with PSAM (Probabilistic Structural Analysis Method), to perform probabilistic load evaluations and life predictions are also described to illustrate the effectiveness of the coupled-model approach.

  12. Probabilistic modeling of the evolution of gene synteny within reconciled phylogenies

    PubMed Central

    2015-01-01

    Background: Most models of genome evolution concern either genetic sequences, gene content or gene order. They sometimes integrate two of the three levels, but rarely all three. Probabilistic models of gene order evolution usually have to assume constant gene content or adopt a presence/absence coding of gene neighborhoods which is blind to complex events modifying gene content. Results: We propose a probabilistic evolutionary model for gene neighborhoods, allowing genes to be inserted, duplicated or lost. It uses reconciled phylogenies, which integrate sequence and gene content evolution. We are then able to optimize parameters such as phylogeny branch lengths, or probabilistic laws depicting the diversity of susceptibility of syntenic regions to rearrangements. We reconstruct a structure for ancestral genomes by optimizing a likelihood, keeping track of all evolutionary events at the level of gene content and gene synteny. Ancestral syntenies are associated with a probability of presence. We implemented the model with the restriction that at most one gene duplication separates two speciations in reconciled gene trees. We reconstruct ancestral syntenies on a set of 12 Drosophila genomes, and compare the evolutionary rates along the branches and along the sites. We compare with a parsimony method and find a significant number of results not supported by the posterior probability. The model is implemented in the Bio++ library. It thus benefits from and enriches the classical models and methods for molecular evolution. PMID:26452018

  13. A PROBABILISTIC EXPOSURE ASSESSMENT FOR CHILDREN WHO CONTACT CCA-TREATED PLAYSETS AND DECKS USING THE STOCHASTIC HUMAN EXPOSURE AND DOSE SIMULATION (SHEDS) MODEL FOR THE WOOD PRESERVATIVE EXPOSURE SCENARIO

    EPA Science Inventory

    The U.S. Environmental Protection Agency has conducted a probabilistic exposure and dose assessment on the arsenic (As) and chromium (Cr) components of Chromated Copper Arsenate (CCA) using the Stochastic Human Exposure and Dose Simulation model for wood preservatives (SHEDS-Wood...

  14. Probabilistic Based Modeling and Simulation Assessment

    DTIC Science & Technology

    2010-06-01

    different crash and blast scenarios. With the integration of the high fidelity neck and head model, a methodology to calculate the probability of injury...variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment impact, and...first area of focus is to develop a methodology to integrate probabilistic analysis into finite element analysis of vehicle collisions and blast. The

  15. A Probabilistic Model for Students' Errors and Misconceptions on the Structure of Matter in Relation to Three Cognitive Variables

    ERIC Educational Resources Information Center

    Tsitsipis, Georgios; Stamovlasis, Dimitrios; Papageorgiou, George

    2012-01-01

    In this study, the effect of 3 cognitive variables such as logical thinking, field dependence/field independence, and convergent/divergent thinking on some specific students' answers related to the particulate nature of matter was investigated by means of probabilistic models. Besides recording and tabulating the students' responses, a combination…

  16. Probabilistic commodity-flow-based focusing of monitoring activities to facilitate early detection of Phytophthora ramorum outbreaks

    Treesearch

    Steven C. McKelvey; William D. Smith; Frank Koch

    2012-01-01

    This project summary describes a probabilistic model developed with funding support from the Forest Health Monitoring Program of the Forest Service, U.S. Department of Agriculture (BaseEM Project SO-R-08-01). The model has been implemented in SODBuster, a standalone software package developed using the Java software development kit from Sun Microsystems.

  17. Economic Dispatch for Microgrid Containing Electric Vehicles via Probabilistic Modeling: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Yin; Gao, Wenzhong; Momoh, James

    In this paper, an economic dispatch model with probabilistic modeling is developed for a microgrid. The electric power supply in a microgrid consists of conventional power plants and renewable energy power plants, such as wind and solar power plants. Because of the fluctuation in the output of solar and wind power plants, an empirical probabilistic model is developed to predict their hourly output. According to different characteristics of wind and solar power plants, the parameters for probabilistic distribution are further adjusted individually for both. On the other hand, with the growing trend in plug-in electric vehicles (PHEVs), an integrated microgrid system must also consider the impact of PHEVs. The charging loads from PHEVs as well as the discharging output via the vehicle-to-grid (V2G) method can greatly affect the economic dispatch for all of the micro energy sources in a microgrid. This paper presents an optimization method for economic dispatch in a microgrid considering conventional power plants, renewable power plants, and PHEVs. The simulation results reveal that PHEVs with V2G capability can be an indispensable supplement in a modern microgrid.

  18. Probabilistic population projections with migration uncertainty

    PubMed Central

    Azose, Jonathan J.; Ševčíková, Hana; Raftery, Adrian E.

    2016-01-01

    We produce probabilistic projections of population for all countries based on probabilistic projections of fertility, mortality, and migration. We compare our projections to those from the United Nations’ Probabilistic Population Projections, which uses similar methods for fertility and mortality but deterministic migration projections. We find that uncertainty in migration projection is a substantial contributor to uncertainty in population projections for many countries. Prediction intervals for the populations of Northern America and Europe are over 70% wider, whereas prediction intervals for the populations of Africa, Asia, and the world as a whole are nearly unchanged. Out-of-sample validation shows that the model is reasonably well calibrated. PMID:27217571

  19. Probabilistic brains: knowns and unknowns

    PubMed Central

    Pouget, Alexandre; Beck, Jeffrey M; Ma, Wei Ji; Latham, Peter E

    2015-01-01

    There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference. PMID:23955561

  20. A probabilistic maintenance model for diesel engines

    NASA Astrophysics Data System (ADS)

    Pathirana, Shan; Abeygunawardane, Saranga Kumudu

    2018-02-01

    In this paper, a probabilistic maintenance model is developed for inspection-based preventive maintenance of diesel engines, based on the practical model concepts discussed in the literature. The developed model is solved using real data obtained from inspection and maintenance histories of diesel engines and from experts' views. Reliability indices and costs were calculated for the present maintenance policy of diesel engines. A sensitivity analysis is conducted to observe the effect of inspection-based preventive maintenance on the life cycle cost of diesel engines.

  1. Probabilistic estimates of drought impacts on agricultural production

    NASA Astrophysics Data System (ADS)

    Madadgar, Shahrbanou; AghaKouchak, Amir; Farahmand, Alireza; Davis, Steven J.

    2017-08-01

    Increases in the severity and frequency of drought in a warming climate may negatively impact agricultural production and food security. Unlike previous studies that have estimated agricultural impacts of climate conditions using single-crop yield distributions, we develop a multivariate probabilistic model that uses projected climatic conditions (e.g., precipitation amount or soil moisture) throughout a growing season to estimate the probability distribution of crop yields. We demonstrate the model by an analysis of the historical period 1980-2012, including the Millennium Drought in Australia (2001-2009). We find that precipitation and soil moisture deficit in dry growing seasons reduced the average annual yield of the five largest crops in Australia (wheat, broad beans, canola, lupine, and barley) by 25-45% relative to the wet growing seasons. Our model can thus produce region- and crop-specific agricultural sensitivities to climate conditions and variability. Probabilistic estimates of yield may help decision-makers in government and business to quantitatively assess the vulnerability of agriculture to climate variations. We develop a multivariate probabilistic model that uses precipitation to estimate the probability distribution of crop yields, and the proposed model shows how the probability distribution of crop yield changes in response to droughts; during Australia's Millennium Drought, precipitation and soil moisture deficit reduced the average annual yield of the five largest crops.

  2. Probabilistic, meso-scale flood loss modelling

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches such as stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected by the 2002 flood of the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
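    A minimal sketch of the bagging-decision-tree idea (not the BT-FLEMO model itself): each bootstrapped tree gives one loss estimate, and the spread across trees yields a probability distribution of flood loss rather than a single deterministic value. The predictors and the synthetic loss relationship are stand-ins for illustration.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic predictors: water depth [m], inundation duration [h], precaution index.
X = rng.uniform([0.1, 0.0, 0.2], [3.0, 48.0, 1.0], size=(500, 3))
loss = 0.4 * X[:, 0] + 0.01 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.1, 500)

# Bagging: 200 decision trees, each fitted to a bootstrap sample of the data.
model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200).fit(X, loss)

# One affected municipality / land-use unit: collect the per-tree predictions
# to obtain a loss distribution instead of a point estimate.
x_new = np.array([[1.5, 24.0, 0.3]])
per_tree = np.array([tree.predict(x_new)[0] for tree in model.estimators_])

print("median loss:", np.median(per_tree).round(3))
print("90% interval:", np.percentile(per_tree, [5, 95]).round(3))
```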

  3. Probabilistic image modeling with an extended chain graph for human activity recognition and image segmentation.

    PubMed

    Zhang, Lei; Zeng, Zhi; Ji, Qiang

    2011-09-01

    Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis has been very limited, due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to a CG model with more general topology, together with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.

  4. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.

  5. Probabilistic Modeling of Settlement Risk at Land Disposal Facilities - 12304

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foye, Kevin C.; Soong, Te-Yang

    2012-07-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass (caused by inconsistent compaction, void space distribution, debris-soil mix ratio, waste material stiffness, time-dependent primary compression of the fine-grained soil matrix, long-term creep settlement of the soil matrix and the debris, etc.) at most land disposal facilities. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties which control differential settlement. An alternative, probabilistic solution is to use random fields to model the waste and sub-grade properties. The modeling effort informs the design, construction, operation, and maintenance of land disposal facilities. A probabilistic method to establish design criteria for waste placement and compaction is introduced using the model. Random fields are ideally suited to problems of differential settlement modeling of highly heterogeneous foundations, such as waste. Random fields model the seemingly random spatial distribution of a design parameter, such as compressibility. When used for design, the use of these models prompts the need for probabilistic design criteria. It also allows for a statistical approach to waste placement acceptance criteria. An example design evaluation was performed, illustrating the use of the probabilistic differential settlement simulation methodology to assemble a design guidance chart. The purpose of this design evaluation is to enable the designer to select optimal initial combinations of design slopes and quality control acceptance criteria that yield an acceptable proportion of post-settlement slopes meeting some design minimum. For this specific example, relative density, which can be determined through field measurements, was selected as the field quality control parameter for waste placement. This technique can be extended to include a rigorous performance-based methodology using other parameters (void space criteria, debris-soil mix ratio, pre-loading, etc.). As shown in this example, each parameter range, or sets of parameter ranges can be selected such that they can result in an acceptable, long-term differential settlement according to the probabilistic model. The methodology can also be used to re-evaluate the long-term differential settlement behavior at closed land disposal facilities to identify, if any, problematic facilities so that remedial action (e.g., reinforcement of upper and intermediate waste layers) can be implemented. Considering the inherent spatial variability in waste and earth materials and the need for engineers to apply sound quantitative practices to engineering analysis, it is important to apply the available probabilistic techniques to problems of differential settlement. One such method to implement probability-based differential settlement analyses for the design of landfill final covers has been presented. The design evaluation technique presented is one tool to bridge the gap from deterministic practice to probabilistic practice.

  6. Probabilistic Structural Analysis Program

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.

    2010-01-01

    NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifting methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.

  7. What is the Value Added to Adaptation Planning by Probabilistic Projections of Climate Change?

    NASA Astrophysics Data System (ADS)

    Wilby, R. L.

    2008-12-01

    Probabilistic projections of climate change offer new sources of risk information to support regional impacts assessment and adaptation options appraisal. However, questions continue to surround how best to apply these scenarios in a practical context, and whether the added complexity and computational burden leads to more robust decision-making. This paper provides an overview of recent efforts in the UK to 'bench-test' frameworks for employing probabilistic projections ahead of the release of the next-generation UKCIP08 projections (in November 2008). This involves close collaboration between government agencies, research and stakeholder communities. Three examples are cited to illustrate how probabilistic projections are already informing decisions about future flood risk management in London, water resource planning in trial river basins, and assessments of risks from rising water temperatures to Atlantic salmon stocks in southern England. When compared with conventional deterministic scenarios, ensemble projections allow exploration of a wider range of management options and highlight timescales for implementing adaptation measures. Users of probabilistic scenarios must keep in mind that other uncertainties (e.g., due to impacts model structure and parameterisation) should be handled in an equally rigorous way to those arising from climate models and emission scenarios. Finally, it is noted that a commitment to long-term monitoring is also critical for tracking environmental change, testing model projections, and evaluating the success (or not) of any scenario-led interventions.

  8. Environmental probabilistic quantitative assessment methodologies

    USGS Publications Warehouse

    Crovelli, R.A.

    1995-01-01

    In this paper, four petroleum resource assessment methodologies are presented as possible pollution assessment methodologies, even though petroleum as a resource is desirable, whereas pollution is undesirable. A methodology is defined in this paper to consist of a probability model and a probabilistic method, where the method is used to solve the model. The following four basic types of probability models are considered: 1) direct assessment, 2) accumulation size, 3) volumetric yield, and 4) reservoir engineering. Three of the four petroleum resource assessment methodologies were written as microcomputer systems, viz. TRIAGG for direct assessment, APRAS for accumulation size, and FASPU for reservoir engineering. A fourth microcomputer system termed PROBDIST supports the three assessment systems. The three assessment systems have different probability models but the same type of probabilistic method. The advantages of the analytic method are computational speed and flexibility, making it ideal for a microcomputer.

  9. Classification of Company Performance using Weighted Probabilistic Neural Network

    NASA Astrophysics Data System (ADS)

    Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi

    2018-05-01

    Classification of company performance can be judged by looking at its financial status, whether in a good or bad state, and can be achieved with either parametric or non-parametric approaches. The neural network is one of the non-parametric methods, and the Probabilistic Neural Network (PNN) is one Artificial Neural Network (ANN) model. A PNN consists of four layers, i.e. an input layer, a pattern layer, an addition (summation) layer, and an output layer. The distance function used is the Euclidean distance, and each class shares the same values as its weights. This study used a PNN that was modified in the weighting process between the pattern layer and the addition layer by involving the calculation of the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling the company's performance with the WPNN model achieves a very high accuracy, reaching 100%.
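    A minimal sketch of a PNN-style classifier in which the pattern-layer kernel uses the Mahalanobis distance (the "weighted" modification described above) instead of the plain Euclidean distance. The synthetic two-class data stand in for financial indicators of "good" and "bad" companies; the kernel width is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
good = rng.multivariate_normal([1.0, 0.5], [[0.2, 0.05], [0.05, 0.1]], 40)
bad = rng.multivariate_normal([-0.5, -0.2], [[0.3, -0.1], [-0.1, 0.2]], 40)
classes = {"good": good, "bad": bad}

def wpnn_predict(x, classes, sigma=0.5):
    """Input layer -> pattern layer (Mahalanobis kernels) -> summation -> argmax."""
    scores = {}
    for label, patterns in classes.items():
        cov_inv = np.linalg.inv(np.cov(patterns, rowvar=False))
        d = patterns - x
        maha2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)    # squared Mahalanobis distances
        # Pattern layer: one kernel per training pattern; summation layer: class average.
        scores[label] = np.mean(np.exp(-maha2 / (2.0 * sigma**2)))
    return max(scores, key=scores.get)                      # output layer: winning class

print(wpnn_predict(np.array([0.8, 0.4]), classes))           # expected: "good"
```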

  10. Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models

    NASA Astrophysics Data System (ADS)

    Thon, Ingo

    One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximate methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution which is normally prohibitively slow.

  11. Is there a basis for preferring characteristic earthquakes over a Gutenberg–Richter distribution in probabilistic earthquake forecasting?

    USGS Publications Warehouse

    Parsons, Thomas E.; Geist, Eric L.

    2009-01-01

    The idea that faults rupture in repeated, characteristic earthquakes is central to most probabilistic earthquake forecasts. The concept is elegant in its simplicity, and if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice however, assembling a fault-segmented characteristic earthquake rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic earthquakes against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies key data constraints used for earthquake forecasting equally well as a characteristic model. Therefore, judicious use of instrumental and historical earthquake catalogs enables large-earthquake-rate calculations with quantifiable uncertainty that should get at least equal weighting in probabilistic forecasting.
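    A small sketch of the Gutenberg-Richter alternative discussed above: extrapolate the magnitude-frequency law, log10 N(>=m) = a - b*m, fitted to a fault zone's catalog, to estimate the annual rate and Poisson probability of large earthquakes. The a- and b-values and horizon below are made-up illustrations, not values from the paper.

```python
import numpy as np

a, b = 4.0, 1.0            # hypothetical fault-zone G-R parameters (annual rates)
m_large = 7.0              # target magnitude
horizon = 30.0             # years

rate = 10 ** (a - b * m_large)                    # expected events per year with M >= 7
p_at_least_one = 1.0 - np.exp(-rate * horizon)    # Poisson probability over the horizon

print(f"annual rate of M>={m_large}: {rate:.4f}")
print(f"P(at least one in {horizon:.0f} yr): {p_at_least_one:.3f}")
```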

  12. Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations

    NASA Astrophysics Data System (ADS)

    Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.

    2014-02-01

    The study presents a method and application of risk assessment methodology for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic modeling for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely the ultrasonic inspection with an identified flaw indication and the ultrasonic inspection without flaw indication, are considered in the derivation. To perform estimations for fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application of steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.

  13. Fatigue crack growth model RANDOM2 user manual. Appendix 1: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN program RANDOM2 is presented in the form of a user's manual. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Details of the theoretical background, input data instructions, and a sample problem illustrating the use of the program are included.

  14. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual. Appendix 2: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.

  15. Bayesian Probabilistic Projection of International Migration.

    PubMed

    Azose, Jonathan J; Raftery, Adrian E

    2015-10-01

    We propose a method for obtaining joint probabilistic projections of migration for all countries, broken down by age and sex. Joint trajectories for all countries are constrained to satisfy the requirement of zero global net migration. We evaluate our model using out-of-sample validation and compare point projections to the projected migration rates from a persistence model similar to the method used in the United Nations' World Population Prospects, and also to a state-of-the-art gravity model.

  16. Probabilistic Estimates of Global Mean Sea Level and its Underlying Processes

    NASA Astrophysics Data System (ADS)

    Hay, C.; Morrow, E.; Kopp, R. E.; Mitrovica, J. X.

    2015-12-01

    Local sea level can vary significantly from the global mean value due to a suite of processes that includes ongoing sea-level changes due to the last ice age, land water storage, ocean circulation changes, and non-uniform sea-level changes that arise when modern-day land ice rapidly melts. Understanding these sources of spatial and temporal variability is critical to estimating past and present sea-level change and projecting future sea-level rise. Using two probabilistic techniques, a multi-model Kalman smoother and Gaussian process regression, we have reanalyzed 20th century tide gauge observations to produce a new estimate of global mean sea level (GMSL). Our methods allow us to extract global information from the sparse tide gauge field by taking advantage of the physics-based and model-derived geometry of the contributing processes. Both methods provide constraints on the sea-level contribution of glacial isostatic adjustment (GIA). The Kalman smoother tests multiple discrete models of glacial isostatic adjustment (GIA), probabilistically computing the most likely GIA model given the observations, while the Gaussian process regression characterizes the prior covariance structure of a suite of GIA models and then uses this structure to estimate the posterior distribution of local rates of GIA-induced sea-level change. We present the two methodologies, the model-derived geometries of the underlying processes, and our new probabilistic estimates of GMSL and GIA.

  17. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
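    A minimal sketch of the Noisy-OR idea for combining prior-knowledge sources: each source i reports a confidence p_i that an edge exists and contributes with its own reliability q_i, and the combined edge prior is 1 - prod_i (1 - q_i * p_i). The source confidences, reliabilities and leak term below are hypothetical.

```python
import numpy as np

def noisy_or(p, q, leak=0.01):
    """Combined edge prior from per-source confidences p and source reliabilities q."""
    p, q = np.asarray(p), np.asarray(q)
    return 1.0 - (1.0 - leak) * np.prod(1.0 - q * p)

# Three sources, e.g. a pathway database, GO-term similarity, and protein domain data.
p_sources = [0.9, 0.4, 0.1]
q_sources = [0.8, 0.6, 0.5]
print(round(noisy_or(p_sources, q_sources), 3))
```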

  18. Modelling default and likelihood reasoning as probabilistic

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  19. The application of probabilistic design theory to high temperature low cycle fatigue

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.

    1981-01-01

    Metal fatigue under stress and thermal cycling is a principal mode of failure in gas turbine engine hot section components such as turbine blades and disks and combustor liners. Designing for fatigue is subject to considerable uncertainty, e.g., scatter in cycles to failure, available fatigue test data and operating environment data, uncertainties in the models used to predict stresses, etc. Methods of analyzing fatigue test data for probabilistic design purposes are summarized. The general strain life as well as homo- and hetero-scedastic models are considered. Modern probabilistic design theory is reviewed and examples are presented which illustrate application to reliability analysis of gas turbine engine components.

  20. Probabilistic Material Strength Degradation Model for Inconel 718 Components Subjected to High Temperature, Mechanical Fatigue, Creep and Thermal Fatigue Effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie Corinne Scheidt

    1994-01-01

    This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.

  1. Protocol and Demonstrations of Probabilistic Reliability Assessment for Structural Health Monitoring Systems (Preprint)

    DTIC Science & Technology

    2011-11-01

    assessment to quality of localization/characterization estimates. This protocol includes four critical components: (1) a procedure to identify the...critical factors impacting SHM system performance; (2) a multistage or hierarchical approach to SHM system validation; (3) a model-assisted evaluation...Lindgren, E. A., Buynak, C. F., Steffes, G., Derriso, M., "Model-assisted Probabilistic Reliability Assessment for Structural Health Monitoring

  2. Probabilistic Model Development

    NASA Technical Reports Server (NTRS)

    Adam, James H., Jr.

    2010-01-01

    Objective: Develop a Probabilistic Model for the Solar Energetic Particle Environment. Develop a tool to provide a reference solar particle radiation environment that: 1) will not be exceeded at a user-specified confidence level; and 2) will provide reference environments for a) peak flux, b) event-integrated fluence, and c) mission-integrated fluence. The reference environments will consist of elemental energy spectra for protons, helium, and heavier ions.

  3. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled from these data and implemented, along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT), in fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions; however, in a deterministic optimization problem, even though the structures are cost effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented in the solution process. Reliable and optimal solutions can be obtained by performing reliability optimization together with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and the reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and the number of function evaluations. Finally, the reliability analysis concepts and RBDO are implemented in finite element 2D truss problems and a planar beam problem, and the results are presented and discussed.
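    A minimal sketch of the Monte Carlo part of such a study: sample a material strength from a fitted probabilistic distribution, sample the load, and count how often demand exceeds capacity to estimate a probability of failure. The lognormal and normal parameters are illustrative assumptions, not Kevlar 49 test values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000

strength = rng.lognormal(mean=np.log(3.0), sigma=0.15, size=n)   # capacity samples
load = rng.normal(loc=2.2, scale=0.35, size=n)                   # demand samples

pf = np.mean(load > strength)                 # Monte Carlo probability of failure
beta = -norm.ppf(pf)                          # corresponding reliability index

print(f"P(failure) ~ {pf:.4f}, reliability index beta ~ {beta:.2f}")
```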

  4. Calculation model of the scattering polarization coherency matrix for a detection system of oil spills at sea.

    PubMed

    Xu, Jiang; Qian, Weixian; Chen, Qian; Zhou, Yang

    2018-02-10

    As a new analytical method for identifying oil spills at sea, a polarization measurement system relies primarily on the scattering polarization information of the different measured surfaces. To improve measurement accuracy, the scattering polarization characteristics of oil film and seawater were observed in this paper. A useful computational model, the scattering polarization coherency matrix (SPCM), was derived as a probabilistic mixture of polarization coherency matrices. Combined with the Fresnel formula, the amplitude ratio and phase retardation were extracted to verify the physical soundness of the model. Experiments were performed, and the SPCM of the oil film and seawater was measured. To test the practicability of the model, we derived the degree of polarization from the SPCM and used it as the basis for identifying an actual oil spill at sea under sunlight. The research indicated that the path of multiple scattering is connected with the molecular structure and interactions of the medium. Under different measuring angles, the SPCM of the oil film and seawater shows both differences and regularities; the experimental results indicate that it can be used for the rapid detection of an oil spill at sea, and that the data are accurate and reliable.

  5. Foehn at the lowest place on earth

    NASA Astrophysics Data System (ADS)

    Mayr, Georg; Metzger, Jutta; Mayr, Raphael

    2017-04-01

    Foehn occurs at the Dead Sea. Measurements from weather stations at the valley floor and on the slope show that the prime season for foehn is summer and the prime time late afternoon and evening (using the objective classification algorithm of Plavcan et al (2014)[1]). During summer synoptic scale forcing with cross-barrier winds is rare and thus the gravity-wave-driven concept cannot be used to explain the occurrence of foehn. The density-driven foehn concept [2], on the other hand, with denser air at crest level upstream than in the valley can explain the occurrence of foehn. It also explains the differences in foehn frequency between the slope and valley bottom station. References: [1] Plavcan, D., Mayr, G. J., & Zeileis, A. (2014). Automatic and probabilistic foehn diagnosis with a statistical mixture model. Journal of Applied Meteorology and Climatology, 53(3), 652-659. [2] Mayr, G. J., & Armi, L. (2010). The influence of downstream diurnal heating on the descent of flow across the Sierras. Journal of Applied Meteorology and Climatology, 49(9), 1906-1912.

  6. Extended applications of track irregularity probabilistic model and vehicle-slab track coupled model on dynamics of railway systems

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Zhai, Wanming; Gao, Jianmin

    2017-11-01

    Track irregularities evolve stochastically and continuously due to the uncertainty and continuity of wheel-rail interactions. To depict thoroughly the dynamic behaviours of the vehicle-track coupling system caused by random track irregularities, it is necessary to develop a track irregularity probabilistic model that simulates rail surface irregularities with ergodic properties in amplitude, wavelength and probability, and to build a three-dimensional vehicle-track coupled model that properly considers the nonlinear wheel-rail contact mechanisms. In the present study, the vehicle-track coupled model is first programmed by combining the finite element method with a wheel-rail coupling model. Then, in light of the capability of the power spectral density (PSD) to characterise the amplitudes and wavelengths of stationary random signals, a track irregularity probabilistic model is presented to reveal and simulate the full characteristics of the track irregularity PSD. Finally, extended applications covering three aspects, namely extreme value analysis, reliability analysis and response relationships between dynamic indices, are conducted to evaluate and apply the proposed models.
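    A sketch of one common way a PSD-based irregularity model can generate sample profiles (a spectral representation with random phases; not necessarily the paper's procedure): discretize a target one-sided PSD over spatial frequency, give each component an amplitude sqrt(2*S(f)*df) and a random phase, and sum the cosines. The simple 1/f^3-type PSD is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.arange(0.0, 1000.0, 0.25)                     # track coordinate [m]
f = np.linspace(1.0 / 120.0, 1.0, 400)               # spatial frequencies [1/m]
df = f[1] - f[0]
psd = 1e-6 / (f ** 3)                                 # hypothetical one-sided PSD

amps = np.sqrt(2.0 * psd * df)                        # component amplitudes
phases = rng.uniform(0.0, 2.0 * np.pi, size=f.size)   # independent random phases

# Sum of cosines yields one realization of the irregularity profile.
irregularity = (amps[:, None] * np.cos(2.0 * np.pi * f[:, None] * x + phases[:, None])).sum(axis=0)

print("std of simulated irregularity [m]:", irregularity.std().round(5))
```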

  7. Testing for ontological errors in probabilistic forecasting models of natural systems

    PubMed Central

    Marzocchi, Warner; Jordan, Thomas H.

    2014-01-01

    Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265

  8. Evaluation of Lithofacies Up-Scaling Methods for Probabilistic Prediction of Carbon Dioxide Behavior

    NASA Astrophysics Data System (ADS)

    Park, J. Y.; Lee, S.; Lee, Y. I.; Kihm, J. H.; Kim, J. M.

    2017-12-01

    Behavior of carbon dioxide injected into target reservoir (storage) formations is highly dependent on heterogeneities of geologic lithofacies and properties. These heterogeneous lithofacies and properties are inherently probabilistic in character. Thus, their probabilistic evaluation has to be incorporated properly into predictions of the behavior of injected carbon dioxide in heterogeneous storage formations. In this study, a series of three-dimensional geologic modeling is performed first using SKUA-GOCAD (ASGA and Paradigm) to establish lithofacies models of the Janggi Conglomerate in the Janggi Basin, Korea, within a modeling domain. The Janggi Conglomerate is composed of mudstone, sandstone, and conglomerate, and it has been identified as a potential reservoir rock (clastic saline formation) for geologic carbon dioxide storage. Its lithofacies information is obtained from four boreholes and used in lithofacies modeling. Three different up-scaling methods (i.e., nearest to cell center, largest proportion, and random) are applied, and lithofacies modeling is performed 100 times for each up-scaling method. The lithofacies models are then compared and analyzed with the borehole data to evaluate the relative suitability of the three up-scaling methods. Finally, the lithofacies models are converted into coarser lithofacies models within the same modeling domain with larger grid blocks using the three up-scaling methods, and a series of multiphase thermo-hydrological numerical simulations is performed using TOUGH2-MP (Zhang et al., 2008) to probabilistically predict the behavior of injected carbon dioxide. The coarser lithofacies models are also compared and analyzed with the borehole data and finer lithofacies models to evaluate the relative suitability of the three up-scaling methods. Three-dimensional geologic modeling, up-scaling, and multiphase thermo-hydrological numerical simulation, as linked methodologies presented in this study, can be utilized as a practical probabilistic evaluation tool to predict the behavior of injected carbon dioxide and even to analyze its leakage risk. This work was supported by the Korea CCS 2020 Project of the Korea Carbon Capture and Sequestration R&D Center (KCRC) funded by the National Research Foundation (NRF), Ministry of Science and ICT (MSIT), Korea.
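
    For illustration, the "largest proportion" rule named above can be sketched as a simple aggregation over coarse grid blocks: each coarse block takes the facies code that occupies the largest fraction of its fine cells. The grid dimensions, coarsening factor and facies codes below are hypothetical; the actual study used SKUA-GOCAD models.

    ```python
    import numpy as np

    def upscale_largest_proportion(facies, factor):
        """Assign each coarse block the lithofacies code occupying the largest
        fraction of its fine cells (one of the three up-scaling rules named above)."""
        nz, ny, nx = facies.shape
        fz, fy, fx = factor
        coarse = np.empty((nz // fz, ny // fy, nx // fx), dtype=facies.dtype)
        for k in range(coarse.shape[0]):
            for j in range(coarse.shape[1]):
                for i in range(coarse.shape[2]):
                    block = facies[k*fz:(k+1)*fz, j*fy:(j+1)*fy, i*fx:(i+1)*fx]
                    codes, counts = np.unique(block, return_counts=True)
                    coarse[k, j, i] = codes[np.argmax(counts)]
        return coarse

    # Hypothetical fine grid: 0 = mudstone, 1 = sandstone, 2 = conglomerate
    fine = np.random.default_rng(1).integers(0, 3, size=(20, 40, 40))
    coarse = upscale_largest_proportion(fine, factor=(2, 4, 4))
    ```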

  9. Mode identification using stochastic hybrid models with applications to conflict detection and resolution

    NASA Astrophysics Data System (ADS)

    Naseri Kouzehgarani, Asal

    2009-12-01

    Most models of aircraft trajectories are non-linear and stochastic in nature, and their internal parameters are often poorly defined. The ability to model, simulate and analyze realistic air traffic management conflict detection scenarios in a scalable, composable, multi-aircraft fashion is an extremely difficult endeavor. Accurate techniques for aircraft mode detection are critical in order to enable the precise projection of aircraft conflicts, and for the enactment of altitude separation resolution strategies. Conflict detection is an inherently probabilistic endeavor; our ability to detect conflicts in a timely and accurate manner over a fixed time horizon is traded off against the increased human workload created by false alarms, that is, situations that would not develop into an actual conflict or would resolve naturally within the appropriate time horizon, thereby introducing a measure of probabilistic uncertainty in any decision aid fashioned to assist air traffic controllers. The interaction of the continuous dynamics of the aircraft, used for prediction purposes, with the discrete conflict detection logic gives rise to the hybrid nature of the overall system. The introduction of the probabilistic element, common to decision alerting and aiding devices, places the conflict detection and resolution problem in the domain of probabilistic hybrid phenomena. A hidden Markov model (HMM) has two stochastic components: a finite-state Markov chain and a finite set of output probability distributions. In other words, it is an unobservable (hidden) stochastic process that can only be observed through another set of stochastic processes that generate the sequence of observations. The problem of self separation in distributed air traffic management reduces to the ability of aircraft to communicate state information to neighboring aircraft, as well as to model the evolution of aircraft trajectories between communications, in the presence of probabilistically uncertain dynamics as well as partially observable and uncertain data. We introduce the Hybrid Hidden Markov Modeling (HHMM) formalism to enable the prediction of the stochastic aircraft states (and thus, potential conflicts), by combining elements of the probabilistic timed input output automaton and the partially observable Markov decision process frameworks, along with the novel addition of a Markovian scheduler to remove the non-deterministic elements arising from the enabling of several actions simultaneously. Comparisons of aircraft in level, climbing/descending and turning flight are performed, and unknown flight track data is evaluated probabilistically against the tuned model in order to assess the effectiveness of the model in detecting the switch between multiple flight modes for a given aircraft. This also allows for the generation of a probabilistic distribution over the execution traces of the hybrid hidden Markov model, which then enables the prediction of the states of aircraft based on partially observable and uncertain data. Based on the composition properties of the HHMM, we study a decentralized air traffic system where aircraft are moving along streams and can perform cruise, accelerate, climb and turn maneuvers. We develop a common decentralized policy for conflict avoidance with spatially distributed agents (aircraft in the sky) and assure its safety properties via correctness proofs.
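
    As a flavor of how a hidden Markov model yields flight-mode probabilities from noisy observations, the sketch below runs a plain discrete-state forward filter over a synthetic vertical-rate signal. The three modes, transition matrix and Gaussian emission parameters are invented for illustration and are far simpler than the paper's hybrid HHMM.

    ```python
    import numpy as np

    # Hypothetical 3-mode HMM: 0 = level, 1 = climb/descend, 2 = turn
    A = np.array([[0.95, 0.03, 0.02],    # mode transition probabilities
                  [0.05, 0.93, 0.02],
                  [0.05, 0.02, 0.93]])
    mode_mean = np.array([0.0, 12.0, 0.0])   # mean vertical rate per mode (m/s), illustrative
    mode_std = np.array([1.0, 3.0, 1.5])

    def forward_filter(obs, A, means, stds, pi=None):
        """Forward algorithm: posterior P(mode_t | obs_1..t) with Gaussian emissions."""
        n = len(means)
        alpha = np.full(n, 1.0 / n) if pi is None else pi.copy()
        posteriors = []
        for y in obs:
            lik = np.exp(-0.5 * ((y - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
            alpha = lik * (A.T @ alpha)      # predict with the Markov chain, then update
            alpha /= alpha.sum()             # normalize to a probability distribution
            posteriors.append(alpha.copy())
        return np.array(posteriors)

    # Noisy vertical-rate observations: level flight, then a climb
    rng = np.random.default_rng(1)
    obs = np.concatenate([rng.normal(0, 1, 50), rng.normal(12, 3, 50)])
    post = forward_filter(obs, A, mode_mean, mode_std)
    print(post[-1])   # posterior mode probabilities at the final time step
    ```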

  10. A probabilistic assessment of health risks associated with short-term exposure to tropospheric ozone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitfield, R.G; Biller, W.F.; Jusko, M.J.

    1996-06-01

    The work described in this report is part of a larger risk assessment sponsored by the U.S. Environmental Protection Agency. Earlier efforts developed exposure-response relationships for acute health effects among populations engaged in heavy exertion. Those efforts also developed a probabilistic national ambient air quality standards exposure model and a general methodology for integrating probabilistic exposure-response relationships and exposure estimates to calculate overall risk results. Recently published data make it possible to model additional health endpoints (for exposure at moderate exertion), including hospital admissions. New air quality and exposure estimates for alternative national ambient air quality standards for ozone are combined with exposure-response models to produce the risk results for hospital admissions and acute health effects. Sample results explain the methodology and introduce risk output formats.

  11. Probabilistic failure assessment with application to solid rocket motors

    NASA Technical Reports Server (NTRS)

    Jan, Darrell L.; Davidson, Barry D.; Moore, Nicholas R.

    1990-01-01

    A quantitative methodology is being developed for assessment of risk of failure of solid rocket motors. This probabilistic methodology employs best available engineering models and available information in a stochastic framework. The framework accounts for incomplete knowledge of governing parameters, intrinsic variability, and failure model specification error. Earlier case studies have been conducted on several failure modes of the Space Shuttle Main Engine. Work in progress on application of this probabilistic approach to large solid rocket boosters such as the Advanced Solid Rocket Motor for the Space Shuttle is described. Failure due to debonding has been selected as the first case study for large solid rocket motors (SRMs) since it accounts for a significant number of historical SRM failures. Impact of incomplete knowledge of governing parameters and failure model specification errors is expected to be important.

  12. Statistical uncertainty of extreme wind storms over Europe derived from a probabilistic clustering technique

    NASA Astrophysics Data System (ADS)

    Walz, Michael; Leckebusch, Gregor C.

    2016-04-01

    Extratropical wind storms pose one of the most dangerous and loss-intensive natural hazards for Europe. However, with only 50 years of high-quality observational data, it is difficult to assess the statistical uncertainty of these sparse events based on observations alone. Over the last decade seasonal ensemble forecasts have become indispensable in quantifying the uncertainty of weather prediction on seasonal timescales. In this study seasonal forecasts are used in a climatological context: by making use of the up to 51 ensemble members, a broad and physically consistent statistical base can be created. This base can then be used to assess the statistical uncertainty of extreme wind storm occurrence more accurately. In order to determine the statistical uncertainty of storms with different paths of progression, a probabilistic clustering approach using regression mixture models is used to objectively assign storm tracks (either based on core pressure or on extreme wind speeds) to different clusters. The advantage of this technique is that the entire lifetime of a storm is considered by the clustering algorithm. Quadratic curves are found to describe the storm tracks most accurately. Three main clusters (diagonal, horizontal or vertical progression of the storm track) can be identified, each of which has its own particular features. Basic storm features like average velocity and duration are calculated and compared for each cluster. The main benefit of this clustering technique, however, is to evaluate whether the clusters show different degrees of uncertainty, e.g. more (less) spread for tracks approaching Europe horizontally (diagonally). This statistical uncertainty is compared for different seasonal forecast products.

  13. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of the uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.

  14. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of the uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
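
    A minimal sketch of a Rosenblueth-style two-point estimate, compared against Monte Carlo on a toy two-variable model, is shown below. The model and parameter values are made up for illustration and do not reproduce the papers' groundwater model; note that with n uncertain variables the method needs 2^n model runs, which is why it pays off only for small n.

    ```python
    import numpy as np
    from itertools import product

    def two_point_estimate(model, means, stds):
        """Two-point estimate (sketch): evaluate the model at mu +/- sigma for every
        combination of uncertain inputs (2^n runs, equal weights) and form the
        output mean and standard deviation from those point values."""
        means, stds = np.asarray(means, float), np.asarray(stds, float)
        vals = np.array([model(means + np.array(signs) * stds)
                         for signs in product((-1.0, 1.0), repeat=len(means))])
        mean = vals.mean()
        std = np.sqrt(np.mean(vals ** 2) - mean ** 2)
        return mean, std

    # Toy model h = K * S (illustrative only, not the papers' aquifer model)
    model = lambda x: x[0] * x[1]
    m_tp, s_tp = two_point_estimate(model, means=[10.0, 0.2], stds=[1.0, 0.04])

    # Monte Carlo reference with the same first two moments
    rng = np.random.default_rng(0)
    samples = np.array([model(rng.normal([10.0, 0.2], [1.0, 0.04])) for _ in range(20000)])
    print(m_tp, s_tp, samples.mean(), samples.std())
    ```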

  15. Probabilistic modeling of anatomical variability using a low dimensional parameterization of diffeomorphisms.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2017-10-01

    We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of the high dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than state-of-the-art methods such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA) that operate in the high dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Incorporating seismic phase correlations into a probabilistic model of global-scale seismology

    NASA Astrophysics Data System (ADS)

    Arora, Nimar

    2013-04-01

    We present a probabilistic model of seismic phases whereby the attributes of the body-wave phases are correlated to those of the first arriving P phase. This model has been incorporated into NET-VISA (Network processing Vertically Integrated Seismic Analysis), a probabilistic generative model of seismic events, their transmission, and detection on a global seismic network. In the earlier version of NET-VISA, seismic phases were assumed to be independent of each other. Although this did not, for the most part, affect the quality of the inferred seismic bulletin, it did result in a few instances of anomalous phase association, for example an S phase with a smaller slowness than the corresponding P phase. We demonstrate that the phase attributes are indeed highly correlated, for example the uncertainty in the S phase travel time is significantly reduced given the P phase travel time. Our new model exploits these correlations to produce better calibrated probabilities for the events, as well as fewer anomalous associations.

  17. Probabilistic Material Strength Degradation Model for Inconel 718 Components Subjected to High Temperature, High-Cycle and Low-Cycle Mechanical Fatigue, Creep and Thermal Fatigue Effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie C.; Boyce, Lola

    1995-01-01

    The development of a methodology for probabilistic material strength degradation is described. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep and thermal fatigue. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing predictions of high-cycle mechanical fatigue and high temperature effects with experiments are presented. Results from this limited verification study strongly support representing material degradation by randomized multifactor interaction models.

  18. A Practical Probabilistic Graphical Modeling Tool for Weighing Ecological Risk-Based Evidence

    EPA Science Inventory

    Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for e...

  19. Probabilistic properties of the date of maximum river flow, an approach based on circular statistics in lowland, highland and mountainous catchment

    NASA Astrophysics Data System (ADS)

    Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz

    2018-04-01

    Probabilistic properties of dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. The circular measures of location and dispersion were used in the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by means of the Maximum Likelihood method. The goodness of fit was assessed using both the correlation between quantiles and versions of Kuiper's and Watson's tests. Results show that the number of components varied between catchments and differed between seasonal and annual maxima. Differences between catchments in circular characteristics were explained using climatic factors such as precipitation and temperature. Further studies may include circular grouping of catchments based on similarity between distribution functions and the linkage between dates of maximum precipitation and maximum flow.
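
    A bare-bones EM fit of a two-component von Mises mixture to dates of maxima (mapped to angles) can look like the sketch below. The synthetic data, the fixed component count and the kappa update via a standard approximation to the inverse of A1(kappa) are illustrative assumptions, not the study's exact procedure (which selected the number of components by corrected AIC).

    ```python
    import numpy as np
    from scipy.stats import vonmises

    def a1_inv(r):
        """Approximate inverse of A1(kappa) = I1(kappa)/I0(kappa) (standard piecewise fit)."""
        if r < 0.53:
            return 2*r + r**3 + 5*r**5/6
        if r < 0.85:
            return -0.4 + 1.39*r + 0.43/(1 - r)
        return 1.0 / (r**3 - 4*r**2 + 3*r)

    def fit_vonmises_mixture(theta, n_comp=2, n_iter=200, seed=0):
        """EM for a mixture of von Mises densities on angles theta (radians)."""
        rng = np.random.default_rng(seed)
        w = np.full(n_comp, 1.0 / n_comp)
        mu = rng.uniform(-np.pi, np.pi, n_comp)
        kappa = np.full(n_comp, 1.0)
        for _ in range(n_iter):
            # E-step: component responsibilities for every observation
            dens = np.array([w[k] * vonmises.pdf(theta, kappa[k], loc=mu[k]) for k in range(n_comp)])
            resp = dens / dens.sum(axis=0, keepdims=True)
            # M-step: weights, circular means and concentrations
            for k in range(n_comp):
                rk = resp[k]
                w[k] = rk.mean()
                c, s = np.sum(rk * np.cos(theta)), np.sum(rk * np.sin(theta))
                mu[k] = np.arctan2(s, c)
                kappa[k] = a1_inv(min(np.hypot(c, s) / rk.sum(), 0.999))
        return w, mu, kappa

    # Hypothetical dates of maxima (day of year), converted to angles on the circle
    rng = np.random.default_rng(2)
    days = np.concatenate([rng.normal(80, 10, 60), rng.normal(200, 15, 40)]) % 365
    theta = 2 * np.pi * days / 365 - np.pi
    print(fit_vonmises_mixture(theta))
    ```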

  20. The cerebellum and decision making under uncertainty.

    PubMed

    Blackwood, Nigel; Ffytche, Dominic; Simmons, Andrew; Bentall, Richard; Murray, Robin; Howard, Robert

    2004-06-01

    This study aimed to identify the neural basis of probabilistic reasoning, a type of inductive inference that aids decision making under conditions of uncertainty. Eight normal subjects performed two separate two-alternative-choice tasks (the balls in a bottle and personality survey tasks) while undergoing functional magnetic resonance imaging (fMRI). The experimental conditions within each task were chosen so that they differed only in their requirement to make a decision under conditions of uncertainty (probabilistic reasoning and frequency determination required) or under conditions of certainty (frequency determination required). The same visual stimuli and motor responses were used in the experimental conditions. We provide evidence that the neo-cerebellum, in conjunction with the premotor cortex, inferior parietal lobule and medial occipital cortex, mediates the probabilistic inferences that guide decision making under uncertainty. We hypothesise that the neo-cerebellum constructs internal working models of uncertain events in the external world, and that such probabilistic models subserve the predictive capacity central to induction. Copyright 2004 Elsevier B.V.

  1. Probabilistic sensitivity analysis for decision trees with multiple branches: use of the Dirichlet distribution in a Bayesian framework.

    PubMed

    Briggs, Andrew H; Ades, A E; Price, Martin J

    2003-01-01

    In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to provide probability distributions over multiple branches that appropriately represent uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
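
    A minimal sketch of the idea with NumPy: transition counts out of one Markov state (invented numbers, including a zero cell) plus a uniform Dirichlet prior yield draws of branch probabilities that always sum to 1, suitable for probabilistic sensitivity analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical transition counts out of one Markov state, including a zero count
    counts = np.array([150, 30, 0, 5])
    prior = np.ones_like(counts)              # uniform Dirichlet prior handles the zero cell

    # Each draw is a full branch-probability vector for one PSA iteration
    draws = rng.dirichlet(counts + prior, size=10000)

    print(draws.mean(axis=0))                          # posterior mean branch probabilities
    print(np.allclose(draws.sum(axis=1), 1.0))         # rows always sum to 1
    ```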

  2. Probabilistic wind/tornado/missile analyses for hazard and fragility evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Y.J.; Reich, M.

    Detailed analysis procedures and examples are presented for the probabilistic evaluation of hazard and fragility against high wind, tornado, and tornado-generated missiles. In the tornado hazard analysis, existing risk models are modified to incorporate various uncertainties including modeling errors. A significant feature of this paper is the detailed description of the Monte-Carlo simulation analyses of tornado-generated missiles. A simulation procedure, which includes the wind field modeling, missile injection, solution of flight equations, and missile impact analysis, is described with application examples.

  3. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.

  4. Confronting uncertainty in flood damage predictions

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno

    2015-04-01

    Reliable flood damage models are a prerequisite for the practical usefulness of model results. Oftentimes, traditional univariate damage models, for instance depth-damage curves, fail to reproduce the variability of observed flood damage. Innovative multivariate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved and thus improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data from computer-aided telephone interviews compiled after the floods of 2002, 2005 and 2006 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one subset is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample by catchment, which allows us to study model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) and reliability, represented by the proportion of observations that fall within the 5%- to 95%-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
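
    As a sketch of one of the two approaches compared (bagging decision trees), the per-tree predictions of a bagged ensemble can be read as an empirical predictive distribution, from which quantile intervals and their coverage follow. The synthetic data and the 5%-95% band below are illustrative, not the study's damage records; scikit-learn's BaggingRegressor uses decision trees as its default base learner.

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.model_selection import train_test_split

    # Hypothetical damage records: predictors (e.g. water depth, duration, ...) and relative damage
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(500, 3))
    y = np.clip(0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.1, 500), 0, 1)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = BaggingRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Per-tree predictions give an empirical predictive distribution for each building
    per_tree = np.stack([t.predict(X_te) for t in model.estimators_])
    lo, hi = np.quantile(per_tree, [0.05, 0.95], axis=0)
    coverage = np.mean((y_te >= lo) & (y_te <= hi))   # reliability: share of observations inside the band
    print(model.predict(X_te).mean(), coverage)
    ```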

  5. Probabilistic Modeling of the Renal Stone Formation Module

    NASA Technical Reports Server (NTRS)

    Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.

    2013-01-01

    The Integrated Medical Model (IMM) is a probabilistic tool used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. Lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, repeatedly sampling at random the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information used from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.

  6. Automated Text Data Mining Analysis of Five Decades of Educational Leadership Research Literature: Probabilistic Topic Modeling of "EAQ" Articles From 1965 to 2014

    ERIC Educational Resources Information Center

    Wang, Yinying; Bowers, Alex J.; Fikis, David J.

    2017-01-01

    Purpose: The purpose of this study is to describe the underlying topics and the topic evolution in the 50-year history of educational leadership research literature. Method: We used automated text data mining with probabilistic latent topic models to examine the full text of the entire publication history of all 1,539 articles published in…
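
    Probabilistic topic modeling of the kind described can be sketched with latent Dirichlet allocation from scikit-learn. The four toy "documents" and the two-topic setting below are placeholders, not the EAQ corpus or the study's configuration.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Hypothetical stand-ins for article full texts
    docs = [
        "principal leadership and school improvement",
        "teacher evaluation policy and accountability",
        "superintendent district governance and school boards",
        "instructional leadership and student achievement",
    ]

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)                                    # document-term counts
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        top = [terms[i] for i in comp.argsort()[-5:][::-1]]        # top words per latent topic
        print(f"topic {k}:", ", ".join(top))
    ```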

  7. Method and system for dynamic probabilistic risk assessment

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)

    2013-01-01

    The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.

  8. Probabilistic Climate Scenario Information for Risk Assessment

    NASA Astrophysics Data System (ADS)

    Dairaku, K.; Ueno, G.; Takayabu, I.

    2014-12-01

    Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. In order to develop probabilistic regional climate information that represents the uncertainty in climate scenario experiments in Japan, we compared physics ensemble experiments using the 60 km global atmospheric model of the Meteorological Research Institute (MRI-AGCM) with multi-model ensemble experiments with global atmosphere-ocean coupled models (CMIP3) under the SRES A1b scenario. The MRI-AGCM shows relatively good skill, particularly in the tropics, for temperature and geopotential height. Variability in surface air temperature of the physics ensemble experiments with MRI-AGCM was within the range of one standard deviation of the CMIP3 models in the Asia region. On the other hand, the variability of precipitation was relatively well represented compared with the variation of the CMIP3 models. Models that show similar reproducibility of the present climate can nevertheless show different future climate changes, and we could not find clear relationships between present climate and future climate change in temperature or precipitation. We develop a new method to produce probabilistic information on climate change scenarios by weighting model ensemble experiments based on a regression model (Krishnamurti et al., Science, 1999). The method is easily applicable to other regions and other physical quantities, and also to downscaling to finer scales, depending on the availability of observational datasets. The prototype of probabilistic information for Japan represents the quantified structural uncertainties of multi-model ensemble experiments of climate change scenarios. Acknowledgments: This study was supported by the SOUSEI Program, funded by the Ministry of Education, Culture, Sports, Science and Technology, Government of Japan.

  9. Near Real-Time Probabilistic Damage Diagnosis Using Surrogate Modeling and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Zubair, Mohammad; Ranjan, Desh

    2017-01-01

    This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
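
    The core idea of replacing the expensive forward model with a cheap surrogate inside a Bayesian sampler can be sketched as below. The quadratic "surrogate", the synthetic strain observation, the priors and the random-walk Metropolis sampler are all illustrative assumptions; the paper's finite element models and parallel samplers are not reproduced here.

    ```python
    import numpy as np

    # Hypothetical surrogate: maps a crack-size parameter a to a predicted strain reading
    surrogate = lambda a: 1.0 + 0.8 * a + 0.1 * a**2

    obs = 2.05            # synthetic measured strain
    sigma = 0.05          # measurement noise standard deviation
    log_post = lambda a: (-0.5 * ((obs - surrogate(a)) / sigma) ** 2   # Gaussian likelihood
                          if 0.0 <= a <= 2.0 else -np.inf)             # uniform prior on [0, 2]

    def metropolis(log_post, x0, n=20000, step=0.05, seed=0):
        """Random-walk Metropolis: draws samples from the damage posterior."""
        rng = np.random.default_rng(seed)
        x, lp = x0, log_post(x0)
        chain = np.empty(n)
        for i in range(n):
            xp = x + step * rng.normal()
            lpp = log_post(xp)
            if np.log(rng.uniform()) < lpp - lp:   # accept with probability min(1, ratio)
                x, lp = xp, lpp
            chain[i] = x
        return chain

    chain = metropolis(log_post, x0=1.0)
    print(chain[5000:].mean(), np.quantile(chain[5000:], [0.05, 0.95]))  # posterior summary after burn-in
    ```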

  10. The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments

    NASA Astrophysics Data System (ADS)

    Chen, Fajing; Jiao, Meiyan; Chen, Jing

    2013-04-01

    Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecast, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
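
    For orientation, the normal-linear special case mentioned above follows the standard conjugate-normal form; the notation below is generic and is our own, not the paper's.

    ```latex
    % Normal-linear Bayesian processor (generic conjugate-normal form)
    % Prior:       W \sim \mathcal{N}(\mu, \tau^2)
    % Likelihood:  X \mid W = w \sim \mathcal{N}(a + b w, \sigma^2)
    % Posterior:   W \mid X = x \sim \mathcal{N}(\mu_\ast, \tau_\ast^2), with
    \[
      \mu_\ast = \frac{\tau^{-2}\,\mu + b\,\sigma^{-2}\,(x - a)}{\tau^{-2} + b^{2}\sigma^{-2}},
      \qquad
      \tau_\ast^{2} = \frac{1}{\tau^{-2} + b^{2}\sigma^{-2}}.
    \]
    ```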

  11. Topic model-based mass spectrometric data analysis in cancer biomarker discovery studies.

    PubMed

    Wang, Minkun; Tsai, Tsung-Heng; Di Poto, Cristina; Ferrarini, Alessia; Yu, Guoqiang; Ressom, Habtom W

    2016-08-18

    A fundamental challenge in quantitation of biomolecules for cancer biomarker discovery is owing to the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry based proteomic and metabolomic studies. Purification of mass spectrometric data is highly desired prior to subsequent analysis, e.g., quantitative comparison of the abundance of biomolecules in biological samples. We investigated topic models to computationally analyze mass spectrometric data considering both integrated peak intensities and scan-level features, i.e., extracted ion chromatograms (EICs). Probabilistic generative models enable flexible representation in data structure and infer sample-specific pure resources. Scan-level modeling helps alleviate information loss during data preprocessing. We evaluated the capability of the proposed models in capturing mixture proportions of contaminants and cancer profiles on LC-MS based serum proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis as well as synthetic data we generated based on the serum proteomic data. The results we obtained by analysis of the synthetic data demonstrated that both intensity-level and scan-level purification models can accurately infer the mixture proportions and the underlying true cancerous sources with small average error ratios (<7 %) between estimation and ground truth. By applying the topic model-based purification to mass spectrometric data, we found more proteins and metabolites with significant changes between HCC cases and cirrhotic controls. Candidate biomarkers selected after purification yielded biologically meaningful pathway analysis results and improved disease discrimination power in terms of the area under the ROC curve compared to the results found prior to purification. We investigated topic model-based inference methods to computationally address the heterogeneity issue in samples analyzed by LC/GC-MS. We observed that incorporation of scan-level features has the potential to lead to more accurate purification results by alleviating the loss in information as a result of integrating peaks. We believe cancer biomarker discovery studies that use mass spectrometric analysis of human biospecimens can greatly benefit from topic model-based purification of the data prior to statistical and pathway analyses.

  12. Encoding probabilistic brain atlases using Bayesian inference.

    PubMed

    Van Leemput, Koen

    2009-06-01

    This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.

  13. An Instructional Module on Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Wind, Stefanie A.

    2017-01-01

    Mokken scale analysis (MSA) is a probabilistic-nonparametric approach to item response theory (IRT) that can be used to evaluate fundamental measurement properties with less strict assumptions than parametric IRT models. This instructional module provides an introduction to MSA as a probabilistic-nonparametric framework in which to explore…

  14. Lossed in translation: an off-the-shelf method to recover probabilistic beliefs from loss-averse agents.

    PubMed

    Offerman, Theo; Palley, Asa B

    2016-01-01

    Strictly proper scoring rules are designed to truthfully elicit subjective probabilistic beliefs from risk neutral agents. Previous experimental studies have identified two problems with this method: (i) risk aversion causes agents to bias their reports toward the probability of [Formula: see text], and (ii) for moderate beliefs agents simply report [Formula: see text]. Applying a prospect theory model of risk preferences, we show that loss aversion can explain both of these behavioral phenomena. Using the insights of this model, we develop a simple off-the-shelf probability assessment mechanism that encourages loss-averse agents to report true beliefs. In an experiment, we demonstrate the effectiveness of this modification in both eliminating uninformative reports and eliciting true probabilistic beliefs.

  15. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based on the most elemental level. A multifactor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled by finite elements. The solution method for structural responses is an updated simulation scheme. The results show that the risk is about 0.0001 for the two-rotor engine and also about 0.0001 for the composite built-up structure.

  16. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based on the most elemental level. A multi-factor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled by finite elements. The solution method for structural responses is an updated simulation scheme. The results show that the risk is about 0.0001 for the two-rotor engine and also about 0.0001 for the composite built-up structure.

  17. High-Resolution Underwater Mapping Using Side-Scan Sonar

    PubMed Central

    2016-01-01

    The goal of this study is to generate high-resolution sea floor maps using a Side-Scan Sonar (SSS). This is achieved by explicitly taking into account the SSS operation as follows. First, the raw sensor data is corrected by means of a physics-based SSS model. Second, the data is projected to the sea-floor. The errors involved in this projection are thoroughly analysed. Third, a probabilistic SSS model is defined and used to estimate the probability of each sea-floor region being observed. This probabilistic information is then used to weight the contribution of each SSS measurement to the map. Because of these models, arbitrary map resolutions can be achieved, even beyond the sensor resolution. Finally, a geometric map building method is presented and combined with the probabilistic approach. The resulting map is composed of two layers. The echo intensity layer holds the most likely echo intensities at each point on the sea-floor. The probabilistic layer contains information about how confident the user or the higher control layers can be about the echo intensity layer data. Experiments have been conducted in a large subsea region. PMID:26821379

  18. Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.

    PubMed

    Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong

    2015-11-01

    Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach.

  19. MrLavaLoba: A new probabilistic model for the simulation of lava flows as a settling process

    NASA Astrophysics Data System (ADS)

    de'Michieli Vitturi, Mattia; Tarquini, Simone

    2018-01-01

    A new code to simulate lava flow spread, MrLavaLoba, is presented. In the code, erupted lava is itemized in parcels having an elliptical shape and prescribed volume. New parcels bud from existing ones according to a probabilistic law influenced by the local steepest slope direction and by tunable input settings. MrLavaLoba belongs among the probabilistic codes for the simulation of lava flows, because it is not intended to mimic the actual process of flowing or to provide directly the time progression of the flow field, but rather to estimate the most probable inundated area and the final thickness of the lava deposit. The code's flexibility allows it to produce variable lava flow spread and emplacement according to different dynamics (e.g. pahoehoe or channelized-'a'ā). For a given scenario, it is shown that model outputs converge, in probabilistic terms, towards a single solution. The code is applied to real cases in Hawaii and at Mt. Etna, and the obtained maps are shown. The model is written in Python and the source code is available at http://demichie.github.io/MrLavaLoba/.

  20. Probabilistic distributions of pinhole defects in atomic layer deposited films on polymeric substrates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yersak, Alexander S., E-mail: alexander.yersak@colorado.edu; Lee, Yung-Cheng

    Pinhole defects in atomic layer deposition (ALD) coatings were measured in an area of 30 cm² in an ALD reactor, and these defects were represented by a probabilistic cluster model instead of a single defect density value with number of defects over area. With the probabilistic cluster model, the pinhole defects were simulated over a manufacturing scale surface area of ∼1 m². Large-area pinhole defect simulations were used to develop an improved and enhanced design method for ALD-based devices. A flexible thermal ground plane (FTGP) device requiring ALD hermetic coatings was used as an example. Using a single defect density value, it was determined that for an application with operation temperatures higher than 60 °C, the FTGP device would not be possible. The new probabilistic cluster model shows that up to 40.3% of the FTGP would be acceptable. With this new approach the manufacturing yield of ALD-enabled or other thin film based devices with different design configurations can be determined. It is important to guide process optimization and control and design for manufacturability.

  1. Bayesian Probabilistic Projections of Life Expectancy for All Countries

    PubMed Central

    Raftery, Adrian E.; Chunn, Jennifer L.; Gerland, Patrick; Ševčíková, Hana

    2014-01-01

    We propose a Bayesian hierarchical model for producing probabilistic forecasts of male period life expectancy at birth for all the countries of the world from the present to 2100. Such forecasts would be an input to the production of probabilistic population projections for all countries, which is currently being considered by the United Nations. To evaluate the method, we did an out-of-sample cross-validation experiment, fitting the model to the data from 1950–1995, and using the estimated model to forecast for the subsequent ten years. The ten-year predictions had a mean absolute error of about 1 year, about 40% less than the current UN methodology. The probabilistic forecasts were calibrated, in the sense that (for example) the 80% prediction intervals contained the truth about 80% of the time. We illustrate our method with results from Madagascar (a typical country with steadily improving life expectancy), Latvia (a country that has had a mortality crisis), and Japan (a leading country). We also show aggregated results for South Asia, a region with eight countries. Free publicly available R software packages called bayesLife and bayesDem are available to implement the method. PMID:23494599

  2. Automatic Prediction of Protein 3D Structures by Probabilistic Multi-template Homology Modeling.

    PubMed

    Meier, Armin; Söding, Johannes

    2015-10-01

    Homology modeling predicts the 3D structure of a query protein based on the sequence alignment with one or more template proteins of known structure. Its great importance for biological research is owed to its speed, simplicity, reliability and wide applicability, covering more than half of the residues in protein sequence space. Although multiple templates have been shown to generally increase model quality over single templates, the information from multiple templates has so far been combined using empirically motivated, heuristic approaches. We present here a rigorous statistical framework for multi-template homology modeling. First, we find that the query proteins' atomic distance restraints can be accurately described by two-component Gaussian mixtures. This insight allowed us to apply the standard laws of probability theory to combine restraints from multiple templates. Second, we derive theoretically optimal weights to correct for the redundancy among related templates. Third, a heuristic template selection strategy is proposed. We improve the average GDT-ha model quality score by 11% over single template modeling and by 6.5% over a conventional multi-template approach on a set of 1000 query proteins. Robustness with respect to wrong constraints is likewise improved. We have integrated our multi-template modeling approach with the popular MODELLER homology modeling software in our free HHpred server http://toolkit.tuebingen.mpg.de/hhpred and also offer open source software for running MODELLER with the new restraints at https://bitbucket.org/soedinglab/hh-suite.
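
    The paper's first observation, that template-derived atomic distance restraints are well described by two-component Gaussian mixtures, can be sketched with scikit-learn on made-up distances for a single atom pair; the numbers below are illustrative only.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical template-derived distances for one query atom pair (Angstrom):
    # a tight cluster from close homologs plus a broad background component.
    rng = np.random.default_rng(0)
    d = np.concatenate([rng.normal(8.2, 0.3, 40), rng.normal(10.0, 2.0, 10)]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(d)
    print(gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel()))
    ```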

  3. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
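
    The paper's recommendation to report cross-validation rather than resubstitution accuracy is easy to illustrate; the synthetic presence/absence data below stand in for the lichen surveys and are not the study's data.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical presence/absence records with environmental predictors
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 0).astype(int)

    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    resub = tree.score(X, y)                                   # resubstitution accuracy (optimistic)
    cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()
    print(f"resubstitution = {resub:.2f}, 10-fold CV = {cv:.2f}")
    ```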

  4. Multi-parametric variational data assimilation for hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.

    2017-12-01

    Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently enable better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at the representation of meteorological uncertainty but neglects uncertainty of the hydrological model as well as its initial conditions. Complementary approaches use probabilistic data assimilation techniques to obtain a variety of initial states or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data, as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e. observed data as forcing variables) as well as of deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of rank histogram and produces a narrower ensemble spread.

  5. Probabilistic fatigue life prediction of metallic and composite materials

    NASA Astrophysics Data System (ADS)

    Xiang, Yibing

    Fatigue is one of the most common failure modes for engineering structures, such as aircraft, rotorcraft and aviation transports. Both metallic materials and composite materials are widely used and affected by fatigue damage. Huge uncertainties arise from material properties, measurement noise, imperfect models, future anticipated loads and environmental conditions. These uncertainties are critical issues for accurate remaining useful life (RUL) prediction for engineering structures in service. Probabilistic fatigue prognosis considering various uncertainties is of great importance for structural safety. The objective of this study is to develop probabilistic fatigue life prediction models for metallic materials and composite materials. A fatigue model based on crack growth analysis and the equivalent initial flaw size concept is proposed for metallic materials. Following this, the developed model is extended to include structural geometry effects (notch effect), environmental effects (corroded specimens) and manufacturing effects (shot peening effects). Due to their inhomogeneity and anisotropy, the fatigue model suitable for metallic materials cannot be directly applied to composite materials. A composite fatigue life prediction model is proposed based on a mixed-mode delamination growth model and a stiffness degradation law. After the development of deterministic fatigue models for metallic and composite materials, a general probabilistic life prediction methodology is developed. The proposed methodology employs an efficient Inverse First-Order Reliability Method (IFORM) for the uncertainty propagation in fatigue life prediction. An equivalent stress transformation has been developed to enhance the computational efficiency under realistic random amplitude loading. A systematic reliability-based maintenance optimization framework is proposed for fatigue risk management and mitigation of engineering structures.
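    A minimal sketch of the kind of crack-growth-based probabilistic life calculation described here, assuming Paris-law growth from a random equivalent initial flaw size (all parameter values are illustrative placeholders, not the dissertation's calibrated values):

        import numpy as np

        rng = np.random.default_rng(1)
        C, m = 1e-11, 3.0        # Paris constants (units consistent with MPa, m)
        dS = 200.0               # constant-amplitude stress range [MPa]
        a_crit = 0.02            # critical crack length [m]
        a0 = rng.lognormal(np.log(2e-4), 0.3, size=5000)  # random EIFS [m]

        # Closed-form integration of da/dN = C*(dS*sqrt(pi*a))**m for m != 2.
        e = 1.0 - m / 2.0
        N = (a_crit**e - a0**e) / (C * (dS * np.sqrt(np.pi))**m * e)

        print(f"median life: {np.median(N):.2e} cycles")
        print(f"1st percentile life: {np.percentile(N, 1):.2e} cycles")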

  6. Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.

    2017-12-01

    It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable for rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features of the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation, which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other, providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating the information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.

  7. Boosting Probabilistic Graphical Model Inference by Incorporating Prior Knowledge from Multiple Sources

    PubMed Central

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291
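    The Noisy-OR combination of information sources can be sketched as follows (a minimal illustration under assumed source confidences, not the authors' implementation):

        import numpy as np

        def noisy_or_edge_prior(source_confidences, leak=0.01):
            """P(edge) = 1 - (1 - leak) * prod(1 - c_i) for source confidences c_i."""
            c = np.asarray(source_confidences, dtype=float)
            return 1.0 - (1.0 - leak) * np.prod(1.0 - c)

        # Hypothetical supports from a pathway database, GO co-annotation and domain data.
        print(noisy_or_edge_prior([0.7, 0.2, 0.4]))  # strongest source dominates
        print(noisy_or_edge_prior([0.0, 0.0, 0.0]))  # only the leak term remains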

  8. Development of probabilistic thinking-oriented learning tools for probability materials at junior high school students

    NASA Astrophysics Data System (ADS)

    Sari, Dwi Ivayana; Hermanto, Didik

    2017-08-01

    This research is a developmental study of probabilistic thinking-oriented learning tools for probability material at the ninth-grade level. The study aimed to produce good probabilistic thinking-oriented learning tools. The subjects were the IX-A students of MTs Model Bangkalan. The development followed the 4-D model, modified here to define, design and develop. The teaching and learning tools consist of a lesson plan, a students' worksheet, teaching media and a students' achievement test. The research instruments were a learning-tools validation sheet, a teachers' activities sheet, a students' activities sheet, a students' response questionnaire and the students' achievement test. The results from these instruments were analyzed descriptively to answer the research objectives. The outcome was a set of valid probabilistic thinking-oriented learning tools for teaching probability to ninth-grade students. After revision based on the validation and a classroom trial, the teachers' classroom management was rated effective, the students' activities were good, the students' responses to the learning tools were positive, and the achievement test met the validity, sensitivity and reliability criteria. In summary, these learning tools can be used by teachers to teach probability and to develop students' probabilistic thinking.

  9. Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework

    NASA Astrophysics Data System (ADS)

    Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.

    2018-01-01

    Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step to better represent the temporal variability. The question addressed in this paper is: how to simulate spatiotemporal characteristics of multisite daily precipitation from probabilistic regression models? Recent publications point out the complexity of multisite properties of daily precipitation and highlight the need for using a non-Gaussian flexible tool. This work proposes a reasonable compromise between simplicity and flexibility, avoiding model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model which merges a vector generalized linear model (VGLM, as a probabilistic regression tool) and the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e. Gaussian copula).

  10. Modeling the Effect of Reward Amount on Probability Discounting

    ERIC Educational Resources Information Center

    Myerson, Joel; Green, Leonard; Morris, Joshua

    2011-01-01

    The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect…

  11. Probabilistic Priority Message Checking Modeling Based on Controller Area Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    Although the probabilistic model checking tool PRISM has been applied to many communication systems, such as wireless local area networks, Bluetooth, and ZigBee, the technique has not been used for the controller area network (CAN). In this paper, we use PRISM to model the mechanism of priority messages for CAN because this mechanism has allowed CAN to become the leader in serial communication for automotive and industrial control. Through modeling CAN, it is easy to analyze its characteristics with a view to further improving the security and efficiency of automobiles. The Markov chain model helps us to model the behaviour of priority messages.

  12. Probabilistic Modeling of Intracranial Pressure Effects on Optic Nerve Biomechanics

    NASA Technical Reports Server (NTRS)

    Ethier, C. R.; Feola, Andrew J.; Raykin, Julia; Myers, Jerry G.; Nelson, Emily S.; Samuels, Brian C.

    2016-01-01

    Altered intracranial pressure (ICP) is implicated in several ocular conditions: papilledema, glaucoma and Visual Impairment and Intracranial Pressure (VIIP) syndrome. The biomechanical effects of altered ICP on optic nerve head (ONH) tissues in these conditions are uncertain but likely important. We have quantified ICP-induced deformations of ONH tissues, using finite element (FE) and probabilistic modeling (Latin Hypercube Simulations (LHS)) to consider a range of tissue properties and relevant pressures.
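    A minimal sketch of the Latin hypercube sampling step that drives such probabilistic FE studies (parameter names and ranges are illustrative assumptions, not the model's actual inputs):

        from scipy.stats import qmc

        ranges = {
            "sclera_modulus_MPa": (1.0, 5.0),
            "lamina_modulus_MPa": (0.1, 1.0),
            "ICP_mmHg": (5.0, 25.0),
            "IOP_mmHg": (10.0, 25.0),
        }
        sampler = qmc.LatinHypercube(d=len(ranges), seed=0)
        unit = sampler.random(n=100)                    # 100 runs, stratified per parameter
        lows = [lo for lo, _ in ranges.values()]
        highs = [hi for _, hi in ranges.values()]
        designs = qmc.scale(unit, lows, highs)          # each row = inputs for one FE run
        print(designs[:3])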

  13. Geothermal probabilistic cost study

    NASA Technical Reports Server (NTRS)

    Orren, L. H.; Ziman, G. M.; Jones, S. C.; Lee, T. K.; Noll, R.; Wilde, L.; Sadanand, V.

    1981-01-01

    A tool is presented to quantify the risks of geothermal projects, the Geothermal Probabilistic Cost Model (GPCM). The GPCM model was used to evaluate a geothermal reservoir for a binary-cycle electric plant at Heber, California. Three institutional aspects of geothermal risk that can shift the risk among different agents were analyzed. The leasing of geothermal land, contracting between the producer and the user of the geothermal heat, and insurance against faulty performance were examined.

  14. MATILDA Version 2: Rough Earth TIALD Model for Laser Probabilistic Risk Assessment in Hilly Terrain - Part I

    DTIC Science & Technology

    2017-03-13

    support of airborne laser designator use during test and training exercises on military ranges. The initial MATILDA tool, MATILDA PRO Version-1.6.1...was based on the 2007 PRA model developed to perform range safety clearances for the UK Thermal Imaging Airborne Laser Designator (TIALD) system...AFRL Technical Reports. This Technical Report, designated Part I, contains documentation of the computational procedures for probabilistic fault

  15. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation of the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of a target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
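    The joint detection probability under independent probabilistic sensors can be sketched as below (the distance-decay sensing function and threshold are illustrative assumptions, not the paper's exact model):

        import numpy as np

        def joint_detection(per_sensor_probs):
            """P(detected by at least one sensor) = 1 - prod(1 - p_i)."""
            p = np.asarray(per_sensor_probs, dtype=float)
            return 1.0 - np.prod(1.0 - p)

        # Hypothetical detection probabilities decaying with target-sensor distance.
        distances = np.array([4.0, 9.0, 15.0])
        p_i = np.exp(-0.15 * distances)
        eps = 0.9
        p_joint = joint_detection(p_i)
        print(f"joint detection: {p_joint:.3f}, epsilon-covered: {p_joint >= eps}")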

  16. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne

    2007-01-01

    A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.

  17. Assessing the empirical validity of the "take-the-best" heuristic as a model of human probabilistic inference.

    PubMed

    Bröder, A

    2000-09-01

    The boundedly rational "Take-The-Best" heuristic (TTB) was proposed by G. Gigerenzer, U. Hoffrage, and H. Kleinbölting (1991) as a model of fast and frugal probabilistic inferences. Although the simple lexicographic rule proved to be successful in computer simulations, direct empirical demonstrations of its adequacy as a psychological model are lacking because of several methodological problems. In 4 experiments with a total of 210 participants, this question was addressed. Whereas Experiment 1 showed that TTB is not valid as a universal hypothesis about probabilistic inferences, up to 28% of participants in Experiment 2 and 53% of participants in Experiment 3 were classified as TTB users. Experiment 4 revealed that investment costs for information seem to be a relevant factor leading participants to switch to a noncompensatory TTB strategy. The observed individual differences in strategy use imply the recommendation of an idiographic approach to decision-making research.
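    The lexicographic Take-The-Best rule itself is easily sketched (a minimal illustration with invented cue patterns, not the original experimental materials):

        def take_the_best(cues_a, cues_b, validity_order):
            """cues_* map cue name -> +1 / -1 / 0; cues are searched by validity."""
            for cue in validity_order:           # highest-validity cue first
                a, b = cues_a.get(cue, 0), cues_b.get(cue, 0)
                if a != b:                       # first discriminating cue decides
                    return "A" if a > b else "B"
            return "guess"                       # no cue discriminates

        city_a = {"capital": 1, "airport": 1, "team": 0}
        city_b = {"capital": 0, "airport": 1, "team": 1}
        print(take_the_best(city_a, city_b, ["capital", "airport", "team"]))  # -> "A"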

  18. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.

  19. Probabilistic vs. non-probabilistic approaches to the neurobiology of perceptual decision-making

    PubMed Central

    Drugowitsch, Jan; Pouget, Alexandre

    2012-01-01

    Optimal binary perceptual decision making requires accumulation of evidence in the form of a probability distribution that specifies the probability of the choices being correct given the evidence so far. Reward rates can then be maximized by stopping the accumulation when the confidence about either option reaches a threshold. Behavioral and neuronal evidence suggests that humans and animals follow such a probabilistic decision strategy, although its neural implementation has yet to be fully characterized. Here we show that diffusion decision models and attractor network models provide an approximation to the optimal strategy only under certain circumstances. In particular, neither model type is sufficiently flexible to encode the reliability of both the momentary and the accumulated evidence, which is a prerequisite to accumulating evidence of time-varying reliability. Probabilistic population codes, in contrast, can encode these quantities and, as a consequence, have the potential to implement the optimal strategy accurately. PMID:22884815

  20. Probabilistic Human Health Risk Assessment of Chemical Mixtures: Hydro-Toxicological Interactions and Controlling Factors

    NASA Astrophysics Data System (ADS)

    Henri, C.; Fernandez-Garcia, D.; de Barros, F.

    2014-12-01

    Improper disposal of hazardous wastes in most industrial countries gives rise to severe groundwater contamination problems that can lead to adverse health effects in humans. Risk assessment methods therefore play an important role in protecting the population by (1) quantifying the impact of an aquifer contamination on human health and (2) aiding the decision-making process to better manage our groundwater resources. Many reactive compounds such as chlorinated solvents or nitrate potentially undergo attenuation processes under common geochemical conditions. Based on this, monitored natural attenuation has nowadays become an attractive remediation solution. However, in some cases, intermediate degradation products can constitute noxious chemical compounds before reaching a harmless chemical form. In these cases, the joint effect of advection-dispersion transport and the species-dependent kinetic reactions and toxicities will dictate the relative importance of the degradation byproducts to the total risk. This renders the interpretation of risk a non-trivial task. In this presentation, we quantify, through a probabilistic framework, the human health risk posed by a chemical mixture in a heterogeneous aquifer. This work focuses on a Perchloroethylene contamination problem followed by the first-order production/biodegradation of its daughter species Trichloroethylene, Dichloroethylene and Vinyl Chloride, the last of which is known to be highly toxic. Uncertainty in the hydraulic conductivity field is considered through a Monte Carlo scheme. A comparative description of human health risk metrics as a function of aquifer heterogeneity and contaminant injection mode is provided by means of a spatial characterization of the lower-order statistical moments and empirical probability density functions of both individual and total risks. Interestingly, we show that the human health risk of a chemical mixture is mainly controlled by a modified Damköhler number that expresses the joint effect of contaminant mean travel times, reaction kinetics and chemical toxicity. From this, connectivity is shown to produce a significant and non-trivial impact on risk measures. The impact of connectivity can be potentially beneficial or detrimental to the magnitude of human health risk depending on the modified Damköhler number.
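    The role of the species-dependent kinetics and toxicities can be sketched with a first-order sequential decay chain and a toxicity-weighted mixture proxy (rate constants and weights are illustrative assumptions; dispersion and dose-response details are omitted):

        import numpy as np
        from scipy.integrate import solve_ivp

        k = np.array([0.5, 0.3, 0.2, 0.1])    # 1/yr decay rates: PCE, TCE, DCE, VC
        tox = np.array([1.0, 2.0, 1.0, 10.0]) # relative toxicity weights (VC most toxic)

        def chain(t, c):
            dc = np.empty(4)
            dc[0] = -k[0] * c[0]
            dc[1:] = k[:3] * c[:3] - k[1:] * c[1:]  # production from parent minus own decay
            return dc

        tau = np.linspace(0.0, 20.0, 201)           # travel times [yr]
        sol = solve_ivp(chain, (0.0, 20.0), [1.0, 0.0, 0.0, 0.0], t_eval=tau)
        mixture_proxy = tox @ sol.y                 # toxicity-weighted mixture vs travel time
        print(f"travel time maximizing the mixture proxy: {tau[np.argmax(mixture_proxy)]:.1f} yr")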

  1. A geomorphic approach to 100-year floodplain mapping for the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Jafarzadegan, Keighobad; Merwade, Venkatesh; Saksena, Siddharth

    2018-06-01

    Floodplain mapping using hydrodynamic models is difficult in data-scarce regions. Additionally, using hydrodynamic models to map floodplains over a large stream network can be computationally challenging. Some of these limitations of floodplain mapping using hydrodynamic modeling can be overcome by developing computationally efficient statistical methods to identify floodplains in large and ungauged watersheds using publicly available data. This paper proposes a geomorphic model to generate probabilistic 100-year floodplain maps for the Conterminous United States (CONUS). The proposed model first categorizes the watersheds in the CONUS into three classes based on the height of the water surface corresponding to the 100-year flood above the streambed. Next, the probability that any watershed in the CONUS belongs to one of these three classes is computed through supervised classification using watershed characteristics related to topography, hydrography, land use and climate. The result of this classification is then fed into a probabilistic threshold binary classifier (PTBC) to generate the probabilistic 100-year floodplain maps. The supervised classification algorithm is trained using the 100-year Flood Insurance Rate Maps (FIRM) from the U.S. Federal Emergency Management Agency (FEMA). FEMA FIRMs are also used to validate the performance of the proposed model in areas not included in the training. Additionally, HEC-RAS model generated flood inundation extents are used to validate the model performance at fifteen sites that lack FEMA maps. Validation results show that the probabilistic 100-year floodplain maps generated by the proposed model match well with both FEMA and HEC-RAS generated maps. On average, the error of predicted flood extents is around 14% across the CONUS. The high accuracy of the validation results shows the reliability of the geomorphic model as an alternative approach for fast and cost-effective delineation of 100-year floodplains for the CONUS.

  2. Disruption of the Right Temporoparietal Junction Impairs Probabilistic Belief Updating.

    PubMed

    Mengotti, Paola; Dombert, Pascasie L; Fink, Gereon R; Vossel, Simone

    2017-05-31

    Generating and updating probabilistic models of the environment is a fundamental modus operandi of the human brain. Although crucial for various cognitive functions, the neural mechanisms of these inference processes remain to be elucidated. Here, we show the causal involvement of the right temporoparietal junction (rTPJ) in updating probabilistic beliefs and we provide new insights into the chronometry of the process by combining online transcranial magnetic stimulation (TMS) with computational modeling of behavioral responses. Female and male participants performed a modified location-cueing paradigm, where false information about the percentage of cue validity (%CV) was provided in half of the experimental blocks to prompt updating of prior expectations. Online double-pulse TMS over rTPJ 300 ms (but not 50 ms) after target appearance selectively decreased participants' updating of false prior beliefs concerning %CV, reflected in a decreased learning rate of a Rescorla-Wagner model. Online TMS over rTPJ also impacted on participants' explicit beliefs, causing them to overestimate %CV. These results confirm the involvement of rTPJ in updating of probabilistic beliefs, thereby advancing our understanding of this area's function during cognitive processing. SIGNIFICANCE STATEMENT Contemporary views propose that the brain maintains probabilistic models of the world to minimize surprise about sensory inputs. Here, we provide evidence that the right temporoparietal junction (rTPJ) is causally involved in this process. Because neuroimaging has suggested that rTPJ is implicated in divergent cognitive domains, the demonstration of an involvement in updating internal models provides a novel unifying explanation for these findings. We used computational modeling to characterize how participants change their beliefs after new observations. By interfering with rTPJ activity through online transcranial magnetic stimulation, we showed that participants were less able to update prior beliefs with TMS delivered at 300 ms after target onset. Copyright © 2017 the authors 0270-6474/17/375419-10$15.00/0.
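    The reported reduction in learning rate maps directly onto the Rescorla-Wagner delta rule; a minimal sketch (with invented trial outcomes, not the study's fitted data) is:

        def rescorla_wagner(prior_cv, outcomes, alpha):
            """outcomes: 1 if the cue was valid on a trial, 0 otherwise."""
            belief = prior_cv
            for o in outcomes:
                belief += alpha * (o - belief)   # delta-rule update toward the outcome
            return belief

        trials = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]          # true validity ~30%, told 80%
        print(rescorla_wagner(0.8, trials, alpha=0.3))    # unperturbed: belief falls quickly
        print(rescorla_wagner(0.8, trials, alpha=0.1))    # reduced alpha (TMS): belief stays high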

  3. Exploring the calibration of a wind forecast ensemble for energy applications

    NASA Astrophysics Data System (ADS)

    Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne

    2015-04-01

    In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSOs) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is depicted by the ensemble spread from the first forecast hours. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational at DWD in 2012. The ensemble consists of 20 ensemble members driven by four different global models. The model area includes the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of wind turbines in wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying different methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications. In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.
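    A minimal sketch of a univariate EMOS-style calibration of the kind explored here, where the predictive distribution is N(a + b*xbar, c + d*s2) with xbar and s2 the raw ensemble mean and variance (the synthetic data and maximum-likelihood fitting are assumptions for illustration, not the operational EWeLiNE setup):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        n, m = 400, 20
        obs = rng.gamma(4.0, 2.0, size=n)                          # "observed" wind [m/s]
        ens = obs[:, None] + rng.normal(1.0, 1.5, size=(n, m))     # biased raw ensemble
        xbar, s2 = ens.mean(axis=1), ens.var(axis=1)

        def nll(theta):
            a, b, log_c, log_d = theta
            mu = a + b * xbar
            sigma = np.sqrt(np.exp(log_c) + np.exp(log_d) * s2)
            return -norm.logpdf(obs, mu, sigma).sum()

        a, b, log_c, log_d = minimize(nll, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead").x
        print(f"calibrated mean: mu = {a:.2f} + {b:.2f} * ensemble mean")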

  4. Quick probabilistic binary image matching: changing the rules of the game

    NASA Astrophysics Data System (ADS)

    Mustafa, Adnan A. Y.

    2016-09-01

    A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space as opposed to a linear search in size space. With PMMBI a complete model is available to predict the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as low as two pixels in some cases. PMMBI is image size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
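    The core idea, that a few random pixel comparisons suffice to reject dissimilar images because k agreeing samples occur with probability (1 - d)^k when a fraction d of pixels differ, can be sketched as follows (a minimal illustration, not PMMBI itself):

        import numpy as np

        def quick_mismatch_test(img_a, img_b, max_checks=32, rng=None):
            rng = rng or np.random.default_rng()
            idx = rng.integers(0, img_a.size, size=max_checks)
            for n, i in enumerate(idx, start=1):
                if img_a.flat[i] != img_b.flat[i]:
                    return False, n      # declared dissimilar after n comparisons
            return True, max_checks      # no disagreement within the budget

        rng = np.random.default_rng(3)
        a = rng.integers(0, 2, size=(256, 256))
        b = a.copy()
        b[:, :128] ^= 1                  # flip half the pixels -> d = 0.5
        print(quick_mismatch_test(a, b, rng=rng))  # usually rejects within ~2 checks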

  5. Probabilistic analysis for fatigue strength degradation of materials

    NASA Technical Reports Server (NTRS)

    Royce, Lola

    1989-01-01

    This report presents the results of the first year of a research program conducted for NASA-LeRC by the University of Texas at San Antonio. The research included development of methodology that provides a probabilistic treatment of lifetime prediction of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Linear elastic fracture mechanics is utilized in the latter model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs, RANDOM2, RANDOM3, and RANDOM4. These programs determine the random lifetime of an engine component, in mechanical load cycles, to reach a critical fatigue strength or crack size. The material considered was a cast nickel base superalloy, one typical of those used in the Space Shuttle Main Engine.

  6. Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge.

    PubMed

    Lau, Jey Han; Clark, Alexander; Lappin, Shalom

    2017-07-01

    The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgements are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large-scale experiments using crowd-sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state-of-the-art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic. Copyright © 2016 Cognitive Science Society, Inc.
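    One common form of such an acceptability measure is a SLOR-style score that normalizes the language-model log-probability by sentence length and unigram (lexical-frequency) log-probability; a minimal sketch with invented numbers follows (an assumption for illustration, not necessarily the paper's exact measure):

        def slor(logprob_lm, token_unigram_logprobs):
            """(log P_LM(s) - sum_w log P_unigram(w)) / sentence length."""
            n = len(token_unigram_logprobs)
            return (logprob_lm - sum(token_unigram_logprobs)) / n

        # Hypothetical scores: a short sentence with rare words vs a longer, common one.
        short_rare = slor(-22.0, [-9.0, -8.5, -7.0])
        long_common = slor(-30.0, [-4.0] * 8)
        print(f"{short_rare:.2f} vs {long_common:.2f}")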

  7. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling

    PubMed Central

    Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W. F.; Jeelani, Owase; Dunaway, David J.; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face. PMID:29742139

  8. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    PubMed

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variables correlation assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction under-estimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  9. Analysis of the French insurance market exposure to floods: a stochastic model combining river overflow and surface runoff

    NASA Astrophysics Data System (ADS)

    Moncoulon, D.; Labat, D.; Ardon, J.; Onfroy, T.; Leblois, E.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.

    2013-07-01

    The analysis of flood exposure at a national scale for the French insurance market must combine the generation of a probabilistic event set of all possible but not yet observed flood situations with hazard and damage modeling. In this study, hazard and damage models are calibrated on a 1995-2012 historical event set, both for hazard results (river flow, flooded areas) and loss estimations. Thus, uncertainties in the deterministic estimation of a single event loss are known before simulating a probabilistic event set. To take into account at least 90% of the insured flood losses, the probabilistic event set must combine river overflow (small and large catchments) with the surface runoff due to heavy rainfall on the slopes of the watershed. Indeed, internal studies of the CCR claims database have shown that approximately 45% of the insured flood losses are located inside floodplains and 45% outside; the remaining 10% are due to sea-surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: generation of fictive river flows based on the historical records of the river gauge network, and generation of fictive rain fields on small catchments, calibrated on the 1958-2010 Météo-France rain database SAFRAN. All the events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate the flood losses at the national scale for an insurance company (MACIF) and to generate flood areas associated with hazard return periods. The flood maps concern river overflow and surface water runoff. Validation of these maps is conducted by comparison with address-located claims data on a small catchment (the downstream Argens).

  10. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner than the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.

  11. Effects of varying the step particle distribution on a probabilistic transport model

    NASA Astrophysics Data System (ADS)

    Bouzat, S.; Farengo, R.

    2005-12-01

    The consequences of varying the step particle distribution on a probabilistic transport model, which captures the basic features of transport in plasmas and was recently introduced in Ref. 1 [B. Ph. van Milligen et al., Phys. Plasmas 11, 2272 (2004)], are studied. Different superdiffusive transport mechanisms generated by a family of distributions with algebraic decays (Tsallis distributions) are considered. It is observed that the possibility of changing the superdiffusive transport mechanism improves the flexibility of the model for describing different situations. The use of the model to describe the low (L) and high (H) confinement modes is also analyzed.

  12. Probabilistic computer model of optimal runway turnoffs

    NASA Technical Reports Server (NTRS)

    Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.

    1985-01-01

    Landing delays are currently a problem at major air carrier airports and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model, which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category, is defined. The model includes an algorithm for lateral ride comfort limits.

  13. Automated Assessment of Disease Progression in Acute Myeloid Leukemia by Probabilistic Analysis of Flow Cytometry Data

    PubMed Central

    Rajwa, Bartek; Wallace, Paul K.; Griffiths, Elizabeth A.; Dundar, Murat

    2017-01-01

    Objective Flow cytometry (FC) is a widely acknowledged technology in diagnosis of acute myeloid leukemia (AML) and has been indispensable in determining progression of the disease. Although FC plays a key role as a post-therapy prognosticator and evaluator of therapeutic efficacy, the manual analysis of cytometry data is a barrier to optimization of reproducibility and objectivity. This study investigates the utility of our recently introduced non-parametric Bayesian framework in accurately predicting the direction of change in disease progression in AML patients using FC data. Methods The highly flexible non-parametric Bayesian model based on the infinite mixture of infinite Gaussian mixtures is used for jointly modeling data from multiple FC samples to automatically identify functionally distinct cell populations and their local realizations. Phenotype vectors are obtained by characterizing each sample by the proportions of recovered cell populations, which are in turn used to predict the direction of change in disease progression for each patient. Results We used 200 diseased and non-diseased immunophenotypic panels for training and tested the system with 36 additional AML cases collected at multiple time points. The proposed framework identified the change in direction of disease progression with accuracies of 90% (9 out of 10) for relapsing cases and 100% (26 out of 26) for the remaining cases. Conclusions We believe that these promising results are an important first step towards the development of automated predictive systems for disease monitoring and continuous response evaluation. Significance Automated measurement and monitoring of therapeutic response is critical not only for objective evaluation of disease status prognosis but also for timely assessment of treatment strategies. PMID:27416585

  14. Discrete-Slots Models of Visual Working-Memory Response Times

    PubMed Central

    Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.

    2014-01-01

    Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956

  15. Probabilistic flood damage modelling at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2014-05-01

    Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand by comparing the results with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties in damage estimation remain high. Thus, the significant advantage of the probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
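    The bagging-decision-tree idea behind such models, keeping the per-tree predictions as an empirical damage distribution instead of averaging them away, can be sketched as below (synthetic predictors and scikit-learn bagging are assumptions for illustration, not the BT-FLEMO implementation):

        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(4)
        X = rng.normal(size=(500, 5))   # e.g. water depth, duration, building value, ...
        y = 10_000 * np.maximum(X[:, 0], 0) + 2_000 * rng.normal(size=500)  # synthetic loss

        model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200,
                                 random_state=0).fit(X, y)

        x_new = rng.normal(size=(1, 5))
        per_tree = np.array([t.predict(x_new)[0] for t in model.estimators_])
        print(f"median damage estimate: {np.median(per_tree):,.0f}")
        print(f"5-95% range: {np.percentile(per_tree, 5):,.0f} to {np.percentile(per_tree, 95):,.0f}")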

  16. Comparative assessment of analytical approaches to quantify the risk for introduction of rare animal diseases: the example of avian influenza in Spain.

    PubMed

    Sánchez-Vizcaíno, Fernando; Perez, Andrés; Martínez-López, Beatriz; Sánchez-Vizcaíno, José Manuel

    2012-08-01

    Trade of animals and animal products imposes an uncertain and variable risk for exotic animal diseases introduction into importing countries. Risk analysis provides importing countries with an objective, transparent, and internationally accepted method for assessing that risk. Over the last decades, European Union countries have conducted probabilistic risk assessments quite frequently to quantify the risk for rare animal diseases introduction into their territories. Most probabilistic animal health risk assessments have been typically classified into one-level and multilevel binomial models. One-level models are more simple than multilevel models because they assume that animals or products originate from one single population. However, it is unknown whether such simplification may result in substantially different results compared to those obtained through the use of multilevel models. Here, data used on a probabilistic multilevel binomial model formulated to assess the risk for highly pathogenic avian influenza introduction into Spain were reanalyzed using a one-level binomial model and their outcomes were compared. An alternative ordinal model is also proposed here, which makes use of simpler assumptions and less information compared to those required by traditional one-level and multilevel approaches. Results suggest that, at least under certain circumstances, results of the one-level and ordinal approaches are similar to those obtained using multilevel models. Consequently, we argue that, when data are insufficient to run traditional probabilistic models, the ordinal approach presented here may be a suitable alternative to rank exporting countries in terms of the risk that they impose for the spread of rare animal diseases into disease-free countries. © 2012 Society for Risk Analysis.
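    A one-level binomial release model of the kind compared here can be sketched in a few lines (the import volume and prevalence prior are invented placeholders, not the published assessment's inputs):

        import numpy as np

        rng = np.random.default_rng(5)
        n_birds = 20_000                          # hypothetical yearly import volume
        p = rng.beta(2, 20_000, size=100_000)     # uncertain per-bird prevalence
        risk = 1.0 - (1.0 - p) ** n_birds         # P(at least one infected bird per year)

        print(f"median yearly risk: {np.median(risk):.3f}")
        print(f"95% interval: {np.percentile(risk, 2.5):.3f} to {np.percentile(risk, 97.5):.3f}")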

  17. SU-E-I-87: Automated Liver Segmentation Method for CBCT Dataset by Combining Sparse Shape Composition and Probabilistic Atlas Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Liu, Li; Chen, Jinhu

    2014-06-01

    Purpose: The aim of this study was to extract liver structures from daily cone beam CT (CBCT) images automatically. Methods: Datasets were collected from 50 intravenous contrast planning CT images, which were regarded as the training dataset for probabilistic atlas and shape prior model construction. Firstly, a probabilistic atlas and a shape prior model based on sparse shape composition (SSC) were constructed by iterative deformable registration. Secondly, artifacts and noise were removed from the daily CBCT image by edge-preserving filtering using total variation with L1 norm (TV-L1). Furthermore, the initial liver region was obtained by registering the incoming CBCT image with the atlas utilizing edge-preserving deformable registration with a multi-scale strategy; the initial liver region was then converted to a surface mesh which was registered with the shape model, where the major variation of the specific patient was modeled by sparse vectors. At the last stage, the shape and intensity information were incorporated into a joint probabilistic model, and finally the liver structure was extracted by maximum a posteriori segmentation. Regarding the construction process, the manually segmented contours were first converted into meshes, and then an arbitrary patient dataset was chosen as the reference image to register with the rest of the training datasets by a deformable registration algorithm for constructing the probabilistic atlas and prior shape model. To improve the efficiency of the proposed method, the initial probabilistic atlas was used as the reference image to register with the other patient data for iterative construction, removing the bias caused by the arbitrary selection. Results: The experiment validated the accuracy of the segmentation results quantitatively by comparison with the manual ones. The volumetric overlap percentage between the automatically generated liver contours and the ground truth was on average 88%–95% for CBCT images. Conclusion: The experiment demonstrated that liver structures in CBCT images with artifacts can be extracted accurately for subsequent adaptive radiation therapy. This work is supported by National Natural Science Foundation of China (No. 61201441), Research Fund for Excellent Young and Middle-aged Scientists of Shandong Province (No. BS2012DX038), Project of Shandong Province Higher Educational Science and Technology Program (No. J12LN23), Jinan youth science and technology star (No. 20120109).

  18. On a true value of risk

    NASA Astrophysics Data System (ADS)

    Kozine, Igor

    2018-04-01

    The paper suggests looking at probabilistic risk quantities and concepts through the prism of accepting one of two views: whether a true value of risk exists or not. It is argued that discussions until now have been primarily focused on closely related topics that are different from the topic of the current paper. The paper examines the operational consequences of adhering to each of the views and contrasts them. It is demonstrated that operational differences concerning which probabilistic measures can be assessed, and how they can be interpreted, are tangible. In particular, this concerns prediction intervals, the use of Bayes' rule, models of complete ignorance, hierarchical models of uncertainty, assignment of probabilities over a possibility space and the interpretation of derived probabilistic measures. Behavioural implications of favouring either view are also briefly described.

  19. The Diagnostic Challenge Competition: Probabilistic Techniques for Fault Diagnosis in Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Ricks, Brian W.; Mengshoel, Ole J.

    2009-01-01

    Reliable systems health management is an important research area of NASA. A health management system that can accurately and quickly diagnose faults in various on-board systems of a vehicle will play a key role in the success of current and future NASA missions. We introduce in this paper the ProDiagnose algorithm, a diagnostic algorithm that uses a probabilistic approach, accomplished with Bayesian Network models compiled to Arithmetic Circuits, to diagnose these systems. We describe the ProDiagnose algorithm, how it works, and the probabilistic models involved. We show by experimentation on two Electrical Power Systems based on the ADAPT testbed, used in the Diagnostic Challenge Competition (DX 09), that ProDiagnose can produce results with over 96% accuracy and less than 1 second mean diagnostic time.

  20. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.

  1. Towards a multilevel cognitive probabilistic representation of space

    NASA Astrophysics Data System (ADS)

    Tapus, Adriana; Vasudevan, Shrihari; Siegwart, Roland

    2005-03-01

    This paper addresses the problem of perception and representation of space for a mobile agent. A probabilistic hierarchical framework is suggested as a solution to this problem. The method proposed is a combination of probabilistic belief with "Object Graph Models" (OGM). The world is viewed from a topological perspective, in terms of objects and the relationships between them. The hierarchical representation that we propose permits an efficient and reliable modeling of the information that the mobile agent would perceive from its environment. The integration of both navigational and interactional capabilities through efficient representation is also addressed. Experiments on a set of real-world images that validate the approach are reported. This framework draws on the general understanding of human cognition and perception and contributes towards the overall efforts to build cognitive robot companions.

  2. Random mechanics: Nonlinear vibrations, turbulences, seisms, swells, fatigue

    NASA Astrophysics Data System (ADS)

    Kree, P.; Soize, C.

    The random modeling of physical phenomena, together with probabilistic methods for the numerical calculation of random mechanical forces, are analytically explored. Attention is given to theoretical examinations such as probabilistic concepts, linear filtering techniques, and trajectory statistics. Applications of the methods to structures experiencing atmospheric turbulence, the quantification of turbulence, and the dynamic responses of the structures are considered. A probabilistic approach is taken to study the effects of earthquakes on structures and to the forces exerted by ocean waves on marine structures. Theoretical analyses by means of vector spaces and stochastic modeling are reviewed, as are Markovian formulations of Gaussian processes and the definition of stochastic differential equations. Finally, random vibrations with a variable number of links and linear oscillators undergoing the square of Gaussian processes are investigated.

  3. Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos

    2015-05-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.

  4. GENERAL: A modified weighted probabilistic cellular automaton traffic flow model

    NASA Astrophysics Data System (ADS)

    Zhuang, Qian; Jia, Bin; Li, Xin-Gang

    2009-08-01

    This paper modifies the weighted probabilistic cellular automaton model (Li X L, Kuang H, Song T, et al 2008 Chin. Phys. B 17 2366) which considered a diversity of traffic behaviors under real traffic situations induced by various driving characters and habits. In the new model, the effects of the velocity at the last time step and drivers' desire for acceleration are taken into account. The fundamental diagram, spatial-temporal diagram, and the time series of one-minute data are analyzed. The results show that this model reproduces synchronized flow. Finally, it simulates the on-ramp system with the proposed model. Some characteristics including the phase diagram are studied.

  5. Identification of failure type in corroded pipelines: a bayesian probabilistic approach.

    PubMed

    Breton, T; Sanchez-Gheno, J C; Alamilla, J L; Alvarez-Ramirez, J

    2010-07-15

    Spillover of hazardous materials from transport pipelines can lead to catastrophic events with serious and dangerous environmental impact, potential fire events and human fatalities. The problem is more serious for large pipelines when the construction material is under environmental corrosion conditions, as in the petroleum and gas industries. In this way, predictive models can provide a suitable framework for risk evaluation, maintenance policies and substitution procedure design that should be oriented to reduce increased hazards. This work proposes a Bayesian probabilistic approach to identify and predict the type of failure (leakage or rupture) for steel pipelines under realistic corroding conditions. In the first step of the modeling process, the mechanical performance of the pipe is considered for establishing conditions under which either leakage or rupture failure can occur. In the second step, experimental burst tests are used to introduce a mean probabilistic boundary defining a region where the type of failure is uncertain. In the boundary vicinity, the failure discrimination is carried out with a probabilistic model where the events are considered as random variables. In turn, the model parameters are estimated with available experimental data and contrasted with a real catastrophic event, showing good discrimination capacity. The results are discussed in terms of policies oriented to inspection and maintenance of large-size pipelines in the oil and gas industry.

  6. Opportunities of probabilistic flood loss models

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno

    2016-04-01

    Oftentimes, traditional uni-variate damage models such as depth-damage curves fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of the model results. Innovative multi-variate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved and thus for improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, with traditional stage-damage functions. For model evaluation we use empirical damage data available from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, which characterises the sharpness of the predictions and is measured by the proportion of observations that fall within the 5%–95% predictive interval. The comparison of the uni-variate stage-damage function and the multi-variate model approaches emphasises the importance of quantifying predictive uncertainty. With each explanatory variable, the multi-variate model reveals an additional source of uncertainty. However, the predictive performance in terms of bias (MBE), accuracy (MAE) and reliability (hit rate) is clearly improved in comparison to the uni-variate stage-damage function. Overall, probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
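
    As a rough illustration of the Bagging Decision Trees branch of this comparison, the sketch below trains a bagged tree ensemble on synthetic depth/duration/building records and computes the three kinds of scores named in the abstract: mean bias, mean absolute error, and the share of observations inside the 5%–95% predictive interval. The variable names and the synthetic damage relation are assumptions, not the authors' data.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

# synthetic "damage records": water depth [m], inundation duration [h], building size proxy
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([rng.uniform(0, 3, n), rng.uniform(1, 72, n), rng.uniform(0.5, 2, n)])
rloss = np.clip(0.2 * X[:, 0] + 0.001 * X[:, 1] + rng.normal(0, 0.05, n), 0, 1)  # relative loss

# split-sample test: one subset derives the model, the other evaluates it
X_tr, X_te, y_tr, y_te = train_test_split(X, rloss, test_size=0.3, random_state=0)
model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0).fit(X_tr, y_tr)

# per-tree predictions give an empirical predictive distribution per building
tree_preds = np.stack([tree.predict(X_te) for tree in model.estimators_])
mean_pred = tree_preds.mean(axis=0)
lo, hi = np.percentile(tree_preds, [5, 95], axis=0)

mbe = np.mean(mean_pred - y_te)                    # mean bias error
mae = np.mean(np.abs(mean_pred - y_te))            # mean absolute error
hit_rate = np.mean((y_te >= lo) & (y_te <= hi))    # share of obs. inside the 5-95% interval
```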

  7. A Probabilistic Tsunami Hazard Study of the Auckland Region, Part II: Inundation Modelling and Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Lane, E. M.; Gillibrand, P. A.; Wang, X.; Power, W.

    2013-09-01

    Regional source tsunamis pose a potentially devastating hazard to communities and infrastructure on the New Zealand coast. But major events are very uncommon. This dichotomy of infrequent but potentially devastating hazards makes realistic assessment of the risk challenging. Here, we describe a method to determine a probabilistic assessment of the tsunami hazard by regional source tsunamis with an "Average Recurrence Interval" of 2,500-years. The method is applied to the east Auckland region of New Zealand. From an assessment of potential regional tsunamigenic events over 100,000 years, the inundation of the Auckland region from the worst 100 events is modelled using a hydrodynamic model and probabilistic inundation depths on a 2,500-year time scale were determined. Tidal effects on the potential inundation were included by coupling the predicted wave heights with the probability density function of tidal heights at the inundation site. Results show that the more exposed northern section of the east coast and outer islands in the Hauraki Gulf face the greatest hazard from regional tsunamis in the Auckland region. Incorporating tidal effects into predictions of inundation reduced the predicted hazard compared to modelling all the tsunamis arriving at high tide giving a more accurate hazard assessment on the specified time scale. This study presents the first probabilistic analysis of dynamic modelling of tsunami inundation for the New Zealand coast and as such provides the most comprehensive assessment of tsunami inundation of the Auckland region from regional source tsunamis available to date.
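
    The tidal coupling can be pictured with a small Monte Carlo sketch: for a given modelled tsunami wave height, the exceedance probability of a total water level is obtained by averaging over samples drawn from the tidal height distribution rather than assuming arrival at high tide. The sinusoidal tide and the numbers below are placeholders, not the study's tidal statistics.

```python
import numpy as np

def exceedance_with_tide(wave_height, tide_samples, threshold):
    """P(total water level > threshold) for a tsunami of a given modelled height,
    averaged over the tidal-stage distribution instead of assuming high tide."""
    return np.mean(wave_height + tide_samples > threshold)

# placeholder tide: 1.4 m amplitude sinusoid sampled uniformly in phase
rng = np.random.default_rng(0)
tide = 1.4 * np.sin(rng.uniform(0.0, 2.0 * np.pi, 100_000))
print(exceedance_with_tide(wave_height=2.0, tide_samples=tide, threshold=3.0))  # ≈ 0.25
print(2.0 + 1.4 > 3.0)  # assuming arrival at high tide would instead flag certain exceedance
```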

  8. Probabilistic grammatical model for helix‐helix contact site classification

    PubMed Central

    2013-01-01

    Background Hidden Markov Models power many state‐of‐the‐art tools in the field of protein bioinformatics. While excelling in their tasks, these methods of protein analysis do not convey directly information on medium‐ and long‐range residue‐residue interactions. This requires an expressive power of at least context‐free grammars. However, application of more powerful grammar formalisms to protein analysis has been surprisingly limited. Results In this work, we present a probabilistic grammatical framework for problem‐specific protein languages and apply it to classification of transmembrane helix‐helix pairs configurations. The core of the model consists of a probabilistic context‐free grammar, automatically inferred by a genetic algorithm from only a generic set of expert‐based rules and positive training samples. The model was applied to produce sequence based descriptors of four classes of transmembrane helix‐helix contact site configurations. The highest performance of the classifiers reached AUCROC of 0.70. The analysis of grammar parse trees revealed the ability of representing structural features of helix‐helix contact sites. Conclusions We demonstrated that our probabilistic context‐free framework for analysis of protein sequences outperforms the state of the art in the task of helix‐helix contact site classification. However, this is achieved without necessarily requiring modeling long range dependencies between interacting residues. A significant feature of our approach is that grammar rules and parse trees are human‐readable. Thus they could provide biologically meaningful information for molecular biologists. PMID:24350601

  9. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.

  10. Finding models to detect Alzheimer's disease by fusing structural and neuropsychological information

    NASA Astrophysics Data System (ADS)

    Giraldo, Diana L.; García-Arteaga, Juan D.; Velasco, Nelson; Romero, Eduardo

    2015-12-01

    Alzheimer's disease (AD) is a neurodegenerative disease that affects higher brain functions. Initial diagnosis of AD is based on the patient's clinical history and a battery of neuropsychological tests. The accuracy of the diagnosis is highly dependent on the examiner's skills and on the evolution of a variable clinical frame. This work presents an automatic strategy that learns probabilistic brain models for different stages of the disease, reducing complexity, parameter adjustment and computational cost. The proposed method starts by setting a probabilistic class description using the information stored in the neuropsychological test, followed by constructing the different structural class models using membership values from the learned probabilistic functions. These models are then used as a reference frame for the classification problem: a new case is assigned to a particular class simply by projecting to the different models. The validation was performed using leave-one-out cross-validation with two classes: Normal Control (NC) subjects and patients diagnosed with mild AD. In this experiment it was possible to achieve a sensitivity and specificity of 80% and 79%, respectively.

  11. Probabilistic accounting of uncertainty in forecasts of species distributions under climate change

    Treesearch

    Seth J. Wenger; Nicholas A. Som; Daniel C. Dauwalter; Daniel J. Isaak; Helen M. Neville; Charles H. Luce; Jason B. Dunham; Michael K. Young; Kurt D. Fausch; Bruce E. Rieman

    2013-01-01

    Forecasts of species distributions under future climates are inherently uncertain, but there have been few attempts to describe this uncertainty comprehensively in a probabilistic manner. We developed a Monte Carlo approach that accounts for uncertainty within generalized linear regression models (parameter uncertainty and residual error), uncertainty among competing...

  12. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    ERIC Educational Resources Information Center

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  13. Recovering a Probabilistic Knowledge Structure by Constraining Its Parameter Space

    ERIC Educational Resources Information Center

    Stefanutti, Luca; Robusto, Egidio

    2009-01-01

    In the Basic Local Independence Model (BLIM) of Doignon and Falmagne ("Knowledge Spaces," Springer, Berlin, 1999), the probabilistic relationship between the latent knowledge states and the observable response patterns is established by the introduction of a pair of parameters for each of the problems: a lucky guess probability and a careless…

  14. Structural reliability methods: Code development status

    NASA Astrophysics Data System (ADS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-05-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  15. Structural reliability methods: Code development status

    NASA Technical Reports Server (NTRS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-01-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  16. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  17. Discounting of Monetary Rewards that are Both Delayed and Probabilistic: Delay and Probability Combine Multiplicatively, not Additively

    PubMed Central

    Vanderveldt, Ariana; Green, Leonard; Myerson, Joel

    2014-01-01

    The value of an outcome is affected both by the delay until its receipt (delay discounting) and by the likelihood of its receipt (probability discounting). Despite being well-described by the same hyperboloid function, delay and probability discounting involve fundamentally different processes, as revealed, for example, by the differential effects of reward amount. Previous research has focused on the discounting of delayed and probabilistic rewards separately, with little research examining more complex situations in which rewards are both delayed and probabilistic. In two experiments, participants made choices between smaller rewards that were both immediate and certain and larger rewards that were both delayed and probabilistic. Analyses revealed significant interactions between delay and probability factors inconsistent with an additive model. In contrast, a hyperboloid discounting model in which delay and probability were combined multiplicatively provided an excellent fit to the data. These results suggest that the hyperboloid is a good descriptor of decision making in complicated monetary choice situations like those people encounter in everyday life. PMID:24933696
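
    A hedged sketch of the multiplicative hyperboloid form suggested by these results is given below in generic notation; the symbols (k, h, s_D, s_P) are illustrative and some formulations share a single exponent, so this should be read as the shape of the model rather than the authors' exact parameterization.

```latex
% Generic multiplicative hyperboloid for a reward that is both delayed and probabilistic.
%   A        : nominal amount          D : delay until receipt
%   p        : probability of receipt, with odds against Theta = (1 - p)/p
%   k, h     : delay and probability discounting rates (illustrative symbols)
%   s_D, s_P : scaling exponents (some formulations share a single exponent s)
\[
  V \;=\; \frac{A}{\left(1 + kD\right)^{s_D}\,\left(1 + h\Theta\right)^{s_P}},
  \qquad \Theta = \frac{1-p}{p}.
\]
% The additive alternative V = A - f(D) - g(\Theta) predicts no delay-by-probability
% interaction, which is what the reported analyses ruled out.
```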

  18. Methods for Probabilistic Fault Diagnosis: An Electrical Power System Case Study

    NASA Technical Reports Server (NTRS)

    Ricks, Brian W.; Mengshoel, Ole J.

    2009-01-01

    Health management systems that more accurately and quickly diagnose faults that may occur in different technical systems on-board a vehicle will play a key role in the success of future NASA missions. We discuss in this paper the diagnosis of abrupt continuous (or parametric) faults within the context of probabilistic graphical models, more specifically Bayesian networks that are compiled to arithmetic circuits. This paper extends our previous research, within the same probabilistic setting, on diagnosis of abrupt discrete faults. Our approach and diagnostic algorithm ProDiagnose are domain-independent; however we use an electrical power system testbed called ADAPT as a case study. In one set of ADAPT experiments, performed as part of the 2009 Diagnostic Challenge, our system turned out to have the best performance among all competitors. In a second set of experiments, we show how we have recently further significantly improved the performance of the probabilistic model of ADAPT. While these experiments are obtained for an electrical power system testbed, we believe they can easily be transitioned to real-world systems, thus promising to increase the success of future NASA missions.

  19. Reasoning about Probabilistic Security Using Task-PIOAs

    NASA Astrophysics Data System (ADS)

    Jaggard, Aaron D.; Meadows, Catherine; Mislove, Michael; Segala, Roberto

    Task-structured probabilistic input/output automata (Task-PIOAs) are concurrent probabilistic automata that, among other things, have been used to provide a formal framework for the universal composability paradigms of protocol security. One of their advantages is that they allow one to distinguish high-level nondeterminism that can affect the outcome of the protocol from low-level choices, which cannot. We present an alternative approach to analyzing the structure of Task-PIOAs that relies on ordered sets. We focus on two of the components that are required to define and apply Task-PIOAs: discrete probability theory and automata theory. We believe our development gives insight into the structure of Task-PIOAs and how they can be utilized to model crypto-protocols. We illustrate our approach with an example from anonymity, an area that has not previously been addressed using Task-PIOAs. We model Chaum's Dining Cryptographers Protocol at a level that does not require cryptographic primitives in the analysis. We show via this example how our approach can leverage a proof of security in the case where a principal behaves deterministically to prove security when that principal behaves probabilistically.

  20. Evaluation of Sex-Specific Movement Patterns in Judo Using Probabilistic Neural Networks.

    PubMed

    Miarka, Bianca; Sterkowicz-Przybycien, Katarzyna; Fukuda, David H

    2017-10-01

    The purpose of the present study was to create a probabilistic neural network to clarify the understanding of movement patterns in international judo competitions by gender. Analysis of 773 male and 638 female bouts was utilized to identify movements during the approach, gripping, attack (including biomechanical designations), groundwork, defense, and pause phases. Probabilistic neural network and chi-square (χ²) tests modeled and compared frequencies (p ≤ .05). Women (mean [interquartile range]: 9.9 [4; 14]) attacked more than men (7.0 [3; 10]) while attempting a greater number of arm/leg lever (women: 2.7 [1; 6]; men: 4.0 [0; 4]) and trunk/leg lever (women: 0.8 [0; 1]; men: 2.4 [0; 4]) techniques but fewer maximal length-moment arm techniques (women: 0.7 [0; 1]; men: 1.0 [0; 2]). Male athletes displayed one-handed gripping of the back and sleeve, whereas female athletes executed a greater number of groundwork techniques. An optimized probabilistic neural network model, using patterns from the gripping, attack, groundwork, and pause phases, produced an overall prediction accuracy of 76% for discrimination between men and women.

  1. Sustainable Odds: Towards Quantitative Decision Support when Relevant Probabilities are not Available

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2012-04-01

    There is, at present, no attractive foundation for quantitative probabilistic decision support in the face of model inadequacy, or given ambiguity (deep uncertainty) regarding the relative likelihood of various outcomes, known or unknown. True model error arguably precludes the extraction of objective probabilities from an ensemble of model runs drawn from an available (inadequate) model class, while the acknowledgement of incomplete understanding precludes the justified use of (if not the very formation of) an individual's subjective probabilities. An alternative approach based on Sustainable Odds is proposed and investigated. Sustainable Odds differ from "fair odds" (and are easily distinguished from any claim implying well-defined probabilities) in that the probabilities implied by sustainable odds, summed over all outcomes, are expected to exceed one. Traditionally, a person's fair odds are found by identifying the probability level at which one would happily accept either side of a bet; thus the probabilities implied by fair odds always sum to one. Knowing that one has incomplete information and perhaps even erroneous beliefs, there is no compelling reason a rational agent should accept the constraint implied by "fair odds" in any bet. Rather, a rational agent might insist on longer odds both on the event and against the event in order to account for acknowledged ignorance. Let probabilistic odds denote any set of odds for which the implied probabilities sum to one; once model error is acknowledged, can one rationally demand non-probabilistic odds? The danger of using fair odds (or probabilities) in decision making is illustrated by considering the risk of ruin to which a cooperative insurance scheme using probabilistic odds is exposed. Cases are presented where knowing merely that the insurer's model is imperfect, and nothing else, is sufficient to place bets which drive the insurer to an unexpectedly early ruin. Methodologies which allow the insurer to avoid this early ruin are explored; those which prevent early ruin are said to provide "sustainable odds", and it is suggested that these must be non-probabilistic. The aim here is not for the insurance cooperative to make a profit in the long run (or to form a book in any one round) but rather to increase the chance that the cooperative will not go bust, merely breaking even in the long run and thereby continuing to provide a service. In the perfect model scenario, with complete knowledge of all uncertainties and unlimited computational resources, fair odds may prove to be sustainable. The implications these results hold for games against nature, which is perhaps a more relevant context for decision makers concerned with geophysical systems, are discussed. The claim that acknowledged model error makes fair (probabilistic) odds an irrational aim is considered, as are the challenges of working within the framework of sustainable (but non-probabilistic) odds.
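
    The arithmetic behind "implied probabilities that sum to more than one" is simple and can be sketched as follows; the decimal-odds convention and the specific numbers are illustrative only, and which side of the bet the quote refers to matters in practice.

```python
def implied_probability(decimal_odds: float) -> float:
    """Break-even probability implied by decimal odds: a unit stake returns
    `decimal_odds` if the outcome occurs, so the implied probability is its inverse."""
    return 1.0 / decimal_odds

# fair (probabilistic) odds on an event and its complement: implied probabilities sum to 1
fair = [implied_probability(o) for o in (2.5, 5.0 / 3.0)]        # 0.40 + 0.60 = 1.00
# a non-probabilistic quote in the sense of the abstract: the implied probabilities
# summed over all outcomes exceed one (here 0.45 + 0.65 = 1.10)
non_probabilistic = [implied_probability(o) for o in (1 / 0.45, 1 / 0.65)]
print(sum(fair), sum(non_probabilistic))
```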

  2. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to reliance on prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to get reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station based BMA (BMA_S) water stage prediction has better performance than global based BMA (BMA_G) prediction, which is superior to the ensemble mean prediction. Additionally, high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
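
    A minimal sketch of the BMA combination step is given below, assuming (as in classic Raftery-style BMA) Gaussian member densities with a shared spread fitted by EM; bias correction, the LISFLOOD-FP ensemble itself, and the station/global (BMA_S/BMA_G) distinction are omitted, and all data are synthetic.

```python
import numpy as np
from scipy import stats

def bma_gaussian_em(forecasts, obs, n_iter=200):
    """Fit BMA member weights and a shared Gaussian spread by a simple EM loop.
    `forecasts` is (n_obs, n_members); bias correction is omitted for brevity."""
    n_members = forecasts.shape[1]
    w = np.full(n_members, 1.0 / n_members)
    sigma = np.std(obs - forecasts.mean(axis=1))
    for _ in range(n_iter):
        dens = stats.norm.pdf(obs[:, None], loc=forecasts, scale=sigma)
        z = w * dens                                   # E-step: member responsibilities
        z /= z.sum(axis=1, keepdims=True)
        w = z.mean(axis=0)                             # M-step: weights
        sigma = np.sqrt(np.sum(z * (obs[:, None] - forecasts) ** 2) / len(obs))
    return w, sigma

# toy calibration: three ensemble members with different biases, 50 observed stages
rng = np.random.default_rng(2)
truth = rng.normal(5.0, 1.0, 50)
ens = np.column_stack([truth + rng.normal(b, 0.3, 50) for b in (0.1, -0.2, 0.4)])
weights, spread = bma_gaussian_em(ens, truth)
bma_mean = ens @ weights                               # deterministic BMA stage prediction
```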

  3. SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating

    PubMed Central

    Lee, Young-Joo; Cho, Soojin

    2016-01-01

    Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125

  4. A Probabilistic Model of Meter Perception: Simulating Enculturation.

    PubMed

    van der Weij, Bastiaan; Pearce, Marcus T; Honing, Henkjan

    2017-01-01

    Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms.

  5. Probabilistic versus deterministic skill in predicting the western North Pacific-East Asian summer monsoon variability with multimodel ensembles

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Qun; Yang, Dejian; Xie, Qian; Zhang, Yaocun; Ren, Xuejuan; Tang, Youmin

    2017-04-01

    Based on historical forecasts of three quasi-operational multi-model ensemble (MME) systems, this study assesses the superiority of coupled MME over contributing single-model ensembles (SMEs) and over uncoupled atmospheric MME in predicting the Western North Pacific-East Asian summer monsoon variability. The probabilistic and deterministic forecast skills are measured by Brier skill score (BSS) and anomaly correlation (AC), respectively. A forecast-format dependent MME superiority over SMEs is found. The probabilistic forecast skill of the MME is always significantly better than that of each SME, while the deterministic forecast skill of the MME can be lower than that of some SMEs. The MME superiority arises from both the model diversity and the ensemble size increase in the tropics, and primarily from the ensemble size increase in the subtropics. The BSS is composed of reliability and resolution, two attributes characterizing probabilistic forecast skill. The probabilistic skill increase of the MME is dominated by the dramatic improvement in reliability, while resolution is not always improved, similar to AC. A monotonic resolution-AC relationship is further found and qualitatively explained, whereas little relationship can be identified between reliability and AC. It is argued that the MME's success in improving the reliability arises from an effective reduction of the overconfidence in forecast distributions. Moreover, it is examined that the seasonal predictions with coupled MME are more skillful than those with the uncoupled atmospheric MME forced by persisting sea surface temperature (SST) anomalies, since the coupled MME has better predicted the SST anomaly evolution in three key regions.
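
    The Brier skill score used here can be computed as in the short sketch below, with the climatological base rate as the reference forecast; the decomposition into reliability and resolution is not shown, and the example probabilities are invented.

```python
import numpy as np

def brier_score(p, o):
    """Brier score for probabilistic forecasts p of a binary event o (lower is better)."""
    return np.mean((p - o) ** 2)

def brier_skill_score(p, o):
    """BSS relative to the climatological base-rate forecast; positive values mean skill."""
    reference = np.full_like(p, o.mean(), dtype=float)
    return 1.0 - brier_score(p, o) / brier_score(reference, o)

# invented example: forecast probabilities of an above-normal monsoon index vs. outcomes
p = np.array([0.8, 0.2, 0.6, 0.9, 0.1, 0.7])
o = np.array([1, 0, 1, 1, 0, 0])
print(brier_score(p, o), brier_skill_score(p, o))      # 0.125 and 0.5 for these numbers
```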

  6. The Mass Distribution of Companions to Low-mass White Dwarfs

    NASA Astrophysics Data System (ADS)

    Andrews, Jeff J.; Price-Whelan, Adrian M.; Agüeros, Marcel A.

    2014-12-01

    Measuring the masses of companions to single-line spectroscopic binary stars is (in general) not possible because of the unknown orbital plane inclination. Even when the mass of the visible star can be measured, only a lower limit can be placed on the mass of the unseen companion. However, since these inclination angles should be isotropically distributed, for a large enough, unbiased sample, the companion mass distribution can be deconvolved from the distribution of observables. In this work, we construct a hierarchical probabilistic model to infer properties of unseen companion stars given observations of the orbital period and projected radial velocity of the primary star. We apply this model to three mock samples of low-mass white dwarfs (LMWDs; M ≲ 0.45 M⊙) and a sample of post-common-envelope binaries. We use a mixture of two Gaussians to model the WD and neutron star (NS) companion mass distributions. Our model successfully recovers the initial parameters of these test data sets. We then apply our model to 55 WDs in the extremely low-mass (ELM) WD Survey. Our maximum a posteriori model for the WD companion population has a mean mass μ_WD = 0.74 M⊙, with a standard deviation σ_WD = 0.24 M⊙. Our model constrains the NS companion fraction f_NS to be <16% at 68% confidence. We make samples from the posterior distribution publicly available so that future observational efforts may compute the NS probability for newly discovered LMWDs.
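
    The forward model at the heart of such a hierarchical analysis can be sketched as follows: companion masses drawn from a two-component Gaussian mixture are combined with isotropically distributed inclinations to produce the observable radial-velocity semi-amplitude of the primary (circular orbits assumed). The WD component below uses the posterior mean and spread quoted in the abstract, while the NS component, mixture fraction, primary mass and period are placeholders.

```python
import numpy as np

G = 6.674e-11        # gravitational constant [SI]
M_SUN = 1.989e30     # solar mass [kg]

def rv_semi_amplitude(m1, m2, period_days, rng):
    """Forward model: draw isotropic inclinations (uniform cos i) and return the
    primary's radial-velocity semi-amplitude K [m/s] for circular orbits."""
    sin_i = np.sqrt(1.0 - rng.uniform(0.0, 1.0, size=m2.shape) ** 2)
    p = period_days * 86400.0
    # K = (2*pi*G / P)^(1/3) * m2 sin i / (m1 + m2)^(2/3)
    return ((2.0 * np.pi * G / p) ** (1.0 / 3.0)
            * (m2 * M_SUN) * sin_i / ((m1 + m2) * M_SUN) ** (2.0 / 3.0))

rng = np.random.default_rng(3)
# companion masses from a two-component Gaussian mixture: WD peak from the abstract,
# NS peak (1.4, 0.1) and the 10% NS fraction are placeholders
is_ns = rng.uniform(size=10_000) < 0.10
m2 = np.where(is_ns, rng.normal(1.4, 0.1, 10_000), rng.normal(0.74, 0.24, 10_000))
k_sim = rv_semi_amplitude(m1=0.3, m2=np.clip(m2, 0.1, None), period_days=30.0, rng=rng)
```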

  7. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project

  8. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
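
    FORM, which the Probabilistic Design Storm method builds on, amounts to finding the point on the limit-state surface closest to the origin in standard normal space; a compact sketch using a generic optimizer is shown below with a made-up linear limit state (for which FORM is exact), not the paper's rainfall-runoff formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def form_reliability(g_standard, n_dim):
    """First-Order Reliability Method: locate the design point (the point on the
    limit-state surface g(u)=0 closest to the origin in standard normal space),
    then beta = ||u*|| and P(failure) ≈ Phi(-beta)."""
    u0 = np.full(n_dim, 0.1)   # small offset avoids a zero objective gradient at the origin
    res = minimize(lambda u: float(np.dot(u, u)), u0,
                   constraints={"type": "eq", "fun": g_standard})
    beta = float(np.sqrt(res.fun))
    return beta, float(norm.cdf(-beta)), res.x

# made-up linear limit state in standard normal space (FORM is exact for linear g):
# g(u) = 3.0 - u1 - 0.5*u2, e.g. runoff capacity minus a design-storm load
g = lambda u: 3.0 - u[0] - 0.5 * u[1]
beta, pf, design_point = form_reliability(g, n_dim=2)
print(beta, pf)   # ≈ 2.68 and ≈ 3.6e-3 for this toy limit state
```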

  9. Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.

    PubMed

    Marino, Dale J; Starr, Thomas B

    2007-12-01

    A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. Results show that there was relatively little difference, i.e., <10% in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources including the current effort even though these estimates were developed using very different techniques. Given the variety of different approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.

  10. The research on medical image classification algorithm based on PLSA-BOW model.

    PubMed

    Cao, C H; Cao, H L

    2016-04-29

    With the rapid development of modern medical imaging technology, medical image classification has become more important for medical diagnosis and treatment. To address the problem of polysemous words and synonyms, this study combines the bag-of-words model with PLSA (Probabilistic Latent Semantic Analysis) and proposes the PLSA-BOW (Probabilistic Latent Semantic Analysis-Bag of Words) model. In this paper we carry the bag-of-words model over from the text domain to the image domain and build a visual bag-of-words model. The method enables the accuracy of bag-of-words-based classification to be further improved. The experimental results show that the PLSA-BOW model for medical image classification leads to a more accurate classification.
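
    A rough sketch of the two ingredients of a PLSA-BOW pipeline is given below: visual words obtained by k-means quantization of local descriptors, and a minimal EM loop for PLSA on the resulting image-by-word count matrix. Function names, vocabulary size and topic count are arbitrary choices for illustration; the paper's specific descriptors and classifier are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_visual_bow(descriptor_sets, n_words=50, seed=0):
    """Quantize local image descriptors into a visual vocabulary with k-means and
    return one word-count histogram per image."""
    km = KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(np.vstack(descriptor_sets))
    hists = np.zeros((len(descriptor_sets), n_words))
    for i, desc in enumerate(descriptor_sets):
        words, counts = np.unique(km.predict(desc), return_counts=True)
        hists[i, words] = counts
    return hists

def plsa(counts, n_topics=5, n_iter=100, seed=0):
    """Minimal EM for PLSA on an image-by-visual-word count matrix.
    Returns P(topic|image) and P(word|topic); a sketch, not an optimized solver."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.dirichlet(np.ones(n_topics), size=n_docs)      # P(z|d)
    p_w_z = rng.dirichlet(np.ones(n_words), size=n_topics)     # P(w|z)
    for _ in range(n_iter):
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]          # E-step: P(z|d,w) ∝ P(z|d)P(w|z)
        post = joint / joint.sum(axis=1, keepdims=True).clip(1e-12)
        expected = counts[:, None, :] * post                   # M-step: reweight by counts
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True).clip(1e-12)
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True).clip(1e-12)
    return p_z_d, p_w_z

# toy usage: 20 "images", each with 100 random 8-dimensional local descriptors
rng = np.random.default_rng(1)
images = [rng.normal(size=(100, 8)) for _ in range(20)]
topic_mix, word_topics = plsa(build_visual_bow(images))
```

    The per-image topic mixtures returned as `topic_mix` would then feed whatever classifier is used for the medical image categories.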

  11. Bayesian probabilistic population projections for all countries.

    PubMed

    Raftery, Adrian E; Li, Nan; Ševčíková, Hana; Gerland, Patrick; Heilig, Gerhard K

    2012-08-28

    Projections of countries' future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. We propose a Bayesian method for probabilistic population projections for all countries. The total fertility rate and female and male life expectancies at birth are projected probabilistically using Bayesian hierarchical models estimated via Markov chain Monte Carlo using United Nations population data for all countries. These are then converted to age-specific rates and combined with a cohort component projection model. This yields probabilistic projections of any population quantity of interest. The method is illustrated for five countries that differ in demographic stage, continent and size. The method is validated by an out-of-sample experiment in which data from 1950-1990 are used for estimation, and applied to predict 1990-2010. The method appears reasonably accurate and well calibrated for this period. The results suggest that the current United Nations high and low variants greatly underestimate uncertainty about the number of oldest old from about 2050 and that they underestimate uncertainty for high fertility countries and overstate uncertainty for countries that have completed the demographic transition and whose fertility has started to recover towards replacement level, mostly in Europe. The results also indicate that the potential support ratio (persons aged 20-64 per person aged 65+) will almost certainly decline dramatically in most countries over the coming decades.

  12. Specifying design conservatism: Worst case versus probabilistic analysis

    NASA Technical Reports Server (NTRS)

    Miles, Ralph F., Jr.

    1993-01-01

    Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.

  13. Probabilistic safety assessment of the design of a tall buildings under the extreme load

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Králik, Juraj, E-mail: juraj.kralik@stuba.sk

    2016-06-08

    The paper describes some experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structure are considered using simulation methods. The MONTE CARLO, LHS and RSM probabilistic methods are compared with the deterministic results. The effectiveness of the probabilistic design of structures using finite element methods is demonstrated on the example of a probability analysis of the safety of a tall building.

  14. Probabilistic safety assessment of the design of a tall buildings under the extreme load

    NASA Astrophysics Data System (ADS)

    Králik, Juraj

    2016-06-01

    The paper describes some experiences from the deterministic and probabilistic analysis of the safety of a tall building structure. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structure are considered using simulation methods. The MONTE CARLO, LHS and RSM probabilistic methods are compared with the deterministic results. The effectiveness of the probabilistic design of structures using finite element methods is demonstrated on the example of a probability analysis of the safety of a tall building.

  15. Probabilistic forecasting of extreme weather events based on extreme value theory

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert

    2016-04-01

    Extreme events in weather and climate such as high wind gusts, heavy precipitation or extreme temperatures are commonly associated with high impacts on both environment and society. Forecasting extreme weather events is difficult, and very high-resolution models are needed to describe extreme weather phenomena explicitly. A prediction system for such events should therefore preferably be probabilistic in nature. Probabilistic forecasts and state estimations are nowadays common in the numerical weather prediction community. In this work, we develop a new probabilistic framework based on extreme value theory that aims to provide early warnings up to several days in advance. We consider pairs (X, Y) of extreme events, where X represents a deterministic forecast and Y the observation variable (for instance wind speed), and the combined event in which the observation Y exceeds a high threshold y while the corresponding forecast X also exceeds a high forecast threshold x. More specifically, two problems are addressed: (1) given a high forecast X = x_0, what is the probability that Y > y? In other words, provide inference on the conditional probability Pr{Y > y | X = x_0}; (2) given a probabilistic model for Problem 1, what is the impact on the verification analysis of extreme events? These problems can be solved with bivariate extremes (Coles, 2001) and the verification analysis of Ferro (2007). We apply the Ramos and Ledford (2009) parametric model for bivariate tail estimation of the pair (X, Y). The model accommodates different types of extremal dependence and asymmetry within a parsimonious representation. Results are presented using the ensemble reforecast system of the European Centre for Medium-Range Weather Forecasts (Hagedorn, 2008). References: Coles, S. (2001) An Introduction to Statistical Modelling of Extreme Values, Springer-Verlag; Ferro, C.A.T. (2007) A probability model for verifying deterministic forecasts of extreme events, Wea. Forecasting 22, 1089-1100; Hagedorn, R. (2008) Using the ECMWF reforecast dataset to calibrate EPS forecasts, ECMWF Newsletter 117, 8-13; Ramos, A. and Ledford, A. (2009) A new class of models for bivariate joint tails, J. R. Statist. Soc. B 71, 219-241.

  16. Model based inference from microvascular measurements: Combining experimental measurements and model predictions using a Bayesian probabilistic approach

    PubMed Central

    Rasmussen, Peter M.; Smith, Amy F.; Sakadžić, Sava; Boas, David A.; Pries, Axel R.; Secomb, Timothy W.; Østergaard, Leif

    2017-01-01

    Objective In vivo imaging of the microcirculation and network-oriented modeling have emerged as powerful means of studying microvascular function and understanding its physiological significance. Network-oriented modeling may provide the means of summarizing vast amounts of data produced by high-throughput imaging techniques in terms of key, physiological indices. To estimate such indices with sufficient certainty, however, network-oriented analysis must be robust to the inevitable presence of uncertainty due to measurement errors as well as model errors. Methods We propose the Bayesian probabilistic data analysis framework as a means of integrating experimental measurements and network model simulations into a combined and statistically coherent analysis. The framework naturally handles noisy measurements and provides posterior distributions of model parameters as well as physiological indices associated with uncertainty. Results We applied the analysis framework to experimental data from three rat mesentery networks and one mouse brain cortex network. We inferred distributions for more than five hundred unknown pressure and hematocrit boundary conditions. Model predictions were consistent with previous analyses, and remained robust when measurements were omitted from model calibration. Conclusion Our Bayesian probabilistic approach may be suitable for optimizing data acquisition and for analyzing and reporting large datasets acquired as part of microvascular imaging studies. PMID:27987383

  17. Control of Stochastic Master Equation Models of Genetic Regulatory Networks by Approximating Their Average Behavior

    NASA Astrophysics Data System (ADS)

    Umut Caglar, Mehmet; Pal, Ranadip

    2010-10-01

    The central dogma of molecular biology states that "information cannot be transferred back from protein to either protein or nucleic acid." However, this assumption is not exactly correct in most cases. There are many feedback loops and interactions between different levels of the system. These types of interactions are hard to analyze due to the lack of data at the cellular level and the probabilistic nature of the interactions. Probabilistic models such as the Stochastic Master Equation (SME) or deterministic models such as differential equations (DE) can be used to analyze these types of interactions. SME models based on the chemical master equation (CME) can provide a detailed representation of a genetic regulatory system, but their use is restricted by large data requirements and the computational cost of the calculations. Differential equation models, on the other hand, have low calculation costs and are much better suited to deriving control procedures for the system, but they are not adequate for investigating the probabilistic nature of the interactions. In this work the success of the mapping between SME and DE models is analyzed, and the success of a control policy generated from the DE model is examined with respect to the SME model. Index Terms: Stochastic Master Equation models, Differential Equation models, Control Policy Design, Systems Biology.

  18. Computational Everyday Life Human Behavior Model as Servicable Knowledge

    NASA Astrophysics Data System (ADS)

    Motomura, Yoichi; Nishida, Yoshifumi

    A project called `Open life matrix' is not only a research activity but also real problem solving in the form of action research. The concept is realized through large-scale data collection, construction of probabilistic causal structure models, and the provision of information services using those models. One concrete outcome of the project is a childhood injury prevention activity carried out by a new team consisting of a hospital, government agencies, and researchers from many fields. The main result of the project is a general methodology for applying probabilistic causal structure models as serviceable knowledge for action research. In this paper, a summary of the project and future directions that emphasize action research driven by artificial intelligence technology are discussed.

  19. Spatial probabilistic pulsatility model for enhancing photoplethysmographic imaging systems

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Clausi, David A.; Wong, Alexander

    2016-11-01

    Photoplethysmographic imaging (PPGI) is a widefield noncontact biophotonic technology able to remotely monitor cardiovascular function over anatomical areas. Although spatial context can provide insight into physiologically relevant sampling locations, existing PPGI systems rely on coarse spatial averaging with no anatomical priors for assessing arterial pulsatility. Here, we developed a continuous probabilistic pulsatility model for importance-weighted blood pulse waveform extraction. Using a data-driven approach, the model was constructed using a 23 participant sample with a large demographic variability (11/12 female/male, age 11 to 60 years, BMI 16.4 to 35.1 kg·m⁻²). Using time-synchronized ground-truth blood pulse waveforms, spatial correlation priors were computed and projected into a coaligned importance-weighted Cartesian space. A modified Parzen-Rosenblatt kernel density estimation method was used to compute the continuous resolution-agnostic probabilistic pulsatility model. The model identified locations that consistently exhibited pulsatility across the sample. Blood pulse waveform signals extracted with the model exhibited significantly stronger temporal correlation (W=35, p<0.01) and spectral SNR (W=31, p<0.01) compared to uniform spatial averaging. Heart rate estimation was in strong agreement with true heart rate [r²=0.9619, error (μ, σ)=(0.52, 1.69) bpm].
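
    The spatial prior can be approximated with an ordinary weighted Parzen-Rosenblatt (Gaussian-kernel) density, as sketched below; the authors' modification of the estimator and their coalignment step are not reproduced, and the coordinates, correlation weights and video array are synthetic stand-ins.

```python
import numpy as np
from scipy.stats import gaussian_kde

def pulsatility_model(coords, correlation_weights):
    """Continuous spatial pulsatility prior: a weighted Gaussian-kernel density over
    normalized anatomical coordinates, where each training location is weighted by how
    strongly its signal correlated with the ground-truth blood pulse waveform."""
    w = np.clip(correlation_weights, 0, None)
    return gaussian_kde(coords.T, weights=w / w.sum())

def weighted_pulse_waveform(frames, coords, model):
    """Importance-weighted extraction: average the per-pixel time series using the
    pulsatility density evaluated at each pixel location as the weight."""
    w = model(coords.T)
    return (frames * w[None, :]).sum(axis=1) / w.sum()

# toy data: 200 sample locations in a unit square, 300 video frames
rng = np.random.default_rng(4)
locs = rng.uniform(0, 1, size=(200, 2))
corr = np.exp(-((locs - 0.5) ** 2).sum(axis=1) / 0.05)       # hypothetical correlations
kde = pulsatility_model(locs, corr)
video = rng.normal(size=(300, 200))                          # frames x pixels
signal = weighted_pulse_waveform(video, locs, kde)
```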

  20. Surrogate modeling of joint flood risk across coastal watersheds

    NASA Astrophysics Data System (ADS)

    Bass, Benjamin; Bedient, Philip

    2018-03-01

    This study discusses the development and performance of a rapid prediction system capable of representing the joint rainfall-runoff and storm surge flood response of tropical cyclones (TCs) for probabilistic risk analysis. Due to the computational demand required to accurately represent storm surge with the high-fidelity ADvanced CIRCulation (ADCIRC) hydrodynamic model, and its coupling with additional numerical models to represent rainfall-runoff, a surrogate or statistical model was trained to represent the relationship between hurricane wind- and pressure-field characteristics and the peak joint flood response typically determined from physics-based numerical models. This builds upon past studies that have only evaluated surrogate models for predicting peak surge, and provides the first system capable of probabilistically representing joint flood levels from TCs. The utility of this joint flood prediction system is then demonstrated by improving upon probabilistic TC flood risk products, which currently account for storm surge but do not take into account TC-associated rainfall-runoff. Results demonstrate the source apportionment of rainfall-runoff versus storm surge and highlight that slight increases in flood risk levels may occur due to the interaction between rainfall-runoff and storm surge as compared to the Federal Emergency Management Agency's (FEMA's) current practices.
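
    A minimal sketch of the surrogate idea, assuming a Gaussian process regressor as the statistical model and synthetic storm parameters (pressure deficit, forward speed, track offset) standing in for the ADCIRC/rainfall-runoff training runs; none of the numbers or variable names come from the study.

```python
# Sketch of a surrogate model mapping simplified tropical-cyclone parameters to a
# peak flood level. The training data below are synthetic stand-ins for the
# high-fidelity simulations described in the abstract.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
X = rng.uniform([20, 2, -1], [100, 12, 1], size=(60, 3))   # dP (hPa), speed (m/s), offset
y = 0.05 * X[:, 0] - 0.1 * X[:, 1] + 1.5 * np.exp(-X[:, 2]**2) + rng.normal(0, 0.1, 60)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[10, 2, 0.5]),
                              normalize_y=True)
gp.fit(X, y)

# Probabilistic use of the surrogate: sample many storms and estimate exceedance
storms = rng.uniform([20, 2, -1], [100, 12, 1], size=(10000, 3))
mean, std = gp.predict(storms, return_std=True)
peak = rng.normal(mean, std)                 # propagate surrogate uncertainty
print("P(peak flood > 4 m) ~", (peak > 4.0).mean())
```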

  1. Probing the Small-scale Structure in Strongly Lensed Systems via Transdimensional Inference

    NASA Astrophysics Data System (ADS)

    Daylan, Tansu; Cyr-Racine, Francis-Yan; Diaz Rivero, Ana; Dvorkin, Cora; Finkbeiner, Douglas P.

    2018-02-01

    Strong lensing is a sensitive probe of the small-scale density fluctuations in the Universe. We implement a pipeline to model strongly lensed systems using probabilistic cataloging, which is a transdimensional, hierarchical, and Bayesian framework to sample from a metamodel (union of models with different dimensionality) consistent with observed photon count maps. Probabilistic cataloging allows one to robustly characterize modeling covariances within and across lens models with different numbers of subhalos. Unlike traditional cataloging of subhalos, it does not require model subhalos to improve the goodness of fit above the detection threshold. Instead, it allows the exploitation of all information contained in the photon count maps, for instance, when constraining the subhalo mass function. We further show that, by not including these small subhalos in the lens model, fixed-dimensional inference methods can significantly mismodel the data. Using a simulated Hubble Space Telescope data set, we show that the subhalo mass function can be probed even when many subhalos in the sample catalogs are individually below the detection threshold and would be absent in a traditional catalog. The implemented software, Probabilistic Cataloger (PCAT), is made publicly available at https://github.com/tdaylan/pcat.

  2. Predictability of short-range forecasting: a multimodel approach

    NASA Astrophysics Data System (ADS)

    García-Moya, Jose-Antonio; Callado, Alfons; Escribà, Pau; Santos, Carlos; Santos-Muñoz, Daniel; Simarro, Juan

    2011-05-01

    Numerical weather prediction (NWP) models (including mesoscale models) have limitations when it comes to dealing with severe weather events because extreme weather is highly unpredictable, even in the short range. A probabilistic forecast based on an ensemble of slightly different model runs may help to address this issue. Among other ensemble techniques, multimodel ensemble prediction systems (EPSs) are proving to be useful for adding probabilistic value to mesoscale deterministic models. A Multimodel Short Range Ensemble Prediction System (SREPS) focused on forecasting the weather up to 72 h ahead has been developed at the Spanish Meteorological Service (AEMET). The system uses five different limited area models (LAMs), namely HIRLAM (HIRLAM Consortium), HRM (DWD), the UM (UKMO), MM5 (PSU/NCAR) and COSMO (COSMO Consortium). These models run with initial and boundary conditions provided by five different global deterministic models, namely IFS (ECMWF), UM (UKMO), GME (DWD), GFS (NCEP) and CMC (MSC). AEMET-SREPS (AE) validation of the large-scale flow, using ECMWF analyses, shows a consistent and slightly underdispersive system. For surface parameters, the system shows high skill in forecasting binary events. 24-h precipitation probabilistic forecasts are verified using an upscaled grid of observations from European high-resolution precipitation networks, and compared with the ECMWF-EPS (EC).
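
    A minimal sketch of how an ensemble is turned into a probabilistic forecast of a binary event and scored; the member forecasts and observations below are synthetic, and the threshold and Brier score are illustrative choices rather than the verification actually used by AEMET-SREPS.

```python
# Minimal sketch of turning a multimodel ensemble into a probabilistic forecast
# of a binary event (e.g., 24-h precipitation exceeding a threshold) and scoring
# it with the Brier score. The numbers are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_members, n_cases = 5, 1000
forecasts = rng.gamma(shape=2.0, scale=5.0, size=(n_members, n_cases))  # mm/24h
observations = rng.gamma(shape=2.0, scale=5.0, size=n_cases)

threshold = 20.0
event_prob = (forecasts > threshold).mean(axis=0)     # fraction of members above threshold
event_obs = (observations > threshold).astype(float)

brier = np.mean((event_prob - event_obs) ** 2)
print(f"Brier score: {brier:.3f}")
```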

  3. Groundwater Remediation using Bayesian Information-Gap Decision Theory

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2016-12-01

    Probabilistic analyses of groundwater remediation scenarios frequently fail because the probability of an adverse, unanticipated event occurring is often high. In general, models of flow and transport in contaminated aquifers are always simpler than reality. Further, when a probabilistic analysis is performed, probability distributions are usually chosen more for convenience than correctness. The Bayesian Information-Gap Decision Theory (BIGDT) was designed to mitigate the shortcomings of the models and probabilistic decision analyses by leveraging a non-probabilistic decision theory, information-gap decision theory. BIGDT considers possible models that have not been explicitly enumerated and does not require us to commit to a particular probability distribution for model and remediation-design parameters. Both the set of possible models and the set of possible probability distributions grow as the degree of uncertainty increases. The fundamental question that BIGDT asks is "How large can these sets be before a particular decision results in an undesirable outcome?" The decision that allows these sets to be the largest is considered to be the best option. In this way, BIGDT enables robust decision support for groundwater remediation problems. Here we apply BIGDT in a representative groundwater remediation scenario where different options for hydraulic containment and pump-and-treat are being considered. BIGDT requires many model runs, and for complex models high-performance computing resources are needed. These analyses are carried out on synthetic problems, but are applicable to real-world problems such as contamination at LANL sites. BIGDT is implemented in Julia (a high-level, high-performance dynamic programming language for technical computing) and is part of the MADS framework (http://mads.lanl.gov/ and https://github.com/madsjulia/Mads.jl).

  4. Probabilistically modeling lava flows with MOLASSES

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Connor, C.; Gallant, E.

    2017-12-01

    Modeling lava flows through Cellular Automata methods enables a computationally inexpensive means to quickly forecast lava flow paths and ultimate areal extents. We have developed a lava flow simulator, MOLASSES, that forecasts lava flow inundation over an elevation model from a point-source eruption. This modular code can be implemented in a deterministic fashion with given user inputs that will produce a single lava flow simulation. MOLASSES can also be implemented in a probabilistic fashion where given user inputs define parameter distributions that are randomly sampled to create many lava flow simulations. This probabilistic approach enables uncertainty in input data to be expressed in the model results, and MOLASSES outputs a probability map of inundation instead of a single determined lava flow extent. Since the code is comparatively fast, we use it probabilistically to investigate where potential vents are located that may impact specific sites and areas, as well as the unconditional probability of lava flow inundation of sites or areas from any vent. We have validated the MOLASSES code against community-defined benchmark tests and against the real-world lava flows at Tolbachik (2012-2013) and Pico do Fogo (2014-2015). To determine the efficacy of the MOLASSES simulator at accurately and precisely mimicking the inundation area of real flows, we report goodness of fit using both model sensitivity and the positive predictive value, the latter of which is a Bayesian posterior statistic. Model sensitivity is often used in evaluating lava flow simulators, as it describes how much of the lava flow was successfully modeled by the simulation. We argue that the positive predictive value is equally important in determining how good a simulator is, as it describes the percentage of the simulation space that was actually inundated by lava.
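
    The two goodness-of-fit metrics mentioned above can be written down directly; the sketch below computes model sensitivity and positive predictive value from two boolean inundation grids, using random grids as placeholders for an observed and a simulated flow.

```python
# Sketch of the two fit metrics for a lava flow simulator: model sensitivity
# (fraction of the real flow captured by the simulation) and positive predictive
# value (fraction of the simulated area actually inundated). The boolean grids
# below are synthetic placeholders for real and modeled flows.
import numpy as np

rng = np.random.default_rng(0)
observed = rng.random((100, 100)) < 0.10       # cells inundated by the real flow
simulated = rng.random((100, 100)) < 0.15      # cells inundated by the simulation

true_positive = np.logical_and(observed, simulated).sum()
sensitivity = true_positive / observed.sum()   # hits / observed area
ppv = true_positive / simulated.sum()          # hits / simulated area

print(f"sensitivity = {sensitivity:.2f}, positive predictive value = {ppv:.2f}")
```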

  5. A probabilistic method for streamflow projection and associated uncertainty analysis in a data sparse alpine region

    NASA Astrophysics Data System (ADS)

    Ren, Weiwei; Yang, Tao; Shi, Pengfei; Xu, Chong-yu; Zhang, Ke; Zhou, Xudong; Shao, Quanxi; Ciais, Philippe

    2018-06-01

    Climate change exerts a profound influence on the regional hydrological cycle and water security in many alpine regions worldwide. Investigating regional climate impacts using watershed-scale hydrological models requires a large amount of input data such as topography, meteorological and hydrological data. However, data scarcity in alpine regions seriously restricts the evaluation of climate change impacts on the water cycle using conventional approaches based on global or regional climate models, statistical downscaling methods and hydrological models. Therefore, this study is dedicated to the development of a probabilistic model to replace the conventional approaches for streamflow projection. The probabilistic model was built upon an advanced Bayesian Neural Network (BNN) approach directly fed by large-scale climate predictor variables and tested in a typical data-sparse alpine region, the Kaidu River basin in Central Asia. Results show that the BNN model performs better than the conventional methods across a number of statistical measures. The BNN method, with flexible model structures provided by active indicator functions that reduce the dependence on the initial specification of the input variables and the number of hidden units, can work well in a data-limited region. Moreover, it can provide more reliable streamflow projections with a robust generalization ability. Forced by the latest bias-corrected GCM scenarios, streamflow projections for the 21st century under three RCP emission pathways were constructed and analyzed. In brief, the proposed probabilistic projection approach can improve runoff predictive ability over conventional methods, provide better support to water resources planning and management under data-limited conditions, and facilitate climate change impact analysis on runoff and water resources in alpine regions worldwide.

  6. Development of a Probabilistic Decision-Support Model to Forecast Coastal Resilience

    NASA Astrophysics Data System (ADS)

    Wilson, K.; Safak, I.; Brenner, O.; Lentz, E. E.; Hapke, C. J.

    2016-02-01

    Site-specific forecasts of coastal change are a valuable management tool in preparing for and assessing storm-driven impacts in coastal areas. More specifically, understanding the likelihood of storm impacts, recovery following events, and the alongshore variability of both is central to evaluating the vulnerability and resilience of barrier islands. We introduce a probabilistic modeling framework that integrates hydrodynamic, anthropogenic, and morphologic components of the barrier system to evaluate coastal change at Fire Island, New York. The model is structured on a Bayesian network (BN), which utilizes observations to learn statistical relationships between system variables. In addition to predictive ability, probabilistic models convey the level of confidence associated with a prediction, an important consideration for coastal managers. Our model predicts the likelihood of morphologic change on the upper beach based on several decades of beach monitoring data. A coupled hydrodynamic BN combines probabilistic and deterministic modeling approaches; by querying nearly two decades of nested-grid wave simulations that account for both distant swells and local seas, we produce scenarios of event and seasonal wave climates. The wave scenarios of total water level (a sum of run-up, surge and tide) and anthropogenic modification are the primary drivers of morphologic change in our model structure. Preliminary results show that the hydrodynamic BN is able to reproduce time series of total water levels, a critical validation step before generating scenarios, and that forecasts of geomorphic change over three-month intervals are up to 70% accurate. Predictions of storm-induced change and recovery are linked to evaluate zones of persistent vulnerability or resilience and will help managers target restoration efforts, identify areas most vulnerable to habitat degradation, and highlight resilient zones that may best support relocation of critical infrastructure.

  7. Probability or Reasoning: Current Thinking and Realistic Strategies for Improved Medical Decisions

    PubMed Central

    2017-01-01

    A prescriptive model approach in decision making could help achieve better diagnostic accuracy in clinical practice through methods that are less reliant on probabilistic assessments. Various prescriptive measures aimed at regulating factors that influence heuristics and clinical reasoning could support the clinical decision-making process. Clinicians could avoid time-consuming decision-making methods that require probabilistic calculations. Intuitively, they could rely on heuristics to obtain an accurate diagnosis in a given clinical setting. An extensive literature review of cognitive psychology and medical decision-making theory was performed to illustrate how heuristics could be effectively utilized in daily practice. Since physicians often rely on heuristics in realistic situations, probabilistic estimation might not be a useful tool in everyday clinical practice. Improvements in the descriptive model of decision making (heuristics) may allow for greater diagnostic accuracy. PMID:29209469

  8. Probabilistic structural analysis of space propulsion system LOX post

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Rajagopal, K. R.; Ho, H. W.; Cunniff, J. M.

    1990-01-01

    The probabilistic structural analysis program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress; Cruse et al., 1988) is applied to characterize the dynamic loading and response of the Space Shuttle main engine (SSME) LOX post. The design and operation of the SSME are reviewed; the LOX post structure is described; and particular attention is given to the generation of composite load spectra, the finite-element model of the LOX post, and the steps in the NESSUS structural analysis. The results are presented in extensive tables and graphs, and it is shown that NESSUS correctly predicts the structural effects of changes in the temperature loading. The probabilistic approach also facilitates (1) damage assessments for a given failure model (based on gas temperature, heat-shield gap, and material properties) and (2) correlation of the gas temperature with operational parameters such as engine thrust.

  9. Probability or Reasoning: Current Thinking and Realistic Strategies for Improved Medical Decisions.

    PubMed

    Nantha, Yogarabindranath Swarna

    2017-11-01

    A prescriptive model approach in decision making could help achieve better diagnostic accuracy in clinical practice through methods that are less reliant on probabilistic assessments. Various prescriptive measures aimed at regulating factors that influence heuristics and clinical reasoning could support the clinical decision-making process. Clinicians could avoid time-consuming decision-making methods that require probabilistic calculations. Intuitively, they could rely on heuristics to obtain an accurate diagnosis in a given clinical setting. An extensive literature review of cognitive psychology and medical decision-making theory was performed to illustrate how heuristics could be effectively utilized in daily practice. Since physicians often rely on heuristics in realistic situations, probabilistic estimation might not be a useful tool in everyday clinical practice. Improvements in the descriptive model of decision making (heuristics) may allow for greater diagnostic accuracy.

  10. Reconstructing cerebrovascular networks under local physiological constraints by integer programming

    DOE PAGES

    Rempfler, Markus; Schneider, Matthias; Ielacqua, Giovanna D.; ...

    2015-04-23

    We introduce a probabilistic approach to vessel network extraction that enforces physiological constraints on the vessel structure. The method accounts for both image evidence and geometric relationships between vessels by solving an integer program, which is shown to yield the maximum a posteriori (MAP) estimate of the probabilistic model. Starting from an over-connected network, it prunes vessel stumps and spurious connections by evaluating the local geometry and the global connectivity of the graph. We utilize a high-resolution micro computed tomography (µCT) dataset of a cerebrovascular corrosion cast to obtain a reference network and learn the prior distributions of our probabilistic model. Finally, we perform experiments on micro magnetic resonance angiography (µMRA) images of mouse brains and discuss properties of the networks obtained under different tracking and pruning approaches.

  11. Stochastic model for fatigue crack size and cost effective design decisions. [for aerospace structures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1975-01-01

    This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.

  12. A simulation model for probabilistic analysis of Space Shuttle abort modes

    NASA Technical Reports Server (NTRS)

    Hage, R. T.

    1993-01-01

    A simulation model which was developed to provide a probabilistic analysis tool for studying the various space transportation system abort mode situations is presented. The simulation model is based on Monte Carlo simulation of an event-tree diagram which accounts for events during the space transportation system's ascent and its abort modes. The simulation model considers just the propulsion elements of the shuttle system (i.e., external tank, main engines, and solid boosters). The model was developed to provide a better understanding of the probability of occurrence and successful completion of abort modes during the vehicle's ascent. The results of the simulation runs discussed are for demonstration purposes only; they are not official NASA probability estimates.
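
    A minimal sketch of a Monte Carlo event-tree simulation in this spirit; the failure probabilities and abort logic are invented for illustration and are in no way NASA estimates.

```python
# Sketch of a Monte Carlo event-tree simulation over hypothetical ascent events.
# All probabilities below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

p_srb_fail = 0.001       # solid rocket booster failure (assumed non-survivable)
p_ssme_fail = 0.005      # one main engine shuts down during ascent
p_abort_success = 0.90   # probability an intact abort completes successfully

outcomes = {"nominal": 0, "abort_success": 0, "abort_fail": 0, "loss": 0}
for _ in range(N):
    if rng.random() < p_srb_fail:
        outcomes["loss"] += 1
    elif rng.random() < p_ssme_fail:
        if rng.random() < p_abort_success:
            outcomes["abort_success"] += 1
        else:
            outcomes["abort_fail"] += 1
    else:
        outcomes["nominal"] += 1

for name, count in outcomes.items():
    print(f"{name:14s} {count / N:.4f}")
```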

  13. Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as examples of widely used data-driven classification and modeling strategies.

  14. Spatial planning using probabilistic flood maps

    NASA Astrophysics Data System (ADS)

    Alfonso, Leonardo; Mukolwe, Micah; Di Baldassarre, Giuliano

    2015-04-01

    Probabilistic flood maps account for uncertainty in flood inundation modelling and convey a degree of certainty in the outputs. Major sources of uncertainty include input data, topographic data, model structure, observation data and parametric uncertainty. Decision makers prefer less ambiguous information from modellers; this implies that uncertainty is suppressed to yield binary flood maps. However, suppressing information may lead to surprises or misleading decisions. Including uncertain information in the decision-making process is therefore desirable and more transparent. To this end, we utilise prospect theory and information from a probabilistic flood map to evaluate potential decisions. Consequences related to the decisions were evaluated using flood risk analysis. Prospect theory explains how choices are made given options for which probabilities of occurrence are known, and accounts for decision makers' characteristics such as loss aversion and risk seeking. Our results show that decision making is pronounced when there are high gains and losses, implying higher payoffs and penalties and therefore a higher gamble. The methodology may thus be appropriately considered when making decisions based on uncertain information.
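
    As a hedged illustration of how a probabilistic flood map can feed a prospect-theory evaluation, the sketch below compares a "protect" and a "do nothing" option using the standard Tversky-Kahneman value and probability-weighting functions; the parameter values, losses, and inundation probability are assumptions, not figures from the study.

```python
# Sketch of evaluating a protect/do-nothing decision from a probabilistic flood
# map with prospect theory. Standard Tversky-Kahneman forms; all numbers are
# illustrative assumptions.

def value(x, alpha=0.88, lam=2.25):
    """S-shaped value function: concave for gains, steeper (loss-averse) for losses."""
    v = abs(x) ** alpha
    return v if x >= 0 else -lam * v

def weight(p, gamma=0.69):
    """Inverse-S probability weighting function w(p)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

p_flood = 0.2                 # probability of inundation from the flood map
loss_if_flooded = -100.0      # damage if unprotected and flooded (arbitrary units)
cost_protection = -25.0       # cost of the protective measure

do_nothing = weight(p_flood) * value(loss_if_flooded)
protect = value(cost_protection)
print(f"prospect value, do nothing: {do_nothing:.1f}")
print(f"prospect value, protect   : {protect:.1f}")
```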

  15. Effects of shipping on marine acoustic habitats in Canadian Arctic estimated via probabilistic modeling and mapping.

    PubMed

    Aulanier, Florian; Simard, Yvan; Roy, Nathalie; Gervaise, Cédric; Bandet, Marion

    2017-12-15

    Canadian Arctic and Subarctic regions experience a rapid decrease of sea ice accompanied by increasing shipping traffic. The resulting time-space changes in shipping noise are studied for four key regions of this pristine environment, for 2013 traffic conditions and a hypothetical tenfold traffic increase. A probabilistic modeling and mapping framework, called Ramdam, which integrates the intrinsic variability and uncertainties of shipping noise and its effects on marine habitats, is developed and applied. A substantial transformation of soundscapes is observed in areas where shipping noise changes from a present occasional-transient contributor to a dominant noise source. Examination of impacts on low-frequency mammals within ecologically and biologically significant areas reveals that shipping noise has the potential to trigger behavioral responses and masking in the future, although no risk of temporary or permanent hearing threshold shifts is noted. Such probabilistic modeling and mapping is strategic in marine spatial planning for these emerging noise issues. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  16. On the probabilistic structure of water age: Probabilistic Water Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porporato, Amilcare; Calabrese, Salvatore

    The age distribution of water in hydrologic systems has received renewed interest recently, especially in relation to watershed response to rainfall inputs. The purpose of this contribution is first to draw attention to existing theories of age distributions in population dynamics, fluid mechanics and stochastic groundwater, and in particular to the McKendrick-von Foerster equation and its generalizations and solutions. A second and more important goal is to clarify that, when hydrologic fluxes are modeled by means of time-varying stochastic processes, the age distributions must themselves be treated as random functions. Once their probabilistic structure is obtained, it can be used to characterize the variability of age distributions in real systems and thus help quantify the inherent uncertainty in the field determination of water age. Finally, we illustrate these concepts with reference to a stochastic storage model, which has been used as a minimalist model of soil moisture and streamflow dynamics.
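
    For reference, a standard form of the McKendrick-von Foerster equation mentioned above is given below; the notation (age density n(a,t), loss rate μ(a,t), input flux J(t)) is generic and not necessarily that of the paper.

```latex
% A standard form of the McKendrick-von Foerster equation for the age density
% n(a,t), with loss rate \mu(a,t) and input flux J(t) at age zero; generic
% notation, not necessarily the paper's.
\frac{\partial n(a,t)}{\partial t} + \frac{\partial n(a,t)}{\partial a}
  = -\mu(a,t)\, n(a,t),
\qquad n(0,t) = J(t).
```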

  17. On the probabilistic structure of water age: Probabilistic Water Age

    DOE PAGES

    Porporato, Amilcare; Calabrese, Salvatore

    2015-04-23

    The age distribution of water in hydrologic systems has received renewed interest recently, especially in relation to watershed response to rainfall inputs. The purpose of this contribution is first to draw attention to existing theories of age distributions in population dynamics, fluid mechanics and stochastic groundwater, and in particular to the McKendrick-von Foerster equation and its generalizations and solutions. A second and more important goal is to clarify that, when hydrologic fluxes are modeled by means of time-varying stochastic processes, the age distributions must themselves be treated as random functions. Once their probabilistic structure is obtained, it can be used to characterize the variability of age distributions in real systems and thus help quantify the inherent uncertainty in the field determination of water age. Finally, we illustrate these concepts with reference to a stochastic storage model, which has been used as a minimalist model of soil moisture and streamflow dynamics.

  18. Modeling the Evolution of Beliefs Using an Attentional Focus Mechanism

    PubMed Central

    Marković, Dimitrije; Gläscher, Jan; Bossaerts, Peter; O’Doherty, John; Kiebel, Stefan J.

    2015-01-01

    For making decisions in everyday life we often first have to infer the set of environmental features that are relevant for the current task. Here we investigated the computational mechanisms underlying the evolution of beliefs about the relevance of environmental features in a dynamic and noisy environment. For this purpose we designed a probabilistic Wisconsin card sorting task (WCST) with belief solicitation, in which subjects were presented with stimuli composed of multiple visual features. At each moment in time a particular feature was relevant for obtaining reward, and participants had to infer which feature was relevant and report their beliefs accordingly. To test the hypothesis that attentional focus modulates the belief update process, we derived and fitted several probabilistic and non-probabilistic behavioral models, which either incorporate a dynamical model of attentional focus, in the form of a hierarchical winner-take-all neuronal network, or a diffusive model without attention-like features. We used Bayesian model selection to identify the most likely generative model of subjects' behavior and found that attention-like features in the behavioral model are essential for explaining subjects' responses. Furthermore, we demonstrate a method for integrating both connectionist and Bayesian models of decision making within a single framework that allowed us to infer hidden belief processes of human subjects. PMID:26495984

  19. Learning Orthographic Structure With Sequential Generative Neural Networks.

    PubMed

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.

  20. New ShakeMaps for Georgia Resulting from Collaboration with EMME

    NASA Astrophysics Data System (ADS)

    Kvavadze, N.; Tsereteli, N. S.; Varazanashvili, O.; Alania, V.

    2015-12-01

    Correct assessment of probabilistic seismic hazard and risk maps is the first step toward advance planning and action to reduce seismic risk. Seismic hazard maps for Georgia were calculated based on a modern approach developed in the frame of the EMME (Earthquake Model of the Middle East region) project. EMME was one of GEM's successful endeavors at the regional level. With EMME and GEM assistance, regional models were analyzed to identify the information and additional work needed for the preparation of national hazard models. Probabilistic seismic hazard (PSH) maps provide the critical basis for improved building codes and construction. The most serious deficiency in PSH assessment for the territory of Georgia is the lack of high-quality ground motion data. For this reason, an initial hybrid empirical ground motion model was developed for PGA and SA at selected periods, and these ground motion models were then applied in the probabilistic seismic hazard assessment. The resulting seismic hazard maps show that there were gaps in previous seismic hazard assessments and that the present normative seismic hazard map needs careful recalculation.

  1. Verification and Optimal Control of Context-Sensitive Probabilistic Boolean Networks Using Model Checking and Polynomial Optimization

    PubMed Central

    Hiraishi, Kunihiko

    2014-01-01

    One of the significant topics in systems biology is the development of control theory for gene regulatory networks (GRNs). In typical control of GRNs, expression of some genes is inhibited (activated) by manipulating external stimuli and the expression of other genes. Control theory of GRNs is expected to be applied to gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, and gene expression is expressed by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), which is one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide useful tools for the control theory of GRNs. PMID:24587766

  2. Probabilistic Common Spatial Patterns for Multichannel EEG Analysis

    PubMed Central

    Chen, Zhe; Gao, Xiaorong; Li, Yuanqing; Brown, Emery N.; Gao, Shangkai

    2015-01-01

    Common spatial patterns (CSP) is a well-known spatial filtering algorithm for multichannel electroencephalogram (EEG) analysis. In this paper, we cast the CSP algorithm in a probabilistic modeling setting. Specifically, probabilistic CSP (P-CSP) is proposed as a generic EEG spatio-temporal modeling framework that subsumes the CSP and regularized CSP algorithms. The proposed framework enables us to resolve the overfitting issue of CSP in a principled manner. We derive statistical inference algorithms that can alleviate the issue of local optima. In particular, an efficient algorithm based on eigendecomposition is developed for maximum a posteriori (MAP) estimation in the case of isotropic noise. For more general cases, a variational algorithm is developed for group-wise sparse Bayesian learning for the P-CSP model and for automatically determining the model size. The two proposed algorithms are validated on a simulated data set. Their practical efficacy is also demonstrated by successful applications to single-trial classifications of three motor imagery EEG data sets and by the spatio-temporal pattern analysis of one EEG data set recorded in a Stroop color naming task. PMID:26005228

  3. On the limits of probabilistic forecasting in nonlinear time series analysis II: Differential entropy.

    PubMed

    Amigó, José M; Hirata, Yoshito; Aihara, Kazuyuki

    2017-08-01

    In a previous paper, the authors studied the limits of probabilistic prediction in nonlinear time series analysis in a perfect model scenario, i.e., in the ideal case that the uncertainty of an otherwise deterministic model is due only to the finite precision of the observations. The model consisted of the symbolic dynamics of a measure-preserving transformation with respect to a finite partition of the state space, and the quality of the predictions was measured by the so-called ignorance score, which is a conditional entropy. In practice, though, partitions are dispensed with by considering numerical and experimental data to be continuous, which prompts us in this paper to trade the Shannon entropy for the differential entropy. Despite technical differences, we show that the core of the previous results also holds in this extended scenario for sufficiently high precision. The corresponding imperfect model scenario is also revisited, as it is relevant for applications. The theoretical part and its application to probabilistic forecasting are illustrated with numerical simulations and a new prediction algorithm.
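
    For reference, the two entropies being traded are, in standard (generic) notation:

```latex
% Standard definitions referred to above: the Shannon entropy of a discrete
% (partition-based) description versus the differential entropy of a continuous
% density f(x); generic notation, not the paper's.
H(X) = -\sum_i p_i \log p_i
\qquad \longrightarrow \qquad
h(X) = -\int f(x)\, \log f(x)\, dx .
```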

  4. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    NASA Astrophysics Data System (ADS)

    Soltanzadeh, I.; Azadi, M.; Vakili, G. A.

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
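
    A minimal sketch of the BMA predictive distribution for 2-m temperature, written as a weighted mixture of Gaussians centred on bias-corrected member forecasts; the member values, weights, bias coefficients and spread are illustrative stand-ins for the quantities that would be estimated by EM over the 40-day training window.

```python
# Sketch of the BMA predictive PDF for 2-m temperature: a weighted mixture of
# Gaussians centred on bias-corrected member forecasts. All numbers below are
# illustrative; in practice they are estimated by EM over the training window.
import numpy as np
from scipy.stats import norm

members = np.array([14.2, 15.1, 13.8, 15.6, 14.9, 16.0, 14.5])  # raw forecasts (deg C)
a, b = 0.3, 1.0                      # bias correction f_corr = a + b * f
weights = np.array([0.25, 0.20, 0.05, 0.15, 0.10, 0.15, 0.10])  # sum to 1
sigma = 1.2                          # common predictive spread (deg C)

def bma_pdf(t):
    """BMA predictive density evaluated at temperature t."""
    return np.sum(weights * norm.pdf(t, loc=a + b * members, scale=sigma))

def bma_cdf(t):
    """BMA predictive cumulative probability at temperature t."""
    return np.sum(weights * norm.cdf(t, loc=a + b * members, scale=sigma))

mean_forecast = np.sum(weights * (a + b * members))   # deterministic-style BMA mean
print("BMA mean forecast:", round(float(mean_forecast), 2), "deg C")
print("P(T < 13 deg C) =", round(float(bma_cdf(13.0)), 3))
```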

  5. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth-sensor-based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and the self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Processes is their cubic learning complexity when dealing with a large database, due to the inverse of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
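
    The sketch below illustrates the local-mixture idea on synthetic 1-D data: training samples are clustered, one Gaussian Process is fitted per cluster, and a query is predicted as a distance-weighted mean over the nearby local models only; the clustering method, weights, and data are assumptions rather than the authors' implementation.

```python
# Sketch of a local mixture of Gaussian Processes: cluster the training data,
# fit one GP per cluster, and predict with a distance-weighted mean of nearby
# local models. Synthetic 1-D data stand in for the pose-reconstruction setting.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

K = 5
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)
local_gps = []
for k in range(K):
    idx = km.labels_ == k
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
    gp.fit(X[idx], y[idx])                    # each GP sees only its local region
    local_gps.append(gp)

def predict(x_query, n_nearby=2):
    """Weighted mean prediction using only the nearest local models."""
    d = np.linalg.norm(km.cluster_centers_ - x_query, axis=1)
    nearby = np.argsort(d)[:n_nearby]
    w = 1.0 / (d[nearby] + 1e-6)
    w /= w.sum()
    preds = np.array([local_gps[k].predict(x_query.reshape(1, -1))[0] for k in nearby])
    return float(np.dot(w, preds))

print(predict(np.array([3.0])), "vs true", np.sin(3.0))
```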

  6. Decision making generalized by a cumulative probability weighting function

    NASA Astrophysics Data System (ADS)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

    Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller but more immediate reward and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in their probability of being received. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments have confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk seeking and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed into decision weights by means of a (cumulative) probability weighting function, w(p). In this article we obtain a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already well established in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and is supported by phenomenological models.
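
    As a hedged illustration (not the paper's generalized function), the sketch below evaluates two classical parametric forms that such a generalization is meant to encompass: the Prelec probability weighting function and a hyperbolic discount of the odds against receiving the reward; parameter values are arbitrary.

```python
# Illustrative sketch of two classical forms that a generalized probability
# weighting / probabilistic discounting function can reduce to: the Prelec
# weighting function and the hyperbolic "odds-against" discount of Rachlin.
# Parameter values are arbitrary; neither is the paper's generalized function.
import numpy as np

def w_prelec(p, alpha=0.65, beta=1.0):
    """Prelec weighting function, inverse-S shaped for alpha < 1."""
    return np.exp(-beta * (-np.log(p)) ** alpha)

def v_hyperbolic_odds(amount, p, h=1.0):
    """Hyperbolic discounting of the odds against receiving: theta = (1-p)/p."""
    theta = (1.0 - p) / p
    return amount / (1.0 + h * theta)

p = np.array([0.05, 0.25, 0.5, 0.75, 0.95])
print("w_Prelec(p)        :", np.round(w_prelec(p), 3))
print("100 * discounted(p):", np.round(v_hyperbolic_odds(100.0, p), 1))
```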

  7. Interference in the classical probabilistic model and its representation in complex Hilbert space

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei Yu.

    2005-10-01

    The notion of a context (a complex of physical conditions, that is to say, a specification of the measurement setup) is basic in this paper. We show that the main structures of quantum theory (interference of probabilities, Born's rule, complex probabilistic amplitudes, Hilbert state space, representation of observables by operators) are already present in a latent form in the classical Kolmogorov probability model. However, this model should be considered as a calculus of contextual probabilities. In our approach it is forbidden to consider abstract context-independent probabilities: "first context and only then probability". We construct the representation of the general contextual probabilistic dynamics in the complex Hilbert space. Thus the dynamics of the wave function (in particular, Schrödinger's dynamics) can be considered as a Hilbert space projection of a realistic dynamics in a "prespace". The basic condition for representing the prespace dynamics is the law of statistical conservation of energy (conservation of probabilities). In general the Hilbert space projection of the prespace dynamics can be nonlinear and even irreversible (but it is always unitary). Methods developed in this paper can be applied not only to quantum mechanics, but also to classical statistical mechanics. The main quantum-like structures (e.g., interference of probabilities) might be found in some models of classical statistical mechanics. Quantum-like probabilistic behavior can be demonstrated by biological systems. In particular, it was recently found in some psychological experiments.

  8. Probabilistic modelling of human exposure to intense sweeteners in Italian teenagers: validation and sensitivity analysis of a probabilistic model including indicators of market share and brand loyalty.

    PubMed

    Arcella, D; Soggiu, M E; Leclercq, C

    2003-10-01

    For the assessment of exposure to food-borne chemicals, the most commonly used methods in the European Union follow a deterministic approach based on conservative assumptions. Over the past few years, to get a more realistic view of exposure to food chemicals, risk managers have become more interested in the probabilistic approach. Within the EU-funded 'Monte Carlo' project, a stochastic model of exposure to chemical substances from the diet and a computer software program were developed. The aim of this paper was to validate the model, using the software, with respect to the intake of saccharin from table-top sweeteners and of cyclamate from soft drinks by Italian teenagers, and to evaluate the impact of the inclusion or exclusion of indicators of market share and brand loyalty through a sensitivity analysis. Data on food consumption and the concentration of sweeteners were collected. A food frequency questionnaire aimed at identifying females who were high consumers of sugar-free soft drinks and/or of table-top sweeteners was filled in by 3982 teenagers living in the District of Rome. Moreover, 362 subjects participated in a detailed food survey by recording, at brand level, all foods and beverages ingested over 12 days. Producers were asked to provide the intense sweetener concentrations of their sugar-free products. Results showed that consumer behaviour with respect to brands has an impact on exposure assessments. Only probabilistic models that took into account indicators of market share and brand loyalty met the validation criteria.
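
    A minimal sketch of a Monte Carlo exposure model that includes market share and brand loyalty indicators, in the spirit described above: each simulated consumer has a usual brand drawn from market shares and, on each occasion, keeps it with a loyalty probability; all brands, concentrations, and parameters are invented for illustration.

```python
# Sketch of a probabilistic (Monte Carlo) exposure assessment with market share
# and brand loyalty. All numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_consumers, n_days = 10_000, 12

brands = ["A", "B", "C"]
market_share = np.array([0.5, 0.3, 0.2])
sweetener_mg_per_serving = np.array([40.0, 25.0, 0.0])   # brand C is sweetener-free
loyalty = 0.8                                             # P(stick to usual brand)

servings = rng.poisson(lam=1.0, size=(n_consumers, n_days))        # daily servings
usual = rng.choice(len(brands), size=n_consumers, p=market_share)  # usual brand

daily_intake = np.zeros((n_consumers, n_days))
for d in range(n_days):
    switch = rng.random(n_consumers) >= loyalty
    brand_today = np.where(switch,
                           rng.choice(len(brands), size=n_consumers, p=market_share),
                           usual)
    daily_intake[:, d] = servings[:, d] * sweetener_mg_per_serving[brand_today]

mean_daily = daily_intake.mean(axis=1)
print("95th percentile of mean daily intake:",
      round(float(np.percentile(mean_daily, 95)), 1), "mg/day")
```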

  9. Probabilistic regional climate projection in Japan using a regression model with CMIP5 multi-model ensemble experiments

    NASA Astrophysics Data System (ADS)

    Ishizaki, N. N.; Dairaku, K.; Ueno, G.

    2016-12-01

    We have developed a statistical downscaling method for probabilistic climate projection using multiple CMIP5 general circulation models (GCMs). A regression model was established so that the combination of GCM weights reflects the characteristics of the variation of observations at each grid point. Cross validations were conducted to select GCMs and to evaluate the regression model while avoiding multicollinearity. Using a spatially high-resolution observation system, we produced statistically downscaled probabilistic climate projections with 20-km horizontal grid spacing. Root mean squared errors for monthly mean surface air temperature and precipitation estimated by the regression method were the smallest compared with the results derived from a simple ensemble mean of GCMs and from a cumulative distribution function based bias correction method. Projected changes in mean temperature and precipitation were broadly similar to those of the simple ensemble mean of GCMs. Mean precipitation is generally projected to increase, associated with increased temperature and the consequent increase in atmospheric moisture content. Weakening of the winter monsoon may lead to decreased precipitation in some areas. A temperature increase in excess of 4 K is expected in most areas of Japan by the end of the 21st century under the RCP8.5 scenario. The estimated probability of monthly precipitation exceeding 300 mm would increase on the Pacific side during the summer and on the Japan Sea side during the winter season. This probabilistic climate projection based on the statistical method can be expected to provide useful information for impact studies and risk assessments.

  10. Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs

    NASA Astrophysics Data System (ADS)

    Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan

    2016-04-01

    Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in the numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in decision-making processes [DE06]. The ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to get new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", who are meteorologists working in operational weather forecasting, and "researchers", who work on the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations and uncertainty. A dashboard with small multiples of probabilistic forecasts allows forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. It provides the user with a more accurate measure of forecast uncertainty that could result in better decision-making. It offers different levels of abstraction to help with the recalibration of the RAR method. It also has an inspection tool that displays the selected analogs, their observations and statistical data, giving users access to inner parts of the method and unveiling hidden information. References [GR05] GNEITING T., RAFTERY A. E.: Weather forecasting with ensemble methods. Science 310, 5746, 248-249, 2005. [KAL03] KALNAY E.: Atmospheric modeling, data assimilation and predictability. Cambridge University Press, 2003. [PH06] PALMER T., HAGEDORN R.: Predictability of weather and climate. Cambridge University Press, 2006. [HW06] HAMILL T. M., WHITAKER J. S.: Probabilistic quantitative precipitation forecasts based on reforecast analogs: Theory and application. Monthly Weather Review 134, 11, 3209-3229, 2006. [DE06] DEITRICK S., EDSALL R.: The influence of uncertainty visualization on decision making: An empirical evaluation. Springer, 2006. [KMS08] KEIM D. A., MANSMANN F., SCHNEIDEWIND J., THOMAS J., ZIEGLER H.: Visual analytics: Scope and challenges. Springer, 2008.
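
    A minimal sketch of the reforecast-analog idea underlying RAR, under stated assumptions: for a given raw forecast, the closest historical forecasts are found and the probabilistic forecast is the frequency of the event among the observations that followed them; the data, similarity metric, and number of analogs are illustrative.

```python
# Sketch of the reforecast-analog idea: find the K most similar historical
# forecasts and build the probabilistic forecast from the observations that
# followed them. Data are synthetic; K and the similarity metric are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_hist = 5000
hist_forecast = rng.gamma(2.0, 5.0, size=n_hist)                  # archived raw forecasts (mm)
hist_observed = hist_forecast * rng.lognormal(0.0, 0.5, n_hist)   # matching observations

def analog_probability(raw_forecast, threshold, k=50):
    """P(obs > threshold) estimated from the k closest historical forecasts."""
    idx = np.argsort(np.abs(hist_forecast - raw_forecast))[:k]
    return (hist_observed[idx] > threshold).mean()

today = 18.0   # today's raw precipitation forecast (mm)
print("P(precip > 20 mm) =", analog_probability(today, 20.0))
```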

  11. Probabilistic Tsunami Hazard Assessment: the Seaside, Oregon Pilot Study

    NASA Astrophysics Data System (ADS)

    Gonzalez, F. I.; Geist, E. L.; Synolakis, C.; Titov, V. V.

    2004-12-01

    A pilot study of Seaside, Oregon is underway, to develop methodologies for probabilistic tsunami hazard assessments that can be incorporated into Flood Insurance Rate Maps (FIRMs) developed by FEMA's National Flood Insurance Program (NFIP). Current NFIP guidelines for tsunami hazard assessment rely on the science, technology and methodologies developed in the 1970s; although generally regarded as groundbreaking and state-of-the-art for its time, this approach is now superseded by modern methods that reflect substantial advances in tsunami research achieved in the last two decades. In particular, post-1990 technical advances include: improvements in tsunami source specification; improved tsunami inundation models; better computational grids by virtue of improved bathymetric and topographic databases; a larger database of long-term paleoseismic and paleotsunami records and short-term, historical earthquake and tsunami records that can be exploited to develop improved probabilistic methodologies; better understanding of earthquake recurrence and probability models. The NOAA-led U.S. National Tsunami Hazard Mitigation Program (NTHMP), in partnership with FEMA, USGS, NSF and Emergency Management and Geotechnical agencies of the five Pacific States, incorporates these advances into site-specific tsunami hazard assessments for coastal communities in Alaska, California, Hawaii, Oregon and Washington. NTHMP hazard assessment efforts currently focus on developing deterministic, "credible worst-case" scenarios that provide valuable guidance for hazard mitigation and emergency management. The NFIP focus, on the other hand, is on actuarial needs that require probabilistic hazard assessments such as those that characterize 100- and 500-year flooding events. There are clearly overlaps in NFIP and NTHMP objectives. NTHMP worst-case scenario assessments that include an estimated probability of occurrence could benefit the NFIP; NFIP probabilistic assessments of 100- and 500-yr events could benefit the NTHMP. The joint NFIP/NTHMP pilot study at Seaside, Oregon is organized into three closely related components: Probabilistic, Modeling, and Impact studies. Probabilistic studies (Geist, et al., this session) are led by the USGS and include the specification of near- and far-field seismic tsunami sources and their associated probabilities. Modeling studies (Titov, et al., this session) are led by NOAA and include the development and testing of a Seaside tsunami inundation model and an associated database of computed wave height and flow velocity fields. Impact studies (Synolakis, et al., this session) are led by USC and include the computation and analyses of indices for the categorization of hazard zones. The results of each component study will be integrated to produce a Seaside tsunami hazard map. This presentation will provide a brief overview of the project and an update on progress, while the above-referenced companion presentations will provide details on the methods used and the preliminary results obtained by each project component.

  12. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data

    PubMed Central

    Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel

    2014-01-01

    This work is in line with an on-going effort tending toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to peer with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work exposes a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198

  13. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2003-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  14. Application of Probability Methods to Assess Crash Modeling Uncertainty

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Stockwell, Alan E.; Hardy, Robin C.

    2007-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the effects of finite element modeling assumptions on the predicted responses. The vertical drop test of a Fokker F28 fuselage section will be the focus of this paper. The results of a probabilistic analysis using finite element simulations will be compared with experimental data.

  15. Spatiotemporal movement planning and rapid adaptation for manual interaction.

    PubMed

    Huber, Markus; Kupferberg, Aleksandra; Lenz, Claus; Knoll, Alois; Brandt, Thomas; Glasauer, Stefan

    2013-01-01

    Many everyday tasks require the ability of two or more individuals to coordinate their actions with others to increase efficiency. Such an increase in efficiency can often be observed even after only very few trials. Previous work suggests that such behavioral adaptation can be explained within a probabilistic framework that integrates sensory input and prior experience. Even though higher cognitive abilities such as intention recognition have been described as probabilistic estimation depending on an internal model of the other agent, it is not clear whether much simpler daily interaction is consistent with a probabilistic framework. Here, we investigate whether the mechanisms underlying efficient coordination during manual interactions can be understood as probabilistic optimization. For this purpose we studied in several experiments a simple manual handover task concentrating on the action of the receiver. We found that the duration until the receiver reacts to the handover decreases over trials, but strongly depends on the position of the handover. We then replaced the human deliverer by different types of robots to further investigate the influence of the delivering movement on the reaction of the receiver. Durations were found to depend on movement kinematics and the robot's joint configuration. Modeling the task was based on the assumption that the receiver's decision to act is based on the accumulated evidence for a specific handover position. The evidence for this handover position is collected from observing the hand movement of the deliverer over time and, if appropriate, by integrating this sensory likelihood with prior expectation that is updated over trials. The close match of model simulations and experimental results shows that the efficiency of handover coordination can be explained by an adaptive probabilistic fusion of a-priori expectation and online estimation.

  16. Joint Probabilistic Projection of Female and Male Life Expectancy

    PubMed Central

    Raftery, Adrian E.; Lalic, Nevena; Gerland, Patrick

    2014-01-01

    BACKGROUND The United Nations (UN) produces population projections for all countries every two years. These are used by international organizations, governments, the private sector and researchers for policy planning, for monitoring development goals, as inputs to economic and environmental models, and for social and health research. The UN is considering producing fully probabilistic population projections, for which joint probabilistic projections of future female and male life expectancy at birth are needed. OBJECTIVE We propose a methodology for obtaining joint probabilistic projections of female and male life expectancy at birth. METHODS We first project female life expectancy using a one-sex method for probabilistic projection of life expectancy. We then project the gap between female and male life expectancy. We propose an autoregressive model for the gap in a future time period for a particular country, which is a function of female life expectancy and a t-distributed random perturbation. This method takes into account mortality data limitations, is comparable across countries, and accounts for shocks. We estimate all parameters based on life expectancy estimates for 1950–2010. The methods are implemented in the bayesLife and bayesPop R packages. RESULTS We evaluated our model using out-of-sample projections for the period 1995–2010, and found that our method performed better than several possible alternatives. CONCLUSIONS We find that the average gap between female and male life expectancy has been increasing for female life expectancy below 75, and decreasing for female life expectancy above 75. Our projections of the gap are lower than the UN’s 2008 projections for most countries and so lead to higher projections of male life expectancy. PMID:25580082
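
    A minimal sketch of the projection idea in this record, assuming a toy AR(1) form for the female-male gap whose long-run level rises below a female life expectancy of 75 years and falls above it, with Student-t perturbations; every coefficient here is an illustrative placeholder, not a fitted bayesLife parameter.

```python
import numpy as np

def gap_level(e_f):
    """Illustrative long-run gap: rising while female life expectancy is below
    75 years and declining above it, echoing the pattern reported above."""
    return 4.0 + 0.15 * (e_f - 60.0) if e_f < 75.0 else 6.25 - 0.10 * (e_f - 75.0)

def project_gap(gap0, female_le, phi=0.8, sigma=0.2, df=4, n_traj=2000, seed=0):
    """Toy AR(1) projection of the gap with t-distributed perturbations."""
    rng = np.random.default_rng(seed)
    gaps = np.full(n_traj, gap0, dtype=float)
    for e_f in female_le:                      # one projection period per step
        mu = gap_level(e_f)
        shocks = sigma * rng.standard_t(df, size=n_traj)
        gaps = mu + phi * (gaps - mu) + shocks
    return gaps

# Median and 80% interval of the projected gap; projected male life expectancy
# would be the separately projected female life expectancy minus these samples.
gaps = project_gap(gap0=5.0, female_le=[78, 79, 80, 81])
print(np.percentile(gaps, [10, 50, 90]))
```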

  17. Probabilistic Fracture Mechanics of Reactor Pressure Vessels with Populations of Flaws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Backman, Marie; Williams, Paul

    This report documents recent progress in developing a tool that uses the Grizzly and RAVEN codes to perform probabilistic fracture mechanics analyses of reactor pressure vessels in light water reactor nuclear power plants. The Grizzly code is being developed with the goal of creating a general tool that can be applied to study a variety of degradation mechanisms in nuclear power plant components. Because of the central role of the reactor pressure vessel (RPV) in a nuclear power plant, particular emphasis is being placed on developing capabilities to model fracture in embrittled RPVs to aid in the process surrounding decision making relating to life extension of existing plants. A typical RPV contains a large population of pre-existing flaws introduced during the manufacturing process. The use of probabilistic techniques is necessary to assess the likelihood of crack initiation at one or more of these flaws during a transient event. This report documents development and initial testing of a capability to perform probabilistic fracture mechanics of large populations of flaws in RPVs using reduced order models to compute fracture parameters. The work documented here builds on prior efforts to perform probabilistic analyses of a single flaw with uncertain parameters, as well as earlier work to develop deterministic capabilities to model the thermo-mechanical response of the RPV under transient events, and compute fracture mechanics parameters at locations of pre-defined flaws. The capabilities developed as part of this work provide a foundation for future work, which will develop a platform that provides the flexibility needed to consider scenarios that cannot be addressed with the tools used in current practice.
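
    The sampling loop at the heart of such an analysis might look like the following sketch; the flaw-count, flaw-depth and toughness distributions and the closed-form stress-intensity expression are placeholders standing in for the Grizzly/RAVEN reduced order models and embrittlement data.

```python
import numpy as np

rng = np.random.default_rng(42)
N_VESSELS = 20000            # Monte Carlo samples of the whole vessel

def applied_k(depth_mm):
    """Placeholder reduced order model: applied stress intensity factor
    (MPa*sqrt(m)) as a simple increasing function of flaw depth."""
    return 15.0 + 8.0 * np.sqrt(depth_mm)

initiated = 0
for _ in range(N_VESSELS):
    n_flaws = rng.poisson(50)                                  # flaws per vessel (illustrative)
    depths = rng.lognormal(mean=0.5, sigma=0.6, size=n_flaws)  # flaw depths, mm (illustrative)
    k_applied = applied_k(depths)
    k_ic = 20.0 + 60.0 * rng.weibull(4.0, size=n_flaws)        # toughness samples (illustrative)
    if np.any(k_applied > k_ic):                               # any flaw initiates a crack
        initiated += 1

print("P(crack initiation during the transient) ~", initiated / N_VESSELS)
```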

  18. A temporal-spatial postprocessing model for probabilistic run-off forecast. With a case study from Ulla-Førre with five catchments and ten lead times

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.

    2012-04-01

    This work is driven by the needs of next-generation short-term optimization methodology for hydro power production. Stochastic optimization is about to be introduced; i.e. optimizing when available resources (water) and utility (prices) are uncertain. In this paper we focus on the available resources, i.e. water, where uncertainty mainly comes from uncertainty in future runoff. When optimizing a water system, all catchments and several lead times have to be considered simultaneously. Depending on the system of hydropower reservoirs, it might be a set of headwater catchments, a system of upstream/downstream reservoirs where water used from one catchment/dam arrives in a lower catchment maybe days later, or a combination of both. The aim of this paper is therefore to construct a simultaneous probabilistic forecast for several catchments and lead times, i.e. to provide a predictive distribution for the forecasts. Stochastic optimization methods need samples/ensembles of run-off forecasts as input; hence, it should also be possible to sample from our probabilistic forecast. A post-processing approach is taken, and an error model based on a Box-Cox power transformation and a temporal-spatial copula model is used. It accounts for both between-catchment and between-lead-time dependencies. In operational use it is straightforward to sample run-off ensembles from this model; the ensembles inherit the catchment and lead-time dependencies. The methodology is tested and demonstrated in the Ulla-Førre river system, and simultaneous probabilistic forecasts for five catchments and ten lead times are constructed. The methodology has enough flexibility to model operationally important features in this case study such as heteroscedasticity, lead-time-varying temporal dependency and lead-time-varying inter-catchment dependency. Our model is evaluated using the CRPS for marginal predictive distributions and the energy score for the joint predictive distribution. It is tested against a deterministic run-off forecast, a climatology forecast and a persistence forecast, and is found to be the best probabilistic forecast for lead times greater than two. From an operational point of view the results are interesting, as the between-catchment dependency gets stronger with longer lead times.
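
    A bare-bones sketch of how such a post-processing model can emit ensembles: errors are drawn from a separable catchment-by-lead-time Gaussian copula in Box-Cox space and transformed back to flows. The correlation structure, Box-Cox parameter and error scales below are invented for illustration and are not the calibrated Ulla-Førre values.

```python
import numpy as np
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
n_catch, n_lead, n_ens = 5, 10, 100
lam = 0.3                                      # Box-Cox parameter (illustrative)

# Separable space-time dependence: constant inter-catchment correlation times
# an AR(1)-like decay across lead times.
space = 0.6 + 0.4 * np.eye(n_catch)
lags = np.abs(np.subtract.outer(np.arange(n_lead), np.arange(n_lead)))
time = 0.8 ** lags
corr = np.kron(space, time)

det_flow = np.full((n_catch, n_lead), 40.0)    # deterministic forecast, m3/s
sigma = np.linspace(0.1, 0.4, n_lead)          # error sd grows with lead time

z = rng.multivariate_normal(np.zeros(n_catch * n_lead), corr, size=n_ens)
z = z.reshape(n_ens, n_catch, n_lead) * sigma  # correlated errors in Box-Cox space
bc_det = (det_flow ** lam - 1.0) / lam         # Box-Cox transform of the forecast
ensemble = inv_boxcox(bc_det + z, lam)         # ensemble members back in flow space

print(ensemble.shape)                          # (members, catchments, lead times)
```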

  19. How Evolution May Work Through Curiosity-Driven Developmental Process.

    PubMed

    Oudeyer, Pierre-Yves; Smith, Linda B

    2016-04-01

    Infants' own activities create and actively select their learning experiences. Here we review recent models of embodied information seeking and curiosity-driven learning and show that these mechanisms have deep implications for development and evolution. We discuss how these mechanisms yield self-organized epigenesis with emergent ordered behavioral and cognitive developmental stages. We describe a robotic experiment that explored the hypothesis that progress in learning, in and for itself, generates intrinsic rewards: The robot learners probabilistically selected experiences according to their potential for reducing uncertainty. In these experiments, curiosity-driven learning led the robot learner to successively discover object affordances and vocal interaction with its peers. We explain how a learning curriculum adapted to the current constraints of the learning system automatically formed, constraining learning and shaping the developmental trajectory. The observed trajectories in the robot experiment share many properties with those in infant development, including a mixture of regularities and diversities in the developmental patterns. Finally, we argue that such emergent developmental structures can guide and constrain evolution, in particular with regard to the origins of language. Copyright © 2016 Cognitive Science Society, Inc.

  20. Hybrid time-variant reliability estimation for active control structures under aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui

    2018-04-01

    Considering that multi-source uncertainties from inherent nature as well as the external environment are unavoidable and severely affect the controller performance, the dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, the uncertainty quantification analysis and time-variant reliability estimation corresponding to the closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first confirmed with specific implementation of random items. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Inspired by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for the vibration control systems and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the methodology developed.

  1. Three validation metrics for automated probabilistic image segmentation of brain tumours

    PubMed Central

    Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.

    2005-01-01

    SUMMARY The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
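
    For the Dice-based part of this procedure, the threshold search reduces to something like the sketch below; the data are synthetic, whereas the real inputs would be the automated probability map and the EM-derived composite gold standard.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def best_threshold(prob_map, reference, thresholds=np.linspace(0.05, 0.95, 19)):
    """Pick the decision threshold that maximises Dice against the reference mask."""
    scores = [dice(prob_map >= t, reference) for t in thresholds]
    i = int(np.argmax(scores))
    return thresholds[i], scores[i]

# Tiny synthetic example standing in for a probabilistic tumour segmentation.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
probs = np.clip(truth + rng.normal(0, 0.3, truth.shape), 0, 1)
print(best_threshold(probs, truth))
```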

  2. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    PubMed

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis technique, particularly due to the presence of intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It integrates judiciously the concept of rough sets and the merit of a novel probability distribution, called stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of brain MR image is modeled as a mixture of finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  3. The Global Tsunami Model (GTM)

    NASA Astrophysics Data System (ADS)

    Thio, H. K.; Løvholt, F.; Harbitz, C. B.; Polet, J.; Lorito, S.; Basili, R.; Volpe, M.; Romano, F.; Selva, J.; Piatanesi, A.; Davies, G.; Griffin, J.; Baptista, M. A.; Omira, R.; Babeyko, A. Y.; Power, W. L.; Salgado Gálvez, M.; Behrens, J.; Yalciner, A. C.; Kanoglu, U.; Pekcan, O.; Ross, S.; Parsons, T.; LeVeque, R. J.; Gonzalez, F. I.; Paris, R.; Shäfer, A.; Canals, M.; Fraser, S. A.; Wei, Y.; Weiss, R.; Zaniboni, F.; Papadopoulos, G. A.; Didenkulova, I.; Necmioglu, O.; Suppasri, A.; Lynett, P. J.; Mokhtari, M.; Sørensen, M.; von Hillebrandt-Andrade, C.; Aguirre Ayerbe, I.; Aniel-Quiroga, Í.; Guillas, S.; Macias, J.

    2016-12-01

    The large tsunami disasters of the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but disastrous tsunamis and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework of Disaster Risk Reduction (SFDRR) we have initiated a Global Tsunami Model (GTM) working group with the aim of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15.

  4. The Global Tsunami Model (GTM)

    NASA Astrophysics Data System (ADS)

    Lorito, S.; Basili, R.; Harbitz, C. B.; Løvholt, F.; Polet, J.; Thio, H. K.

    2017-12-01

    The tsunamis that have occurred worldwide in the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but often disastrous tsunamis and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework of Disaster Risk Reduction (SFDRR) we have initiated a Global Tsunami Model (GTM) working group with the aim of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15.

  5. Probabilistic topic modeling for the analysis and classification of genomic sequences

    PubMed Central

    2015-01-01

    Background Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies are focusing on the so-called barcode genes, representing a well-defined region of the whole genome. Recently, alignment-free techniques are gaining more importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mers representation and text mining techniques. Methods The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied on DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method reaches results very similar to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences and it exhibits a smooth decrease of performance, at every taxonomic level, when the sequence length is decreased. PMID:25916734
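
    A minimal sketch of the pipeline described here, using scikit-learn's LDA on overlapping k-mer 'documents'; the toy sequences, k-mer length and topic count are placeholders rather than the RDP setup.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

K = 4  # k-mer length (illustrative; the record uses fixed-length k-mers)

def to_kmer_doc(seq, k=K):
    """Turn a DNA sequence into a 'document' of overlapping k-mers."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy sequences stand in for 16S barcode data.
seqs = ["ACGTACGTGGCCAA", "ACGTACGTGGCTAA", "TTTTGGGGCCCCAAAA", "TTTTGGGGCCCCAATA"]
docs = [to_kmer_doc(s) for s in seqs]

vec = CountVectorizer(analyzer=str.split)     # tokens are the k-mers themselves
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                  # per-sequence topic proportions
print(np.round(theta, 2))                     # features for a downstream classifier
```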

  6. Development and comparison of metrics for evaluating climate models and estimation of projection uncertainty

    NASA Astrophysics Data System (ADS)

    Ring, Christoph; Pollinger, Felix; Kaspar-Ott, Irena; Hertig, Elke; Jacobeit, Jucundus; Paeth, Heiko

    2017-04-01

    The COMEPRO project (Comparison of Metrics for Probabilistic Climate Change Projections of Mediterranean Precipitation), funded by the Deutsche Forschungsgemeinschaft (DFG), is dedicated to the development of new evaluation metrics for state-of-the-art climate models. Further, we analyze implications for probabilistic projections of climate change. This study focuses on the results of 4-field matrix metrics. Here, six different approaches are compared. We evaluate 24 models of the Coupled Model Intercomparison Project Phase 3 (CMIP3), 40 of CMIP5 and 18 of the Coordinated Regional Downscaling Experiment (CORDEX). In addition to the annual and seasonal precipitation, the mean temperature is analysed. We consider both the 50-year trend and the climatological mean for the second half of the 20th century. For the probabilistic projections of climate change, the A1b and A2 (CMIP3) and RCP4.5 and RCP8.5 (CMIP5, CORDEX) scenarios are used. The eight main study areas are located in the Mediterranean. However, we apply our metrics to globally distributed regions as well. The metrics show high simulation quality of temperature trend and both precipitation and temperature mean for most climate models and study areas. In addition, we find high potential for model weighting in order to reduce uncertainty. These results are in line with other accepted evaluation metrics and studies. The comparison of the different 4-field approaches reveals high correlations for most metrics. The results of the metric-weighted probability density functions of climate change are heterogeneous. We find for different regions and seasons both increases and decreases of uncertainty. The analysis of global study areas is consistent with the regional study areas of the Mediterranean.

  7. Predicting Rib Fracture Risk With Whole-Body Finite Element Models: Development and Preliminary Evaluation of a Probabilistic Analytical Framework

    PubMed Central

    Forman, Jason L.; Kent, Richard W.; Mroz, Krystoffer; Pipkorn, Bengt; Bostrom, Ola; Segui-Gomez, Maria

    2012-01-01

    This study sought to develop a strain-based probabilistic method to predict rib fracture risk with whole-body finite element (FE) models, and to describe a method to combine the results with collision exposure information to predict injury risk and potential intervention effectiveness in the field. An age-adjusted ultimate strain distribution was used to estimate local rib fracture probabilities within an FE model. These local probabilities were combined to predict injury risk and severity within the whole ribcage. The ultimate strain distribution was developed from a literature dataset of 133 tests. Frontal collision simulations were performed with the THUMS (Total HUman Model for Safety) model with four levels of delta-V and two restraints: a standard 3-point belt and a progressive 3.5–7 kN force-limited, pretensioned (FL+PT) belt. The results of three simulations (29 km/h standard, 48 km/h standard, and 48 km/h FL+PT) were compared to matched cadaver sled tests. The numbers of fractures predicted for the comparison cases were consistent with those observed experimentally. Combining these results with field exposure information (ΔV, NASS-CDS 1992–2002) suggests an 8.9% probability of incurring AIS3+ rib fractures for a 60-year-old restrained by a standard belt in a tow-away frontal collision with this restraint, vehicle, and occupant configuration, compared to 4.6% for the FL+PT belt. This is the first study to describe a probabilistic framework to predict rib fracture risk based on strains observed in human-body FE models. Using this analytical framework, future efforts may incorporate additional subject or collision factors for multi-variable probabilistic injury prediction. PMID:23169122
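
    The step that rolls local probabilities up to the whole ribcage can be sketched as below; the per-rib probabilities, the independence assumption and the use of three or more fractures as an AIS3+ proxy are illustrative simplifications, not the paper's exact injury criterion.

```python
import numpy as np

def injury_risk(p_fracture, n_for_ais3=3, n_sim=100000, rng=None):
    """Combine per-rib fracture probabilities (assumed independent) into the
    probability of at least `n_for_ais3` fractures, used here as an AIS3+ proxy."""
    rng = rng or np.random.default_rng(0)
    p = np.asarray(p_fracture)
    draws = rng.random((n_sim, p.size)) < p       # Bernoulli draw per rib
    n_frac = draws.sum(axis=1)
    return (n_frac >= n_for_ais3).mean()

# Hypothetical per-rib probabilities from strain exceedance in an FE simulation.
p_rib = [0.05, 0.12, 0.30, 0.25, 0.10, 0.08, 0.04, 0.02]
print("P(AIS3+) ~", injury_risk(p_rib))
```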

  8. The Global Tsunami Model (GTM)

    NASA Astrophysics Data System (ADS)

    Løvholt, Finn

    2017-04-01

    The large tsunami disasters of the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but disastrous tsunamis and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework of Disaster Risk Reduction (SFDRR) we have initiated a Global Tsunami Model (GTM) working group with the aim of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15.

  9. How might Model-based Probabilities Extracted from Imperfect Models Guide Rational Decisions: The Case for non-probabilistic odds

    NASA Astrophysics Data System (ADS)

    Smith, Leonard A.

    2010-05-01

    This contribution concerns "deep" or "second-order" uncertainty, such as the uncertainty in our probability forecasts themselves. It asks the question: "Is it rational to take (or offer) bets using model-based probabilities as if they were objective probabilities?" If not, what alternative approaches for determining odds, perhaps non-probabilistic odds, might prove useful in practice, given the fact we know our models are imperfect? We consider the case where the aim is to provide sustainable odds: not to produce a profit but merely to rationally expect to break even in the long run. In other words, to run a quantified risk of ruin that is relatively small. Thus the cooperative insurance schemes of coastal villages provide a more appropriate parallel than a casino. A "better" probability forecast would lead to lower premiums charged and less volatile fluctuations in the cash reserves of the village. Note that the Bayesian paradigm does not constrain one to interpret model distributions as subjective probabilities, unless one believes the model to be empirically adequate for the task at hand. In geophysics, this is rarely the case. When a probability forecast is interpreted as the objective probability of an event, the odds on that event can be easily computed as one divided by the probability of the event, and one need not favour taking either side of the wager. (Here we are using "odds-for" not "odds-to", the difference being whether or not the stake is returned; odds of one to one are equivalent to odds of two for one.) The critical question is how to compute sustainable odds based on information from imperfect models. We suggest that this breaks the symmetry between the odds-on an event and the odds-against it. While a probability distribution can always be translated into odds, interpreting the odds on a set of events might result in "implied-probabilities" that sum to more than one. And/or the set of odds may be incomplete, not covering all events. We ask whether or not probabilities based on imperfect models can be expected to yield probabilistic odds which are sustainable. Evidence is provided that suggests this is not the case. Even with very good models (good in a Root-Mean-Square sense), the risk of ruin of probabilistic odds is significantly higher than might be expected. Methods for constructing model-based non-probabilistic odds which are sustainable are discussed. The aim here is to be relevant to real world decision support, and so unrealistic assumptions of equal knowledge, equal compute power, or equal access to information are to be avoided. Finally, the use of non-probabilistic odds as a method for communicating deep uncertainty (uncertainty in a probability forecast itself) is discussed in the context of other methods, such as stating one's subjective probability that the models will prove inadequate in each particular instance (that is, the Probability of a "Big Surprise").
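
    A small arithmetic illustration of the odds bookkeeping discussed above: fair 'odds-for' are the reciprocal of the probability, and shortening a set of quoted odds makes the implied probabilities sum to more than one (numbers invented for illustration).

```python
def odds_for(p):
    """'Odds-for' an event: the stake is returned, so fair odds are 1 / probability."""
    return 1.0 / p

print(odds_for(0.5))   # fair odds of 2-for-1 on a 50% event

# Shortened (non-probabilistic) odds quoted on two exhaustive, mutually
# exclusive outcomes; the implied probabilities then sum to more than one.
quoted = {"event": 1.8, "not event": 2.1}
implied = {k: 1.0 / v for k, v in quoted.items()}
print(implied, "sum =", round(sum(implied.values()), 3))
```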

  10. A Historical Overview and Contemporary Expansion of Psychological Theories of Determinism, Probabilistic Causality, Indeterminate Free Will, and Moral and Legal Responsibility

    ERIC Educational Resources Information Center

    Wilks, Duffy; Ratheal, Juli D'Ann

    2009-01-01

    The authors provide a historical overview of the development of contemporary theories of counseling and psychology in relation to determinism, probabilistic causality, indeterminate free will, and moral and legal responsibility. They propose a unique model of behavioral causality that incorporates a theory of indeterminate free will, a concept…

  11. Inference for Continuous-Time Probabilistic Programming

    DTIC Science & Technology

    2017-12-01

    Parzen window density estimator to jointly model the inter-camera travel time intervals, locations of exits/entrances, and velocities of objects... asked to travel across the scene multiple times. Even in such a scenario they formed groups and made social interactions...

  12. A PROBABILISTIC ARSENIC EXPOSURE ASSESSMENT FOR CHILDREN WHO CONTACT CHROMATED COPPER ARSENATE (CCA)-TREATED PLAYSETS AND DECKS: PART 2 SENSITIVITY AND UNCERTAINTY ANALYSIS

    EPA Science Inventory

    A probabilistic model (SHEDS-Wood) was developed to examine children's exposure and dose to chromated copper arsenate (CCA)-treated wood, as described in Part 1 of this two part paper. This Part 2 paper discusses sensitivity and uncertainty analyses conducted to assess the key m...

  13. A unified probabilistic framework for spontaneous facial action modeling and understanding.

    PubMed

    Tong, Yan; Chen, Jixu; Ji, Qiang

    2010-02-01

    Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.

  14. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Kurth, R. E.; Ho, H.

    1986-01-01

    A multiyear program is performed with the objective to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. Progress of the first year's effort includes completion of a sufficient portion of each task -- probabilistic models, code development, validation, and an initial operational code. This code has from its inception an expert system philosophy that could be added to throughout the program and in the future. The initial operational code is only applicable to turbine blade type loadings. The probabilistic model included in the operational code has fitting routines for loads that utilize a modified Discrete Probabilistic Distribution termed RASCAL, a barrier crossing method and a Monte Carlo method. An initial load model was developed by Battelle that is currently used for the slowly varying duty cycle type loading. The intent is to use the model and related codes essentially in the current form for all loads that are based on measured or calculated data that have followed a slowly varying profile.

  15. Integrating statistical and process-based models to produce probabilistic landslide hazard at regional scale

    NASA Astrophysics Data System (ADS)

    Strauch, R. L.; Istanbulluoglu, E.

    2017-12-01

    We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations, we relate the susceptibility index to an empirically-derived probability of landslide impact. This probability is combined with results from a physically-based model to produce an integrated probabilistic map. Slope was key in landslide initiation while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA, and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
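
    The frequency-ratio bookkeeping behind the susceptibility index can be sketched as follows for a single site attribute; the toy grids are invented, and in the full model the FR maps of all seven attributes would be summed cell by cell.

```python
import numpy as np

def frequency_ratio(attr_class, landslide):
    """FR per class: share of landslide cells in the class divided by the
    share of all cells in the class."""
    fr = {}
    for c in np.unique(attr_class):
        in_class = attr_class == c
        p_slide = landslide[in_class].sum() / max(landslide.sum(), 1)
        p_area = in_class.sum() / attr_class.size
        fr[c] = p_slide / p_area
    return fr

# Toy grids: one site attribute (e.g. a slope class) and observed landslides.
rng = np.random.default_rng(0)
slope_class = rng.integers(0, 3, size=(50, 50))            # 3 slope classes
landslide = (slope_class == 2) & (rng.random((50, 50)) < 0.3)

fr_slope = frequency_ratio(slope_class, landslide)
susceptibility = np.vectorize(fr_slope.get)(slope_class)   # add the FR grids of other SAs here
print({k: round(v, 2) for k, v in fr_slope.items()})
```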

  16. A Probabilistic Model of Social Working Memory for Information Retrieval in Social Interactions.

    PubMed

    Li, Liyuan; Xu, Qianli; Gan, Tian; Tan, Cheston; Lim, Joo-Hwee

    2018-05-01

    Social working memory (SWM) plays an important role in navigating social interactions. Inspired by studies in psychology, neuroscience, cognitive science, and machine learning, we propose a probabilistic model of SWM to mimic human social intelligence for personal information retrieval (IR) in social interactions. First, we establish a semantic hierarchy as social long-term memory to encode personal information. Next, we propose a semantic Bayesian network as the SWM, which integrates the cognitive functions of accessibility and self-regulation. One subgraphical model implements the accessibility function to learn the social consensus about IR based on social information concepts, clustering, social context, and similarity between persons. Beyond accessibility, one more layer is added to simulate the function of self-regulation to perform the personal adaptation to the consensus based on human personality. Two learning algorithms are proposed to train the probabilistic SWM model on a raw dataset of high uncertainty and incompleteness. One is an efficient learning algorithm based on Newton's method, and the other is a genetic algorithm. Systematic evaluations show that the proposed SWM model is able to learn human social intelligence effectively and outperforms the baseline Bayesian cognitive model. Toward real-world applications, we implement our model on Google Glass as a wearable assistant for social interaction.

  17. Probabilistic evaluation of SSME structural components

    NASA Astrophysics Data System (ADS)

    Rajagopal, K. R.; Newell, J. F.; Ho, H.

    1991-05-01

    The application is described of Composite Load Spectra (CLS) and Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) family of computer codes to the probabilistic structural analysis of four Space Shuttle Main Engine (SSME) space propulsion system components. These components are subjected to environments that are influenced by many random variables. The applications consider a wide breadth of uncertainties encountered in practice, while simultaneously covering a wide area of structural mechanics. This has been done consistent with the primary design requirement for each component. The probabilistic application studies are discussed using finite element models that have been typically used in the past in deterministic analysis studies.

  18. Generalized probabilistic scale space for image restoration.

    PubMed

    Wong, Alexander; Mishra, Akshaya K

    2010-10-01

    A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.

  19. A mediation model to explain decision making under conditions of risk among adolescents: the role of fluid intelligence and probabilistic reasoning.

    PubMed

    Donati, Maria Anna; Panno, Angelo; Chiesi, Francesca; Primi, Caterina

    2014-01-01

    This study tested the mediating role of probabilistic reasoning ability in the relationship between fluid intelligence and advantageous decision making among adolescents in explicit situations of risk--that is, in contexts in which information on the choice options (gains, losses, and probabilities) was explicitly presented at the beginning of the task. Participants were 282 adolescents attending high school (77% males, mean age = 17.3 years). We first measured fluid intelligence and probabilistic reasoning ability. Then, to measure decision making under explicit conditions of risk, participants performed the Game of Dice Task, in which they have to decide among different alternatives that are explicitly linked to a specific amount of gain or loss and have obvious winning probabilities that are stable over time. Analyses showed a significant positive indirect effect of fluid intelligence on advantageous decision making through probabilistic reasoning ability, which acted as a mediator. Specifically, fluid intelligence may enhance the ability to reason in probabilistic terms, which in turn increases the likelihood of advantageous choices when adolescents are confronted with an explicit decisional context. Findings show that in experimental paradigm settings, adolescents are able to make advantageous decisions using cognitive abilities when faced with decisions under explicit risky conditions. This study suggests that interventions designed to promote probabilistic reasoning, for example by strengthening the mathematical prerequisites necessary to reason in probabilistic terms, may have a positive effect on adolescents' decision-making abilities.
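
    The product-of-coefficients logic behind such a mediation analysis can be sketched with synthetic data as below; the effect sizes are invented and the percentile bootstrap is a generic stand-in, not the exact procedure used in the study.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: slope of M on X times slope of Y on M (controlling for X)."""
    a = np.polyfit(x, m, 1)[0]                          # X -> M
    design = np.column_stack([np.ones_like(x), x, m])   # Y ~ 1 + X + M
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]    # M -> Y given X
    return a * b

rng = np.random.default_rng(0)
n = 282                                            # sample size reported in the record
gf = rng.normal(size=n)                            # fluid intelligence (standardised)
pr = 0.5 * gf + rng.normal(scale=0.8, size=n)      # probabilistic reasoning
dm = 0.4 * pr + 0.1 * gf + rng.normal(size=n)      # advantageous decision making

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                    # resample cases with replacement
    boot.append(indirect_effect(gf[idx], pr[idx], dm[idx]))

print("indirect effect:", round(indirect_effect(gf, pr, dm), 3),
      "95% CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))
```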

  20. Diffusion tensor tractography of the arcuate fasciculus in patients with brain tumors: Comparison between deterministic and probabilistic models

    PubMed Central

    Li, Zhixi; Peck, Kyung K.; Brennan, Nicole P.; Jenabi, Mehrnaz; Hsu, Meier; Zhang, Zhigang; Holodny, Andrei I.; Young, Robert J.

    2014-01-01

    Purpose The purpose of this study was to compare the deterministic and probabilistic tracking methods of diffusion tensor white matter fiber tractography in patients with brain tumors. Materials and Methods We identified 29 patients with left brain tumors <2 cm from the arcuate fasciculus who underwent pre-operative language fMRI and DTI. The arcuate fasciculus was reconstructed using a deterministic Fiber Assignment by Continuous Tracking (FACT) algorithm and a probabilistic method based on an extended Monte Carlo Random Walk algorithm. Tracking was controlled using two ROIs corresponding to Broca’s and Wernicke’s areas. Tracts in tumor-affected hemispheres were examined for extension between Broca’s and Wernicke’s areas, anterior-posterior length and volume, and compared with the normal contralateral tracts. Results Probabilistic tracts displayed more complete anterior extension to Broca’s area than did FACT tracts on the tumor-affected and normal sides (p < 0.0001). The median length ratio for tumor:normal sides was greater for probabilistic tracts than FACT tracts (p < 0.0001). The median tract volume ratio for tumor:normal sides was also greater for probabilistic tracts than FACT tracts (p = 0.01). Conclusion Probabilistic tractography reconstructs the arcuate fasciculus more completely and performs better through areas of tumor and/or edema. The FACT algorithm tends to underestimate the anterior-most fibers of the arcuate fasciculus, which are crossed by primary motor fibers. PMID:25328583

  1. A Joint Gaussian Process Model for Active Visual Recognition with Expertise Estimation in Crowdsourcing

    PubMed Central

    Long, Chengjiang; Hua, Gang; Kapoor, Ashish

    2015-01-01

    We present a noise resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with two levels of flip models. Expectation propagation is adopted for efficient approximate Bayesian inference of our probabilistic model for classification, based on which, a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each individual labeler. The probabilistic nature of our model immediately allows the adoption of the prediction entropy for active selection of data samples to be labeled, and active selection of high quality labelers based on their estimated expertise to label the data. We apply the proposed model for four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from the Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the proposed model with the Predictive Active Set Selection Method to speed up the active learning system, whose efficacy is verified by conducting experiments on the first three datasets. The results show our extended model can not only preserve a higher accuracy, but also achieve a higher efficiency. PMID:26924892

  2. Probabilistic Prognosis of Non-Planar Fatigue Crack Growth

    NASA Technical Reports Server (NTRS)

    Leser, Patrick E.; Newman, John A.; Warner, James E.; Leser, William P.; Hochhalter, Jacob D.; Yuan, Fuh-Gwo

    2016-01-01

    Quantifying the uncertainty in model parameters for the purpose of damage prognosis can be accomplished utilizing Bayesian inference and damage diagnosis data from sources such as non-destructive evaluation or structural health monitoring. The number of samples required to solve the Bayesian inverse problem through common sampling techniques (e.g., Markov chain Monte Carlo) renders high-fidelity finite element-based damage growth models unusable due to prohibitive computation times. However, these types of models are often the only option when attempting to model complex damage growth in real-world structures. Here, a recently developed high-fidelity crack growth model is used which, when compared to finite element-based modeling, has demonstrated reductions in computation times of three orders of magnitude through the use of surrogate models and machine learning. The model is flexible in that only the expensive computation of the crack driving forces is replaced by the surrogate models, leaving the remaining parameters accessible for uncertainty quantification. A probabilistic prognosis framework incorporating this model is developed and demonstrated for non-planar crack growth in a modified, edge-notched, aluminum tensile specimen. Predictions of remaining useful life are made over time for five updates of the damage diagnosis data, and prognostic metrics are utilized to evaluate the performance of the prognostic framework. Challenges specific to the probabilistic prognosis of non-planar fatigue crack growth are highlighted and discussed in the context of the experimental results.

  3. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Paul T; Yin, Shengjun; Klasky, Hilda B

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and performed to evaluate several experiment designs taking into account the capabilities of the test facility while satisfying the test objectives. Then these advanced fracture mechanics models will be utilized to simulate the crack growth in the large scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those results generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element solutions and to also assess the level of confidence that can be placed in the best-estimate finite-element solutions.

  4. Initial Probabilistic Evaluation of Reactor Pressure Vessel Fracture with Grizzly and Raven

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Hoffman, William; Sen, Sonat

    2015-10-01

    The Grizzly code is being developed with the goal of creating a general tool that can be applied to study a variety of degradation mechanisms in nuclear power plant components. The first application of Grizzly has been to study fracture in embrittled reactor pressure vessels (RPVs). Grizzly can be used to model the thermal/mechanical response of an RPV under transient conditions that would be observed in a pressurized thermal shock (PTS) scenario. The global response of the vessel provides boundary conditions for local models of the material in the vicinity of a flaw. Fracture domain integrals are computed to obtain stress intensity factors, which can in turn be used to assess whether a fracture would initiate at a pre-existing flaw. These capabilities have been demonstrated previously. A typical RPV is likely to contain a large population of pre-existing flaws introduced during the manufacturing process. This flaw population is characterized statistically through probability density functions of the flaw distributions. The use of probabilistic techniques is necessary to assess the likelihood of crack initiation during a transient event. This report documents initial work to perform probabilistic analysis of RPV fracture during a PTS event using a combination of the RAVEN risk analysis code and Grizzly. This work is limited in scope, considering only a single flaw with deterministic geometry, but with uncertainty introduced in the parameters that influence fracture toughness. These results are benchmarked against equivalent models run in the FAVOR code. When fully developed, the RAVEN/Grizzly methodology for modeling probabilistic fracture in RPVs will provide a general capability that can be used to consider a wider variety of vessel and flaw conditions that are difficult to consider with current tools. In addition, this will provide access to advanced probabilistic techniques provided by RAVEN, including adaptive sampling and parallelism, which can dramatically decrease run times.

  5. Analysis of the French insurance market exposure to floods: a stochastic model combining river overflow and surface runoff

    NASA Astrophysics Data System (ADS)

    Moncoulon, D.; Labat, D.; Ardon, J.; Leblois, E.; Onfroy, T.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.

    2014-09-01

    The analysis of flood exposure at a national scale for the French insurance market must combine the generation of a probabilistic event set of all possible (but not yet observed) flood situations with hazard and damage modeling. In this study, hazard and damage models are calibrated on a 1995-2010 historical event set, both for hazard results (river flow, flooded areas) and loss estimations. Thus, uncertainties in the deterministic estimation of a single event loss are known before simulating a probabilistic event set. To take into account at least 90 % of the insured flood losses, the probabilistic event set must combine the river overflow (small and large catchments) with the surface runoff, due to heavy rainfall, on the slopes of the watershed. Indeed, internal studies of the CCR (Caisse Centrale de Reassurance) claim database have shown that approximately 45 % of the insured flood losses are located inside the floodplains and 45 % outside. Another 10 % is due to sea surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: a generation of fictive river flows based on the historical records of the river gauge network and a generation of fictive rain fields on small catchments, calibrated on the 1958-2010 Météo-France rain database SAFRAN. All the events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate the flood losses at the national scale for an insurance company (Macif) and to generate flood areas associated with hazard return periods. The flood maps concern river overflow and surface water runoff. Validation of these maps is conducted by comparison with the address-located claim data on a small catchment (downstream Argens).

  6. A robust multi-kernel change detection framework for detecting leaf beetle defoliation using Landsat 7 ETM+ data

    NASA Astrophysics Data System (ADS)

    Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim

    2016-12-01

    A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One of the widely used frameworks is to find change vectors (difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g. the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers which assume parametric models, e.g. a Gaussian function, for the class-conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e. its non-parametric and probabilistic nature, to model class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with Support Vector Machine (SVM) and NB for detection of defoliation, caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations of two test areas, in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. It was observed that due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions. Its performance was comparable to SVM, with added advantages of being probabilistic and capable of handling multi-class problems naturally with its original formulation.
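
    A stripped-down version of the least-squares posterior fitting that underlies an LSPC-style classifier is sketched below; the kernel bandwidth and regularisation are fixed rather than tuned by the 10-fold cross-validation the record proposes, and the toy data merely stand in for difference-image pixels.

```python
import numpy as np

def rbf(A, B, sigma):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lspc_fit(X, y, sigma=1.0, lam=0.1):
    """Least-squares fit of class-posterior coefficients, one column per class."""
    Phi = rbf(X, X, sigma)                                   # kernels centred on X
    T = (y[:, None] == np.unique(y)[None, :]).astype(float)  # one-hot class targets
    A = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(X)), Phi.T @ T)
    return A, X, np.unique(y)

def lspc_posterior(A, centers, classes, Xnew, sigma=1.0):
    """Clipped and renormalised class posteriors for new points."""
    P = np.maximum(rbf(Xnew, centers, sigma) @ A, 1e-12)
    return classes, P / P.sum(axis=1, keepdims=True)

# Toy two-class 'change vs. no-change' difference-image pixels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(2, 1, (100, 3))])
y = np.r_[np.zeros(100, int), np.ones(100, int)]
A, C, cls = lspc_fit(X, y)
print(lspc_posterior(A, C, cls, X[:3])[1].round(2))
```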

  7. The composite load spectra project

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H.; Kurth, R. E.

    1990-01-01

    Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.

  8. Assessment of food intake input distributions for use in probabilistic exposure assessments of food additives.

    PubMed

    Gilsenan, M B; Lambe, J; Gibney, M J

    2003-11-01

    A key component of a food chemical exposure assessment using probabilistic analysis is the selection of the most appropriate input distribution to represent exposure variables. The study explored the type of parametric distribution that could be used to model variability in food consumption data likely to be included in a probabilistic exposure assessment of food additives. The goodness-of-fit of a range of continuous distributions to observed data of 22 food categories expressed as average daily intakes among consumers from the North-South Ireland Food Consumption Survey was assessed using the BestFit distribution fitting program. The lognormal distribution was most commonly accepted as a plausible parametric distribution to represent food consumption data when food intakes were expressed as absolute intakes (16/22 foods) and as intakes per kg body weight (18/22 foods). Results from goodness-of-fit tests were accompanied by lognormal probability plots for a number of food categories. The influence on food additive intake of using a lognormal distribution to model food consumption input data was assessed by comparing modelled intake estimates with observed intakes. Results from the present study advise some level of caution about the use of a lognormal distribution as a mode of input for food consumption data in probabilistic food additive exposure assessments and the results highlight the need for further research in this area.
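
    In practice, checking whether a lognormal is a plausible input distribution for one food category might look like the sketch below (synthetic intakes); the BestFit software used in the study applies its own battery of goodness-of-fit tests, so the KS test here is only a stand-in.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intakes = rng.lognormal(mean=3.5, sigma=0.6, size=400)   # g/day, consumers only (synthetic)

shape, loc, scale = stats.lognorm.fit(intakes, floc=0)   # fit a lognormal
ks_stat, p_value = stats.kstest(intakes, "lognorm", args=(shape, loc, scale))
# Note: estimating the parameters from the same sample makes this p-value optimistic.
print(f"fitted median = {scale:.1f} g/day, KS statistic = {ks_stat:.3f}, p = {p_value:.2f}")

# The accepted distribution can then feed Monte Carlo draws in an additive exposure model.
simulated = stats.lognorm.rvs(shape, loc, scale, size=10000, random_state=rng)
```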

  9. Optimization of the kernel functions in a probabilistic neural network analyzing the local pattern distribution.

    PubMed

    Galleske, I; Castellanos, J

    2002-05-01

    This article proposes a procedure for the automatic determination of the elements of the covariance matrix of the Gaussian kernel function of probabilistic neural networks. Two matrices, a rotation matrix and a matrix of variances, can be calculated by analyzing the local environment of each training pattern; their combination forms the covariance matrix of each training pattern. This automation has two advantages: first, it frees the neural network designer from specifying the complete covariance matrix, and second, it results in a network with better generalization ability than the original model. Experiments on a variation of the well-known two-spiral problem and on real-world examples from the UCI Machine Learning Repository show a classification rate better than that of the original probabilistic neural network and demonstrate that this model can outperform other well-known classification techniques.
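
    The sketch below illustrates a probabilistic neural network in which each training pattern carries its own Gaussian covariance estimated from its local environment. The k-nearest-neighbour covariance heuristic and the toy data are assumptions standing in for the rotation/variance decomposition the article actually proposes.

        import numpy as np

        def local_covariances(X, k=5, ridge=1e-3):
            """Estimate one covariance matrix per training pattern from its k nearest neighbours."""
            n, d = X.shape
            covs = np.empty((n, d, d))
            for i in range(n):
                idx = np.argsort(((X - X[i]) ** 2).sum(1))[1:k + 1]
                diffs = X[idx] - X[i]
                covs[i] = diffs.T @ diffs / k + ridge * np.eye(d)
            return covs

        def pnn_predict(X, y, covs, Xnew):
            """PNN decision: sum per-pattern Gaussian kernels within each class, pick the largest."""
            classes = np.unique(y)
            dens = np.zeros((len(Xnew), len(classes)))
            for i, (xi, ci) in enumerate(zip(X, y)):
                inv = np.linalg.inv(covs[i])
                norm = 1.0 / np.sqrt(((2 * np.pi) ** X.shape[1]) * np.linalg.det(covs[i]))
                diff = Xnew - xi
                quad = np.einsum("nd,dk,nk->n", diff, inv, diff)
                dens[:, np.searchsorted(classes, ci)] += norm * np.exp(-0.5 * quad)
            return classes[dens.argmax(1)]

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])  # assumed toy classes
        y = np.array([0] * 50 + [1] * 50)
        covs = local_covariances(X)
        print(pnn_predict(X, y, covs, np.array([[0.0, 0.0], [3.0, 3.0]])))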

  10. Order Under Uncertainty: Robust Differential Expression Analysis Using Probabilistic Models for Pseudotime Inference

    PubMed Central

    Campbell, Kieran R.

    2016-01-01

    Single cell gene expression profiling can be used to quantify transcriptional dynamics in temporal processes, such as cell differentiation, using computational methods to label each cell with a ‘pseudotime’ where true time series experimentation is too difficult to perform. However, owing to the high variability in gene expression between individual cells, there is an inherent uncertainty in the precise temporal ordering of the cells. Pre-existing methods for pseudotime estimation have predominantly given point estimates precluding a rigorous analysis of the implications of uncertainty. We use probabilistic modelling techniques to quantify pseudotime uncertainty and propagate this into downstream differential expression analysis. We demonstrate that reliance on a point estimate of pseudotime can lead to inflated false discovery rates and that probabilistic approaches provide greater robustness and measures of the temporal resolution that can be obtained from pseudotime inference. PMID:27870852

  11. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations

    PubMed Central

    Zhang, Yi; Ren, Jinchang; Jiang, Jianmin

    2015-01-01

    The maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory for estimating the parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM is in some cases equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitates soft decision making. In total, four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized as Gaussian/non-Gaussian distributed and balanced/unbalanced, and these characteristics are then used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions. PMID:26089862

  12. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations.

    PubMed

    Zhang, Yi; Ren, Jinchang; Jiang, Jianmin

    2015-01-01

    The maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory for estimating the parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM is in some cases equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitates soft decision making. In total, four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized as Gaussian/non-Gaussian distributed and balanced/unbalanced, and these characteristics are then used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.
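
    One simple way to realize the SVM-MLC combination described in these two records is to fit Gaussian class-conditional models (an MLC in one dimension) to the SVM decision values and convert them to posteriors with Bayes' rule. The synthetic data and this particular calibration choice are assumptions, not the papers' exact procedure.

        import numpy as np
        from scipy.stats import norm
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # Assumed synthetic data; the papers above use sonar, vehicle, breast cancer and DNA sets
        X, y = make_classification(n_samples=400, n_features=10, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

        svm = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
        scores_tr = svm.decision_function(Xtr)

        # MLC step: a Gaussian class-conditional model of the SVM decision values per class
        params = {c: (scores_tr[ytr == c].mean(), scores_tr[ytr == c].std() + 1e-9, np.mean(ytr == c))
                  for c in (0, 1)}

        def posterior(scores):
            """Bayes' rule over the one-dimensional Gaussian likelihoods of the SVM scores."""
            like = np.column_stack([params[c][2] * norm.pdf(scores, params[c][0], params[c][1])
                                    for c in (0, 1)])
            return like / like.sum(axis=1, keepdims=True)

        proba = posterior(svm.decision_function(Xte))   # soft, probabilistic SVM output
        print("accuracy:", np.mean((proba[:, 1] > 0.5) == yte))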

  13. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  14. Probabilistic graphlet transfer for photo cropping.

    PubMed

    Zhang, Luming; Song, Mingli; Zhao, Qi; Liu, Xiao; Bu, Jiajun; Chen, Chun

    2013-02-01

    As one of the most basic photo manipulation processes, photo cropping is widely used in the printing, graphic design, and photography industries. In this paper, we introduce graphlets (i.e., small connected subgraphs) to represent a photo's aesthetic features, and propose a probabilistic model to transfer aesthetic features from the training photo onto the cropped photo. In particular, by segmenting each photo into a set of regions, we construct a region adjacency graph (RAG) to represent the global aesthetic feature of each photo. Graphlets are then extracted from the RAGs, and these graphlets capture the local aesthetic features of the photos. Finally, we cast photo cropping as a candidate-searching procedure on the basis of a probabilistic model, and infer the parameters of the cropped photos using Gibbs sampling. The proposed method is fully automatic. Subjective evaluations have shown that it is preferred over a number of existing approaches.

  15. Bayesian probabilistic population projections for all countries

    PubMed Central

    Raftery, Adrian E.; Li, Nan; Ševčíková, Hana; Gerland, Patrick; Heilig, Gerhard K.

    2012-01-01

    Projections of countries’ future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. We propose a Bayesian method for probabilistic population projections for all countries. The total fertility rate and female and male life expectancies at birth are projected probabilistically using Bayesian hierarchical models estimated via Markov chain Monte Carlo using United Nations population data for all countries. These are then converted to age-specific rates and combined with a cohort component projection model. This yields probabilistic projections of any population quantity of interest. The method is illustrated for five countries of different demographic stages, continents and sizes. The method is validated by an out of sample experiment in which data from 1950–1990 are used for estimation, and applied to predict 1990–2010. The method appears reasonably accurate and well calibrated for this period. The results suggest that the current United Nations high and low variants greatly underestimate uncertainty about the number of oldest old from about 2050 and that they underestimate uncertainty for high fertility countries and overstate uncertainty for countries that have completed the demographic transition and whose fertility has started to recover towards replacement level, mostly in Europe. The results also indicate that the potential support ratio (persons aged 20–64 per person aged 65+) will almost certainly decline dramatically in most countries over the coming decades. PMID:22908249
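
    A toy sketch of the probabilistic cohort-component idea: vital rates are sampled from uncertainty distributions (stand-ins for the Bayesian hierarchical posteriors of fertility and mortality), the population is projected forward, and the spread of a derived quantity such as the potential support ratio is summarized. The three-age-group structure and every rate below are assumed purely for illustration.

        import numpy as np

        rng = np.random.default_rng(7)

        # Assumed toy population in three broad age groups (thousands): 0-19, 20-64, 65+
        pop0 = np.array([900.0, 1500.0, 400.0])
        n_draws, horizon = 2000, 4          # four projection steps of 20 years each

        support_ratio = np.empty((n_draws, horizon))
        for d in range(n_draws):
            p = pop0.copy()
            for t in range(horizon):
                # Sampled vital rates (assumed distributions, not model posteriors)
                births_per_adult = max(rng.normal(0.9, 0.15), 0.0)   # surviving births per person 20-64
                surv_young = rng.normal(0.98, 0.01)
                surv_adult = rng.normal(0.85, 0.03)
                p = np.array([births_per_adult * p[1],    # new 0-19 cohort
                              surv_young * p[0],           # 0-19 ages into 20-64
                              surv_adult * p[1]])          # 20-64 ages into 65+
                support_ratio[d, t] = p[1] / p[2]          # persons 20-64 per person 65+

        lo, med, hi = np.percentile(support_ratio[:, -1], [5, 50, 95])
        print(f"support ratio after {horizon * 20} years: median {med:.2f}, 90% PI [{lo:.2f}, {hi:.2f}]")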

  16. bayesPop: Probabilistic Population Projections

    PubMed Central

    Ševčíková, Hana; Raftery, Adrian E.

    2016-01-01

    We describe bayesPop, an R package for producing probabilistic population projections for all countries. This uses probabilistic projections of total fertility and life expectancy generated by Bayesian hierarchical models. It produces a sample from the joint posterior predictive distribution of future age- and sex-specific population counts, fertility rates and mortality rates, as well as future numbers of births and deaths. It provides graphical ways of summarizing this information, including trajectory plots and various kinds of probabilistic population pyramids. An expression language is introduced which allows the user to produce the predictive distribution of a wide variety of derived population quantities, such as the median age or the old age dependency ratio. The package produces aggregated projections for sets of countries, such as UN regions or trading blocs. The methodology has been used by the United Nations to produce their most recent official population projections for all countries, published in the World Population Prospects. PMID:28077933

  17. Advanced Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Technical Exchange Meeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis

    2013-09-01

    During FY13, the INL developed an advanced SMR PRA framework which has been described in the report Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Technical Framework Specification, INL/EXT-13-28974 (April 2013). In this framework, the following areas are considered: probabilistic models to provide information specific to advanced SMRs; representation of specific SMR design issues such as co-located modules and passive safety features; use of modern open-source and readily available analysis methods; internal and external events resulting in impacts to safety; all-hazards considerations; methods to support the identification of design vulnerabilities; and mechanistic and probabilistic data needs to support modeling and tools. In order to describe this framework more fully and obtain feedback on the proposed approaches, the INL hosted a technical exchange meeting during August 2013. This report describes the outcomes of that meeting.

  18. Global assessment of predictability of water availability: A bivariate probabilistic Budyko analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weiguang; Fu, Jianyu

    2018-02-01

    Estimating continental water availability is of great importance for water resources management, in terms of maintaining ecosystem integrity and sustaining societal development. To more accurately quantify the predictability of water availability, a bivariate probabilistic Budyko approach was developed on the basis of the univariate probabilistic Budyko framework, using a copula-based joint distribution model to account for the dependence between the parameter ω of Wang-Tang's equation and the Normalized Difference Vegetation Index (NDVI), and was applied globally. The results indicate that the predictive performance for global water availability is conditional on the climatic conditions. In comparison with the simple univariate distribution, the bivariate one produces a lower interquartile range on the same global dataset, especially in regions with higher NDVI values, highlighting the importance of developing the joint distribution by taking into account the dependence structure between the parameter ω and NDVI, which provides a more accurate probabilistic evaluation of water availability.
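
    A minimal sketch of the copula step described above: a Gaussian copula ties assumed marginals for the Budyko parameter ω and NDVI together, and conditioning on NDVI narrows the spread of ω relative to the univariate case. The marginals and the correlation value are assumptions, not the study's fitted values.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Assumed marginals for the Budyko parameter (omega) and NDVI, for illustration only
        omega_marg = stats.lognorm(s=0.4, scale=2.0)
        ndvi_marg = stats.beta(a=4, b=3)
        rho = 0.7   # assumed dependence between omega and NDVI on the Gaussian scale

        # Gaussian copula sampling: correlated normals -> uniforms -> marginal quantiles
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=100_000)
        u = stats.norm.cdf(z)
        omega, ndvi = omega_marg.ppf(u[:, 0]), ndvi_marg.ppf(u[:, 1])

        # Univariate vs bivariate (NDVI-conditioned) spread of omega
        iqr_marginal = np.subtract(*np.percentile(omega, [75, 25]))
        near = np.abs(ndvi - 0.7) < 0.02                 # condition on NDVI close to 0.7
        iqr_conditional = np.subtract(*np.percentile(omega[near], [75, 25]))
        print(f"IQR of omega: marginal {iqr_marginal:.2f}, given NDVI ~ 0.7 {iqr_conditional:.2f}")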

  19. Probabilistic Mesomechanical Fatigue Model

    NASA Technical Reports Server (NTRS)

    Tryon, Robert G.

    1997-01-01

    A probabilistic mesomechanical fatigue life model is proposed to link the microstructural material heterogeneities to the statistical scatter in the macrostructural response. The macrostructure is modeled as an ensemble of microelements. Cracks nucleate within the microelements and grow from the microelements to final fracture. Variations of the microelement properties are defined using statistical parameters. A micromechanical slip band decohesion model is used to determine the crack nucleation life and size. A crack tip opening displacement model is used to determine the small crack growth life and size. The Paris law is used to determine the long crack growth life. The models are combined in a Monte Carlo simulation to determine the statistical distribution of total fatigue life for the macrostructure. The modeled response is compared to trends in experimental observations from the literature.
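
    A compact Monte Carlo sketch in the spirit of the model above: the nucleation and small-crack lives are drawn from assumed lognormal stand-ins (the cited slip-band decohesion and crack-tip-opening-displacement models are more detailed), and the long-crack life comes from the closed-form integral of the Paris law. All material constants and loads are assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(4)
        n_sim = 20_000

        # Assumed stand-ins for the micromechanical stages (cycles, metres)
        N_nucleation = rng.lognormal(mean=np.log(2e4), sigma=0.5, size=n_sim)
        N_small = rng.lognormal(mean=np.log(1e4), sigma=0.6, size=n_sim)
        a0 = rng.lognormal(mean=np.log(50e-6), sigma=0.3, size=n_sim)   # initial long-crack size, m

        # Long-crack growth by the Paris law, da/dN = C * dK^m with dK = dS * Y * sqrt(pi * a)
        C, m, Y = 1e-11, 3.0, 1.12   # assumed constants: dK in MPa*sqrt(m), da/dN in m/cycle
        dS = 200.0                    # assumed stress range, MPa
        a_crit = 5e-3                 # assumed critical crack length, m
        # Closed-form integration of the Paris law between a0 and a_crit (valid for m != 2)
        N_long = (a_crit ** (1 - m / 2) - a0 ** (1 - m / 2)) / (
            C * (dS * Y * np.sqrt(np.pi)) ** m * (1 - m / 2))

        N_total = N_nucleation + N_small + N_long
        print("median life (cycles):", int(np.median(N_total)))
        print("1st-percentile life (cycles):", int(np.percentile(N_total, 1)))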

  20. Probabilistic/Fracture-Mechanics Model For Service Life

    NASA Technical Reports Server (NTRS)

    Watkins, T., Jr.; Annis, C. G., Jr.

    1991-01-01

    Computer program makes probabilistic estimates of lifetime of engine and components thereof. Developed to fill need for more accurate life-assessment technique that avoids errors in estimated lives and provides for statistical assessment of levels of risk created by engineering decisions in designing system. Implements mathematical model combining techniques of statistics, fatigue, fracture mechanics, nondestructive analysis, life-cycle cost analysis, and management of engine parts. Used to investigate effects of such engine-component life-controlling parameters as return-to-service intervals, stresses, capabilities for nondestructive evaluation, and qualities of materials.

  1. Probabilistic models of genetic variation in structured populations applied to global human studies.

    PubMed

    Hao, Wei; Song, Minsun; Storey, John D

    2016-03-01

    Modern population genetics studies typically involve genome-wide genotyping of individuals from a diverse network of ancestries. An important problem is how to formulate and estimate probabilistic models of observed genotypes that account for complex population structure. The most prominent work on this problem has focused on estimating a model of admixture proportions of ancestral populations for each individual. Here, we instead focus on modeling variation of the genotypes without requiring a higher-level admixture interpretation. We formulate two general probabilistic models, and we propose computationally efficient algorithms to estimate them. First, we show how principal component analysis can be utilized to estimate a general model that includes the well-known Pritchard-Stephens-Donnelly admixture model as a special case. Noting some drawbacks of this approach, we introduce a new 'logistic factor analysis' framework that seeks to directly model the logit transformation of probabilities underlying observed genotypes in terms of latent variables that capture population structure. We demonstrate these advances on data from the Human Genome Diversity Panel and 1000 Genomes Project, where we are able to identify SNPs that are highly differentiated with respect to structure while making minimal modeling assumptions. A Bioconductor R package called lfa is available at http://www.bioconductor.org/packages/release/bioc/html/lfa.html. Contact: jstorey@princeton.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  2. Abstract probabilistic CNOT gate model based on double encoding: study of the errors and physical realizability

    NASA Astrophysics Data System (ADS)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2015-03-01

    In this work, we study the error sources behind the imperfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double-encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits for a realistic range of the errors. Finally, we discuss the physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.

  3. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using Poisson mixture regression models.

  4. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611
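
    A minimal sketch of a two-component Poisson mixture regression fitted by EM, alternating responsibilities with weighted Poisson regressions. The single covariate, the synthetic "risk group" data, and the initialization are assumptions; concomitant-variable or zero-inflated variants would modify the E- and M-steps accordingly.

        import numpy as np
        from scipy.stats import poisson

        def weighted_poisson_reg(X, y, w, n_iter=25):
            """Weighted Poisson regression (log link) by Newton's method."""
            beta = np.zeros(X.shape[1])
            for _ in range(n_iter):
                mu = np.exp(X @ beta)
                grad = X.T @ (w * (y - mu))
                hess = X.T @ (X * (w * mu)[:, None]) + 1e-8 * np.eye(X.shape[1])
                beta += np.linalg.solve(hess, grad)
            return beta

        def poisson_mixture_em(X, y, K=2, n_iter=50, seed=0):
            """EM for a K-component Poisson mixture regression (illustrative sketch)."""
            rng = np.random.default_rng(seed)
            resp = rng.dirichlet(np.ones(K), size=len(y))        # random initial responsibilities
            for _ in range(n_iter):
                pis = resp.mean(axis=0)                           # M-step: mixing weights
                betas = [weighted_poisson_reg(X, y, resp[:, k]) for k in range(K)]
                logp = np.column_stack([np.log(pis[k]) + poisson.logpmf(y, np.exp(X @ betas[k]))
                                        for k in range(K)])       # E-step: component log-posteriors
                logp -= logp.max(axis=1, keepdims=True)
                resp = np.exp(logp)
                resp /= resp.sum(axis=1, keepdims=True)
            return pis, betas

        rng = np.random.default_rng(1)
        n = 600
        x = rng.uniform(0, 1, n)
        X = np.column_stack([np.ones(n), x])
        high_risk = rng.random(n) < 0.4                           # latent risk group (assumed data)
        y = rng.poisson(np.where(high_risk, np.exp(0.5 + 2.0 * x), np.exp(0.0 + 0.5 * x)))
        pis, betas = poisson_mixture_em(X, y)
        print("mixing weights:", np.round(pis, 2))
        print("component coefficients:", [np.round(b, 2) for b in betas])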

  5. Probabilistic estimation of residential air exchange rates for ...

    EPA Pesticide Factsheets

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory Infiltration model, utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AERs based on region-specific inputs were compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure. Published in the Journal of

  6. A novel visualisation tool for climate services: a case study of temperature extremes and human mortality in Europe

    NASA Astrophysics Data System (ADS)

    Lowe, R.; Ballester, J.; Robine, J.; Herrmann, F. R.; Jupp, T. E.; Stephenson, D.; Rodó, X.

    2013-12-01

    Users of climate information often require probabilistic information on which to base their decisions. However, communicating information contained within a probabilistic forecast presents a challenge. In this paper we demonstrate a novel visualisation technique to display ternary probabilistic forecasts on a map in order to inform decision making. In this method, each ternary probabilistic forecast, which assigns probabilities to a set of three outcomes (e.g. low, medium, and high risk), is considered as a point in a triangle of barycentric coordinates. This allows a unique colour to be assigned to each forecast from a continuum of colours defined on the triangle. Colour saturation increases with information gain relative to the reference forecast (i.e. the long-term average). This provides additional information to decision makers compared with conventional methods used in seasonal climate forecasting, where one colour is used to represent one forecast category on a forecast map (e.g. red = 'dry'). We use the tool to present climate-related mortality projections across Europe. Temperature and humidity are related to human mortality via location-specific transfer functions, calculated using historical data. Daily mortality data at the NUTS2 level for 16 countries in Europe were obtained for 1998-2005. Transfer functions were calculated for 54 aggregations in Europe, defined using criteria related to population and climatological similarities. Aggregations are restricted to fall within political boundaries to avoid problems related to varying adaptation policies between countries. A statistical model is fit to cold and warm tails to estimate future mortality using forecast temperatures, in a Bayesian probabilistic framework. Using predefined categories of temperature-related mortality risk, we present maps of probabilistic projections for human mortality at seasonal to decadal time scales. We demonstrate the information gained from using this technique compared to more traditional methods to display ternary probabilistic forecasts. This technique allows decision makers to identify areas where the model predicts area-specific heat waves or cold snaps with certainty, in order to effectively target resources to those areas most at risk, for a given season or year. It is hoped that this visualisation tool will facilitate the interpretation of the probabilistic forecasts not only for public health decision makers but also within a multi-sectoral climate service framework.
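
    The colour-mapping idea can be sketched directly: a ternary forecast is mixed barycentrically between three corner colours and its saturation is scaled by the information gain (relative entropy) with respect to the climatological reference. The corner palette and the exact saturation rule below are assumptions, not the paper's specification.

        import numpy as np

        # Corner colours of the triangle, one per category (assumed palette)
        CORNERS = np.array([[0.0, 0.0, 1.0],   # low risk    -> blue
                            [1.0, 1.0, 0.0],   # medium risk -> yellow
                            [1.0, 0.0, 0.0]])  # high risk   -> red
        REFERENCE = np.array([1 / 3, 1 / 3, 1 / 3])   # climatological reference forecast

        def ternary_to_rgb(p):
            """Map a ternary forecast to a colour whose saturation grows with information gain."""
            p = np.asarray(p, dtype=float)
            p = p / p.sum()
            base = p @ CORNERS                            # barycentric mix of the corner colours
            # Information gain (KL divergence) relative to the reference, scaled into [0, 1]
            gain = np.sum(np.where(p > 0, p * np.log(p / REFERENCE), 0.0)) / np.log(3)
            white = np.ones(3)
            return white + gain * (base - white)          # fade from white (no gain) to full colour

        for forecast in ([0.34, 0.33, 0.33], [0.2, 0.3, 0.5], [0.05, 0.10, 0.85]):
            print(forecast, "->", np.round(ternary_to_rgb(forecast), 2))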

  7. Behavioral and Temporal Pattern Detection Within Financial Data With Hidden Information

    DTIC Science & Technology

    2012-02-01

    probabilistic pattern detector to monitor the pattern. Subject terms: runtime verification, hidden data, hidden Markov models, formal specifications. ...sequences in many other fields besides financial systems [L, TV, LC, LZ]. Rather, the technique suggested in this paper is positioned as a hybrid... operation of the pattern detector. Section 7 describes the operation of the probabilistic pattern-matching monitor, and section 8 describes three

  8. Conjoint-measurement framework for the study of probabilistic information processing.

    NASA Technical Reports Server (NTRS)

    Wallsten, T. S.

    1972-01-01

    The theory of conjoint measurement described by Krantz et al. (1971) is shown to indicate how a descriptive model of human processing of probabilistic information built around Bayes' rule is to be tested and how it is to be used to obtain subjective scale values. Specific relationships concerning these scale values are shown to emerge, and the theoretical prospects resulting from this development are discussed.

  9. Maritime Threat Detection Using Probabilistic Graphical Models

    DTIC Science & Technology

    2012-01-01

    A CRF, unlike an HMM, can represent local features, and does not require feature concatenation. For MLNs, we used Alchemy (Alchemy 2011), an... open-source statistical relational learning and probabilistic inferencing package. Alchemy supports generative and discriminative weight learning, and... that Alchemy creates a new formula for every possible combination of the values for a1 and a2 that fit the type specified in their predicate

  10. Terminal Model Of Newtonian Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1994-01-01

    Paper presents study of theory of Newtonian dynamics of terminal attractors and repellers, focusing on issues of reversibility vs. irreversibility and deterministic evolution vs. probabilistic or chaotic evolution of dynamic systems. Theory developed called "terminal dynamics" emphasizes difference between it and classical Newtonian dynamics. Also holds promise for explaining irreversibility, unpredictability, probabilistic behavior, and chaos in turbulent flows, in thermodynamic phenomena, and in other dynamic phenomena and systems.

  11. Review of methods for developing regional probabilistic risk assessments, part 2: modeling invasive plant, insect, and pathogen species

    Treesearch

    P. B. Woodbury; D. A. Weinstein

    2010-01-01

    We reviewed probabilistic regional risk assessment methodologies to identify the methods that are currently in use and are capable of estimating threats to ecosystems from fire and fuels, invasive species, and their interactions with stressors. In a companion chapter, we highlight methods useful for evaluating risks from fire. In this chapter, we highlight methods...

  12. A probabilistic approach to aircraft design emphasizing stability and control uncertainties

    NASA Astrophysics Data System (ADS)

    Delaurentis, Daniel Andrew

    In order to address identified deficiencies in current approaches to aerospace systems design, a new method has been developed. This new method for design is based on the premise that design is a decision-making activity, and that deterministic analysis and synthesis can lead to poor or misguided decision making. This is due to a lack of disciplinary knowledge of sufficient fidelity about the product, to the presence of uncertainty at multiple levels of the aircraft design hierarchy, and to a failure to focus on overall affordability metrics as measures of goodness. Design solutions are desired which are robust to uncertainty and are based on the maximum knowledge possible. The new method represents advances in the two following general areas. 1. Design models and uncertainty. The research performed completes a transition from a deterministic design representation to a probabilistic one through a modeling of design uncertainty at multiple levels of the aircraft design hierarchy, including: (1) Consistent, traceable uncertainty classification and representation; (2) Concise mathematical statement of the Probabilistic Robust Design problem; (3) Variants of the Cumulative Distribution Functions (CDFs) as decision functions for Robust Design; (4) Probabilistic Sensitivities which identify the most influential sources of variability. 2. Multidisciplinary analysis and design. Embedded in the probabilistic methodology is a new approach for multidisciplinary design analysis and optimization (MDA/O), employing disciplinary analysis approximations formed through statistical experimentation and regression. These approximation models are a function of design variables common to the system level as well as other disciplines. For aircraft, it is proposed that synthesis/sizing is the proper avenue for integrating multiple disciplines. Research hypotheses are translated into a structured method, which is subsequently tested for validity. Specifically, the implementation involves the study of the relaxed static stability technology for a supersonic commercial transport aircraft. The probabilistic robust design method is exercised, resulting in a series of robust design solutions based on different interpretations of "robustness". Insightful results are obtained, and the ability of the method to expose trends in the design space is noted as a key advantage.

  13. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios which best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could shorten the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less computation power. The authors have used this approach for risk assessment towards identifying the effectiveness and profitability of risk mitigation measures, using an optimization model for resource allocation. Based on the error-computation trade-off, 62 earthquake scenarios are chosen for this purpose.

  14. Using probabilistic modeling to evaluate human exposure to organotin in drinking water transported by polyvinyl chloride pipe.

    PubMed

    Fristachi, Anthony; Xu, Ying; Rice, Glenn; Impellitteri, Christopher A; Carlson-Lynch, Heather; Little, John C

    2009-11-01

    The leaching of organotin (OT) heat stabilizers from polyvinyl chloride (PVC) pipes used in residential drinking water systems may affect the quality of drinking water. These OTs, principally mono- and di-substituted species of butyltins and methyltins, are a potential health concern because they belong to a broad class of compounds that may be immune, nervous, and reproductive system toxicants. In this article, we develop probability distributions of U.S. population exposures to mixtures of OTs encountered in drinking water transported by PVC pipes. We employed a family of mathematical models to estimate OT leaching rates from PVC pipe as a function of both surface area and time. We then integrated the distribution of estimated leaching rates into an exposure model that estimated the probability distribution of OT concentrations in tap waters and the resulting potential human OT exposures via tap water consumption. Our study results suggest that human OT exposures through tap water consumption are likely to be considerably lower than the World Health Organization (WHO) "safe" long-term concentration in drinking water (150 μg/L) for dibutyltin (DBT), the most toxic of the OTs considered in this article. The 90th percentile average daily dose (ADD) estimate of 0.034 ± 2.92 × 10⁻⁴ μg/kg-day is approximately 120 times lower than the WHO-based ADD for DBT (4.2 μg/kg-day).
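
    A minimal Monte Carlo sketch of the exposure calculation described above: tap-water concentration, water intake, and body weight are sampled and combined into an average daily dose, whose upper percentile is compared against the WHO-based reference cited in the abstract. All input distributions are assumed for illustration and are not the study's fitted values.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 200_000

        # Assumed illustrative input distributions
        conc_ug_per_L = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=n)    # DBT in tap water, ug/L
        intake_L_per_day = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)  # tap water consumed
        body_weight_kg = rng.normal(70, 12, size=n).clip(min=30)

        # Average daily dose in micrograms per kilogram body weight per day
        add = conc_ug_per_L * intake_L_per_day / body_weight_kg
        p90 = np.percentile(add, 90)
        who_based_add = 4.2   # ug/kg-day reference for DBT cited in the abstract
        print(f"90th percentile ADD: {p90:.4f} ug/kg-day ({who_based_add / p90:.0f}x below the reference)")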

  15. Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)

    NASA Astrophysics Data System (ADS)

    OConnor, A.; Kirtman, B. P.; Harrison, S.; Gorman, J.

    2016-02-01

    Current US Navy forecasting systems cannot easily incorporate extended-range forecasts that can improve mission readiness and effectiveness; ensure safety; and reduce cost, labor, and resource requirements. If Navy operational planners had systems that incorporated these forecasts, they could plan missions using more reliable and longer-term weather and climate predictions. Further, using multi-model forecast ensembles instead of single forecasts would produce higher predictive performance. Extended-range multi-model forecast ensembles, such as those available in the North American Multi-Model Ensemble (NMME), are ideal for system integration because of their high skill predictions; however, even higher skill predictions can be produced if forecast model ensembles are combined correctly. While many methods for weighting models exist, the best method in a given environment requires expert knowledge of the models and combination methods. We present an innovative approach that uses machine learning to combine extended-range predictions from multi-model forecast ensembles and generate a probabilistic forecast for any region of the globe up to 12 months in advance. Our machine-learning approach uses 30 years of hindcast predictions to learn patterns of forecast model successes and failures. Each model is assigned a weight for each environmental condition, 100 km² region, and day given any expected environmental information. These weights are then applied to the respective predictions for the region and time of interest to effectively stitch together a single, coherent probabilistic forecast. Our experimental results demonstrate the benefits of our approach to produce extended-range probabilistic forecasts for regions and time periods of interest that are superior, in terms of skill, to individual NMME forecast models and commonly weighted models. The probabilistic forecast leverages the strengths of three NMME forecast models to predict environmental conditions for an area spanning from San Diego, CA to Honolulu, HI, seven months in advance. Key findings include: weighted combinations of models are strictly better than individual models; machine-learned combinations are especially better; and forecasts produced using our approach have the highest rank probability skill score most often.
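
    A toy sketch of skill-weighted multi-model combination: hindcast errors determine per-model weights (an inverse mean-squared-error stand-in for the machine-learned weighting described above), and new member forecasts are blended into a single probabilistic forecast. The synthetic hindcast archive and the Gaussian-mixture blending rule are assumptions.

        import numpy as np

        rng = np.random.default_rng(6)
        n_models, n_hindcasts = 3, 120

        # Assumed hindcast archive: each model's forecasts and the verifying observations
        truth = rng.normal(26.0, 1.5, n_hindcasts)                          # e.g. monthly SST, deg C
        biases, spreads = np.array([0.5, -0.2, 1.0]), np.array([0.8, 1.2, 0.6])
        hindcasts = truth + biases[:, None] + rng.normal(0, spreads[:, None], (n_models, n_hindcasts))

        # Stand-in for the learned weighting: inverse mean-squared hindcast error per model
        mse = ((hindcasts - truth) ** 2).mean(axis=1)
        weights = (1 / mse) / (1 / mse).sum()

        # Combine new member forecasts into one probabilistic forecast (weighted Gaussian mixture)
        new_forecasts = np.array([27.1, 26.4, 27.9])
        samples = np.concatenate([rng.normal(f, np.sqrt(e), int(round(w * 10_000)))
                                  for f, e, w in zip(new_forecasts, mse, weights)])
        print("weights:", np.round(weights, 2))
        print("P(above 27 deg C):", round(float(np.mean(samples > 27.0)), 2))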

  16. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probability distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.

  17. Develop Probabilistic Tsunami Design Maps for ASCE 7

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Thio, H. K.; Chock, G.; Titov, V. V.

    2014-12-01

    A national standard for engineering design for tsunami effects has not existed before, and this significant risk is mostly ignored in engineering design. The American Society of Civil Engineers (ASCE) 7 Tsunami Loads and Effects Subcommittee is completing a chapter for the 2016 edition of the ASCE/SEI 7 Standard. Chapter 6, Tsunami Loads and Effects, would become the first national tsunami design provisions. These provisions will apply to essential facilities and critical infrastructure. This standard for tsunami loads and effects will apply to designs as part of tsunami preparedness. The provisions will also be significant as a post-tsunami recovery tool, for planning and evaluating reconstruction. Maps of 2,500-year probabilistic tsunami inundation for Alaska, Washington, Oregon, California, and Hawaii need to be developed for use with the ASCE design provisions. These new tsunami design zone maps will define the coastal zones where structures of greater importance would be designed for tsunami resistance and community resilience. The NOAA Center for Tsunami Research (NCTR) has developed 75 tsunami inundation models as part of the operational tsunami model forecast capability for the U.S. coastline. NCTR, UW, and URS are collaborating with ASCE to develop the 2,500-year tsunami design maps for the Pacific states using these tsunami models. This ensures the probabilistic criteria are established in ASCE's tsunami design maps. URS established a Probabilistic Tsunami Hazard Assessment approach consisting of a large set of tsunami scenarios that include both epistemic uncertainty and aleatory variability (Thio et al., 2010). Their study provides 2,500-year offshore tsunami heights at the 100-m water depth, along with the disaggregated earthquake sources. NOAA's tsunami models are used to identify a group of sources that produce these 2,500-year tsunami heights. The tsunami inundation limits and runup heights derived from these sources establish the tsunami design map for the study site. ASCE's Energy Grade Line Analysis then uses these modeling constraints to derive hydrodynamic forces for structures within the tsunami design zone. The probabilistic tsunami design maps will be validated by comparison to state inundation maps under the coordination of the National Tsunami Hazard Mitigation Program.

  18. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  19. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    PubMed

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

    The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
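
    The two reference models compared above have standard closed forms that can be sketched directly: independent action multiplies the unaffected fractions, while concentration addition finds the effect level at which the toxic units of the mixture sum to one. The log-logistic parameters and mixture concentrations below are assumed for illustration, not the paper's fitted values.

        import numpy as np
        from scipy.optimize import brentq

        # Assumed single-metal log-logistic concentration-response parameters (free-ion activity scale)
        EC50 = {"Zn": 1.0, "Cu": 0.05}
        SLOPE = {"Zn": 1.5, "Cu": 1.2}

        def effect(c, metal):
            """Fractional growth-rate inhibition from a log-logistic curve."""
            return 1.0 / (1.0 + (EC50[metal] / c) ** SLOPE[metal])

        def ec(e, metal):
            """Concentration producing effect level e (inverse of the curve above)."""
            return EC50[metal] * (e / (1.0 - e)) ** (1.0 / SLOPE[metal])

        def independent_action(mix):
            """IA: combined effect assuming statistically independent modes of action."""
            return 1.0 - np.prod([1.0 - effect(c, m) for m, c in mix.items()])

        def concentration_addition(mix):
            """CA: the effect level at which the toxic units of the mixture sum to one."""
            toxic_units = lambda e: sum(c / ec(e, m) for m, c in mix.items()) - 1.0
            return brentq(toxic_units, 1e-9, 1 - 1e-9)

        mixture = {"Zn": 0.6, "Cu": 0.03}   # assumed exposure concentrations
        print("IA predicted inhibition:", round(independent_action(mixture), 3))
        print("CA predicted inhibition:", round(concentration_addition(mixture), 3))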

  20. A probabilistic method for constructing wave time-series at inshore locations using model scenarios

    USGS Publications Warehouse

    Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.

    2014-01-01

    Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
