Sample records for model-based probabilistic inversion

  1. Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas

    2017-12-01

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
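
    The core idea can be sketched in a few lines: prior sampling (and MCMC proposals) happen in a low-dimensional latent space, and a decoder maps latent vectors to facies images. The sketch below is hypothetical; the linear "decoder" is only a stand-in for the trained VAE decoder and the dimensions are illustrative.

    ```python
    import numpy as np

    latent_dim, nx, ny = 20, 50, 50          # compression ratio ~ (50*50)/20 = 125

    # Stand-in for the trained VAE decoder: a fixed random linear map followed by
    # a threshold. In the paper this role is played by a deep network trained on
    # realizations of the binary (channelized) prior model.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((nx * ny, latent_dim))

    def decode(z):
        """Map a low-dimensional latent vector to a binary (0/1) facies image."""
        return (W @ z > 0.5).astype(int).reshape(nx, ny)

    # Prior sampling: draws from an uncorrelated standard normal yield model
    # realizations; an MCMC inversion would perturb z rather than the image.
    z = rng.standard_normal(latent_dim)
    realization = decode(z)
    ```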

  2. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibit its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous, ongoing effort to make such large inverse tasks manageable with the probabilistic approach. One promising possibility for achieving this goal relies on exploiting internal symmetries of the seismological modeling problem at hand, namely time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  3. Two-dimensional probabilistic inversion of plane-wave electromagnetic data: methodology, model constraints and joint inversion with electrical resistivity data

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, Marina; Linde, Niklas; Kalscheuer, Thomas; Vrugt, Jasper A.

    2014-03-01

    Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
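
    The hierarchical formulation described above can be summarized by an unnormalized log-posterior in which the data error level and the model regularization weight are sampled together with the pixel values. The sketch below assumes a Gaussian likelihood and a first-difference smoothness constraint with flat hyperpriors; these choices and the function names are illustrative, not the authors' exact setup.

    ```python
    import numpy as np

    def log_posterior(m, d_obs, forward, sigma, lam):
        """Unnormalized hierarchical log-posterior for a pixel-based model m
        (2-D array). Both the data error level sigma and the regularization
        weight lam are treated as unknowns and sampled along with m.
        forward(m) is assumed to return simulated data shaped like d_obs."""
        r = (d_obs - forward(m)).ravel()
        # Gaussian (l2) likelihood; an l1 likelihood would follow the same pattern
        log_like = -0.5 * np.sum((r / sigma) ** 2) - r.size * np.log(sigma)
        # Model structure constraint: penalize first differences between neighbors
        dx, dy = np.diff(m, axis=0), np.diff(m, axis=1)
        n_diff = dx.size + dy.size
        log_prior_m = 0.5 * n_diff * np.log(lam) - lam * (np.sum(dx ** 2) + np.sum(dy ** 2))
        return log_like + log_prior_m   # flat hyperpriors on sigma and lam assumed
    ```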

  4. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses.

    PubMed

    Fuller, Robert William; Wong, Tony E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.
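
    Probabilistic inversion of an expert assessment can be illustrated, under simplifying assumptions, as importance resampling: draw parameters from a broad initial prior, push them through the simple model, and reweight so that the model output follows the expert-assessed distribution. The model, parameter ranges and expert density below are hypothetical placeholders rather than the authors' setup, and the subsequent Bayesian updating step is not shown.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Broad initial prior over two hypothetical AIS model parameters.
    theta = rng.uniform(low=[0.0, 0.0], high=[1.0, 5.0], size=(100_000, 2))

    def ais_model(theta):
        """Toy stand-in for the simple AIS model: maps parameters to a projected
        sea-level contribution (m)."""
        return theta[:, 0] * np.tanh(theta[:, 1])

    # Expert assessment expressed as a probability density over the model output.
    expert_pdf = stats.norm(loc=0.3, scale=0.1).pdf

    # Importance weights favor parameter draws whose output is probable under the
    # expert assessment; resampling yields a joint "expert prior" over parameters.
    y = ais_model(theta)
    w = expert_pdf(y)
    w /= w.sum()
    idx = rng.choice(len(theta), size=20_000, p=w)
    expert_prior_samples = theta[idx]
    ```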

  5. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based for example on complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex numerical evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
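
    One way to make the surrogate usable for sampling is to characterize the surrogate-versus-accurate-forward mismatch on a validation set and fold it into the likelihood. The sketch below assumes a Gaussian description of that modeling error; the function names are illustrative, not the authors' code.

    ```python
    import numpy as np

    def estimate_modeling_error(accurate_fwd, surrogate_fwd, validation_models):
        """Characterize the surrogate modeling error on a set of validation models
        as a Gaussian with mean d_bias and covariance C_app (the Gaussian choice
        is an assumption of this sketch)."""
        diffs = np.array([accurate_fwd(m) - surrogate_fwd(m) for m in validation_models])
        return diffs.mean(axis=0), np.cov(diffs, rowvar=False)

    def log_likelihood(d_obs, m, surrogate_fwd, C_data, d_bias, C_app):
        """Gaussian log-likelihood with the modeling-error statistics added to the
        observational noise, so cheap surrogate evaluations can drive the sampler."""
        C = C_data + C_app
        r = d_obs - (surrogate_fwd(m) + d_bias)
        return -0.5 * r @ np.linalg.solve(C, r)
    ```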

  6. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses

    PubMed Central

    Wong, Tony E.; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095

  7. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully probabilistic global tomography model of the Earth’s crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
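
    A mixture density network outputs, for each observed data vector, the weights, means and standard deviations of a Gaussian mixture that represents the posterior pdf. A minimal sketch of evaluating such a posterior is given below; the mixture parameters are hypothetical values standing in for a trained network's output.

    ```python
    import numpy as np

    def mdn_posterior_pdf(x, weights, means, sigmas):
        """Evaluate a 1-D Gaussian-mixture posterior pdf p(m|d) at points x.
        weights, means and sigmas are the mixture parameters that a trained
        mixture density network outputs for one observed data vector d."""
        x = np.atleast_1d(x)[:, None]
        comp = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
        return comp @ weights

    # Hypothetical MDN output at one reservoir location: a bimodal posterior over,
    # e.g., porosity, which a linearized inversion could not represent.
    w = np.array([0.6, 0.4])
    mu = np.array([0.12, 0.28])
    sd = np.array([0.02, 0.03])
    grid = np.linspace(0.0, 0.4, 200)
    pdf = mdn_posterior_pdf(grid, w, mu, sd)
    ```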

  8. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex numerical evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival travel time inversion of crosshole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.

  9. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    NASA Astrophysics Data System (ADS)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
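
    The deterministic kernel inside such an inversion is the standard 1-D MT impedance recursion for a layered earth; the probabilistic machinery (NUTS sampling of layer resistivities and thicknesses under smoothing priors) would wrap around a forward function like the sketch below. The layering in the usage example is illustrative only.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi

    def mt1d_forward(resistivities, thicknesses, freqs):
        """1-D magnetotelluric forward model (impedance recursion) for a layered
        earth: len(resistivities) = len(thicknesses) + 1, the last layer being a
        half-space. Returns apparent resistivity (ohm.m) and phase (degrees)."""
        rho_a, phase = [], []
        for f in freqs:
            w = 2 * np.pi * f
            # Impedance of the terminating half-space
            Z = np.sqrt(1j * w * MU0 * resistivities[-1])
            # Propagate the impedance upward, layer by layer
            for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
                k = np.sqrt(1j * w * MU0 / rho)
                Z0 = 1j * w * MU0 / k
                t = np.tanh(k * h)
                Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)
            rho_a.append(abs(Z) ** 2 / (w * MU0))
            phase.append(np.degrees(np.angle(Z)))
        return np.array(rho_a), np.array(phase)

    # Illustrative three-layer case: conductive layer over resistive basement
    rho_app, phi = mt1d_forward([100.0, 10.0, 1000.0], [500.0, 1000.0],
                                np.logspace(-3, 2, 30))
    ```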

  10. Variational Bayesian Inversion of Quasi-Localized Seismic Attributes for the Spatial Distribution of Geological Facies

    NASA Astrophysics Data System (ADS)

    Nawaz, Muhammad Atif; Curtis, Andrew

    2018-04-01

    We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, showing how the usual probabilistic inverse problem can be solved within an optimization framework while still providing fully probabilistic results. Our mathematical model treats the seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized-likelihoods assumption, whereby the seismic attributes at a location are assumed to depend on the facies only at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper, we relax this assumption: we allow probabilistic dependence between the seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.

  11. Applying a probabilistic seismic-petrophysical inversion and two different rock-physics models for reservoir characterization in offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia

    2018-01-01

    We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated reservoir located in the offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of the petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In both the synthetic and field data tests, the very minor differences between the results obtained by employing the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two different RPMs for reservoir characterization in the investigated area.
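
    The facies-dependent prior can be pictured as a Gaussian mixture over petrophysical properties, one component per litho-fluid class. The weights, means and covariances below are made-up placeholders for values that would normally be calibrated on well-log data.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # Hypothetical facies-dependent Gaussian mixture prior over (porosity, Sw).
    facies = {
        "shale":      dict(w=0.5, mean=[0.10, 0.95], cov=[[4e-4, 0.0], [0.0, 1e-3]]),
        "brine_sand": dict(w=0.3, mean=[0.25, 0.90], cov=[[9e-4, 0.0], [0.0, 2e-3]]),
        "gas_sand":   dict(w=0.2, mean=[0.28, 0.30], cov=[[9e-4, 0.0], [0.0, 4e-3]]),
    }

    def mixture_prior_pdf(x):
        """Evaluate the Gaussian-mixture prior p(porosity, Sw) at a point x."""
        return sum(f["w"] * multivariate_normal(f["mean"], f["cov"]).pdf(x)
                   for f in facies.values())

    def facies_responsibilities(x):
        """Posterior probability of each litho-fluid class at x (Bayes' rule)."""
        p = {k: f["w"] * multivariate_normal(f["mean"], f["cov"]).pdf(x)
             for k, f in facies.items()}
        total = sum(p.values())
        return {k: v / total for k, v in p.items()}

    print(facies_responsibilities([0.27, 0.35]))   # should favor gas sand
    ```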

  12. Qualitative and quantitative comparison of geostatistical techniques of porosity prediction from the seismic and logging data: a case study from the Blackfoot Field, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Maurya, S. P.; Singh, K. H.; Singh, N. P.

    2018-05-01

    In the present study, three recently developed geostatistical methods, namely single-attribute analysis, multi-attribute analysis and a probabilistic neural network algorithm, have been used to predict porosity in the inter-well region of the Blackfoot field, Alberta, Canada, an onshore oil field. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find a suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well-log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher-resolution porosity sections. A low-impedance (6000-8000 m/s g/cc) and high-porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as the reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient method for predicting porosity in the inter-well region.

  13. Expert judgement and uncertainty quantification for climate change

    NASA Astrophysics Data System (ADS)

    Oppenheimer, Michael; Little, Christopher M.; Cooke, Roger M.

    2016-05-01

    Expert judgement is an unavoidable element of the process-based numerical models used for climate change projections, and the statistical approaches used to characterize uncertainty across model ensembles. Here, we highlight the need for formalized approaches to unifying numerical modelling with expert judgement in order to facilitate characterization of uncertainty in a reproducible, consistent and transparent fashion. As an example, we use probabilistic inversion, a well-established technique used in many other applications outside of climate change, to fuse two recent analyses of twenty-first century Antarctic ice loss. Probabilistic inversion is but one of many possible approaches to formalizing the role of expert judgement, and the Antarctic ice sheet is only one possible climate-related application. We recommend indicators or signposts that characterize successful science-based uncertainty quantification.

  14. A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A.

    2016-01-01

    This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
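
    A minimal sketch of the diagnosis step follows, assuming a random-walk Metropolis sampler, a Gaussian measurement-error model and a uniform prior on the damage parameters; the surrogate strain model is left as a user-supplied callable and is not the authors' specific surrogate.

    ```python
    import numpy as np

    def metropolis_damage_mcmc(d_obs, surrogate, sigma, theta0, bounds,
                               n_iter=20_000, step=0.05,
                               rng=np.random.default_rng(0)):
        """Random-walk Metropolis sampler for damage parameters theta =
        (x, y, size, orientation). surrogate(theta) returns predicted strains at
        the sensor locations; sigma is the measurement noise std. A uniform
        prior on the given bounds is assumed (a simplification)."""
        lo, hi = np.asarray(bounds, float).T

        def log_post(theta):
            if np.any(theta < lo) or np.any(theta > hi):
                return -np.inf
            r = d_obs - surrogate(theta)
            return -0.5 * np.sum((r / sigma) ** 2)

        theta = np.asarray(theta0, float)
        lp = log_post(theta)
        chain = []
        for _ in range(n_iter):
            prop = theta + step * (hi - lo) * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        return np.array(chain)
    ```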

  15. Probabilistic 3-D time-lapse inversion of magnetotelluric data: application to an enhanced geothermal system

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, M.; Linde, N.; Peacock, J.; Zyserman, F. I.; Kalscheuer, T.; Thiel, S.

    2015-12-01

    Surface-based monitoring of mass transfer caused by injections and extractions in deep boreholes is crucial to maximize oil, gas and geothermal production. Inductive electromagnetic methods, such as magnetotellurics, are appealing for these applications due to their large penetration depths and sensitivity to changes in fluid conductivity and fracture connectivity. In this work, we propose a 3-D Markov chain Monte Carlo inversion of time-lapse magnetotelluric data to image mass transfer following a saline fluid injection. The inversion estimates the posterior probability density function of the resulting plume, and thereby quantifies model uncertainty. To decrease computation times, we base the parametrization on a reduced Legendre moment decomposition of the plume. A synthetic test shows that our methodology is effective when the electrical resistivity structure prior to the injection is well known. The centre of mass and spread of the plume are well retrieved. We then apply our inversion strategy to an injection experiment in an enhanced geothermal system at Paralana, South Australia, and compare it to a 3-D deterministic time-lapse inversion. The latter retrieves resistivity changes that are more shallow than the actual injection interval, whereas the probabilistic inversion retrieves plumes that are located at the correct depths and oriented in a preferential north-south direction. To explain the time-lapse data, the inversion requires unrealistically large resistivity changes with respect to the base model. We suggest that this is partly explained by unaccounted subsurface heterogeneities in the base model from which time-lapse changes are inferred.

  16. Probabilistic 3-D time-lapse inversion of magnetotelluric data: Application to an enhanced geothermal system

    USGS Publications Warehouse

    Rosas-Carbajal, Marina; Linde, Nicolas; Peacock, Jared R.; Zyserman, F. I.; Kalscheuer, Thomas; Thiel, Stephan

    2015-01-01

    Surface-based monitoring of mass transfer caused by injections and extractions in deep boreholes is crucial to maximize oil, gas and geothermal production. Inductive electromagnetic methods, such as magnetotellurics, are appealing for these applications due to their large penetration depths and sensitivity to changes in fluid conductivity and fracture connectivity. In this work, we propose a 3-D Markov chain Monte Carlo inversion of time-lapse magnetotelluric data to image mass transfer following a saline fluid injection. The inversion estimates the posterior probability density function of the resulting plume, and thereby quantifies model uncertainty. To decrease computation times, we base the parametrization on a reduced Legendre moment decomposition of the plume. A synthetic test shows that our methodology is effective when the electrical resistivity structure prior to the injection is well known. The centre of mass and spread of the plume are well retrieved. We then apply our inversion strategy to an injection experiment in an enhanced geothermal system at Paralana, South Australia, and compare it to a 3-D deterministic time-lapse inversion. The latter retrieves resistivity changes that are more shallow than the actual injection interval, whereas the probabilistic inversion retrieves plumes that are located at the correct depths and oriented in a preferential north-south direction. To explain the time-lapse data, the inversion requires unrealistically large resistivity changes with respect to the base model. We suggest that this is partly explained by unaccounted subsurface heterogeneities in the base model from which time-lapse changes are inferred.

  17. Training-Image Based Geostatistical Inversion Using a Spatial Generative Adversarial Neural Network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Jacques, Diederik; Linde, Niklas

    2018-01-01

    Probabilistic inversion within a multiple-point statistics framework is often computationally prohibitive for high-dimensional problems. To partly address this, we introduce and evaluate a new training-image based inversion approach for complex geologic media. Our approach relies on a deep neural network of the generative adversarial network (GAN) type. After training using a training image (TI), our proposed spatial GAN (SGAN) can quickly generate 2-D and 3-D unconditional realizations. A key characteristic of our SGAN is that it defines a (very) low-dimensional parameterization, thereby allowing for efficient probabilistic inversion using state-of-the-art Markov chain Monte Carlo (MCMC) methods. In addition, available direct conditioning data can be incorporated within the inversion. Several 2-D and 3-D categorical TIs are first used to analyze the performance of our SGAN for unconditional geostatistical simulation. Training our deep network can take several hours. After training, realizations containing a few million pixels/voxels can be produced in a matter of seconds. This makes it especially useful for simulating many thousands of realizations (e.g., for MCMC inversion), as the relative cost of the training per realization diminishes with the considered number of realizations. Synthetic inversion case studies involving 2-D steady state flow and 3-D transient hydraulic tomography with and without direct conditioning data are used to illustrate the effectiveness of our proposed SGAN-based inversion. For the 2-D case, the inversion rapidly explores the posterior model distribution. For the 3-D case, the inversion recovers model realizations that fit the data close to the target level and visually resemble the true model well.

  18. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.

  19. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  20. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE PAGES

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...

    2012-05-01

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  1. Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark

    2017-06-01

    This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.

  2. A probabilistic seismic model for the European Arctic

    NASA Astrophysics Data System (ADS)

    Hauser, Juerg; Dyer, Kathleen M.; Pasyanos, Michael E.; Bungum, Hilmar; Faleide, Jan I.; Clark, Stephen A.; Schweitzer, Johannes

    2011-01-01

    The development of three-dimensional seismic models for the crust and upper mantle has traditionally focused on finding one model that provides the best fit to the data while observing some regularization constraints. In contrast to this, the inversion employed here fits the data in a probabilistic sense and thus provides a quantitative measure of model uncertainty. Our probabilistic model is based on two sources of information: (1) prior information, which is independent from the data, and (2) different geophysical data sets, including thickness constraints, velocity profiles, gravity data, surface wave group velocities, and regional body wave traveltimes. We use a Markov chain Monte Carlo (MCMC) algorithm to sample models from the prior distribution, the set of plausible models, and test them against the data to generate the posterior distribution, the ensemble of models that fit the data with assigned uncertainties. While being computationally more expensive, such a probabilistic inversion provides a more complete picture of solution space and allows us to combine various data sets. The complex geology of the European Arctic, encompassing oceanic crust, continental shelf regions, rift basins and old cratonic crust, as well as the nonuniform coverage of the region by data with varying degrees of uncertainty, makes it a challenging setting for any imaging technique and, therefore, an ideal environment for demonstrating the practical advantages of a probabilistic approach. Maps of depth to basement and depth to Moho derived from the posterior distribution are in good agreement with previously published maps and interpretations of the regional tectonic setting. The predicted uncertainties, which are as important as the absolute values, correlate well with the variations in data coverage and quality in the region. A practical advantage of our probabilistic model is that it can provide estimates for the uncertainties of observables due to model uncertainties. We will demonstrate how this can be used for the formulation of earthquake location algorithms that take model uncertainties into account when estimating location uncertainties.

  3. A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2010-01-01

    Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.

  4. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  5. Probabilistic Geoacoustic Inversion in Complex Environments

    DTIC Science & Technology

    2015-09-30

    Probabilistic Geoacoustic Inversion in Complex Environments Jan Dettmer School of Earth and Ocean Sciences, University of Victoria, Victoria BC...long-range inversion methods can fail to provide sufficient resolution. For proper quantitative examination of variability, parameter uncertainty must...project aims to advance probabilistic geoacoustic inversion methods for complex ocean environments for a range of geoacoustic data types. The work is

  6. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    PubMed

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
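
    For Gaussian noise, the CRBs are the diagonal of the inverse Fisher information matrix, which can be assembled from the Jacobian of the reflectance model with respect to the ocean color parameters. The sketch below uses a finite-difference Jacobian and a toy two-parameter model; the real bio-optical models and noise covariances are problem specific.

    ```python
    import numpy as np

    def cramer_rao_bounds(model, theta, noise_cov, eps=1e-6):
        """Minimum estimation variances for unbiased estimators of theta, given a
        forward reflectance model and Gaussian noise of covariance noise_cov.
        CRB = diag(inverse Fisher information), Fisher = J^T C^-1 J."""
        theta = np.asarray(theta, float)
        r0 = np.asarray(model(theta), float)
        J = np.empty((r0.size, theta.size))
        for j in range(theta.size):
            tp = theta.copy()
            tp[j] += eps
            J[:, j] = (np.asarray(model(tp), float) - r0) / eps   # forward difference
        fisher = J.T @ np.linalg.solve(noise_cov, J)
        return np.diag(np.linalg.inv(fisher))

    # Toy usage with a hypothetical two-parameter reflectance model over 6 bands
    model = lambda t: t[0] * np.exp(-t[1] * np.linspace(0.4, 0.7, 6))
    crb = cramer_rao_bounds(model, [0.05, 2.0], 1e-6 * np.eye(6))
    ```

    In this picture, perfect a priori knowledge of a parameter amounts to removing the corresponding column of the Jacobian before inverting the Fisher information, which shrinks the bounds on the remaining parameters.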

  7. Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.

    2017-12-01

    It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.

  8. Probabilistic Fatigue Damage Prognosis Using a Surrogate Model Trained Via 3D Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo

    2015-01-01

    Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.

  9. An Inverse Modeling Plugin for HydroDesktop using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio, C.; Over, M. W.; Rubin, Y.

    2011-12-01

    The CUAHSI Hydrologic Information System (HIS) software stack is based on an open and extensible architecture that facilitates the addition of new functions and capabilities at both the server side (using HydroServer) and the client side (using HydroDesktop). The HydroDesktop client plugin architecture is used here to expose a new scripting based plugin that makes use of the R statistics software as a means for conducting inverse modeling using the Method of Anchored Distributions (MAD). MAD is a Bayesian inversion technique for conditioning computational model parameters on relevant field observations yielding probabilistic distributions of the model parameters, related to the spatial random variable of interest, by assimilating multi-type and multi-scale data. The implementation of a desktop software tool for using the MAD technique is expected to significantly lower the barrier to use of inverse modeling in education, research, and resource management. The HydroDesktop MAD plugin is being developed following a community-based, open-source approach that will help both its adoption and long term sustainability as a user tool. This presentation will briefly introduce MAD, HydroDesktop, and the MAD plugin and software development effort.

  10. Probabilistic fatigue life prediction of metallic and composite materials

    NASA Astrophysics Data System (ADS)

    Xiang, Yibing

    Fatigue is one of the most common failure modes for engineering structures, such as aircraft, rotorcraft and aviation transports. Both metallic materials and composite materials are widely used and affected by fatigue damage. Huge uncertainties arise from material properties, measurement noise, imperfect models, future anticipated loads and environmental conditions. These uncertainties are critical issues for accurate remaining useful life (RUL) prediction for engineering structures in service. Probabilistic fatigue prognosis considering various uncertainties is of great importance for structural safety. The objective of this study is to develop probabilistic fatigue life prediction models for metallic materials and composite materials. A fatigue model based on crack growth analysis and the equivalent initial flaw size concept is proposed for metallic materials. Following this, the developed model is extended to include structural geometry effects (notch effect), environmental effects (corroded specimens) and manufacturing effects (shot peening effects). Due to their inhomogeneity and anisotropy, the fatigue model suitable for metallic materials cannot be directly applied to composite materials. A composite fatigue life prediction model is proposed based on a mixed-mode delamination growth model and a stiffness degradation law. After the development of deterministic fatigue models of metallic and composite materials, a general probabilistic life prediction methodology is developed. The proposed methodology employs an efficient Inverse First-Order Reliability Method (IFORM) for uncertainty propagation in fatigue life prediction. An equivalent stress transformation has been developed to enhance the computational efficiency under realistic random amplitude loading. A systematic reliability-based maintenance optimization framework is proposed for fatigue risk management and mitigation of engineering structures.

  11. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
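
    The error-scenario machinery can be sketched as: fit a Gaussian mixture to historical forecasting errors, form its analytic CDF, and apply inverse-transform sampling. The toy error data and two-component mixture below are placeholders, and the swinging door ramp extraction is not shown.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Toy stand-in for historical wind power forecasting errors
    errors = rng.standard_normal(5000) * 0.05 + rng.choice([0.0, 0.1], 5000)

    # Fit a continuous Gaussian mixture model to the forecasting errors
    gmm = GaussianMixture(n_components=2).fit(errors.reshape(-1, 1))
    w = gmm.weights_
    mu = gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_.ravel())

    # Analytic mixture CDF, then inverse-transform sampling via a lookup grid
    grid = np.linspace(errors.min() - 0.2, errors.max() + 0.2, 2000)
    cdf = sum(wk * norm.cdf(grid, m, s) for wk, m, s in zip(w, mu, sd))

    def sample_error_scenarios(n):
        """Generate forecasting-error scenarios by inverting the mixture CDF."""
        u = rng.random(n)
        return np.interp(u, cdf, grid)

    scenarios = sample_error_scenarios(100_000)
    ```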

  12. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    2017-08-31

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.

  13. Location error uncertainties - an advanced using of probabilistic inverse theory

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2016-04-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the earthquake focus location is relatively simple, a quantitative estimation of the location accuracy is a really challenging task, even if a probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this presentation we address this task for the common situation in which the statistics of observational and/or modeling errors are unknown. This situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we illustrate an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
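
    As a worked illustration of the meta-characteristic used here, the Shannon entropy of an a posteriori location pdf discretized on a grid can be computed directly; a diffuse posterior yields a larger entropy than a sharp one. The 1-D grid and Gaussian-shaped posteriors below are purely illustrative.

    ```python
    import numpy as np

    def shannon_entropy(posterior, cell_volume=1.0):
        """Shannon (differential) entropy, in nats, of an a posteriori pdf sampled
        on a regular grid; posterior need not be normalized. Larger entropy
        indicates a more spread-out (less certain) location solution."""
        p = np.asarray(posterior, float)
        p = p / (p.sum() * cell_volume)          # normalize to a density
        nz = p > 0
        return -np.sum(p[nz] * np.log(p[nz])) * cell_volume

    # Two toy 1-D location posteriors on the same grid: one sharp, one diffuse
    x = np.linspace(-500.0, 500.0, 1001)
    dx = x[1] - x[0]
    sharp = np.exp(-0.5 * (x / 20.0) ** 2)
    diffuse = np.exp(-0.5 * (x / 200.0) ** 2)
    print(shannon_entropy(sharp, dx), shannon_entropy(diffuse, dx))  # diffuse > sharp
    ```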

  14. Probabilistic inversion with graph cuts: Application to the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Pirot, Guillaume; Linde, Niklas; Mariethoz, Grégoire; Bradford, John H.

    2017-02-01

    Inversion methods that build on multiple-point statistics tools offer the possibility to obtain model realizations that are not only in agreement with field data, but also with conceptual geological models that are represented by training images. A recent inversion approach based on patch-based geostatistical resimulation using graph cuts outperforms state-of-the-art multiple-point statistics methods when applied to synthetic inversion examples featuring continuous and discontinuous property fields. Applications of multiple-point statistics tools to field data are challenging due to inevitable discrepancies between actual subsurface structure and the assumptions made in deriving the training image. We introduce several amendments to the original graph cut inversion algorithm and present a first-ever field application by addressing porosity estimation at the Boise Hydrogeophysical Research Site, Boise, Idaho. We consider both a classical multi-Gaussian and an outcrop-based prior model (training image) that are in agreement with available porosity data. When conditioning to available crosshole ground-penetrating radar data using Markov chain Monte Carlo, we find that the posterior realizations honor overall both the characteristics of the prior models and the geophysical data. The porosity field is inverted jointly with the measurement error and the petrophysical parameters that link dielectric permittivity to porosity. Even though the multi-Gaussian prior model leads to posterior realizations with higher likelihoods, the outcrop-based prior model shows better convergence. In addition, it offers geologically more realistic posterior realizations and it better preserves the full porosity range of the prior.

  15. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
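
    Under the log-normal noise model for the decorrelation D = 1 - CC, a per-station likelihood can be written down directly. The sketch below neglects the station covariances discussed in the paper and uses made-up moments for ln D, which in practice depend mainly on the SNR of the CC measurements and the back-azimuthal station distances.

    ```python
    import numpy as np

    def log_likelihood_decorrelation(cc, mu_lnD, sigma_lnD):
        """Log-likelihood of a source model given per-station waveform
        cross-correlation coefficients cc, assuming the decorrelation
        D = 1 - CC is log-normally distributed with moments mu_lnD, sigma_lnD
        of ln D. Station covariances are neglected here for brevity."""
        d = np.clip(1.0 - np.asarray(cc, float), 1e-6, None)
        z = (np.log(d) - mu_lnD) / sigma_lnD
        # log of the log-normal pdf, summed over stations
        return np.sum(-0.5 * z ** 2 - np.log(d * sigma_lnD * np.sqrt(2.0 * np.pi)))

    # Toy usage: 5 stations with identical (hypothetical) noise moments
    cc = np.array([0.92, 0.88, 0.95, 0.80, 0.90])
    ll = log_likelihood_decorrelation(cc, mu_lnD=np.log(0.1), sigma_lnD=0.5)
    ```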

  16. Probabilistic Prognosis of Non-Planar Fatigue Crack Growth

    NASA Technical Reports Server (NTRS)

    Leser, Patrick E.; Newman, John A.; Warner, James E.; Leser, William P.; Hochhalter, Jacob D.; Yuan, Fuh-Gwo

    2016-01-01

    Quantifying the uncertainty in model parameters for the purpose of damage prognosis can be accomplished utilizing Bayesian inference and damage diagnosis data from sources such as non-destructive evaluation or structural health monitoring. The number of samples required to solve the Bayesian inverse problem through common sampling techniques (e.g., Markov chain Monte Carlo) renders high-fidelity finite element-based damage growth models unusable due to prohibitive computation times. However, these types of models are often the only option when attempting to model complex damage growth in real-world structures. Here, a recently developed high-fidelity crack growth model is used which, when compared to finite element-based modeling, has demonstrated reductions in computation times of three orders of magnitude through the use of surrogate models and machine learning. The model is flexible in that only the expensive computation of the crack driving forces is replaced by the surrogate models, leaving the remaining parameters accessible for uncertainty quantification. A probabilistic prognosis framework incorporating this model is developed and demonstrated for non-planar crack growth in a modified, edge-notched, aluminum tensile specimen. Predictions of remaining useful life are made over time for five updates of the damage diagnosis data, and prognostic metrics are utilized to evaluate the performance of the prognostic framework. Challenges specific to the probabilistic prognosis of non-planar fatigue crack growth are highlighted and discussed in the context of the experimental results.
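
    As a toy illustration of the prognosis step only, the sketch below propagates hypothetical posterior samples of Paris-law parameters into a distribution of remaining useful life; the Paris law is a deliberately simple stand-in for the high-fidelity crack growth model and surrogates described in the abstract, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def cycles_to_failure(C, m, a0=2e-3, a_crit=20e-3, dsigma=100.0, Y=1.12, n_steps=2000):
    """Integrate da/dN = C * (dK)^m (dK in MPa*sqrt(m)) from initial to critical crack length."""
    a = np.linspace(a0, a_crit, n_steps)
    dK = Y * dsigma * np.sqrt(np.pi * a)
    return np.trapz(1.0 / (C * dK ** m), a)

# Hypothetical posterior samples of the crack-growth parameters (e.g. from MCMC updates)
logC = rng.normal(np.log(1e-11), 0.3, size=500)
m = rng.normal(3.0, 0.1, size=500)

rul = np.array([cycles_to_failure(np.exp(lc), mm) for lc, mm in zip(logC, m)])
print("median RUL:", np.median(rul), "cycles; 5-95%:", np.percentile(rul, [5, 95]))
```

    Each diagnosis update would tighten the parameter posterior and hence narrow the predicted remaining-useful-life interval.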

  17. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
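
    A minimal sketch of the two ingredients named here: the decorrelation D = 1 - CC between an observed and a modelled waveform, and a log-normal likelihood for D. Zero-lag correlation and independent stations are simplifying assumptions of the sketch; the paper's likelihood is multivariate, with moments that depend on SNR and station geometry.

```python
import numpy as np

def decorrelation(obs, syn):
    """D = 1 - CC, with CC the normalized (zero-lag) cross-correlation coefficient."""
    obs = obs - obs.mean()
    syn = syn - syn.mean()
    cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
    return 1.0 - cc

def log_likelihood(D, mu, sigma):
    """Log of a log-normal density for each station's decorrelation, summed over stations."""
    D = np.asarray(D, dtype=float)
    return np.sum(-np.log(D * sigma * np.sqrt(2.0 * np.pi))
                  - (np.log(D) - mu) ** 2 / (2.0 * sigma ** 2))

t = np.linspace(0.0, 10.0, 400)
obs = [np.sin(t), np.sin(1.05 * t)]
syn = [np.sin(1.02 * t), np.sin(t)]
D = [decorrelation(o, s) for o, s in zip(obs, syn)]
print(D, log_likelihood(D, mu=np.log(0.1), sigma=0.5))     # mu, sigma are placeholders
```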

  18. Bayesian inversion of data from effusive volcanic eruptions using physics-based models: Application to Mount St. Helens 2004–2008

    USGS Publications Warehouse

    Anderson, Kyle; Segall, Paul

    2013-01-01

    Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km³ with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.

  19. A probabilistic process model for pelagic marine ecosystems informed by Bayesian inverse analysis

    EPA Science Inventory

    Marine ecosystems are complex systems with multiple pathways that produce feedback cycles, which may lead to unanticipated effects. Models abstract this complexity and allow us to predict, understand, and hypothesize. In ecological models, however, the paucity of empirical data...

  20. Acoustic emission based damage localization in composites structures using Bayesian identification

    NASA Astrophysics Data System (ADS)

    Kundu, A.; Eaton, M. J.; Al-Jumali, S.; Sikdar, S.; Pullin, R.

    2017-05-01

    Acoustic emission based damage detection in composite structures relies on detecting ultra-high-frequency packets of acoustic waves emitted from damage sources (such as fibre breakage and fatigue fracture, amongst others) with a network of distributed sensors. This non-destructive monitoring scheme requires solving an inverse problem in which the measured signals are linked back to the location of the source, which in turn enables rapid deployment of mitigative measures. The presence of a significant amount of uncertainty associated with the operating conditions and measurements makes the problem of damage identification quite challenging. The uncertainties stem from the fact that the measured signals are affected by irregular geometries, manufacturing imprecision, imperfect boundary conditions and existing damage/structural degradation, amongst others. This work aims to tackle these uncertainties within a framework of automated probabilistic damage detection. The method trains a probabilistic input-output model of the acoustic emission system on experimental data to give probabilistic descriptors of damage locations. A response surface, modelling the acoustic emission as a function of parametrized damage signals collected from the sensors, is calibrated with a training dataset using Bayesian inference and is then used to deduce damage locations in the online monitoring phase. During online monitoring, the spatially correlated time data are utilized in conjunction with the calibrated acoustic emission model to infer a probabilistic description of the acoustic emission source within a hierarchical Bayesian inference framework. The methodology is tested on a composite structure consisting of a carbon fibre panel with stiffeners, with damage source behaviour experimentally simulated using standard H-N sources. The methodology presented in this study is applicable in its current form to structural damage detection under varying operational loads, which will be investigated in future studies.

  1. Inference of multi-Gaussian property fields by probabilistic inversion of crosshole ground penetrating radar data using an improved dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Hunziker, Jürg; Laloy, Eric; Linde, Niklas

    2016-04-01

    Deterministic inversion procedures can often explain field data, but they only deliver one final subsurface model that depends on the initial model and regularization constraints. This leads to poor insights about the uncertainties associated with the inferred model properties. In contrast, probabilistic inversions can provide an ensemble of model realizations that accurately span the range of possible models that honor the available calibration data and prior information allowing a quantitative description of model uncertainties. We reconsider the problem of inferring the dielectric permittivity (directly related to radar velocity) structure of the subsurface by inversion of first-arrival travel times from crosshole ground penetrating radar (GPR) measurements. We rely on the DREAM_(ZS) algorithm that is a state-of-the-art Markov chain Monte Carlo (MCMC) algorithm. Such algorithms need several orders of magnitude more forward simulations than deterministic algorithms and often become infeasible in high parameter dimensions. To enable high-resolution imaging with MCMC, we use a recently proposed dimensionality reduction approach that allows reproducing 2D multi-Gaussian fields with far fewer parameters than a classical grid discretization. We consider herein a dimensionality reduction from 5000 to 257 unknowns. The first 250 parameters correspond to a spectral representation of random and uncorrelated spatial fluctuations while the remaining seven geostatistical parameters are (1) the standard deviation of the data error, (2) the mean and (3) the variance of the relative electric permittivity, (4) the integral scale along the major axis of anisotropy, (5) the anisotropy angle, (6) the ratio of the integral scale along the minor axis of anisotropy to the integral scale along the major axis of anisotropy and (7) the shape parameter of the Matérn function. The latter essentially defines the type of covariance function (e.g., exponential, Whittle, Gaussian). We present an improved formulation of the dimensionality reduction, and numerically show how it reduces artifacts in the generated models and provides better posterior estimation of the subsurface geostatistical structure. We next show that the results of the method compare very favorably against previous deterministic and stochastic inversion results obtained at the South Oyster Bacterial Transport Site in Virginia, USA. The long-term goal of this work is to enable MCMC-based full waveform inversion of crosshole GPR data.
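
    The following sketch conveys the general idea behind this kind of spectral dimensionality reduction, without reproducing the authors' exact formulation: a 2-D Gaussian random field is obtained by weighting white noise in the Fourier domain with an assumed covariance spectrum. In the paper only a truncated set of roughly 250 low-frequency coefficients is retained and the geostatistical parameters (variance, integral scales, anisotropy, Matérn shape) are sampled as well; here a fixed squared-exponential spectrum and a full coefficient set are used for brevity.

```python
import numpy as np

def gaussian_random_field(n=(64, 64), corr_len=10.0, sigma=1.0, seed=0):
    """Sample a stationary 2-D Gaussian field by filtering white noise with
    the square root of a squared-exponential power spectrum."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n[0]) * 2.0 * np.pi
    ky = np.fft.fftfreq(n[1]) * 2.0 * np.pi
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    spectrum = np.exp(-0.5 * k2 * corr_len ** 2)          # power spectrum, up to a constant
    noise = rng.normal(size=n)                            # uncorrelated standard normal deviates
    field = np.fft.ifft2(np.sqrt(spectrum) * np.fft.fft2(noise)).real
    return sigma * field / field.std()

field = gaussian_random_field()
print(field.shape, round(field.mean(), 3), round(field.std(), 3))
```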

  2. Probabilistic inversion of AVO seismic data for reservoir properties and related uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zunino, Andrea; Mosegaard, Klaus

    2017-04-01

    Reservoir properties of interest are linked only indirectly to the observable geophysical data recorded at the earth's surface. In this framework, seismic data represent one of the most reliable tools to study the structure and properties of the subsurface for natural resources. Nonetheless, seismic analysis is not an end in itself, as physical properties such as porosity are often of more interest for reservoir characterization. As such, inference of those properties also implies taking into account rock physics models linking porosity and other physical properties to elastic parameters. In the framework of seismic reflection data, we address this challenge for a reservoir target zone employing a probabilistic method characterized by a multi-step, complex, nonlinear forward modeling that combines: 1) a rock physics model with 2) the solution of the full Zoeppritz equations and 3) a convolutional seismic forward modeling. The target property of this work is porosity, which is inferred using a Monte Carlo approach where porosity models, i.e., solutions to the inverse problem, are directly sampled from the posterior distribution. From a theoretical point of view, the Monte Carlo strategy is particularly useful in the presence of nonlinear forward models, which is often the case when employing sophisticated rock physics models and the full Zoeppritz equations, and it allows the related uncertainty to be estimated. However, the resulting computational challenge is huge. We propose to alleviate this computational burden by assuming some smoothness of the subsurface parameters and consequently parameterizing the model in terms of spline bases. This affords a certain flexibility, in that the number of spline bases, and hence the resolution in each spatial direction, can be controlled. The method is tested on a 3-D synthetic case and on a 2-D real data set.
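
    A minimal sketch of the smoothness assumption described above: a 1-D porosity profile is controlled by a handful of B-spline coefficients instead of one value per depth sample, so the sampler only has to explore the coefficient space. The knot layout and values are illustrative only.

```python
import numpy as np
from scipy.interpolate import BSpline

depth = np.linspace(0.0, 200.0, 400)                 # fine output grid (m)
degree, n_coeff = 3, 8                               # only 8 unknowns control the profile

# Clamped knot vector spanning the depth range (len(knots) = n_coeff + degree + 1)
interior = np.linspace(0.0, 200.0, n_coeff - degree + 1)
knots = np.concatenate(([0.0] * degree, interior, [200.0] * degree))

coeffs = np.array([0.20, 0.22, 0.28, 0.25, 0.18, 0.21, 0.26, 0.24])   # porosity controls
porosity = BSpline(knots, coeffs, degree)(depth)
print(porosity.min(), porosity.max())                # smooth profile from 8 parameters
```

    Increasing or decreasing n_coeff per spatial direction is exactly the resolution control mentioned in the abstract.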

  3. High-resolution surface wave tomography of the European crust and uppermost mantle from ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Lu, Yang; Stehly, Laurent; Paul, Anne; AlpArray Working Group

    2018-05-01

    Taking advantage of the large number of seismic stations installed in Europe, in particular in the greater Alpine region with the AlpArray experiment, we derive a new high-resolution 3-D shear-wave velocity model of the European crust and uppermost mantle from ambient noise tomography. The correlation of up to four years of continuous vertical-component seismic recordings from 1293 broadband stations (10° W-35° E, 30° N-75° N) provides Rayleigh wave group velocity dispersion data in the period band 5-150 s at more than 0.8 million virtual source-receiver pairs. Two-dimensional Rayleigh wave group velocity maps are estimated using adaptive parameterization to accommodate the strong heterogeneity of path coverage. A probabilistic 3-D shear-wave velocity model, including probability densities for the depth of layer boundaries and S-wave velocity values, is obtained by non-linear Bayesian inversion. A weighted average of the probabilistic model is then used as the starting model for the linear inversion step, providing the final Vs model. The resulting S-wave velocity model and Moho depth are validated by comparison with previous geophysical studies. Although surface-wave tomography is weakly sensitive to layer boundaries, vertical cross-sections through our Vs model and the associated probability of presence of interfaces display striking similarities with reference controlled-source seismics (CSS) and receiver-function sections across the Alpine belt. Our model even provides new structural information such as a ~8 km Moho jump along the CSS ECORS-CROP profile that was not imaged by reflection data due to poor penetration across a heterogeneous upper crust. Our probabilistic and final shear wave velocity models have the potential to become new reference models of the European crust, both for crustal structure probing and geophysical studies including waveform modeling or full waveform inversion.

  4. Expert opinion on landslide susceptibility elicited by probabilistic inversion from scenario rankings

    NASA Astrophysics Data System (ADS)

    Lee, Katy; Dashwood, Claire; Lark, Murray

    2016-04-01

    For many natural hazards the opinion of experts, with experience in assessing susceptibility under different circumstances, is a valuable source of information on which to base risk assessments. This is particularly important where incomplete process understanding, and limited data, limit the scope to predict susceptibility by mechanistic or statistical modelling. The expert has a tacit model of a system, based on their understanding of processes and their field experience. This model may vary in quality, depending on the experience of the expert. There is considerable interest in how one may elicit expert understanding by a process which is transparent and robust, to provide a basis for decision support. One approach is to provide experts with a set of scenarios, and then to ask them to rank small overlapping subsets of these with respect to susceptibility. Methods of probabilistic inversion have been used to compute susceptibility scores for each scenario, implicit in the expert ranking. It is also possible to model these scores as functions of measurable properties of the scenarios. This approach has been used to assess susceptibility of animal populations to invasive diseases, to assess risk to vulnerable marine environments and to assess the risk in hypothetical novel technologies for food production. We will present the results of a study in which a group of geologists with varying degrees of expertise in assessing landslide hazards were asked to rank sets of hypothetical simplified scenarios with respect to landslide susceptibility. We examine the consistency of their rankings and the importance of different properties of the scenarios in the tacit susceptibility model that their rankings implied. Our results suggest that this is a promising approach to the problem of how experts can communicate their tacit model of uncertain systems to those who want to make use of their expertise.

  5. Delineating chalk sand distribution of Ekofisk formation using probabilistic neural network (PNN) and stepwise regression (SWR): Case study Danish North Sea field

    NASA Astrophysics Data System (ADS)

    Haris, A.; Nafian, M.; Riyanto, A.

    2017-07-01

    The Danish North Sea fields comprise several formations (Ekofisk, Tor, and Cromer Knoll) ranging in age from the Paleocene to the Miocene. In this study, seismic and well log data sets are integrated to determine the chalk sand distribution in the Danish North Sea field. The integration is performed using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm used to derive acoustic impedance (AI) is a model-based technique. The derived AI is then used as an external attribute input to the multi-attribute analysis. The multi-attribute analysis is in turn used to generate linear and non-linear transformations among the well log properties. For the linear model, the transformation is selected by weighted step-wise linear regression (SWR), while the non-linear model is obtained using a probabilistic neural network (PNN). The porosity estimated by the PNN matches the well log data better than the SWR result. This result can be understood since the PNN performs non-linear regression, so the relationship between the attribute data and the predicted log data can be optimized. The distribution of chalk sand has been successfully identified and is characterized by porosity values ranging from 23% up to 30%.
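
    Common Python libraries do not ship a probabilistic neural network, so the sketch below uses the closely related Nadaraya-Watson (general regression) estimator to illustrate the kernel-weighted attribute-to-porosity mapping; the attributes, targets and smoothing width are all invented.

```python
import numpy as np

def kernel_regression(X_train, y_train, X_query, sigma=0.5):
    """Nadaraya-Watson estimator: a Gaussian-kernel-weighted average of training
    targets, the regression analogue of a PNN-style predictor."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(2)
attributes = rng.normal(size=(200, 3))         # e.g. model-based AI plus two seismic attributes
porosity = 0.25 + 0.03 * attributes[:, 0] + 0.01 * rng.normal(size=200)

print(kernel_regression(attributes, porosity, attributes[:5]))
```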

  6. Appraisal of geodynamic inversion results: a data mining approach

    NASA Astrophysics Data System (ADS)

    Baumann, T. S.

    2016-11-01

    Bayesian sampling based inversions require many thousands or even millions of forward models, depending on how nonlinear or non-unique the inverse problem is, and how many unknowns are involved. The result of such a probabilistic inversion is not a single `best-fit' model, but rather a probability distribution that is represented by the entire model ensemble. Often, a geophysical inverse problem is non-unique, and the corresponding posterior distribution is multimodal, meaning that the distribution consists of clusters with similar models that represent the observations equally well. In these cases, we would like to visualize the characteristic model properties within each of these clusters of models. However, even for a moderate number of inversion parameters, a manual appraisal for a large number of models is not feasible. This poses the question whether it is possible to extract end-member models that represent each of the best-fit regions including their uncertainties. Here, I show how a machine learning tool can be used to characterize end-member models, including their uncertainties, from a complete model ensemble that represents a posterior probability distribution. The model ensemble used here results from a nonlinear geodynamic inverse problem, where rheological properties of the lithosphere are constrained from multiple geophysical observations. It is demonstrated that by taking vertical cross-sections through the effective viscosity structure of each of the models, the entire model ensemble can be classified into four end-member model categories that have a similar effective viscosity structure. These classification results are helpful to explore the non-uniqueness of the inverse problem and can be used to compute representative data fits for each of the end-member models. Conversely, these insights also reveal how new observational constraints could reduce the non-uniqueness. The method is not limited to geodynamic applications and a generalized MATLAB code is provided to perform the appraisal analysis.
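
    The paper provides MATLAB code; as a language-agnostic sketch of the same appraisal idea, the Python example below reduces each ensemble member to a vertical (log-)viscosity profile and clusters the profiles into candidate end-member categories. The synthetic ensemble simply mixes two structural "modes".

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
depth = np.linspace(0.0, 300.0, 50)                  # km

# Synthetic ensemble: 400 models summarized by a log10-viscosity profile
ensemble = []
for _ in range(400):
    transition = rng.choice([120.0, 200.0])          # two plausible structural modes
    profile = np.where(depth < transition, 22.0, 19.5) + 0.3 * rng.normal(size=depth.size)
    ensemble.append(profile)
ensemble = np.array(ensemble)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(ensemble)
for k in range(2):
    members = ensemble[kmeans.labels_ == k]
    print(f"cluster {k}: {len(members)} models, mean within-cluster spread "
          f"{members.std(axis=0).mean():.2f}")
```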

  7. Goal-directed decision making as probabilistic inference: A computational framework and potential neural correlates

    PubMed Central

    Solway, A.; Botvinick, M.

    2013-01-01

    Recent work has given rise to the view that reward-based decision making is governed by two key controllers: a habit system, which stores stimulus-response associations shaped by past reward, and a goal-oriented system that selects actions based on their anticipated outcomes. The current literature provides a rich body of computational theory addressing habit formation, centering on temporal-difference learning mechanisms. Less progress has been made toward formalizing the processes involved in goal-directed decision making. We draw on recent work in cognitive neuroscience, animal conditioning, cognitive and developmental psychology and machine learning, to outline a new theory of goal-directed decision making. Our basic proposal is that the brain, within an identifiable network of cortical and subcortical structures, implements a probabilistic generative model of reward, and that goal-directed decision making is effected through Bayesian inversion of this model. We present a set of simulations implementing the account, which address benchmark behavioral and neuroscientific findings, and which give rise to a set of testable predictions. We also discuss the relationship between the proposed framework and other models of decision making, including recent models of perceptual choice, to which our theory bears a direct connection. PMID:22229491

  8. Probabilistic inversion of electrical resistivity data from bench-scale experiments: On model parameterization for CO2 sequestration monitoring

    NASA Astrophysics Data System (ADS)

    Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.

    2013-12-01

    Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
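
    A small sketch of parameterization (2): the 2-D saturation field is represented by a truncated block of discrete cosine transform coefficients, so only the retained low-order coefficients need to be sampled. Grid size and truncation level are arbitrary here.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
full = rng.random((40, 60))                           # reference saturation field on a 40x60 grid

coeffs = dctn(full, norm="ortho")
keep = 8                                              # retain an 8x8 block of low-order coefficients
truncated = np.zeros_like(coeffs)
truncated[:keep, :keep] = coeffs[:keep, :keep]        # these 64 numbers are the model parameters

approx = idctn(truncated, norm="ortho")
rel_err = np.linalg.norm(approx - full) / np.linalg.norm(full)
print("parameters:", keep * keep, "relative error:", round(rel_err, 3))
```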

  9. Torsional Ultrasound Sensor Optimization for Soft Tissue Characterization

    PubMed Central

    Melchor, Juan; Muñoz, Rafael; Rus, Guillermo

    2017-01-01

    Torsional mechanical waves have the capability to characterize the shear stiffness moduli of soft tissue. Under this hypothesis, a computational methodology is proposed to design and optimize a piezoelectric transmitter and receiver to generate and measure the response of torsional ultrasonic waves. The procedure employed is divided into two steps: (i) a finite element method (FEM) is developed to obtain the transmitted and received waveforms as well as the resonance frequency of a previous geometry, validated with a simplified semi-analytical model, and (ii) a probabilistic optimality criterion for the design, based on an inverse problem and the estimation of a robust probability of detection (RPOD), is used to maximize the detection of the pathology, defined in terms of changes in shear stiffness. This study collects different design options in two separate models, for transmission and contact, respectively. The main contribution of this work is a framework establishing the forward, inverse and optimization procedures needed to choose an appropriate set of transducer parameters. This methodological framework may be generalizable to other applications. PMID:28617353

  10. System for uncollimated digital radiography

    DOEpatents

    Wang, Han; Hall, James M.; McCarrick, James F.; Tang, Vincent

    2015-08-11

    The inversion algorithm based on the maximum entropy method (MEM) removes unwanted effects in high energy imaging resulting from an uncollimated source interacting with a finitely thick scintillator. The algorithm takes as input the image from the thick scintillator (TS) and the radiography setup geometry. The algorithm then outputs a restored image which appears as if taken with an infinitesimally thin scintillator (ITS). Inversion is accomplished by numerically generating a probabilistic model relating the ITS image to the TS image and then inverting this model on the TS image through MEM. This reconstruction technique can reduce the exposure time or the required source intensity without undesirable object blurring on the image by allowing the use of both thicker scintillators with higher efficiencies and closer source-to-detector distances to maximize incident radiation flux. The technique is applicable in radiographic applications including fast neutron, high-energy gamma and x-ray radiography using thick scintillators.
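
    The patent text gives no equations, so the sketch below shows only a generic maximum-entropy restoration of the kind it describes: find a non-negative thin-scintillator image whose blurred version matches the thick-scintillator image while the image entropy is kept large. The Gaussian blur stands in for the probabilistic TS response model, and the weight lam is arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def blur(x):                                          # stand-in for the TS response model
    return gaussian_filter(x, sigma=2.0)

truth = np.zeros((32, 32)); truth[12:20, 12:20] = 1.0
data = blur(truth) + 0.01 * rng.normal(size=truth.shape)   # simulated thick-scintillator image

def objective(xflat, lam=1e-3):
    """Data misfit minus lam * entropy, with analytic gradient (blur is ~self-adjoint)."""
    x = np.clip(xflat.reshape(data.shape), 1e-8, None)
    resid = blur(x) - data
    f = 0.5 * np.sum(resid ** 2) + lam * np.sum(x * np.log(x))
    g = blur(resid) + lam * (np.log(x) + 1.0)
    return f, g.ravel()

x0 = np.full(data.size, 0.5)
res = minimize(objective, x0, jac=True, method="L-BFGS-B",
               bounds=[(1e-8, None)] * data.size, options={"maxiter": 200})
restored = res.x.reshape(data.shape)
print("data residual:", np.linalg.norm(blur(restored) - data))
```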

  11. A Robust Approach to Risk Assessment Based on Species Sensitivity Distributions.

    PubMed

    Monti, Gianna S; Filzmoser, Peter; Deutsch, Roland C

    2018-05-03

    The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation. © 2018 Society for Risk Analysis.
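
    For orientation, a classical (non-robust) HCp estimate fits a log-normal SSD and reads off the p-th percentile; the robust approaches discussed in the paper replace the location and scale estimates with outlier-resistant ones. The sketch below contrasts the two using invented effect concentrations and a simple median/MAD substitute, not the authors' specific estimators.

```python
import numpy as np
from scipy import stats

# Hypothetical species effect concentrations (mg/L)
conc = np.array([0.8, 1.2, 2.5, 3.1, 4.0, 6.5, 9.0, 15.0, 22.0, 40.0])
logc = np.log10(conc)
p = 0.05                                              # HC5: hazardous to 5% of species

mu, sd = logc.mean(), logc.std(ddof=1)                # classical fit of the SSD
print("HC5       :", round(10 ** stats.norm.ppf(p, loc=mu, scale=sd), 3), "mg/L")

mu_r = np.median(logc)                                # simple robust substitutes
sd_r = stats.median_abs_deviation(logc, scale="normal")
print("robust HC5:", round(10 ** stats.norm.ppf(p, loc=mu_r, scale=sd_r), 3), "mg/L")
```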

  12. Applications of Bayesian spectrum representation in acoustics

    NASA Astrophysics Data System (ADS)

    Botts, Jonathan M.

    This dissertation utilizes a Bayesian inference framework to enhance the solution of inverse problems where the forward model maps to acoustic spectra. A Bayesian solution to filter design inverts acoustic spectra to the pole-zero locations of a discrete-time filter model. Spatial sound field analysis with a spherical microphone array is a data analysis problem that requires inversion of spatio-temporal spectra to directions of arrival. As with many inverse problems, a probabilistic analysis results in richer solutions than can be achieved with ad-hoc methods. In the filter design problem, the Bayesian inversion results in globally optimal coefficient estimates as well as an estimate of the most concise filter capable of representing the given spectrum, within a single framework. This approach is demonstrated on synthetic spectra, head-related transfer function spectra, and measured acoustic reflection spectra. The Bayesian model-based analysis of spatial room impulse responses is presented as an analogous problem with an equally rich solution. The model selection mechanism provides an estimate of the number of arrivals, which is necessary to properly infer the directions of simultaneous arrivals. Although spectrum inversion problems are fairly ubiquitous, the scope of this dissertation has been limited to these two and derivative problems. The Bayesian approach to filter design is demonstrated on an artificial spectrum to illustrate the model comparison mechanism and then on measured head-related transfer functions to show the potential range of application. Coupled with sampling methods, the Bayesian approach is shown to outperform least-squares filter design methods commonly used in commercial software, confirming the need for a global search of the parameter space. The resulting designs are shown to be comparable to those that result from global optimization methods, but the Bayesian approach has the added advantage of a filter length estimate within the same unified framework. The application to reflection data is useful for representing frequency-dependent impedance boundaries in finite difference acoustic simulations. Furthermore, since the filter transfer function is a parametric model, it can be modified to incorporate arbitrary frequency weighting and account for the band-limited nature of measured reflection spectra. Finally, the model is modified, through the filter design process, to compensate for dispersive error in the finite difference simulation. Stemming from the filter boundary problem, the implementation of pressure sources in finite difference simulation is addressed in order to assure that schemes properly converge. A class of parameterized source functions is proposed and shown to offer straightforward control of residual error in the simulation. Guided by the notion that the solution to be approximated affects the approximation error, sources are designed which reduce residual dispersive error to the size of round-off errors. The early part of a room impulse response can be characterized by a series of isolated plane waves. Measured with an array of microphones, plane waves map to a directional response of the array or spatial intensity map. Probabilistic inversion of this response results in estimates of the number and directions of image source arrivals. The model-based inversion is shown to avoid ambiguities associated with peak-finding or inspection of the spatial intensity map. For this problem, determining the number of arrivals in a given frame is critical for properly inferring the state of the sound field. This analysis is effectively compression of the spatial room response, which is useful for analysis or encoding of the spatial sound field. Parametric, model-based formulations of these problems enhance the solution in all cases, and a Bayesian interpretation provides a principled approach to model comparison and parameter estimation.

  13. Generating probabilistic Boolean networks from a prescribed transition probability matrix.

    PubMed

    Ching, W-K; Chen, X; Tsing, N-K

    2009-11-01

    Probabilistic Boolean networks (PBNs) have received much attention in modeling genetic regulatory networks. A PBN can be regarded as a Markov chain process and is characterised by a transition probability matrix. In this study, the authors propose efficient algorithms for constructing a PBN when its transition probability matrix is given. The complexities of the algorithms are also analysed. This is an interesting inverse problem in network inference using steady-state data. The problem is important as most microarray data sets are assumed to be obtained from sampling the steady-state.
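
    The authors' algorithms are not reproduced here, but the flavour of the inverse problem can be conveyed in a few lines under a simplifying assumption: if the prescribed transition matrix is a convex combination of the transition matrices of known candidate constituent Boolean networks, the selection probabilities can be recovered by non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
n = 8                                                 # number of states (2^3 for 3 genes)

def random_bn_matrix(n, rng):
    """Column-stochastic transition matrix of a deterministic Boolean network."""
    A = np.zeros((n, n))
    A[rng.integers(0, n, size=n), np.arange(n)] = 1.0
    return A

candidates = [random_bn_matrix(n, rng) for _ in range(4)]
true_c = np.array([0.5, 0.3, 0.15, 0.05])             # selection probabilities of the BNs
P = sum(c * A for c, A in zip(true_c, candidates))    # prescribed PBN transition matrix

M = np.column_stack([A.ravel() for A in candidates])  # inverse problem: recover the weights
c_hat, _ = nnls(M, P.ravel())
print(true_c, c_hat / c_hat.sum())
```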

  14. Uncertainty in training image-based inversion of hydraulic head data constrained to ERT data: Workflow and case study

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Caers, Jef

    2015-07-01

    In inverse problems, investigating uncertainty in the posterior distribution of model parameters is as important as matching data. In recent years, most efforts have focused on techniques to sample the posterior distribution with reasonable computational costs. Within a Bayesian context, this posterior depends on the prior distribution. However, most of the studies ignore modeling the prior with realistic geological uncertainty. In this paper, we propose a workflow inspired by a Popper-Bayes philosophy that data should first be used to falsify models, and only then be considered for matching. We propose a workflow consisting of three steps: (1) in defining the prior, we interpret multiple alternative geological scenarios from the literature (architecture of facies) and site-specific data (proportions of facies). Prior spatial uncertainty is modeled using multiple-point geostatistics, where each scenario is defined using a training image. (2) We validate these prior geological scenarios by simulating electrical resistivity tomography (ERT) data on realizations of each scenario and comparing them to the field ERT data in a lower dimensional space. In this second step, the idea is to probabilistically falsify scenarios with ERT, meaning that scenarios which are incompatible receive an updated probability of zero while compatible scenarios receive a nonzero updated belief. (3) We constrain the hydrogeological model with hydraulic head and ERT using a stochastic search method. The workflow is applied to a synthetic and a field case study in an alluvial aquifer. This study highlights the importance of considering and estimating prior uncertainty (without data) through a process of probabilistic falsification.

  15. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on seismic hazard for nuclear facilities described in SSG-9, which shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods through tentative analyses using observed records, such as surface fault displacement and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method which combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on the characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  16. Great Balls of Fire: A probabilistic approach to quantify the hazard related to ballistics - A case study at La Fossa volcano, Vulcano Island, Italy

    NASA Astrophysics Data System (ADS)

    Biass, Sébastien; Falcone, Jean-Luc; Bonadonna, Costanza; Di Traglia, Federico; Pistolesi, Marco; Rosi, Mauro; Lestuzzi, Pierino

    2016-10-01

    We present a probabilistic approach to quantify the hazard posed by volcanic ballistic projectiles (VBP) and their potential impact on the built environment. A model named Great Balls of Fire (GBF) is introduced to describe ballistic trajectories of VBPs accounting for a variable drag coefficient and topography. It relies on input parameters easily identifiable in the field and is designed to model large numbers of VBPs stochastically. Associated functions come with the GBF code to post-process model outputs into a comprehensive probabilistic hazard assessment for VBP impacts. Outcomes include probability maps of exceeding given thresholds of kinetic energy at impact, hazard curves and probabilistic isoenergy maps. Probabilities are calculated either on equally-sized pixels or zones of interest. The approach is calibrated, validated and applied to La Fossa volcano, Vulcano Island (Italy). We constructed a generic eruption scenario based on stratigraphic studies and numerical inversions of the 1888-1890 long-lasting Vulcanian cycle of La Fossa. Results suggest a ~10⁻²% probability of occurrence of VBP impacts with kinetic energies ≤ 10⁴ J at the touristic locality of Porto. In parallel, the vulnerability to roof perforation was estimated by combining field observations and published literature, allowing for a first estimate of the potential impact of VBPs during future Vulcanian eruptions. Results indicate a high physical vulnerability to the VBP hazard and, consequently, half of the building stock has a ≥ 2.5 × 10⁻³% probability of roof perforation.
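
    The sketch below is not the GBF code, only a stripped-down version of what it computes: 2-D ballistic trajectories with quadratic drag, integrated for stochastically drawn launch conditions and post-processed into the probability of exceeding a kinetic-energy threshold at impact. Every parameter value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
g, rho_air = 9.81, 1.2                                # m/s^2, kg/m^3

def impact_energy(v0, angle_deg, diameter, density=2500.0, cd=1.0, dt=0.02):
    """Integrate a flat-topography trajectory; return kinetic energy at landing (J)."""
    r = diameter / 2.0
    m = density * 4.0 / 3.0 * np.pi * r ** 3
    area = np.pi * r ** 2
    vx = v0 * np.cos(np.radians(angle_deg))
    vz = v0 * np.sin(np.radians(angle_deg))
    z = 0.0
    while z >= 0.0:
        v = np.hypot(vx, vz)
        drag_over_v = 0.5 * rho_air * cd * area * v   # |F_drag| / |v|
        vx -= (drag_over_v * vx / m) * dt
        vz -= (g + drag_over_v * vz / m) * dt
        z += vz * dt
    return 0.5 * m * (vx ** 2 + vz ** 2)

energies = np.array([impact_energy(rng.uniform(80, 200), rng.uniform(30, 80),
                                   rng.uniform(0.05, 0.4)) for _ in range(500)])
print("P(E >= 1e4 J at landing) =", np.mean(energies >= 1e4))
```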

  17. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
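
    A compact sketch of the central computational trick, not the authors' full framework: within a Metropolis random walk over a non-linear parameter, the linearly related parameters are obtained analytically by least squares at every step (a plug-in simplification; the paper treats the linear part analytically in a fully consistent way). The toy forward model and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0.0, 10.0, 60)
true_decay, true_lin = 2.0, np.array([1.5, 0.3])        # non-linear and linear parameters

def design(decay):                                       # columns combined linearly
    return np.column_stack([np.exp(-x / decay), x])

data = design(true_decay) @ true_lin + 0.05 * rng.normal(size=x.size)
sigma = 0.05

def log_post(decay):
    """Log posterior of the non-linear parameter with the linear ones solved analytically."""
    if decay <= 0.0:
        return -np.inf, None
    G = design(decay)
    m_lin, *_ = np.linalg.lstsq(G, data, rcond=None)
    resid = data - G @ m_lin
    return -0.5 * np.sum(resid ** 2) / sigma ** 2, m_lin

decay = 1.0
lp, m_lin = log_post(decay)
samples = []
for _ in range(5000):                                    # Metropolis over the non-linear parameter
    prop = decay + 0.1 * rng.normal()
    lp_new, m_new = log_post(prop)
    if np.log(rng.random()) < lp_new - lp:
        decay, lp, m_lin = prop, lp_new, m_new
    samples.append((decay, *m_lin))
samples = np.array(samples)
print("posterior means:", samples.mean(axis=0), "truth:", true_decay, true_lin)
```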

  18. Improved source inversion from joint measurements of translational and rotational ground motions

    NASA Astrophysics Data System (ADS)

    Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.

    2017-12-01

    Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hamper the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce non-uniqueness for point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according to the lowest misfit or some other constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly; depth-dependent components in particular show significant improvement. In addition to synthetic data from full station networks, we also tested sparse-network and single-station cases.

  19. Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging.

    PubMed

    Anbeek, Petronella; Vincken, Koen L; Groenendaal, Floris; Koeman, Annemieke; van Osch, Matthias J P; van der Grond, Jeroen

    2008-02-01

    A fully automated method has been developed for segmentation of four different structures in the neonatal brain: white matter (WM), central gray matter (CEGM), cortical gray matter (COGM), and cerebrospinal fluid (CSF). The segmentation algorithm is based on information from T2-weighted (T2-w) and inversion recovery (IR) scans. The method uses a K nearest neighbor (KNN) classification technique with features derived from spatial information and voxel intensities. Probabilistic segmentations of each tissue type were generated. By applying thresholds on these probability maps, binary segmentations were obtained. These final segmentations were evaluated by comparison with a gold standard. The sensitivity, specificity, and Dice similarity index (SI) were calculated for quantitative validation of the results. High sensitivity and specificity with respect to the gold standard were reached: sensitivity >0.82 and specificity >0.9 for all tissue types. Tissue volumes were calculated from the binary and probabilistic segmentations. The probabilistic segmentation volumes of all tissue types accurately estimated the gold standard volumes. The KNN approach offers valuable ways for neonatal brain segmentation. The probabilistic outcomes provide a useful tool for accurate volume measurements. The described method is based on routine diagnostic magnetic resonance imaging (MRI) and is suitable for large population studies.
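
    A minimal sketch of the classification step, assuming two intensity features (T2-w and IR) plus spatial coordinates per voxel and scikit-learn's KNN, whose class probabilities come from neighbour votes; data and labels are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
n = 5000
intensities = rng.normal(size=(n, 2))                 # T2-w and IR voxel intensities
coords = rng.random((n, 3))                           # normalized spatial coordinates
X = np.hstack([intensities, coords])
y = rng.integers(0, 4, size=n)                        # manual labels: 0=WM, 1=CEGM, 2=COGM, 3=CSF

knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)

X_new = np.hstack([rng.normal(size=(3, 2)), rng.random((3, 3))])
prob = knn.predict_proba(X_new)                       # probabilistic segmentation maps
csf_mask = prob[:, 3] > 0.5                           # e.g. threshold the CSF probability
print(prob, csf_mask)
```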

  20. Distributed micro-releases of bioterror pathogens : threat characterizations and epidemiology from uncertain patient observables.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Michael M.; Marzouk, Youssef M.; Adams, Brian M.

    2008-10-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern since the anthrax attacks of 2001. The ability to characterize the parameters of such attacks, i.e., to estimate the number of people infected, the time of infection, the average dose received, and the rate of disease spread in contemporary American society (for contagious diseases), is important when planning a medical response. For non-contagious diseases, we address the characterization problem by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To keep the approach relevant for response planning, we limit ourselves to 3.5 days of data. In computational tests performed for anthrax, we usually find these observation windows sufficient, especially if the outbreak model employed in the inverse problem is accurate. For contagious diseases, we formulated a Bayesian inversion technique to infer both pathogenic transmissibility and the social network from outbreak observations, ensuring that the two determinants of spreading are identified separately. We tested this technique on data collected from a 1967 smallpox epidemic in Abakaliki, Nigeria. We inferred, probabilistically, different transmissibilities in the structured Abakaliki population, the social network, and the chain of transmission. Finally, we developed an individual-based epidemic model to realistically simulate the spread of a rare (or eradicated) disease in a modern society. This model incorporates the mixing patterns observed in an (American) urban setting and accepts, as model input, pathogenic transmissibilities estimated from historical outbreaks that may have occurred in socio-economic environments with little resemblance to contemporary society. Techniques were also developed to simulate disease spread on static and sampled network reductions of the dynamic social networks originally in the individual-based model, yielding faster, though approximate, network-based epidemic models. These reduced-order models are useful in scenario analysis for medical response planning, as well as in computationally intensive inverse problems.

  1. The need and approach for characterization - U.S. air force perspectives on materials state awareness

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Lindgren, Eric A.

    2018-04-01

    This paper expands on the objective and motivation for NDE-based characterization and includes a discussion of the current approach using model-assisted inversion being pursued within the Air Force Research Laboratory (AFRL). This includes a discussion of the multiple model-based methods that can be used, including physics-based models, deep machine learning, and heuristic approaches. The benefits and drawbacks of each method are reviewed and the potential to integrate multiple methods is discussed. Initial successes are included to highlight the ability to obtain quantitative values of damage. Additional steps remaining to realize this capability with statistical metrics of accuracy are discussed, and how these results can be used to enable probabilistic life management is addressed. The outcome of this initiative will realize the long-term desired capability of NDE methods to provide quantitative characterization to accelerate certification of new materials and enhance life management of engineered systems.

  2. Probabilistic source mechanism estimation based on body-wave waveforms through shift and stack algorithm

    NASA Astrophysics Data System (ADS)

    Massin, F.; Malcolm, A. E.

    2017-12-01

    Knowing earthquake source mechanisms gives valuable information for earthquake response planning and hazard mitigation. Earthquake source mechanisms can be analyzed using long period waveform inversion (for moderate size sources with sufficient signal to noise ratio) and body-wave first motion polarity or amplitude ratio inversion (for micro-earthquakes with sufficient data coverage). A robust approach that gives both source mechanisms and their associated probabilities across all source scales would greatly simplify the determination of source mechanisms and allow for more consistent interpretations of the results. Following previous work on shift and stack approaches, we develop such a probabilistic source mechanism analysis, using waveforms, which does not require polarity picking. For a given source mechanism, the first period of the observed body-waves is selected for all stations, multiplied by their corresponding theoretical polarity and stacked together. (The first period is found from a manually picked travel time by measuring the central period where the signal power is concentrated, using the second moment of the power spectral density function.) As in other shift and stack approaches, our method is not based on the optimization of an objective function through an inversion. Instead, the power of the polarity-corrected stack is a proxy for the likelihood of the trial source mechanism, with the most powerful stack corresponding to the most likely source mechanism. Using synthetic data, we test our method for robustness to the data coverage, coverage gap, signal to noise ratio, travel-time picking errors and non-double couple component. We then present results for field data in a volcano-tectonic context. Our results are reliable when constrained by 15 body-wavelets, with gap below 150 degrees, signal to noise ratio over 1 and arrival time error below a fifth of the period (0.2T) of the body-wave. We demonstrate that the source scanning approach for source mechanism analysis has similar advantages to waveform inversion (full waveform data, no manual intervention, probabilistic approach) and similar applicability to polarity inversion (any source size, any instrument type).
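
    A toy version of the scoring idea: for each trial mechanism, the first period of every station's waveform is multiplied by that mechanism's predicted polarity and stacked, and the stack power ranks the trial mechanisms. The "radiation pattern" below is an invented azimuth-only stub, not a real moment-tensor calculation, and only a strike angle is scanned.

```python
import numpy as np

rng = np.random.default_rng(10)
n_sta, n_samp = 15, 40

def predicted_polarity(strike, azimuths):
    """Stand-in for a P-wave radiation pattern: sign of a simple azimuthal term."""
    return np.sign(np.cos(2.0 * np.radians(azimuths - strike)))

azimuths = rng.uniform(0.0, 360.0, n_sta)
true_strike = 40.0
wavelet = np.sin(np.linspace(0.0, 2.0 * np.pi, n_samp))          # first period of the body wave
obs = predicted_polarity(true_strike, azimuths)[:, None] * wavelet
obs += 0.3 * rng.normal(size=obs.shape)

def stack_power(strike):
    stack = np.sum(predicted_polarity(strike, azimuths)[:, None] * obs, axis=0)
    return np.sum(stack ** 2)

trial_strikes = np.arange(0.0, 180.0, 2.0)
powers = np.array([stack_power(s) for s in trial_strikes])
probs = powers / powers.sum()             # stack power as a proxy for relative likelihood
best = np.argmax(powers)
print("best strike:", trial_strikes[best], "deg, relative weight", round(probs[best], 3))
```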

  3. An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LI, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Bases selection is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users’ experience. Also, for sequential data assimilation problems, the bases kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm is tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.

  4. Paleoproterozoic Collisional Structures in the Hudson Bay Lithosphere Constrained by Multi-Observable Probabilistic Inversion

    NASA Astrophysics Data System (ADS)

    Darbyshire, F. A.; Afonso, J. C.; Porritt, R. W.

    2015-12-01

    The Paleozoic Hudson Bay intracratonic basin conceals a Paleoproterozoic Himalayan-scale continental collision, the Trans-Hudson Orogen (THO), which marks an important milestone in the assembly of the Canadian Shield. The geometry of the THO is complex due to the double-indentor geometry of the collision between the Archean Superior and Western Churchill cratons. Seismic observations at regional scale show a thick, seismically fast lithospheric keel beneath the entire region; an intriguing feature of recent models is a 'curtain' of slightly lower wavespeeds trending NE-SW beneath the Bay, which may represent the remnants of more juvenile material trapped between the two Archean continental cores. The seismic models alone, however, cannot constrain the nature of this anomaly. We investigate the thermal and compositional structure of the Hudson Bay lithosphere using a multi-observable probabilistic inversion technique. This joint inversion uses Rayleigh wave phase velocity data from teleseismic earthquakes and ambient noise, geoid anomalies, surface elevation and heat flow to construct a pseudo-3D model of the crust and upper mantle. Initially a wide range of possible mantle compositions is permitted, and tests are carried out to ascertain whether the lithosphere is stratified with depth. Across the entire Hudson Bay region, low temperatures and a high degree of chemical depletion characterise the mantle lithosphere. Temperature anomalies within the lithosphere are modest, as may be expected from a tectonically-stable region. The base of the thermal lithosphere lies at depths of >250 km, reaching to ~300 km depth in the centre of the Bay. Lithospheric stratification, with a more-depleted upper layer, is best able to explain the geophysical data sets and surface observables. Some regions, where intermediate-period phase velocities are high, require stronger mid-lithospheric depletion. In addition, a narrow region of less-depleted material extends NE-SW across the Bay, likely associated with the trace of the THO collision and the entrapment of juvenile material between the highly-depleted Archean cores.

  5. Derivation of Hunt equation for suspension distribution using Shannon entropy theory

    NASA Astrophysics Data System (ADS)

    Kundu, Snehasis

    2017-12-01

    In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of the suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also helps to derive the Rouse equation. The entropy-based approach allows model parameters to be estimated from measured sediment concentration data, which shows the advantage of using entropy theory. Finally, the model parameters of the entropy-based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.
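
    The generic maximum-entropy step the derivation relies on can be sketched as follows (the notation is generic and not necessarily the paper's): maximizing the Shannon entropy of the density f of the chosen random variable under normalization and a mean constraint yields an exponential-family density, whose CDF is then tied to the normalized depth to obtain the concentration profile.

```latex
\max_{f}\; H[f] = -\int f(g)\,\ln f(g)\,\mathrm{d}g
\quad \text{s.t.} \quad \int f(g)\,\mathrm{d}g = 1,\qquad \int g\,f(g)\,\mathrm{d}g = \bar g
\;\;\Longrightarrow\;\; f(g) = \exp\!\left(-\lambda_0 - \lambda_1 g\right),
\qquad F(g) = \int_{g_{\min}}^{g} f(g')\,\mathrm{d}g'.
```

    The Lagrange multipliers are fixed by the constraints, and equating F(g) to an assumed function of the normalized depth yields the Hunt-type vertical concentration profile.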

  6. Action understanding as inverse planning.

    PubMed

    Baker, Chris L; Saxe, Rebecca; Tenenbaum, Joshua B

    2009-12-01

    Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an "intentional stance" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a "teleological stance" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.
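    A hedged, minimal Python sketch of the Bayesian inversion of a planner described above: the goal posterior combines per-goal action log-likelihoods (here arbitrary numbers standing in for the output of a rational-planning model) with a prior over goals. All names and values are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def goal_posterior(action_loglik, goal_prior):
            """P(goal | actions) from log P(actions | goal) and P(goal)."""
            log_post = action_loglik + np.log(goal_prior)
            log_post -= log_post.max()            # subtract max for numerical stability
            post = np.exp(log_post)
            return post / post.sum()

        # Toy example: three candidate goals; the observed path slightly favours goal 1.
        loglik = np.array([-4.2, -2.1, -3.8])     # from a (soft-)rational planner
        prior = np.array([1.0, 1.0, 1.0]) / 3.0   # uniform goal prior
        print(goal_posterior(loglik, prior))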

  7. Probabilistic delay differential equation modeling of event-related potentials.

    PubMed

    Ostwald, Dirk; Starke, Ludger

    2016-08-01

    "Dynamic causal models" (DCMs) are a promising approach in the analysis of functional neuroimaging data due to their biophysical interpretability and their consolidation of functional-segregative and functional-integrative propositions. In this theoretical note we are concerned with the DCM framework for electroencephalographically recorded event-related potentials (ERP-DCM). Intuitively, ERP-DCM combines deterministic dynamical neural mass models with dipole-based EEG forward models to describe the event-related scalp potential time-series over the entire electrode space. Since its inception, ERP-DCM has been successfully employed to capture the neural underpinnings of a wide range of neurocognitive phenomena. However, in spite of its empirical popularity, the technical literature on ERP-DCM remains somewhat patchy. A number of previous communications have detailed certain aspects of the approach, but no unified and coherent documentation exists. With this technical note, we aim to close this gap and to increase the technical accessibility of ERP-DCM. Specifically, this note makes the following novel contributions: firstly, we provide a unified and coherent review of the mathematical machinery of the latent and forward models constituting ERP-DCM by formulating the approach as a probabilistic latent delay differential equation model. Secondly, we emphasize the probabilistic nature of the model and its variational Bayesian inversion scheme by explicitly deriving the variational free energy function in terms of both the likelihood expectation and variance parameters. Thirdly, we detail and validate the estimation of the model with a special focus on the explicit form of the variational free energy function and introduce a conventional nonlinear optimization scheme for its maximization. Finally, we identify and discuss a number of computational issues which may be addressed in the future development of the approach. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks that output the parameters of a Gaussian mixture model. By combining multiple networks as 'committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location, depth and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
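    A hedged, minimal Python sketch of how an MDN-style Gaussian mixture output can be evaluated and averaged over a committee; the committee members and their parameters below are invented placeholders, not trained networks.

        import numpy as np

        def mixture_pdf(x, weights, means, sigmas):
            """Evaluate a 1-D Gaussian mixture p(x) from MDN-style outputs."""
            norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigmas)
            comps = norm * np.exp(-0.5 * ((x - means) / sigmas) ** 2)
            return np.sum(weights * comps)        # weights sum to one

        def committee_pdf(x, members):
            """Average the predictive densities of several MDNs (a 'committee')."""
            return np.mean([mixture_pdf(x, *m) for m in members])

        # Two hypothetical committee members predicting, e.g., centroid depth (km).
        m1 = (np.array([0.7, 0.3]), np.array([8.0, 12.0]), np.array([1.5, 2.5]))
        m2 = (np.array([0.5, 0.5]), np.array([9.0, 11.0]), np.array([2.0, 2.0]))
        print(committee_pdf(10.0, [m1, m2]))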

  9. A Bootstrap-Based Probabilistic Optimization Method to Explore and Efficiently Converge in Solution Spaces of Earthquake Source Parameter Estimation Problems: Application to Volcanic and Tectonic Earthquakes

    NASA Astrophysics Data System (ADS)

    Dahm, T.; Heimann, S.; Isken, M.; Vasyura-Bathke, H.; Kühn, D.; Sudhaus, H.; Kriegerowski, M.; Daout, S.; Steinberg, A.; Cesca, S.

    2017-12-01

    Seismic source and moment tensor waveform inversion is often ill-posed or non-unique if station coverage is poor or signals are weak. Therefore, the interpretation of moment tensors can become difficult if the full model space, including all its trade-offs and uncertainties, is not explored. This is especially true for non-double-couple components of weak or shallow earthquakes, as for instance found in volcanic, geothermal or mining environments. We developed a bootstrap-based probabilistic optimization scheme (Grond), which is based on pre-calculated Green's function full waveform databases (e.g. the fomosto tool, doi.org/10.5880/GFZ.2.1.2017.001). Grond is able to efficiently explore the full model space, the trade-offs and the uncertainties of source parameters. The program is highly flexible with respect to the adaptation to specific problems, the design of objective functions, and the diversity of empirical datasets. It uses an integrated, robust waveform data processing based on a newly developed Python toolbox for seismology (Pyrocko, see Heimann et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.001), and allows for visual inspection of many aspects of the optimization problem. Grond has been applied to CMT moment tensor inversion using W-phases, to nuclear explosions in Korea, to meteorite atmospheric explosions, to volcano-tectonic events during caldera collapse and to intra-plate volcanic and tectonic crustal events. Grond can be used to simultaneously optimize seismological waveforms, amplitude spectra and static displacements from geodetic data such as InSAR and GPS (e.g. KITE, Isken et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.002). We present examples of Grond optimizations to demonstrate the advantage of a full exploration of source parameter uncertainties for interpretation.

  10. Applications of Response Surface-Based Methods to Noise Analysis in the Conceptual Design of Revolutionary Aircraft

    NASA Technical Reports Server (NTRS)

    Hill, Geoffrey A.; Olson, Erik D.

    2004-01-01

    Due to the growing problem of noise in today's air transportation system, there is a need to incorporate noise considerations in the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. This methodology is demonstrated using a noise model of a notional 300-passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels, and the resulting trade space is explored. Methods demonstrated include: single point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.

  11. Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; Simutė, S.

    2017-12-01

    We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a database of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain database can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior.) We apply our method to the Japanese Islands region, where we previously constrained the 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
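    A hedged, minimal Python sketch of one Hamiltonian Monte Carlo update with a leapfrog integrator and unit mass matrix; the step size, trajectory length and the log-posterior interface are illustrative assumptions, not the variant developed in the record.

        import numpy as np

        def hmc_step(q, log_post, grad_log_post, eps=0.1, n_leap=20, rng=np.random):
            """One HMC update for a model vector q (numpy array)."""
            p = rng.standard_normal(q.shape)                    # draw auxiliary momentum
            q_new, p_new = q.copy(), p.copy()
            p_new = p_new + 0.5 * eps * grad_log_post(q_new)    # initial momentum half step
            for _ in range(n_leap):
                q_new = q_new + eps * p_new                     # position full step
                p_new = p_new + eps * grad_log_post(q_new)      # momentum full step
            p_new = p_new - 0.5 * eps * grad_log_post(q_new)    # correct the final half step
            # Metropolis acceptance on the change in total energy
            h_old = -log_post(q) + 0.5 * float(p @ p)
            h_new = -log_post(q_new) + 0.5 * float(p_new @ p_new)
            return q_new if np.log(rng.random()) < h_old - h_new else q

        # Toy target: standard normal in 2-D.
        logp = lambda q: -0.5 * np.sum(q ** 2)
        grad = lambda q: -q
        q = np.zeros(2)
        for _ in range(1000):
            q = hmc_step(q, logp, grad)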

  12. Process for computing geometric perturbations for probabilistic analysis

    DOEpatents

    Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  13. Probabilistic joint inversion of waveforms and polarity data for double-couple focal mechanisms of local earthquakes

    NASA Astrophysics Data System (ADS)

    Wéber, Zoltán

    2018-06-01

    Estimating the mechanisms of small (M < 4) earthquakes is quite challenging. A common scenario is that neither the available polarity data alone nor the well predictable near-station seismograms alone are sufficient to obtain reliable focal mechanism solutions for weak events. To handle this situation we introduce here a new method that jointly inverts waveforms and polarity data following a probabilistic approach. The procedure called joint waveform and polarity (JOWAPO) inversion maps the posterior probability density of the model parameters and estimates the maximum likelihood double-couple mechanism, the optimal source depth and the scalar seismic moment of the investigated event. The uncertainties of the solution are described by confidence regions. We have validated the method on two earthquakes for which well-determined focal mechanisms are available. The validation tests show that including waveforms in the inversion considerably reduces the uncertainties of the usually poorly constrained polarity solutions. The JOWAPO method performs best when it applies waveforms from at least two seismic stations. If the number of the polarity data is large enough, even single-station JOWAPO inversion can produce usable solutions. When only a few polarities are available, however, single-station inversion may result in biased mechanisms. In this case some caution must be taken when interpreting the results. We have successfully applied the JOWAPO method to an earthquake in North Hungary, whose mechanism could not be estimated by long-period waveform inversion. Using 17 P-wave polarities and waveforms at two nearby stations, the JOWAPO method produced a well-constrained focal mechanism. The solution is very similar to those obtained previously for four other events that occurred in the same earthquake sequence. The analysed event has a strike-slip mechanism with a P axis oriented approximately along an NE-SW direction.

  14. Bayesian probabilistic approach for inverse source determination from limited and noisy chemical or biological sensor concentration measurements

    NASA Astrophysics Data System (ADS)

    Yee, Eugene

    2007-04-01

    Although a great deal of research effort has been focused on the forward prediction of the dispersion of contaminants (e.g., chemical and biological warfare agents) released into the turbulent atmosphere, much less work has been directed toward the inverse prediction of agent source location and strength from the measured concentration, even though the importance of this problem for a number of practical applications is obvious. In general, the inverse problem of source reconstruction is ill-posed and unsolvable without additional information. It is demonstrated that a Bayesian probabilistic inferential framework provides a natural and logically consistent method for source reconstruction from a limited number of noisy concentration data. In particular, the Bayesian approach permits one to incorporate prior knowledge about the source as well as additional information regarding both model and data errors. The latter enables a rigorous determination of the uncertainty in the inference of the source parameters (e.g., spatial location, emission rate, release time, etc.), hence extending the potential of the methodology as a tool for quantitative source reconstruction. A model (or, source-receptor relationship) that relates the source distribution to the concentration data measured by a number of sensors is formulated, and Bayesian probability theory is used to derive the posterior probability density function of the source parameters. A computationally efficient methodology for determination of the likelihood function for the problem, based on an adjoint representation of the source-receptor relationship, is described. Furthermore, we describe the application of efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) for sampling from the posterior distribution of the source parameters, the latter of which is required to undertake the Bayesian computation. The Bayesian inferential methodology for source reconstruction is validated against real dispersion data for two cases involving contaminant dispersion in highly disturbed flows over urban and complex environments where the idealizations of horizontal homogeneity and/or temporal stationarity in the flow cannot be applied to simplify the problem. Furthermore, the methodology is applied to the case of reconstruction of multiple sources.
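    A hedged, minimal random-walk Metropolis sketch of the kind of posterior sampling over source parameters described above, with a Gaussian measurement-error likelihood; predict_conc stands in for a dispersion (source-receptor) forward model and is a placeholder assumption, not the adjoint formulation of the record.

        import numpy as np

        def log_posterior(theta, obs, predict_conc, sigma, log_prior):
            """Gaussian measurement-error likelihood plus a prior over source parameters."""
            resid = obs - predict_conc(theta)
            return -0.5 * np.sum((resid / sigma) ** 2) + log_prior(theta)

        def metropolis(theta0, n_iter, step, obs, predict_conc, sigma, log_prior, seed=0):
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            lp = log_posterior(theta, obs, predict_conc, sigma, log_prior)
            samples = []
            for _ in range(n_iter):
                prop = theta + step * rng.standard_normal(theta.shape)   # random-walk proposal
                lp_prop = log_posterior(prop, obs, predict_conc, sigma, log_prior)
                if np.log(rng.random()) < lp_prop - lp:                  # accept or reject
                    theta, lp = prop, lp_prop
                samples.append(theta.copy())
            return np.array(samples)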

  15. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

    Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient- and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, but they typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high-dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS provides history match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.

  16. A Probabilistic Approach to Quantify the Impact of Uncertainty Propagation in Musculoskeletal Simulations

    PubMed Central

    Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.

    2015-01-01

    Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5–95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters was linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
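    A hedged, minimal Python sketch of Monte Carlo uncertainty propagation and 5-95% bound computation; the toy model and input distributions are illustrative assumptions and do not represent the OpenSim pipeline used in the record.

        import numpy as np

        rng = np.random.default_rng(42)
        n_draws = 5000

        # Illustrative input uncertainties (e.g. marker offset, segment mass error).
        marker_offset = rng.normal(0.0, 0.005, n_draws)   # metres
        mass_error = rng.normal(0.0, 0.5, n_draws)        # kg

        def toy_joint_moment(offset, dmass):
            """Placeholder for one stage of the simulation pipeline."""
            return 40.0 + 300.0 * offset + 1.2 * dmass    # N m

        moments = toy_joint_moment(marker_offset, mass_error)
        lo, hi = np.percentile(moments, [5, 95])
        print(f"5-95% bounds on the joint moment: {lo:.1f} to {hi:.1f} N m")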

  17. Prediction of Strong Earthquake Ground Motion for the M=7.4 and M=7.2 1999, Turkey Earthquakes based upon Geological Structure Modeling and Local Earthquake Recordings

    NASA Astrophysics Data System (ADS)

    Gok, R.; Hutchings, L.

    2004-05-01

    We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed the simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 along with 2,500 events. We also obtained source moment and corner frequency and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve out a Brune model from small- to moderate-sized earthquake (M < 4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.

  18. Characterizing Topology of Probabilistic Biological Networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-09-06

    Biological interactions are often uncertain events that may or may not take place with some probability. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. Here, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. We develop a method that accurately describes the degree distribution of such networks. We also extend our method to accurately compute the joint degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. It also helps us find an adequate mathematical model using maximum likelihood estimation. Our results demonstrate that power law and log-normal models best describe degree distributions for probabilistic networks. The inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected.
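    Computing a node's degree distribution under independent uncertain edges reduces to a Poisson-binomial recursion, which is one way the polynomial-time claim above can be realized; a hedged, minimal Python sketch (an assumed formulation, not the authors' code).

        import numpy as np

        def degree_distribution(edge_probs):
            """P(node has degree k), k = 0..n, for n independent uncertain edges.

            edge_probs: existence probabilities of the node's incident edges.
            Dynamic programming over edges gives O(n^2) time instead of 2^n.
            """
            dist = np.array([1.0])                      # degree 0 before adding any edge
            for p in edge_probs:
                new = np.zeros(len(dist) + 1)
                new[:-1] += dist * (1.0 - p)            # edge absent: degree unchanged
                new[1:] += dist * p                     # edge present: degree + 1
                dist = new
            return dist

        print(degree_distribution([0.9, 0.5, 0.2]))     # probabilities of degree 0..3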

  19. High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik

    2014-11-01

    This study presents a novel Bayesian inversion scheme for high-dimensional, underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfactory. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm3 cm-3. This RMSE value reduces to less than 0.02 cm3 cm-3 for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as our simplified modeling of the bulk soil electrical conductivity profile. Among the important issues that should be addressed in future work are the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature dependence of the coaxial cable properties and the definition of an appropriate statistical model of the residual errors.

  20. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
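    As a hedged sketch of the functional ANOVA decomposition referred to above (generic form only; the adaptive selection criterion of the record is omitted):

        \[
        f(x_1,\dots,x_d) \;\approx\; f_0 \;+\; \sum_{i} f_i(x_i) \;+\; \sum_{i<j} f_{ij}(x_i, x_j) \;+\; \cdots ,
        \]

    where only the low-order component functions judged important by the adaptive criterion are retained and expanded in polynomial chaos, so that the number of basis functions carried through each Kalman filter loop remains manageable as the random dimension d grows.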

  1. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

    Oil pipeline networks are among the most important facilities for energy transportation, but oil pipeline network accidents may result in serious disasters. Analysis models for these accidents have been established mainly with three methods: event trees, accident simulation and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis, but not all the important influencing factors have been considered, and a deployment rule for the factors has not been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including the key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.

  2. Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.

    PubMed

    Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti

    2006-02-01

    Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. (2003). The prior model for dental structures consists of a weighted l1 and total variation (TV) prior together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as reference for the proposed method.
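    One plausible form of the MAP estimation problem described above, written here as an illustration (the weighting and regularization parameters are assumptions, not values from the record):

        \[
        \hat{x}_{\mathrm{MAP}} \;=\; \arg\min_{x \ge 0}\;
        \frac{1}{2\sigma^{2}}\,\lVert A x - m \rVert_2^{2}
        \;+\; \alpha\,\lVert W x \rVert_1
        \;+\; \beta\,\mathrm{TV}(x),
        \]

    where A is the sparse-projection forward operator, m the measured projection data, W a diagonal weighting matrix for the l1 term, TV the total-variation functional, and the constraint x >= 0 encodes the positivity prior on X-ray attenuation.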

  3. Airborne EM, Lithology and in-situ Data Used for Quantizing Groundwater Salinity in Zeeland (NL)

    NASA Astrophysics Data System (ADS)

    Meyer, U.; Siemon, B.; van Baaren, E.; Dabekaussen, W.; Delsman, J. R.; Karaoulis, M.; Gunnink, J.; Pauw, P.; Vermaas, T.

    2017-12-01

    In a setting of predominantly saline surface waters in Zeeland, the Netherlands, the only available shallow fresh groundwater is present in the form of freshwater lenses floating on top of the saline groundwater. This fresh water is vital for agricultural, industrial, ecological, water conservation and drinking water functions. An essential first step for managing the usable water properly is to know the present spatial fresh-brackish-saline groundwater distribution. As traditional salinity monitoring is labor-intensive, airborne electromagnetics, which is fast and can cover large areas in a short time, is an efficient alternative. A consortium of BGR, Deltares and TNO conducted FRESHEM Zeeland (FREsh Salt groundwater distribution by Helicopter ElectroMagnetic survey in the Province of Zeeland) in 2014-17. An area of more than 2000 square km was surveyed using BGR's helicopter-borne geophysical system, totaling about 9,600 line-km. The HEM data, after inversion to 2.5 million resistivity-depth models for each of the three 1D inversion procedures applied (Marquardt single site, smooth and sharp laterally constrained inversion), served as baseline information for further interpretation. A probabilistic Monte Carlo approach combines HEM resistivities, 3D lithology model data (GeoTOP), laboratory results (formation factor and surface conductivity) and local in-situ groundwater measurements for the translation of resistivity to chloride concentration. The resulting 3D voxel model enables stakeholders to implement spatial chloride concentrations in their groundwater models.

  4. Probabilistic load simulation: Code development status

    NASA Astrophysics Data System (ADS)

    Newell, J. F.; Ho, H.

    1991-05-01

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge-based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.

  5. Estimation for the Linear Model With Uncertain Covariance Matrices

    NASA Astrophysics Data System (ADS)

    Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat

    2014-03-01

    We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.

  6. 3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.

    2017-04-01

    Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. Then, the problem is iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for the 3D gravity inversion and model appraisal of the solution that is adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. Therefore, PSO seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling-while-optimizing approach. In this way, important geological questions can be answered probabilistically in order to support risk assessment in the decisions that are made.
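    A hedged, minimal Python sketch of a particle swarm optimizer of the kind used for such inversions; the misfit function, bounds and PSO constants are illustrative assumptions, and no gravity forward model is included.

        import numpy as np

        def pso(misfit, lower, upper, n_particles=30, n_iter=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            """Global PSO minimizer over box bounds [lower, upper]."""
            rng = np.random.default_rng(seed)
            dim = len(lower)
            x = rng.uniform(lower, upper, (n_particles, dim))   # e.g. prism depths
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([misfit(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lower, upper)
                vals = np.array([misfit(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, pbest          # best model plus swarm samples for appraisal

        # Toy misfit: recover depths (1.0, 2.0) km of two hypothetical prisms.
        best, swarm = pso(lambda d: np.sum((d - np.array([1.0, 2.0])) ** 2),
                          lower=np.array([0.0, 0.0]), upper=np.array([5.0, 5.0]))
        print(best)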

  7. Model-Free Stochastic Localization of CBRN Releases

    DTIC Science & Technology

    2013-01-01

    We present a novel two-stage methodology for locating a Chemical, Biological, Radiological, or Nuclear (CBRN) source in an urban area using a network of sensors. In contrast to earlier work, our approach does not solve an inverse dispersion problem but relies on data obtained from a simulation of the CBRN dispersion to obtain probabilistic descriptors of sensor measurements under a variety of CBRN

  8. Grid occupancy estimation for environment perception based on belief functions and PCR6

    NASA Astrophysics Data System (ADS)

    Moras, Julien; Dezert, Jean; Pannetier, Benjamin

    2015-05-01

    In this contribution, we propose to improve the grid map occupancy estimation method developed so far based on belief function modeling and the classical Dempster's rule of combination. A grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role in the security (obstacle avoidance) of the next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the surrounding area of the robot, must first be estimated from sensor measurements (typically LIDAR or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment where the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework thanks to an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of the available information is low and the sources of information appear to be conflicting. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination by the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.

  9. Bayesian seismic tomography by parallel interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas

    2014-05-01

    The velocity field estimated by first-arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. In order to interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem aims at estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is highly related to the computational cost of the forward model. Although fast algorithms have been recently developed for computing first-arrival traveltimes of seismic waves, a complete exploration of the posterior distribution of the velocity model is hardly feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of classical single-chain MCMC, we propose to let several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters without increasing the global computational cost with respect to classical MCMC and is therefore particularly suited for Bayesian inversion. The exchanges between the chains allow a precise sampling of the high-probability zones of the model space while preventing the chains from getting stuck in a probability maximum. This approach thus supplies a robust way to analyze the tomography imaging uncertainties. The interacting MCMC approach is illustrated on two synthetic examples of tomography of calibration shots such as those encountered in induced microseismic studies. In the second application, a wavelet-based model parameterization is presented that significantly reduces the dimension of the problem, thus making the algorithm efficient even for a complex velocity model.
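    A hedged, minimal Python sketch of the temperature-swap step that lets chains at different temperatures interact (replica exchange); the within-chain samplers and log posteriors are assumed to be supplied elsewhere and are not part of this sketch.

        import numpy as np

        def swap_step(states, log_posts, temps, rng):
            """Attempt exchanges between adjacent tempered chains.

            states[i], log_posts[i]: current model and its log posterior for chain i;
            temps[i]: temperature of chain i (temps[0] = 1 is the target chain).
            """
            for i in range(len(temps) - 1):
                b_i, b_j = 1.0 / temps[i], 1.0 / temps[i + 1]
                # Metropolis criterion for exchanging the two chains' states
                log_alpha = (b_i - b_j) * (log_posts[i + 1] - log_posts[i])
                if np.log(rng.random()) < log_alpha:
                    states[i], states[i + 1] = states[i + 1], states[i]
                    log_posts[i], log_posts[i + 1] = log_posts[i + 1], log_posts[i]
            return states, log_posts

    Interleaving such swap steps with ordinary within-chain MCMC updates lets hot chains explore broadly while the cold (target) chain inherits their good states.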

  10. Error Discounting in Probabilistic Category Learning

    PubMed Central

    Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.

    2011-01-01

    Some current theories of probabilistic categorization assume that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report two probabilistic-categorization experiments that investigated error discounting by shifting feedback probabilities to new values after different amounts of training. In both experiments, responding gradually became less responsive to errors, and learning was slowed for some time after the feedback shift. Both results are indicative of error discounting. Quantitative modeling of the data revealed that adding a mechanism for error discounting significantly improved the fits of an exemplar-based and a rule-based associative learning model, as well as of a recency-based model of categorization. We conclude that error discounting is an important component of probabilistic learning. PMID:21355666

  11. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.

  12. Learning Probabilistic Logic Models from Probabilistic Examples

    PubMed Central

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2009-01-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples. PMID:19888348

  13. Learning Probabilistic Logic Models from Probabilistic Examples.

    PubMed

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2008-10-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples.

  14. Bayesian inversion of a CRN depth profile to infer Quaternary erosion of the northwestern Campine Plateau (NE Belgium)

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Beerten, Koen; Vanacker, Veerle; Christl, Marcus; Rogiers, Bart; Wouters, Laurent

    2017-07-01

    The rate at which low-lying sandy areas in temperate regions, such as the Campine Plateau (NE Belgium), have been eroding during the Quaternary is a matter of debate. Current knowledge on the average pace of landscape evolution in the Campine area is largely based on geological inferences and modern analogies. We performed a Bayesian inversion of an in situ-produced 10Be concentration depth profile to infer the average long-term erosion rate together with two other parameters: the surface exposure age and the inherited 10Be concentration. Compared to the latest advances in probabilistic inversion of cosmogenic radionuclide (CRN) data, our approach has the following two innovative components: it (1) uses Markov chain Monte Carlo (MCMC) sampling and (2) accounts (under certain assumptions) for the contribution of model errors to posterior uncertainty. To investigate to what extent our approach differs from the state of the art in practice, a comparison against the Bayesian inversion method implemented in the CRONUScalc program is made. Both approaches identify similar maximum a posteriori (MAP) parameter values, but the posterior parameter and predictive uncertainties derived using the method taken in CRONUScalc are moderately underestimated. A simple way for producing more consistent uncertainty estimates with the CRONUScalc-like method in the presence of model errors is therefore suggested. Our inferred erosion rate of 39 ± 8.9 mm kyr-1 (1σ) is relatively large in comparison with landforms that erode under comparable (paleo-)climates elsewhere in the world. We evaluate this value in the light of the erodibility of the substrate and sudden base level lowering during the Middle Pleistocene. A denser sampling scheme of a two-nuclide concentration depth profile would allow for better resolution of the inferred erosion rate and for including more uncertain parameters in the MCMC inversion.

  15. Extreme wave formation in unidirectional sea due to stochastic wave phase dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Balachandran, Balakumar

    2018-07-01

    The authors consider a stochastic model based on the interaction and phase coupling amongst wave components that are modified envelope soliton solutions to the nonlinear Schrödinger equation. A probabilistic study is carried out and the resulting findings are compared with ocean wave field observations and laboratory experimental results. The wave height probability distribution obtained from the model is found to match well with prior data in the large wave height region. From the eigenvalue spectrum obtained through the Inverse Scattering Transform, it is revealed that the deep-water wave groups move at a speed different from the linear group speed, which justifies the inclusion of phase correction to the envelope solitary wave components. It is determined that phase synchronization amongst elementary solitary wave components can be critical for the formation of extreme waves in unidirectional sea states.

  16. Solving probability reasoning based on DNA strand displacement and probability modules.

    PubMed

    Zhang, Qiang; Wang, Xiaobiao; Wang, Xiaojun; Zhou, Changjun

    2017-12-01

    In computational biology, DNA strand displacement technology is used to simulate the computation process and has shown strong computing ability. Most researchers use it to solve logic problems, but it is only rarely used in probabilistic reasoning. To process probabilistic reasoning, a conditional probability derivation model and a total probability model based on DNA strand displacement were established in this paper. The models were assessed through the game "read your mind" and have been shown to enable the application of probabilistic reasoning in genetic diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. QUANTIFYING AGGREGATE CHLORPYRIFOS EXPOSURE AND DOSE TO CHILDREN USING A PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL

    EPA Science Inventory

    To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...

  18. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    EPA Science Inventory

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  19. Modeling Drinking Behavior Progression in Youth: a Non-identified Probability Discrete Event System Using Cross-sectional Data

    PubMed Central

    Hu, Xingdi; Chen, Xinguang; Cook, Robert L.; Chen, Ding-Geng; Okafor, Chukwuemeka

    2016-01-01

    Background The probabilistic discrete event systems (PDES) method provides a promising approach to study the dynamics of underage drinking using cross-sectional data. However, the utility of this approach is often limited because the constructed PDES model is often non-identifiable. The purpose of the current study is to attempt a new method to solve the model. Methods A PDES-based model of alcohol use behavior was developed with four progression stages (never-drinkers [ND], light/moderate-drinkers [LMD], heavy-drinkers [HD], and ex-drinkers [XD]) linked with 13 possible transition paths. We tested the proposed model with data for participants aged 12–21 from the 2012 National Survey on Drug Use and Health (NSDUH). The Moore-Penrose (M-P) generalized inverse matrix method was applied to solve the proposed model. Results Annual transitional probabilities by age group for the 13 drinking progression pathways were successfully estimated with the M-P generalized inverse matrix approach. Results from our analysis indicate an inverse "J"-shaped curve characterizing the pattern of experimental alcohol use from adolescence to young adulthood. We also observed a dramatic increase in the initiation of LMD and HD after age 18 and a sharp decline in quitting light and heavy drinking. Conclusion Our findings are consistent with the developmental perspective regarding the dynamics of underage drinking, demonstrating the utility of the M-P method in obtaining a unique solution for the partially observed PDES drinking behavior model. The M-P approach we tested in this study will facilitate the use of the PDES approach to examine many health behaviors with the widely available cross-sectional data. PMID:26511344
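    A hedged, minimal Python sketch of using the Moore-Penrose generalized inverse to obtain a minimum-norm solution of an under-determined linear system; the matrix and data are toy placeholders, not the NSDUH-based PDES model.

        import numpy as np

        # A links unknown transition probabilities x to observed cross-sectional
        # stage proportions b (here a toy, under-determined 2 x 3 system).
        A = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0]])
        b = np.array([0.6, 0.7])

        x = np.linalg.pinv(A) @ b        # minimum-norm least-squares solution
        print(x, A @ x)                  # check that the observations are reproduced

    In practice the estimated probabilities must also satisfy non-negativity and row-sum constraints, which the plain pseudoinverse does not enforce; those constraints would need to be imposed separately.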

  20. Use seismic colored inversion and power law committee machine based on imperial competitive algorithm for improving porosity prediction in a heterogeneous reservoir

    NASA Astrophysics Data System (ADS)

    Ansari, Hamid Reza

    2014-09-01

    In this paper we propose a new method for predicting rock porosity based on a combination of several artificial intelligence systems. The method focuses on one of the Iranian carbonate fields in the Persian Gulf. Because of the strong heterogeneity of carbonate formations, estimating rock properties there is more challenging than in sandstone. For this purpose, seismic colored inversion (SCI) and a new committee machine approach are used in order to improve porosity estimation. The study comprises three major steps. First, a series of sample-based attributes is calculated from the 3D seismic volume. Acoustic impedance is an important attribute that is obtained by the SCI method in this study. Second, the porosity log is predicted from seismic attributes using common intelligent computation systems including: probabilistic neural network (PNN), radial basis function network (RBFN), multi-layer feed-forward network (MLFN), ε-support vector regression (ε-SVR) and adaptive neuro-fuzzy inference system (ANFIS). Finally, a power law committee machine (PLCM) is constructed based on the imperial competitive algorithm (ICA) to combine the results of all previous predictions into a single solution. This technique is called PLCM-ICA in this paper. The results show that the PLCM-ICA model improved on the results of the neural networks, support vector machine and neuro-fuzzy system.

  1. Probabilistic data integration and computational complexity

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem either through an analytical description of the combined probability function, or by sampling the probability function. In practice, however, probabilistic data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. Another source of computational complexity is related to how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. Because one type of information is too informative (and hence conflicting), this leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem, with no biases. In another case it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem, with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and can lead to biased results and under-estimation of uncertainty. However, in both examples, one can also analyze the performance of the sampling methods used to solve the data integration problem to indicate the existence of biased information. This can be used actively to avoid biases in the available information and subsequently in the final uncertainty evaluation.

  2. Combining multiple decisions: applications to bioinformatics

    NASA Astrophysics Data System (ADS)

    Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
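
    Both approaches reviewed above assemble a multi-class classifier from binary classifiers under the error-correcting output coding (ECOC) framework. The sketch below is a generic ECOC illustration using scikit-learn's OutputCodeClassifier on a toy dataset; it shows only the plain ECOC decomposition, not the weighted-classifier or bit-inversion-error variants the article describes.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OutputCodeClassifier

X, y = load_iris(return_X_y=True)

# ECOC: each class is assigned a binary code word; one binary classifier is
# trained per code bit, and a sample is assigned to the class whose code
# word is closest to the vector of binary decisions.
ecoc = OutputCodeClassifier(
    estimator=LogisticRegression(max_iter=1000),
    code_size=2.0,      # number of code bits relative to the number of classes
    random_state=0,
)
print("5-fold CV accuracy:", cross_val_score(ecoc, X, y, cv=5).mean().round(3))
```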

  3. A novel probabilistic framework for event-based speech recognition

    NASA Astrophysics Data System (ADS)

    Juneja, Amit; Espy-Wilson, Carol

    2003-10-01

    One of the reasons for the unsatisfactory performance of state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR is proposed, based on the idea of representing speech sounds by bundles of binary-valued articulatory phonetic features. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of manner phonetic features (syllabic, sonorant and continuant) and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.

  4. Probabilistic Assessment of Soil Moisture using C-band Quad-polarized Remote Sensing Data from RISAT1

    NASA Astrophysics Data System (ADS)

    Pal, Manali; Suman, Mayank; Das, Sarit Kumar; Maity, Rajib

    2017-04-01

    Information on the spatio-temporal distribution of surface Soil Moisture Content (SMC) is essential in several hydrological, meteorological and agricultural applications. Microwave active remote sensing data have become increasingly important for large-scale estimation of surface SMC because of their ability to monitor the spatial and temporal variation of surface SMC at regional, continental and global scales at a reasonably fine spatial and temporal resolution. Synthetic Aperture Radar (SAR) has high potential for catchment-scale applications due to its high spatial resolution (~10-20 m) over both vegetated and bare soil surfaces, as well as its all-weather, day-and-night imaging capability. However, one prime disadvantage of SAR is that its signal responds to Land Use Land Cover (LULC) and surface roughness conditions in addition to SMC, making the retrieval of SMC from SAR data an "ill-posed" problem. Moreover, the quantification of uncertainty due to inappropriate surface roughness characterization, soil texture, inversion techniques, etc., is little explored even in the latest established retrieval methods. This paper reports a recently developed method to estimate surface SMC with a probabilistic assessment of the uncertainty associated with the estimation (Pal et al., 2016). Quad-polarized SAR data from Radar Imaging Satellite 1 (RISAT1), launched in 2012 by the Indian Space Research Organization (ISRO), and information on LULC regarding bare land and vegetated land (<30 cm height) are used in the estimation, exploiting the potential of multivariate probabilistic assessment through copulas. The salient features of the study are: 1) development of a combined index to understand the role of all the quad-polarized backscattering coefficients and soil texture information in SMC estimation; 2) applicability of the model for different incidence angles using the normalized incidence angle theory proposed by Zribi et al. (2005); and 3) assessment of the uncertainty range of the estimated SMC. Supervised Principal Component Analysis (SPCA) is used for development of the combined index, and the Frank copula is found to be the best-fit copula. The developed model is validated with field soil moisture values over 334 monitoring points within the study area and used for development of a soil moisture map. While the performance is promising, the model is applicable only for bare and vegetated land. References: Pal, M., Maity, R., Suman, M., Das, S.K., Patel, P., and Srivastava, H.S., (2016). "Satellite-Based Probabilistic Assessment of Soil Moisture Using C-Band Quad-Polarized RISAT1 Data." IEEE Transactions on Geoscience and Remote Sensing, In Press, doi:10.1109/TGRS.2016.2623378. Zribi, M., Baghdadi, N., Holah, N., and Fafin, O., (2005). "New methodology for soil surface moisture estimation and its application to ENVISAT-ASAR multi-incidence data inversion." Remote Sensing of Environment, vol. 96, nos. 3-4, pp. 485-496.

  5. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetiner, Mustafa Sacit; none,; Flanagan, George F.

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  6. Probabilistic seismic history matching using binary images

    NASA Astrophysics Data System (ADS)

    Davolio, Alessandra; Schiozer, Denis Jose

    2018-02-01

    Currently, the goal of history-matching procedures is not only to provide a model matching any observed data but also to generate multiple matched models to properly handle uncertainties. One such approach is a probabilistic history-matching methodology based on the discrete Latin Hypercube sampling algorithm, proposed in previous works, which was particularly efficient for matching well data (production rates and pressure). 4D seismic (4DS) data have been increasingly included in history-matching procedures. A key issue in seismic history matching (SHM) is to transfer data into a common domain: impedance, amplitude, or pressure and saturation. In any case, seismic inversions and/or modeling are required, which can be time consuming. An alternative that avoids these procedures is to use binary images in SHM, as they allow the shape, rather than the physical values, of observed anomalies to be matched. This work presents the incorporation of binary images in SHM within the aforementioned probabilistic history matching. The application was performed with real data from a segment of the Norne benchmark case that presents strong 4D anomalies, including softening signals due to pressure build-up. The binary images are used to match the pressurized zones observed in time-lapse data. Three history matchings were conducted, using only well data, well and 4DS data, and only 4DS data. The methodology is very flexible and successfully accommodated the addition of binary images in the seismic objective functions. Results proved the good convergence of the method in a few iterations for all three cases. The matched models of the first two cases provided the best results, with similar well-matching quality. The second case provided models presenting pore pressure changes according to the expected dynamic behavior (pressurized zones) observed in the 4DS data. The use of binary images in SHM is relatively new, with few examples in the literature. This work enriches this discussion by presenting a new application to match pressure in a reservoir segment with complex pressure behavior.

  7. PubMed related articles: a probabilistic topic-based model for content similarity

    PubMed Central

    Lin, Jimmy; Wilbur, W John

    2007-01-01

    Background We present a probabilistic topic-based model for content similarity called pmra that underlies the related article search feature in PubMed. Whether or not a document is about a particular topic is computed from term frequencies, modeled as Poisson distributions. Unlike previous probabilistic retrieval models, we do not attempt to estimate relevance; rather, our focus is "relatedness", the probability that a user would want to examine a particular document given known interest in another. We also describe a novel technique for estimating parameters that does not require human relevance judgments; instead, the process is based on the existence of MeSH® in MEDLINE®. Results The pmra retrieval model was compared against bm25, a competitive probabilistic model that shares theoretical similarities. Experiments using the test collection from the TREC 2005 genomics track show a small but statistically significant improvement of pmra over bm25 in terms of precision. Conclusion Our experiments suggest that the pmra model provides an effective ranking algorithm for related article search. PMID:17971238

  8. Imaging the Earth's anisotropic structure with Bayesian Inversion of fundamental and higher mode surface-wave dispersion data

    NASA Astrophysics Data System (ADS)

    Ravenna, Matteo; Lebedev, Sergei; Celli, Nicolas

    2017-04-01

    We develop a Markov Chain Monte Carlo inversion of fundamental and higher mode phase-velocity curves for the radially and azimuthally anisotropic structure of the crust and upper mantle. In the inversions of Rayleigh- and Love-wave dispersion curves for radially anisotropic structure, we obtain probabilistic 1D radially anisotropic shear-velocity profiles of the isotropic average Vs and anisotropy (or Vsv and Vsh) as functions of depth. In the inversions for azimuthal anisotropy, Rayleigh-wave dispersion curves at different azimuths are inverted for the vertically polarized shear-velocity structure (Vsv) and the 2-phi component of azimuthal anisotropy. The strength and originality of the method lie in its fully non-linear approach. Each model realization is computed using exact forward calculations. The uncertainty of the models is a part of the output. In the inversions for azimuthal anisotropy, in particular, the computation of the forward problem is performed separately at different azimuths, with no linear approximations on the relation of the Earth's elastic parameters to surface wave phase velocities. The computations are performed in parallel in order to reduce the computing time. We compare inversions of the fundamental mode phase-velocity curves alone with inversions that also include overtones. The addition of higher modes enhances the resolving power of the anisotropic structure of the deep upper mantle. We apply the inversion method to phase-velocity curves in a few regions, including the Hangai dome region in Mongolia. Our models provide constraints on the Moho depth, the Lithosphere-Asthenosphere Boundary, and the alignment of the anisotropic fabric and the direction of current and past flow, from the crust down to the deep asthenosphere.
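
    The inversion above is fully non-linear: each candidate model is evaluated with an exact forward calculation, and uncertainty emerges directly from the ensemble of posterior samples. The random-walk Metropolis sketch below illustrates that workflow on a toy problem, with a simple linear operator standing in for exact dispersion-curve forward modelling; the layer setup, sensitivity kernel and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward operator: phase velocities as a fixed linear mixing of three
# layer shear velocities (a stand-in for exact dispersion forward modelling,
# which the actual inversion uses).
G = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.4, 0.5]])
vs_true = np.array([3.2, 3.9, 4.5])            # km/s
data = G @ vs_true + rng.normal(0, 0.02, 3)    # noisy "observed" curve
sigma = 0.02

def log_post(vs):
    if np.any(vs < 2.0) or np.any(vs > 5.5):   # uniform prior bounds
        return -np.inf
    r = G @ vs - data
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis: each accepted model is one posterior sample, so
# uncertainty comes out as part of the result.
vs = np.array([3.5, 3.5, 3.5])
lp = log_post(vs)
samples = []
for _ in range(20000):
    prop = vs + rng.normal(0, 0.05, 3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        vs, lp = prop, lp_prop
    samples.append(vs.copy())

samples = np.array(samples[5000:])             # discard burn-in
print("posterior mean:", samples.mean(axis=0).round(3))
print("posterior std: ", samples.std(axis=0).round(3))
```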

  9. Building a high-resolution T2-weighted MR-based probabilistic model of tumor occurrence in the prostate.

    PubMed

    Nagarajan, Mahesh B; Raman, Steven S; Lo, Pechin; Lin, Wei-Chan; Khoshnoodi, Pooria; Sayre, James W; Ramakrishna, Bharath; Ahuja, Preeti; Huang, Jiaoti; Margolis, Daniel J A; Lu, David S K; Reiter, Robert E; Goldin, Jonathan G; Brown, Matthew S; Enzmann, Dieter R

    2018-02-19

    We present a method for generating a T2 MR-based probabilistic model of tumor occurrence in the prostate to guide the selection of anatomical sites for targeted biopsies and to serve as a diagnostic tool to aid radiological evaluation of prostate cancer. In our study, the prostate and any radiological findings within it were segmented retrospectively on 3D T2-weighted MR images of 266 subjects who underwent radical prostatectomy. Subsequent histopathological analysis determined both the ground truth and the Gleason grade of the tumors. A randomly chosen subset of 19 subjects was used to generate a multi-subject-derived prostate template. Subsequently, a cascading registration algorithm involving both affine and non-rigid B-spline transforms was used to register the prostate of every subject to the template. Corresponding transformation of radiological findings yielded a population-based probabilistic model of tumor occurrence. The quality of our probabilistic model building approach was statistically evaluated by measuring the proportion of correct placements of tumors in the prostate template, i.e., the number of tumors that maintained their anatomical location within the prostate after their transformation into the prostate template space. The probabilistic model built with tumors deemed clinically significant demonstrated a heterogeneous distribution of tumors, with a higher likelihood of tumor occurrence at the mid-gland anterior transition zone and the base-to-mid-gland posterior peripheral zones. Of 250 MR lesions analyzed, 248 maintained their original anatomical location with respect to the prostate zones after transformation to the prostate template space. We present a robust method for generating a probabilistic model of tumor occurrence in the prostate that could aid clinical decision making, such as the selection of anatomical sites for MR-guided prostate biopsies.

  10. Bayesian Modeling of Perceived Surface Slant from Actively-Generated and Passively-Observed Optic Flow

    PubMed Central

    Caudek, Corrado; Fantoni, Carlo; Domini, Fulvio

    2011-01-01

    We measured perceived depth from the optic flow (a) when showing a stationary physical or virtual object to observers who moved their head at a normal or slower speed, and (b) when simulating the same optic flow on a computer and presenting it to stationary observers. Our results show that perceived surface slant is systematically distorted, for both the active and the passive viewing of physical or virtual surfaces. These distortions are modulated by head translation speed, with perceived slant increasing directly with the local velocity gradient of the optic flow. This empirical result allows us to determine the relative merits of two alternative approaches aimed at explaining perceived surface slant in active vision: an “inverse optics” model that takes head motion information into account, and a probabilistic model that ignores extra-retinal signals. We compare these two approaches within the framework of the Bayesian theory. The “inverse optics” Bayesian model produces veridical slant estimates if the optic flow and the head translation velocity are measured with no error; because of the influence of a “prior” for flatness, the slant estimates become systematically biased as the measurement errors increase. The Bayesian model, which ignores the observer's motion, always produces distorted estimates of surface slant. Interestingly, the predictions of this second model, not those of the first one, are consistent with our empirical findings. The present results suggest that (a) in active vision perceived surface slant may be the product of probabilistic processes which do not guarantee the correct solution, and (b) extra-retinal signals may be mainly used for a better measurement of retinal information. PMID:21533197

  11. Superposition-Based Analysis of First-Order Probabilistic Timed Automata

    NASA Astrophysics Data System (ADS)

    Fietzke, Arnaud; Hermanns, Holger; Weidenbach, Christoph

    This paper discusses the analysis of first-order probabilistic timed automata (FPTA) by a combination of hierarchic first-order superposition-based theorem proving and probabilistic model checking. We develop the overall semantics of FPTAs and prove soundness and completeness of our method for reachability properties. Basically, we decompose FPTAs into their time plus first-order logic aspects on the one hand, and their probabilistic aspects on the other hand. Then we exploit the time plus first-order behavior by hierarchic superposition over linear arithmetic. The result of this analysis is the basis for the construction of a reachability equivalent (to the original FPTA) probabilistic timed automaton to which probabilistic model checking is finally applied. The hierarchic superposition calculus required for the analysis is sound and complete on the first-order formulas generated from FPTAs. It even works well in practice. We illustrate the potential behind it with a real-life DHCP protocol example, which we analyze by means of tool chain support.

  12. Three-dimensional Probabilistic Earthquake Location Applied to 2002-2003 Mt. Etna Eruption

    NASA Astrophysics Data System (ADS)

    Mostaccio, A.; Tuve', T.; Zuccarello, L.; Patane', D.; Saccorotti, G.; D'Agostino, M.

    2005-12-01

    Recorded seismicity for the Mt. Etna volcano that occurred during the 2002-2003 eruption has been relocated using a probabilistic, non-linear, earthquake location approach. We used the software package NonLinLoc (Lomax et al., 2000), adopting the 3D velocity model obtained by Cocina et al., 2005. We processed our data with different algorithms: (1) a grid search; (2) a Metropolis-Gibbs sampler; and (3) an Oct-tree search. The Oct-tree algorithm gives an efficient, fast and accurate mapping of the PDF (Probability Density Function) of the earthquake location problem. More than 300 seismic events were analyzed in order to compare the non-linear location results with the ones obtained by using traditional, linearized earthquake location algorithms such as Hypoellipse, and a 3D linearized inversion (Thurber, 1983). Moreover, we compare 38 focal mechanisms, chosen following strict selection criteria, with the ones obtained from the 3D and 1D results. Although the presented approach is more of a traditional relocation application, probabilistic earthquake location could also be used in routine surveys.

  13. A probabilistic and continuous model of protein conformational space for template-free modeling.

    PubMed

    Zhao, Feng; Peng, Jian; Debartolo, Joe; Freed, Karl F; Sosnick, Tobin R; Xu, Jinbo

    2010-06-01

    One of the major challenges with protein template-free modeling is an efficient sampling algorithm that can explore a huge conformation space quickly. The popular fragment assembly method constructs a conformation by stringing together short fragments extracted from the Protein Data Bank (PDB). The discrete nature of this method may limit generated conformations to a subspace in which the native fold does not belong. Another concern is that a protein with a genuinely novel fold may contain some fragments not present in the PDB. This article presents a probabilistic model of protein conformational space to overcome the above two limitations. This probabilistic model employs directional statistics to model the distribution of backbone angles and second-order Conditional Random Fields (CRFs) to describe the sequence-angle relationship. Using this probabilistic model, we can sample protein conformations in a continuous space, as opposed to the widely used fragment assembly and lattice model methods that work in a discrete space. We show that when coupled with a simple energy function, this probabilistic method compares favorably with the fragment assembly method in the blind CASP8 evaluation, especially on alpha or small beta proteins. To our knowledge, this is the first probabilistic method that can search conformations in a continuous space and achieve favorable performance. Our method also generated three-dimensional (3D) models better than template-based methods for a couple of CASP8 hard targets. The method described in this article can also be applied to protein loop modeling, model refinement, and even RNA tertiary structure prediction.

  14. A probabilistic maintenance model for diesel engines

    NASA Astrophysics Data System (ADS)

    Pathirana, Shan; Abeygunawardane, Saranga Kumudu

    2018-02-01

    In this paper, a probabilistic maintenance model is developed for inspection-based preventive maintenance of diesel engines, based on the practical model concepts discussed in the literature. The developed model is solved using real data obtained from inspection and maintenance histories of diesel engines and from experts' views. Reliability indices and costs were calculated for the present maintenance policy of diesel engines. A sensitivity analysis is conducted to observe the effect of inspection-based preventive maintenance on the life cycle cost of diesel engines.

  15. A robust pseudo-inverse spectral filter applied to the Earth Radiation Budget Experiment (ERBE) scanning channels

    NASA Technical Reports Server (NTRS)

    Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.

    1984-01-01

    Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a 'total' band, from which the pseudo-inverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for a given Sun-scene-spacecraft geometry. It is found that the pseudo-inverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum-variance and essentially unbiased radiance estimates.

  16. Probabilistic modeling of discourse-aware sentence processing.

    PubMed

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2013-07-01

    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.

  17. Characterizing the topology of probabilistic biological networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Biological interactions are often uncertain events that may or may not take place with some probability. This uncertainty leads to a massive number of alternative interaction topologies for each such network. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. In this paper, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. Using our mathematical representation, we develop a method that can accurately describe the degree distribution of such networks. We also take one more step and extend our method to accurately compute the joint-degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. Our method works quickly even for entire protein-protein interaction (PPI) networks. It also helps us find an adequate mathematical model using maximum likelihood estimation (MLE). We perform a comparative study of node-degree and joint-degree distributions in two types of biological networks: the classical deterministic networks and the more flexible probabilistic networks. Our results confirm that power-law and log-normal models best describe degree distributions for both probabilistic and deterministic networks. Moreover, the inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected. We also show that probabilistic networks are more robust for node-degree distribution computation than deterministic ones. All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/projects/probNet/.
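
    The central computational claim above is that degree distributions of probabilistic networks can be computed in polynomial time despite the exponentially many possible topologies. For a single node with independent uncertain edges this is the Poisson-binomial distribution, which the dynamic program sketched below computes in quadratic time (values are illustrative; this is not the authors' code).

```python
import numpy as np

def degree_distribution(edge_probs):
    """P(degree = k) for a node whose incident edges exist independently
    with the given probabilities (Poisson-binomial), computed with a
    dynamic program that is quadratic in the number of incident edges."""
    dist = np.zeros(len(edge_probs) + 1)
    dist[0] = 1.0
    for p in edge_probs:
        # with probability p the edge exists, shifting part of the mass up
        # by one degree; with probability (1 - p) the degree stays the same
        dist[1:] = dist[1:] * (1 - p) + dist[:-1] * p
        dist[0] *= (1 - p)
    return dist

# A node with four uncertain interactions (probabilities are illustrative).
print(degree_distribution([0.9, 0.7, 0.2, 0.05]).round(4))
```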

  18. Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.

    PubMed

    Mørk, Søren; Holmes, Ian

    2012-03-01

    Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures is best performing in terms of statistical information criteria or prediction performance, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts is freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.

  19. Probabilistic commodity-flow-based focusing of monitoring activities to facilitate early detection of Phytophthora ramorum outbreaks

    Treesearch

    Steven C. McKelvey; William D. Smith; Frank Koch

    2012-01-01

    This project summary describes a probabilistic model developed with funding support from the Forest Health Monitoring Program of the Forest Service, U.S. Department of Agriculture (BaseEM Project SO-R-08-01). The model has been implemented in SODBuster, a standalone software package developed using the Java software development kit from Sun Microsystems.

  20. Trait-Dependent Biogeography: (Re)Integrating Biology into Probabilistic Historical Biogeographical Models.

    PubMed

    Sukumaran, Jeet; Knowles, L Lacey

    2018-06-01

    The development of process-based probabilistic models for historical biogeography has transformed the field by grounding it in modern statistical hypothesis testing. However, most of these models abstract away biological differences, reducing species to interchangeable lineages. We present here the case for reintegration of biology into probabilistic historical biogeographical models, allowing a broader range of questions about biogeographical processes beyond ancestral range estimation or simple correlation between a trait and a distribution pattern, as well as allowing us to assess how inferences about ancestral ranges themselves might be impacted by differential biological traits. We show how new approaches to inference might cope with the computational challenges resulting from the increased complexity of these trait-based historical biogeographical models. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. 3-D model-based Bayesian classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soenneland, L.; Tenneboe, P.; Gehrmann, T.

    1994-12-31

    The challenging task of the interpreter is to integrate different pieces of information and combine them into an earth model. The sophistication level of this earth model might vary from the simplest geometrical description to the most complex set of reservoir parameters related to the geometrical description. Obviously, the sophistication level also depends on the completeness of the available information. The authors describe the interpreter's task as a mapping between the observation space and the model space. The information available to the interpreter exists in observation space, and the task is to infer a model in model space. It is well known that this inversion problem is non-unique. Therefore, any attempt to find a solution depends on constraints being added in some manner. The solution will obviously depend on which constraints are introduced, and it would be desirable to allow the interpreter to modify the constraints in a problem-dependent manner. The authors present a probabilistic framework that gives the interpreter the tools to integrate the different types of information and produce constrained solutions. The constraints can be adapted to the problem at hand.

  2. Probabilistic Seismic Risk Model for Western Balkans

    NASA Astrophysics Data System (ADS)

    Stejskal, Vladimir; Lorenzo, Francisco; Pousse, Guillaume; Radovanovic, Slavica; Pekevski, Lazo; Dojcinovski, Dragi; Lokin, Petar; Petronijevic, Mira; Sipka, Vesna

    2010-05-01

    A probabilistic seismic risk model for insurance and reinsurance purposes is presented for an area of the Western Balkans covering former Yugoslavia and Albania. This territory experienced many severe earthquakes during past centuries, producing significant damage to many population centres in the region. The highest hazard is related to the external Dinarides, namely to the collision zone of the Adriatic plate. The model is based on a unified catalogue for the region and a seismic source model consisting of more than 30 zones covering all three main structural units - the Southern Alps, the Dinarides and the south-western margin of the Pannonian Basin. A probabilistic methodology using Monte Carlo simulation was applied to generate the hazard component of the model. A unique set of damage functions, based on both loss experience and engineering assessments, is used to convert the modelled ground motion severity into monetary loss.

  3. A Practical Probabilistic Graphical Modeling Tool for Weighing Ecological Risk-Based Evidence

    EPA Science Inventory

    Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for e...

  4. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but one case, where they were equal. The failed region patterns between models are similar; however, differences arise because element elimination reduces stress, which causes probabilistic failed regions to continue to rise after the point at which no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.

  5. Hierarchical Probabilistic Inference of Cosmic Shear

    NASA Astrophysics Data System (ADS)

    Schneider, Michael D.; Hogg, David W.; Marshall, Philip J.; Dawson, William A.; Meyers, Joshua; Bard, Deborah J.; Lang, Dustin

    2015-07-01

    Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.

  6. Decomposing biodiversity data using the Latent Dirichlet Allocation model, a probabilistic multivariate statistical method

    Treesearch

    Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave

    2014-01-01

    We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
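
    As a rough illustration of how LDA decomposes a site-by-species count matrix into component communities, the sketch below applies scikit-learn's LatentDirichletAllocation to synthetic abundance data; the paper's own Bayesian implementation, priors and diagnostics may differ.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Synthetic site-by-species count matrix: 40 sites, 12 species, generated
# from two overlapping "component communities" (purely illustrative data).
comm = np.array([[8, 6, 5, 3, 1, 0, 0, 0, 0, 0, 1, 0],
                 [0, 0, 1, 0, 2, 4, 6, 7, 5, 3, 0, 1]], dtype=float)
comm /= comm.sum(axis=1, keepdims=True)            # species profiles
mix = rng.dirichlet([1.0, 1.0], size=40)           # site-level mixtures
counts = np.array([rng.multinomial(200, mix[i] @ comm) for i in range(40)])

# LDA recovers per-site mixtures over latent component communities.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
site_mixtures = lda.fit_transform(counts)
site_mixtures /= site_mixtures.sum(axis=1, keepdims=True)
print("first five site compositions:\n", site_mixtures[:5].round(2))
```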

  7. A probabilistic NF2 relational algebra for integrated information retrieval and database systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuhr, N.; Roelleke, T.

    The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document is represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relations such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers like in most IR models. This effect also can be used for typical database queries involving imprecise attribute values as well as for combinations of database and IR queries.
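
    The sketch below conveys only the flavour of probabilistically weighted tuples: matching tuples combine their weights multiplicatively under an independence assumption, and answers are returned ranked by decreasing probability. It is a toy illustration with made-up relations, not the paper's full probabilistic NF2 algebra.

```python
# Toy weighted relations: each document tuple carries the probability that
# the index term belongs to the document; query terms carry query weights.
docs = [  # (doc_id, index_term, P(term belongs to doc))
    ("d1", "inversion", 0.9), ("d1", "bayesian", 0.6),
    ("d2", "inversion", 0.4), ("d2", "seismic", 0.8),
]
query = [("inversion", 0.7), ("bayesian", 0.5)]

def probabilistic_join(docs, query):
    scores = {}
    for doc_id, term, p_doc in docs:
        for q_term, p_query in query:
            if term == q_term:
                # conjunction of independent events -> product of weights;
                # accumulate disjunctively over distinct matching terms
                prev = scores.get(doc_id, 0.0)
                scores[doc_id] = 1 - (1 - prev) * (1 - p_doc * p_query)
    # ranking by decreasing probability, as in most IR models
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(probabilistic_join(docs, query))   # ranked (doc, probability) pairs
```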

  8. Controlled source electromagnetic data analysis with seismic constraints and rigorous uncertainty estimation in the Black Sea

    NASA Astrophysics Data System (ADS)

    Gehrmann, R. A. S.; Schwalenberg, K.; Hölz, S.; Zander, T.; Dettmer, J.; Bialas, J.

    2016-12-01

    In 2014 an interdisciplinary survey was conducted as part of the German SUGAR project in the Western Black Sea, targeting gas hydrate occurrences in the Danube Delta. Marine controlled source electromagnetic (CSEM) data were acquired with an inline seafloor-towed array (BGR) and a two-polarization horizontal ocean-bottom source and receiver configuration (GEOMAR). The CSEM data are co-located with high-resolution 2-D and 3-D seismic reflection data (GEOMAR). We present results from 2-D regularized inversion (MARE2DEM by Kerry Key), which provides a smooth model of the electrical resistivity distribution beneath the source and multiple receivers. The 2-D approach includes seafloor topography and structural constraints from seismic data. We estimate uncertainties from the regularized inversion and compare them to 1-D Bayesian inversion results. The probabilistic inversion for a layered subsurface treats the parameter values and the number of layers as unknown by applying reversible-jump Markov-chain Monte Carlo sampling. A non-diagonal data covariance matrix obtained from residual error analysis accounts for correlated errors. The resulting resistivity models show generally high resistivity values, between 3 and 10 Ωm on average, which can be partly attributed to pore water salinities depleted by sea-level low stands in the past, and locally up to 30 Ωm, which is likely caused by gas hydrates. At the base of the gas hydrate stability zone, resistivities rise to more than 100 Ωm, which could be due to gas hydrate as well as a layer of free gas underneath. However, the deeper parts also show the largest model parameter uncertainties. Archie's Law is used to derive estimates of the gas hydrate saturation, which vary between 30 and 80% within the anomalous layers, considering salinity and porosity profiles from a distant DSDP borehole.
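
    The saturation estimates above follow from Archie's law, Rt = a*Rw/(phi^m * Sw^n), inverted for the water saturation Sw and hence the hydrate saturation Sh = 1 - Sw. A worked sketch is given below; the parameter values (pore-water resistivity, porosity, cementation and saturation exponents) are illustrative placeholders, not the calibrated values used in the study.

```python
# Archie's law: Rt = a * Rw / (phi**m * Sw**n)
# => Sw = (a * Rw / (phi**m * Rt)) ** (1/n),  Sh = 1 - Sw
# All parameter values below are illustrative placeholders.
def hydrate_saturation(rt, rw=0.25, phi=0.55, a=1.0, m=2.0, n=2.0):
    sw = (a * rw / (phi ** m * rt)) ** (1.0 / n)   # water saturation
    return max(0.0, 1.0 - min(sw, 1.0))            # hydrate saturation

for rt in (3.0, 10.0, 30.0, 100.0):   # inverted bulk resistivities in ohm-m
    print(f"Rt = {rt:5.1f} ohm-m  ->  Sh = {hydrate_saturation(rt):.2f}")
```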

  9. Design for a Crane Metallic Structure Based on Imperialist Competitive Algorithm and Inverse Reliability Strategy

    NASA Astrophysics Data System (ADS)

    Fan, Xiao-Ning; Zhi, Bo

    2017-07-01

    Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety while requiring about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.

  10. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well logs, core data). Probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At the local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At the regional scale, joint inversion of gravity and magnetic data is applied for the estimation of the lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At the planetary scale, the Earth's mantle temperature and element composition are inferred from seismic travel-time and geodetic data.

  11. Perception of Risk and Terrorism-Related Behavior Change: Dual Influences of Probabilistic Reasoning and Reality Testing.

    PubMed

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2017-01-01

    The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models.

  12. Perception of Risk and Terrorism-Related Behavior Change: Dual Influences of Probabilistic Reasoning and Reality Testing

    PubMed Central

    Denovan, Andrew; Dagnall, Neil; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2017-01-01

    The present study assessed the degree to which probabilistic reasoning performance and thinking style influenced perception of risk and self-reported levels of terrorism-related behavior change. A sample of 263 respondents, recruited via convenience sampling, completed a series of measures comprising probabilistic reasoning tasks (perception of randomness, base rate, probability, and conjunction fallacy), the Reality Testing subscale of the Inventory of Personality Organization (IPO-RT), the Domain-Specific Risk-Taking Scale, and a terrorism-related behavior change scale. Structural equation modeling examined three progressive models. Firstly, the Independence Model assumed that probabilistic reasoning, perception of risk and reality testing independently predicted terrorism-related behavior change. Secondly, the Mediation Model supposed that probabilistic reasoning and reality testing correlated, and indirectly predicted terrorism-related behavior change through perception of risk. Lastly, the Dual-Influence Model proposed that probabilistic reasoning indirectly predicted terrorism-related behavior change via perception of risk, independent of reality testing. Results indicated that performance on probabilistic reasoning tasks most strongly predicted perception of risk, and preference for an intuitive thinking style (measured by the IPO-RT) best explained terrorism-related behavior change. The combination of perception of risk with probabilistic reasoning ability in the Dual-Influence Model enhanced the predictive power of the analytical-rational route, with conjunction fallacy having a significant indirect effect on terrorism-related behavior change via perception of risk. The Dual-Influence Model possessed superior fit and reported similar predictive relations between intuitive-experiential and analytical-rational routes and terrorism-related behavior change. The discussion critically examines these findings in relation to dual-processing frameworks. This includes considering the limitations of current operationalisations and recommendations for future research that align outcomes and subsequent work more closely to specific dual-process models. PMID:29062288

  13. Ground mapping resolution accuracy of a scanning radiometer from a geostationary satellite.

    PubMed

    Stremler, F G; Khalil, M A; Parent, R J

    1977-06-01

    Measures of the spatial and spatial rate (frequency) mapping of scanned visual imagery from an earth reference system to a spin-scan geostationary satellite are examined. Mapping distortions and coordinate inversions to correct for these distortions are formulated in terms of geometric transformations between earth and satellite frames of reference. Probabilistic methods are used to develop relations for obtainable mapping resolution when coordinate inversions are employed.

  14. A note on probabilistic models over strings: the linear algebra approach.

    PubMed

    Bouchard-Côté, Alexandre

    2013-12-01

    Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proving method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low rank matrix approximation methods, to string-valued inference problems.

  15. Sensitivity of a Bayesian atmospheric-transport inversion model to spatio-temporal sensor resolution applied to the 2006 North Korean nuclear test

    NASA Astrophysics Data System (ADS)

    Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.

    2017-12-01

    Atmospheric source reconstruction allows for the probabilistic estimation of the source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of the atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of the source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high frequency observations and less expensive, low frequency observations.
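
    The workflow above (ensemble of forward runs, machine-learned surrogate, then Monte Carlo/Bayesian sampling) can be sketched end-to-end on a toy problem. Below, a Gaussian-process surrogate and a simple Metropolis sampler stand in for the study's dispersion-model ensemble and inversion machinery; the forward model, prior ranges and noise level are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Toy "transport model": five observed concentrations as a function of two
# source parameters, release time t0 and source strength q. This stands in
# for the expensive atmospheric dispersion simulations.
def forward(t0, q):
    return q * np.exp(-0.5 * ((np.arange(5) - t0) / 1.5) ** 2)

# 1) Ensemble of forward runs, sampling the prior ranges uniformly.
design = np.column_stack([rng.uniform(0.0, 4.0, 300), rng.uniform(0.5, 2.0, 300)])
runs = np.array([forward(t0, q) for t0, q in design])

# 2) Train a cheap surrogate mapping parameters to predicted observations.
surrogate = GaussianProcessRegressor().fit(design, runs)

# 3) Metropolis sampling against synthetic observations using the surrogate.
obs = forward(2.0, 1.3) + rng.normal(0.0, 0.02, 5)

def log_post(theta):
    if not (0.0 <= theta[0] <= 4.0 and 0.5 <= theta[1] <= 2.0):
        return -np.inf
    pred = surrogate.predict(theta.reshape(1, -1))[0]
    return -0.5 * np.sum(((pred - obs) / 0.02) ** 2)

theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

print("posterior mean (t0, q):", np.mean(samples[1000:], axis=0).round(2))
```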

  16. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on fuzzy neural network regression (FR), referred to as DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea behind DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode on the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that DCFRM broadens the scope of probabilistic analysis for multi-failure structures and improves the computing efficiency while keeping acceptable computational precision. Moreover, the proposed method offers a useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and method of mechanical reliability design.

  17. SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating

    PubMed Central

    Lee, Young-Joo; Cho, Soojin

    2016-01-01

    Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125

  18. IceChrono v1: a probabilistic model to compute a common and optimal chronology for several ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, Frédéric

    2015-04-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores is essential to interpret the paleo records that they contain, but it is a complicated problem since it involves different dating methods. Here I present IceChrono v1, a new probabilistic model to combine different kinds of chronological information to obtain a common and optimized chronology for several ice cores, as well as its uncertainty. It is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the vertical thinning function. The chronological information used are: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice and gas dated horizons, ice and gas dated depth intervals, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air), and stratigraphic links between ice cores (ice-ice, air-air or mixed ice-air and air-ice links). The optimization problem is formulated as a least squares problem; that is, all probability densities are assumed Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono is similar in scope to the Datice model, but differs from it in its mathematical, numerical and programming aspects. I apply IceChrono to an AICC2012-like experiment and find results similar to those of Datice within a few centuries, which is a confirmation of both the IceChrono and Datice codes. IceChrono v1 is freely available under the GPL v3 open source license.
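
    The following toy sketch (not the IceChrono code) illustrates the least-squares formulation with Gaussian densities solved by Levenberg-Marquardt: layer accumulation rates carry a Gaussian prior and dated horizons supply the observations; thinning and lock-in depth are ignored, and all numbers are invented.

```python
# Toy sketch of a probabilistic least-squares chronology: Gaussian priors on
# layer accumulation rates combined with dated horizons, solved with
# Levenberg-Marquardt (illustrative only, not IceChrono itself).
import numpy as np
from scipy.optimize import least_squares

dz = 10.0                              # layer thickness (m)
n_layers = 10
a_prior = np.full(n_layers, 0.25)      # prior accumulation rate (m/yr)
sigma_prior = 0.3                      # prior sd on log-accumulation
horizon_idx = np.array([3, 6, 9])      # layers whose base is dated
horizon_age = np.array([150.0, 320.0, 480.0])    # observed ages (yr)
sigma_age = np.array([10.0, 15.0, 20.0])

def ages_from_log_a(log_a):
    """Cumulative age at each layer base for given log accumulation rates."""
    return np.cumsum(dz / np.exp(log_a))

def residuals(log_a):
    # Stacked, standardized residuals: data misfit plus prior misfit.
    data_res = (ages_from_log_a(log_a)[horizon_idx] - horizon_age) / sigma_age
    prior_res = (log_a - np.log(a_prior)) / sigma_prior
    return np.concatenate([data_res, prior_res])

fit = least_squares(residuals, np.log(a_prior), method="lm")
print("posterior accumulation rates (m/yr):", np.round(np.exp(fit.x), 3))
```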

  19. Bayesian statistical ionospheric tomography improved by incorporating ionosonde measurements

    NASA Astrophysics Data System (ADS)

    Norberg, Johannes; Virtanen, Ilkka I.; Roininen, Lassi; Vierinen, Juha; Orispää, Mikko; Kauristie, Kirsti; Lehtinen, Markku S.

    2016-04-01

    We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters and use the Gaussian Markov random fields as a sparse matrix approximation for the numerical computations. This results in a computationally efficient tomographic inversion algorithm with clear probabilistic interpretation. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT ultra-high-frequency incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that in comparison to the alternative prior information sources, ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the altitude distribution of electron density. With an ionosonde at continuous disposal, the presented method enhances stand-alone near-real-time ionospheric tomography for the given conditions significantly.
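
    For readers unfamiliar with the underlying update, a minimal linear-Gaussian sketch of Bayesian tomography is given below: a prior mean and covariance on a discretized electron-density profile are conditioned on a few line-integral measurements. The geometry, covariance model and noise levels are invented, and this is not the authors' GMRF implementation.

```python
# Minimal linear-Gaussian Bayesian inversion sketch: posterior mean/covariance
# for a discretized electron-density profile given line-integral measurements,
# assuming a prior mean and covariance are available (a smooth exponential
# covariance stands in here for an ionosonde-informed prior).
import numpy as np

n = 50                                   # number of altitude cells
alt = np.linspace(100, 600, n)           # altitude grid (km)
prior_mean = 1e11 * np.exp(-0.5 * ((alt - 300) / 80) ** 2)   # crude profile
corr = np.exp(-np.abs(alt[:, None] - alt[None, :]) / 100.0)
C_m = (0.3 * prior_mean)[:, None] * corr * (0.3 * prior_mean)[None, :]

# Forward operator: a few line integrals through the grid (toy geometry).
rng = np.random.default_rng(1)
G = rng.uniform(0, 1, (8, n)) < 0.4      # which cells each ray crosses
G = G.astype(float) * (alt[1] - alt[0]) * 1e3        # path length in metres
C_d = np.diag(np.full(8, 1e16 ** 2))     # measurement noise covariance

truth = prior_mean * (1 + 0.4 * np.sin(alt / 60.0))
d = G @ truth + rng.normal(0, 1e16, 8)

# Standard Gaussian conditioning (posterior mean and covariance).
K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + C_d)
post_mean = prior_mean + K @ (d - G @ prior_mean)
post_cov = C_m - K @ G @ C_m
print("posterior sd / prior sd at 300 km:",
      np.sqrt(post_cov[n // 2, n // 2]) / np.sqrt(C_m[n // 2, n // 2]))
```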

  20. A unified probabilistic approach to improve spelling in an event-related potential-based brain-computer interface.

    PubMed

    Kindermans, Pieter-Jan; Verschore, Hannes; Schrauwen, Benjamin

    2013-10-01

    In recent years, in an attempt to maximize performance, machine learning approaches for event-related potential (ERP) spelling have become more and more complex. In this paper, we have taken a step back as we wanted to improve the performance without building an overly complex model that cannot be used by the community. Our research resulted in a unified probabilistic model for ERP spelling, which is based on only three assumptions and incorporates language information. On top of that, the probabilistic nature of our classifier yields a natural dynamic stopping strategy. Furthermore, our method uses the same parameters across 25 subjects from three different datasets. We show that our classifier, when enhanced with language models and dynamic stopping, improves the spelling speed and accuracy drastically. Additionally, we would like to point out that as our model is entirely probabilistic, it can easily be used as the foundation for complex systems in future work. All our experiments are executed on publicly available datasets to allow for future comparison with similar techniques.
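
    A toy sketch of the probabilistic idea, hedged and not the authors' model, is shown below: per-flash classifier scores update a posterior over candidate symbols that starts from a language-model prior, and spelling stops once the posterior exceeds a threshold (dynamic stopping). Scores, priors and the threshold are invented.

```python
# Toy sketch of probabilistic ERP spelling with a language-model prior and
# dynamic stopping (illustrative only; scores and priors are made up).
import numpy as np

symbols = list("ABCDEF")
rng = np.random.default_rng(2)
target = 3                                    # index of the attended symbol

lm_prior = np.array([0.10, 0.15, 0.15, 0.30, 0.15, 0.15])  # hypothetical LM prior
log_post = np.log(lm_prior)

def classifier_score(flashed_is_target):
    """Pretend classifier output: higher on average when the flash is attended."""
    mu = 1.0 if flashed_is_target else -1.0
    return rng.normal(mu, 1.0)

def loglik(score, flashed_is_target):
    # Gaussian log-likelihood up to a constant shared by all hypotheses.
    mu = 1.0 if flashed_is_target else -1.0
    return -0.5 * (score - mu) ** 2

for iteration in range(1, 51):
    for flashed in range(len(symbols)):       # one flash per symbol per iteration
        s = classifier_score(flashed == target)
        for k in range(len(symbols)):         # update every symbol hypothesis
            log_post[k] += loglik(s, flashed == k)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    if post.max() > 0.99:                     # dynamic stopping criterion
        break
print(f"decided '{symbols[post.argmax()]}' after {iteration} iterations, "
      f"P = {post.max():.3f}")
```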

  1. Probabilistic interpretation of Peelle's pertinent puzzle and its resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Kenneth M.; Kawano, T.; Talou, P.

    2004-01-01

    Peelle's Pertinent Puzzle (PPP) states a seemingly plausible set of measurements with their covariance matrix, which produce an implausible answer. To answer the PPP question, we describe a reasonable experimental situation that is consistent with the PPP solution. The confusion surrounding the PPP arises in part because of its imprecise statement, which permits a variety of interpretations and resulting answers, some of which seem implausible. We emphasize the importance of basing the analysis on an unambiguous probabilistic model that reflects the experimental situation. We present several different models of how the measurements quoted in the PPP problem could be obtained, and interpret their solution in terms of a detailed probabilistic analysis. We suggest a probabilistic approach to handling uncertainties about which model to use.
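
    For concreteness, the commonly quoted numerical form of the puzzle (these specific values are the textbook version, not stated in this record) takes two measurements of the same quantity, 1.5 and 1.0, each with a 10% independent statistical uncertainty and a fully correlated 20% normalization uncertainty; the generalized least-squares combination then falls below both measurements, at about 0.88 ± 0.22.

```python
# Worked example of the commonly quoted Peelle's Pertinent Puzzle numbers
# (the specific values are the textbook version, not taken from this record).
import numpy as np

y = np.array([1.5, 1.0])              # the two measurements
stat = 0.10 * y                       # 10% independent statistical errors
norm = 0.20 * y                       # 20% fully correlated normalization error

V = np.diag(stat ** 2) + np.outer(norm, norm)   # full covariance matrix
ones = np.ones(2)
Vinv = np.linalg.inv(V)

mean = ones @ Vinv @ y / (ones @ Vinv @ ones)   # GLS / weighted average
sd = np.sqrt(1.0 / (ones @ Vinv @ ones))
print(f"combined value = {mean:.3f} +/- {sd:.3f}")  # ~0.88 +/- 0.22, below both
```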

  2. Probabilistic Interpretation of Peelle's Pertinent Puzzle and its Resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Kenneth M.; Kawano, Toshihiko; Talou, Patrick

    2005-05-24

    Peelle's Pertinent Puzzle (PPP) states a seemingly plausible set of measurements with their covariance matrix, which produce an implausible answer. To answer the PPP question, we describe a reasonable experimental situation that is consistent with the PPP solution. The confusion surrounding the PPP arises in part because of its imprecise statement, which permits a variety of interpretations and resulting answers, some of which seem implausible. We emphasize the importance of basing the analysis on an unambiguous probabilistic model that reflects the experimental situation. We present several different models of how the measurements quoted in the PPP problem could be obtained, and interpret their solution in terms of a detailed probabilistic analysis. We suggest a probabilistic approach to handling uncertainties about which model to use.

  3. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates.

    PubMed

    Wagner, Peter J

    2012-02-23

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.

  4. Quantification and Segmentation of Brain Tissues from MR Images: A Probabilistic Neural Network Approach

    PubMed Central

    Wang, Yue; Adalý, Tülay; Kung, Sun-Yuan; Szabo, Zsolt

    2007-01-01

    This paper presents a probabilistic neural network based technique for unsupervised quantification and segmentation of brain tissues from magnetic resonance images. It is shown that this problem can be solved by distribution learning and relaxation labeling, resulting in an efficient method that may be particularly useful in quantifying and segmenting abnormal brain tissues where the number of tissue types is unknown and the distributions of tissue types heavily overlap. The new technique uses suitable statistical models for both the pixel and context images and formulates the problem in terms of model-histogram fitting and global consistency labeling. The quantification is achieved by probabilistic self-organizing mixtures and the segmentation by a probabilistic constraint relaxation network. The experimental results show the efficient and robust performance of the new algorithm and that it outperforms the conventional classification based approaches. PMID:18172510

  5. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.

  6. Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?

    PubMed Central

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed a maximum standard deviation of 0.3 times body-weight and a maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found that subject-specific models are not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be affected by an uncertainty of the same order of magnitude as their value, although this condition has a low probability of occurring. PMID:25390896

  7. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  8. An interval model updating strategy using interval response surface models

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

    2015-08-01

    Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information of a structure. At this moment an interval model updating procedure shows its superiority in the aspect of problem simplification since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.

  9. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
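
    The sketch below illustrates, with entirely invented numbers, how a bootstrap can feed a probabilistic sensitivity analysis of a simple two-strategy decision model: patient-level eradication outcomes are resampled, and each replicate is propagated through the cost model. It is a hedged illustration, not the authors' model or data.

```python
# Hypothetical sketch of a probabilistic sensitivity analysis that uses the
# bootstrap for the eradication-probability inputs of a simple two-strategy
# decision model (all numbers are invented for illustration).
import numpy as np

rng = np.random.default_rng(3)
# Patient-level trial outcomes (1 = eradication) for the two regimens.
trial_a = rng.binomial(1, 0.85, 120)
trial_b = rng.binomial(1, 0.75, 120)

cost_drug = {"A": 220.0, "B": 160.0}
cost_failure = 900.0          # downstream cost when eradication fails

def expected_cost(p_eradicate, drug):
    return cost_drug[drug] + (1.0 - p_eradicate) * cost_failure

results = []
for _ in range(5000):                     # bootstrap replicates
    p_a = rng.choice(trial_a, trial_a.size, replace=True).mean()
    p_b = rng.choice(trial_b, trial_b.size, replace=True).mean()
    d_cost = expected_cost(p_a, "A") - expected_cost(p_b, "B")
    d_effect = p_a - p_b
    results.append((d_cost, d_effect))

results = np.array(results)
# Ratio of mean incremental cost to mean incremental effect (one convention).
icer = results[:, 0].sum() / results[:, 1].sum()
print("P(strategy A is cheaper):", np.mean(results[:, 0] < 0))
print("ICER (cost per extra eradication):", round(icer, 1))
```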

  10. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115

  11. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of gas turbines, a distributed collaborative probabilistic design method based on support vector machine regression (SR), referred to as DCSRM, is proposed by integrating the distributed collaborative response surface method with a support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea of DCSRM is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is accomplished to verify the proposed DCSRM. The analysis results reveal the optimal static blade-tip clearance of the HPT for designing the BTRRC and for improving the performance and reliability of the aeroengine. The comparison of methods shows that DCSRM has high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.

  12. Probabilistic modeling of bifurcations in single-cell gene expression data using a Bayesian mixture of factor analyzers.

    PubMed

    Campbell, Kieran R; Yau, Christopher

    2017-03-15

    Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of genes driving the bifurcation process. We additionally propose an Empirical-Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.

  13. Localisation of an Unknown Number of Land Mines Using a Network of Vapour Detectors

    PubMed Central

    Chhadé, Hiba Haj; Abdallah, Fahed; Mougharbel, Imad; Gning, Amadou; Julier, Simon; Mihaylova, Lyudmila

    2014-01-01

    We consider the problem of localising an unknown number of land mines using concentration information provided by a wireless sensor network. A number of vapour sensors/detectors, deployed in the region of interest, are able to detect the concentration of the explosive vapours, emanating from buried land mines. The collected data is communicated to a fusion centre. Using a model for the transport of the explosive chemicals in the air, we determine the unknown number of sources using a Principal Component Analysis (PCA)-based technique. We also formulate the inverse problem of determining the positions and emission rates of the land mines using concentration measurements provided by the wireless sensor network. We present a solution for this problem based on a probabilistic Bayesian technique using a Markov chain Monte Carlo sampling scheme, and we compare it to the least squares optimisation approach. Experiments conducted on simulated data show the effectiveness of the proposed approach. PMID:25384008

  14. Probabilistic analysis for fatigue strength degradation of materials

    NASA Technical Reports Server (NTRS)

    Royce, Lola

    1989-01-01

    This report presents the results of the first year of a research program conducted for NASA-LeRC by the University of Texas at San Antonio. The research included development of methodology that provides a probabilistic treatment of lifetime prediction of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Linear elastic fracture mechanics is utilized in the latter model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs, RANDOM2, RANDOM3, and RANDOM4. These programs determine the random lifetime of an engine component, in mechanical load cycles, to reach a critical fatigue strength or crack size. The material considered was a cast nickel base superalloy, one typical of those used in the Space Shuttle Main Engine.

  15. Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian networks and hidden Markov models are introduced as examples of widely used data-driven classification/modeling strategies.

  16. HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Michael D.; Dawson, William A.; Hogg, David W.

    2015-07-01

    Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.

  17. Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo

    NASA Astrophysics Data System (ADS)

    Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik

    2018-05-01

    Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods, the particle Metropolis-Hastings algorithm, which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods, including particle Metropolis-Hastings, to a large group of users without requiring them to know all the underlying mathematical details.
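
    A minimal particle Metropolis-Hastings sketch is given below for a toy linear-Gaussian state-space model with one unknown parameter: a bootstrap particle filter produces an unbiased estimate of the likelihood, and that noisy estimate is used directly in the Metropolis-Hastings acceptance ratio. This illustrates the general algorithm, not the tutorial's own code or modeling language.

```python
# Minimal particle Metropolis-Hastings sketch for a toy linear-Gaussian
# state-space model with one unknown parameter (illustrative only).
import numpy as np

rng = np.random.default_rng(4)

# Simulate data from x_t = a*x_{t-1} + v_t,  y_t = x_t + e_t.
a_true, q, r, T = 0.8, 0.5, 0.5, 100
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))
ys = np.array(ys)

def pf_loglik(a, n_particles=200):
    """Bootstrap particle filter estimate of log p(y_{1:T} | a)."""
    particles = np.zeros(n_particles)
    ll = 0.0
    for y in ys:
        particles = a * particles + rng.normal(0, np.sqrt(q), n_particles)
        logw = -0.5 * np.log(2 * np.pi * r) - 0.5 * (y - particles) ** 2 / r
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                 # log of the average weight
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        particles = particles[idx]                 # multinomial resampling
    return ll

# Metropolis-Hastings over 'a' using the noisy particle-filter likelihood.
a, ll = 0.5, pf_loglik(0.5)
chain = []
for _ in range(2000):
    a_prop = a + rng.normal(0, 0.05)
    if abs(a_prop) < 1.0:                          # uniform prior on (-1, 1)
        ll_prop = pf_loglik(a_prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            a, ll = a_prop, ll_prop
    chain.append(a)
print("posterior mean of a:", round(np.mean(chain[500:]), 3))
```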

  18. Probabilistic Models for Solar Particle Events

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.

    2009-01-01

    Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst case spectrum as a function of confidence level. The spectral representation that best fits these worst case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.

  19. Probabilistic drug connectivity mapping

    PubMed Central

    2014-01-01

    Background: The aim of connectivity mapping is to match drugs using drug-treatment gene expression profiles from multiple cell lines. This can be viewed as an information retrieval task, with the goal of finding the most relevant profiles for a given query drug. We infer the relevance for retrieval by data-driven probabilistic modeling of the drug responses, resulting in probabilistic connectivity mapping, and further consider the available cell lines as different data sources. We use a special type of probabilistic model to separate what is shared and specific between the sources, in contrast to earlier connectivity mapping methods that have intentionally aggregated all available data, neglecting information about the differences between the cell lines. Results: We show that the probabilistic multi-source connectivity mapping method is superior to alternatives in finding functionally and chemically similar drugs from the Connectivity Map data set. We also demonstrate that an extension of the method is capable of retrieving combinations of drugs that match different relevant parts of the query drug response profile. Conclusions: The probabilistic modeling-based connectivity mapping method provides a promising alternative to earlier methods. Principled integration of data from different cell lines helps to identify relevant responses for specific drug repositioning applications. PMID:24742351

  20. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.

  1. Generalized probabilistic scale space for image restoration.

    PubMed

    Wong, Alexander; Mishra, Akshaya K

    2010-10-01

    A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.

  2. Probabilistic Based Modeling and Simulation Assessment

    DTIC Science & Technology

    2010-06-01

    ... different crash and blast scenarios. With the integration of the high-fidelity neck and head model, a methodology to calculate the probability of injury ... variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment impact, and ... The first area of focus is to develop a methodology to integrate probabilistic analysis into finite element analysis of vehicle collisions and blast.

  3. Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation.

    PubMed

    Pearce, Marcus T

    2018-05-11

    Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception-expectation, emotion, memory, similarity, segmentation, and meter-can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.

  4. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates

    PubMed Central

    Wagner, Peter J.

    2012-01-01

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution. PMID:21795266

  5. Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia

    NASA Astrophysics Data System (ADS)

    Mather, B.; Moresi, L. N.; Rayner, P. J.

    2017-12-01

    The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters are connected to the available geophysical observations and the a priori information used to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that are assumed to be homogeneous. However, the boundaries of these lithologies are, in themselves, uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high-dimensional probability spaces compared to probabilistic inversion techniques that operate in the 'forward' mode, but at the expense of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis that perturbs our observations and a priori information within their probability distributions to estimate the posterior uncertainty of thermo-chemical parameters in the crust.

  6. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    NASA Astrophysics Data System (ADS)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to the identification of elastic-plastic damage model parameters. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost disadvantage of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed from an experimental design in order to model the relationship between the material parameters and the objective function values in the inverse problem, and the optimization procedure is then executed using the metamodel. The application of the presented material model and the proposed parameter identification method to the standard A 2017-T4 tensile test shows that the presented elastic-plastic damage model is adequate to describe the material's mechanical behaviour and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
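
    A hedged sketch of the metamodel-based loop is given below: an experimental design over the parameter ranges is evaluated with a stand-in for the expensive simulation, a Kriging (Gaussian-process) metamodel is fitted to the resulting objective values, and the identification is run on the metamodel. The parameter names, ranges and the quadratic stand-in objective are invented, not the article's damage model.

```python
# Hedged sketch of metamodel-based inverse identification: a Kriging surrogate
# replaces the expensive simulation of the objective, and the identification
# is run on the surrogate (toy two-parameter example).
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)

def expensive_objective(theta):
    """Stand-in for || simulated - measured force-displacement curve ||^2."""
    E, sigma_y = theta            # Young's modulus (GPa), yield stress (GPa)
    return (E - 72.0) ** 2 / 10.0 + (sigma_y - 0.28) ** 2 * 400.0

# 1) Design of experiments over plausible parameter ranges.
X = np.column_stack([rng.uniform(60, 80, 60), rng.uniform(0.2, 0.4, 60)])
y = np.array([expensive_objective(t) for t in X])

# 2) Fit the Kriging metamodel of the objective function.
gp = GaussianProcessRegressor(ConstantKernel() * RBF([5.0, 0.05]),
                              normalize_y=True).fit(X, y)

# 3) Identify parameters by minimizing the surrogate prediction.
res = minimize(lambda t: gp.predict(t.reshape(1, -1))[0],
               x0=np.array([70.0, 0.3]),
               bounds=[(60, 80), (0.2, 0.4)])
print("identified parameters (E in GPa, yield stress in GPa):", np.round(res.x, 3))
```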

  7. Probabilistic estimation of residential air exchange rates for ...

    EPA Pesticide Factsheets

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkley National Laboratory Infiltration model utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX) inputting study-specific data. The impact on the modeled AER of using publically available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AER they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions reflecting within- and between-city differences, helping reduce error in estimates of air pollutant exposure. Published in the Journal of

  8. From information processing to decisions: Formalizing and comparing psychologically plausible choice models.

    PubMed

    Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten

    2017-08-01

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can be directly compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
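
    One reading of a probabilistic take-the-best strategy with rank-ordered error probabilities is sketched below (a hedged illustration, not the paper's formalization): cues are inspected in order of validity, the first discriminating cue determines the choice, and the choice is flipped with an error probability that grows with cue rank.

```python
# Illustrative sketch of a probabilistic take-the-best strategy: the first
# discriminating cue decides, and a rank-ordered error probability may flip it.
import numpy as np

rng = np.random.default_rng(6)

def probabilistic_ttb(cues_a, cues_b, error_probs):
    """Return 'A' or 'B' for two options with cue vectors ordered by validity."""
    for rank, (ca, cb) in enumerate(zip(cues_a, cues_b)):
        if ca != cb:                               # first discriminating cue
            choice = "A" if ca > cb else "B"
            if rng.uniform() < error_probs[rank]:  # rank-ordered error probability
                choice = "B" if choice == "A" else "A"
            return choice
    return rng.choice(["A", "B"])                  # guess if nothing discriminates

cues_a = [1, 0, 1, 0]        # binary cue values, already sorted by validity
cues_b = [1, 1, 0, 0]
error_probs = [0.05, 0.10, 0.15, 0.20]             # non-decreasing in cue rank
choices = [probabilistic_ttb(cues_a, cues_b, error_probs) for _ in range(10000)]
print("P(choose B):", choices.count("B") / len(choices))
```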

  9. Probabilistic modeling of anatomical variability using a low dimensional parameterization of diffeomorphisms.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2017-10-01

    We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of the high dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than the state-of-the-art method such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA) that operate in the high dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that accounts for uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. The model simulation outputs are trained with observed water stage data during one flood event, and the BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
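
    The core of BMA in this setting is a weighted mixture of member predictive densities whose weights are estimated by EM. The sketch below shows that weight estimation for Gaussian members centred on each ensemble member's prediction, using synthetic stage data and no bias correction; it is an illustration of the standard EM recipe, not the study's configuration.

```python
# Compact sketch of Bayesian model averaging weights via EM (Gaussian members
# centred on each ensemble member's prediction; toy stage data).
import numpy as np

rng = np.random.default_rng(7)
T, K = 200, 5                                  # time steps, ensemble members
truth = np.sin(np.linspace(0, 6, T)) * 2 + 10
forecasts = truth[None, :] + rng.normal(0, [[0.2], [0.4], [0.6], [0.8], [1.0]], (K, T))
obs = truth + rng.normal(0, 0.1, T)

w = np.full(K, 1.0 / K)                        # initial equal weights
sigma2 = 1.0                                   # common predictive variance
for _ in range(200):                           # EM iterations
    # E-step: responsibility of member k for observation t.
    dens = np.exp(-0.5 * (obs - forecasts) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    z = w[:, None] * dens
    z /= z.sum(axis=0, keepdims=True)
    # M-step: update weights and the common variance.
    w = z.mean(axis=1)
    sigma2 = np.sum(z * (obs - forecasts) ** 2) / T
print("BMA weights:", np.round(w, 3))
print("BMA mean prediction RMSE:",
      round(float(np.sqrt(np.mean((w @ forecasts - obs) ** 2))), 3))
```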

  11. Microseismic monitoring of soft-rock landslide: contribution of a 3D velocity model for the location of seismic sources.

    NASA Astrophysics Data System (ADS)

    Floriane, Provost; Jean-Philippe, Malet; Cécile, Doubre; Julien, Gance; Alessia, Maggi; Agnès, Helmstetter

    2015-04-01

    Characterizing the micro-seismic activity of landslides is an important parameter for a better understanding of the physical processes controlling landslide behaviour. However, the location of seismic sources on landslides is a challenging task, mostly because of (a) the recording system geometry, (b) the lack of clear P-wave arrivals and clear wave differentiation, and (c) the heterogeneous velocities of the ground. The objective of this work is therefore to test whether the integration of a 3D velocity model in probabilistic seismic source location codes improves the quality of the determination, especially in depth. We studied the clay-rich landslide of Super-Sauze (French Alps). Most of the seismic events (rockfalls, slidequakes, tremors...) are generated in the upper part of the landslide near the main scarp. The seismic recording system is composed of two antennas with four vertical seismometers each, located on the east and west sides of the seismically active part of the landslide. A refraction seismic campaign was conducted in August 2014 and a 3D P-wave model was estimated using the Quasi-Newton tomography inversion algorithm. The shots of the seismic campaign are used as calibration shots to test the performance of the different location methods and to further update the 3D velocity model. Natural seismic events are detected with a semi-automatic technique using a frequency threshold. The first arrivals are picked using a kurtosis-based method and compared to manual picking. Several location methods were finally tested. We compared a non-linear probabilistic method coupled with the 3D P-wave model and a beam-forming method inverted for an apparent velocity. We found that the Quasi-Newton tomography inversion algorithm provides results coherent with the original underlying topography. The velocity ranges from 500 m.s-1 at the surface to 3000 m.s-1 in the bedrock. For the majority of the calibration shots, the use of a 3D velocity model significantly improves the results of the location procedure using P-wave arrivals. All the shots were made 50 centimeters below the surface and hence the vertical error could not be determined with the seismic campaign. We further discriminate between the rockfalls and the slidequakes occurring on the landslide using the depth computed with the 3D velocity model. This could be an additional criterion to automatically classify the events.

  12. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and in real environments.

  13. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    PubMed Central

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and in real environments. PMID:26404284

  14. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
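
    The joint detection probability underlying such probabilistic coverage models is typically 1 − ∏_i (1 − p_i), assuming independent sensors. The sketch below uses an invented exponential-decay sensing model and a simple greedy pass to reach a target detection probability ε; it illustrates the idea rather than the paper's PSCA algorithm.

```python
# Illustrative sketch of a probabilistic sensing model and joint detection:
# each sensor detects a target with a distance-dependent probability, and a
# greedy pass adds sensors until the epsilon detection threshold is met.
import numpy as np

rng = np.random.default_rng(8)
sensors = rng.uniform(0, 100, (30, 2))         # sensor positions in a 100x100 field
target = np.array([55.0, 40.0])
beta, eps = 0.08, 0.95                         # decay rate, required detection prob.

def p_detect(sensor):
    d = np.linalg.norm(sensor - target)
    return np.exp(-beta * d)                   # exponential probabilistic sensing model

probs = np.array([p_detect(s) for s in sensors])

# Joint detection probability with independent sensors: 1 - prod(1 - p_i).
# Greedy selection: add the most informative sensor until coverage >= eps.
order = np.argsort(-probs)
selected, miss = [], 1.0
for idx in order:
    selected.append(idx)
    miss *= 1.0 - probs[idx]
    if 1.0 - miss >= eps:
        break
print(f"{len(selected)} sensors give joint detection prob {1 - miss:.3f}")
```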

  15. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    PubMed

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system, and an ideal joint pdf is presented emphasising how the uncertainty can be systematically incorporated in the absence of reliable systems models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs) where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations to the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model is performed to observe the failure pattern and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data gives a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, even though a deterministic optimization yields cost-effective structures, they become highly unreliable if the uncertainty associated with the system (material properties, loading, etc.) is not represented in the solution process. Reliable and optimal solutions can be obtained by performing reliability analysis along with deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques used to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, the reliability analysis concepts and RBDO are applied to finite element 2D truss problems and a planar beam problem, and the results are presented and discussed.
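
    A toy sketch of the Monte Carlo idea only (the actual study runs LS-DYNA fabric impact models): material strength is sampled from an assumed distribution and compared with a load effect to estimate a probability of failure. All distribution parameters below are invented for illustration.

```python
"""Monte Carlo estimate of a failure probability from a simple limit-state
function g = strength - stress (illustrative distributions, not Kevlar data)."""
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumed distributions: lognormal strength, normal stress (illustrative only).
strength = rng.lognormal(mean=np.log(3.0), sigma=0.10, size=n)   # GPa
stress = rng.normal(loc=2.4, scale=0.25, size=n)                  # GPa

g = strength - stress                  # g < 0 means failure
pf = np.mean(g < 0.0)
print(f"estimated probability of failure: {pf:.4f}")
```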

  17. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.

  18. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts first, which turned out to be of limited applicability due to highly overestimated migration results. Probabilistic migration modelling gives the possibility to consider uncertainty of the mass-transfer parameters as well as other model inputs. With respect to a functional barrier, the most important parameters among others are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and is capable of applying Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (i.e., diffusion coefficient and layer thickness), predicts migration results with related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in view of three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic migration modelling and related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, as is identifying associated migration risks and potential safety concerns at an early stage of packaging development. Furthermore, dedicated material selection exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
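
    A hedged sketch of probabilistic functional-barrier screening, not the tool described above: the barrier's diffusion coefficient and thickness are sampled from assumed input distributions and the classical diffusion lag time t_lag = L^2 / (6 D) is compared with an intended shelf life. All values are illustrative placeholders.

```python
"""Monte Carlo screening of a functional barrier via the diffusion lag time
(simplified stand-in for a full migration model; parameter values assumed)."""
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Assumed input distributions (illustrative values only).
D = rng.lognormal(mean=np.log(1e-13), sigma=0.7, size=n)   # diffusion coeff, cm^2/s
L = rng.normal(loc=20e-4, scale=2e-4, size=n)              # barrier thickness, cm

t_lag_days = L**2 / (6.0 * D) / 86400.0                    # breakthrough lag time
shelf_life_days = 365.0

p_breakthrough = np.mean(t_lag_days < shelf_life_days)
lo, hi = np.percentile(t_lag_days, [5, 95])
print(f"P(breakthrough within shelf life) = {p_breakthrough:.3f}")
print(f"90% interval of lag time: {lo:.0f} - {hi:.0f} days")
```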

  19. Lung Cancer Assistant: a hybrid clinical decision support application for lung cancer care.

    PubMed

    Sesen, M Berkan; Peake, Michael D; Banares-Alcantara, Rene; Tse, Donald; Kadir, Timor; Stanley, Roz; Gleeson, Fergus; Brady, Michael

    2014-09-06

    Multidisciplinary team (MDT) meetings are becoming the model of care for cancer patients worldwide. While MDTs have improved the quality of cancer care, the meetings impose substantial time pressure on the members, who generally attend several such MDTs. We describe Lung Cancer Assistant (LCA), a clinical decision support (CDS) prototype designed to assist experts with treatment selection decisions in lung cancer MDTs. A novel feature of LCA is its ability to provide rule-based and probabilistic decision support within a single platform. The guideline-based CDS is based on clinical guideline rules, while the probabilistic CDS is based on a Bayesian network trained on the English Lung Cancer Audit Database (LUCADA). We assess rule-based and probabilistic recommendations based on their concordances with the treatments recorded in LUCADA. Our results reveal that the guideline rule-based recommendations perform well in simulating the recorded treatments, with exact and partial concordance rates of 0.57 and 0.79, respectively. On the other hand, the exact and partial concordance rates achieved with probabilistic recommendations are relatively poorer, at 0.27 and 0.76. However, probabilistic decision support fulfils a complementary role in providing accurate survival estimations. Compared to recorded treatments, both CDS approaches promote higher resection rates and multimodality treatments.

  20. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations

    PubMed Central

    Zhang, Yi; Ren, Jinchang; Jiang, Jianmin

    2015-01-01

    Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM is in some cases equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitate soft decision making. In total, four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributed and balanced/unbalanced samples which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions. PMID:26089862

  1. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations.

    PubMed

    Zhang, Yi; Ren, Jinchang; Jiang, Jianmin

    2015-01-01

    Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM is in some cases equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitate soft decision making. In total, four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributed and balanced/unbalanced samples which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.

  2. The impact of personalized probabilistic wall thickness models on peak wall stress in abdominal aortic aneurysms.

    PubMed

    Biehler, J; Wall, W A

    2018-02-01

    If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Practical guidance on representing the heteroscedasticity of residual errors of hydrological predictions

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George

    2016-04-01

    Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), log-sinh and the inverse transformation. The study reports (1) a theoretical comparison of heteroscedasticity approaches, (2) an empirical evaluation of heteroscedasticity approaches across multiple catchments, hydrological models and performance metrics and (3) an interpretation of the empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
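
    A short sketch of two of the 'transformational' approaches named above, applied to streamflow before residuals are computed so that transformed errors are closer to homoscedastic; the transformation parameters below are placeholders, not the study's fitted values.

```python
"""Box-Cox and log-sinh transforms used to stabilize heteroscedastic
streamflow residuals (illustrative parameter values)."""
import numpy as np

def box_cox(q, lam, offset=0.0):
    """Box-Cox transform; reduces to log for lam -> 0."""
    q = np.asarray(q, dtype=float) + offset
    if abs(lam) < 1e-8:
        return np.log(q)
    return (q**lam - 1.0) / lam

def log_sinh(q, a, b):
    """Log-sinh transform z = log(sinh(a + b*q)) / b."""
    q = np.asarray(q, dtype=float)
    return np.log(np.sinh(a + b * q)) / b

obs = np.array([0.5, 2.0, 15.0, 80.0])     # observed flows
sim = np.array([0.7, 1.6, 18.0, 60.0])     # simulated flows

raw_resid = obs - sim
bc_resid = box_cox(obs, lam=0.2) - box_cox(sim, lam=0.2)
ls_resid = log_sinh(obs, a=0.01, b=0.1) - log_sinh(sim, a=0.01, b=0.1)
print(raw_resid, bc_resid, ls_resid, sep="\n")
```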

  4. The research on medical image classification algorithm based on PLSA-BOW model.

    PubMed

    Cao, C H; Cao, H L

    2016-04-29

    With the rapid development of modern medical imaging technology, medical image classification has become more important for medical diagnosis and treatment. To address the problem of polysemous words and synonyms, this study combines the bag-of-words model with PLSA (Probabilistic Latent Semantic Analysis) and proposes the PLSA-BOW (Probabilistic Latent Semantic Analysis-Bag of Words) model. In this paper we carry the bag-of-words model from the text domain over to the image domain and build a visual bag-of-words model. The method enables bag-of-words-based classification to be further improved in accuracy. The experimental results show that the PLSA-BOW model for medical image classification leads to a more accurate classification.

  5. From cyclone tracks to the costs of European winter storms: A probabilistic loss assessment model

    NASA Astrophysics Data System (ADS)

    Renggli, Dominik; Corti, Thierry; Reese, Stefan; Wueest, Marc; Viktor, Elisabeth; Zimmerli, Peter

    2014-05-01

    The quantitative assessment of the potential losses of European winter storms is essential for the economic viability of a global reinsurance company. For this purpose, reinsurance companies generally use probabilistic loss assessment models. This work presents an innovative approach to develop physically meaningful probabilistic events for Swiss Re's new European winter storm loss model. The meteorological hazard component of the new model is based on cyclone and windstorm tracks identified in the 20th Century Reanalysis data. The knowledge of the evolution of winter storms both in time and space allows the physically meaningful perturbation of properties of historical events (e.g. track, intensity). The perturbation includes a random element but also takes the local climatology and the evolution of the historical event into account. The low-resolution wind footprints taken from 20th Century Reanalysis are processed by a statistical-dynamical downscaling to generate high-resolution footprints of the historical and probabilistic winter storm events. Downscaling transfer functions are generated using ENSEMBLES regional climate model data. The result is a set of reliable probabilistic events representing thousands of years. The event set is then combined with country- and risk-specific vulnerability functions and detailed market- or client-specific exposure information to compute (re-)insurance risk premiums.

  6. Fatigue crack growth model RANDOM2 user manual. Appendix 1: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN program RANDOM2 is presented in the form of a user's manual. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Details of the theoretical background, input data instructions, and a sample problem illustrating the use of the program are included.

  7. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual. Appendix 2: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.

  8. Unbiased, scalable sampling of protein loop conformations from probabilistic priors.

    PubMed

    Zhang, Yajia; Hauser, Kris

    2013-01-01

    Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.

  9. Unbiased, scalable sampling of protein loop conformations from probabilistic priors

    PubMed Central

    2013-01-01

    Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175

  10. Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit.

    PubMed

    Frommholz, Ingo; Roelleke, Thomas

    2016-01-01

    Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and a nice conceptual idea to model Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules, and the semantics associated with the evaluation of the programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague match of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed, and therefore, HySpirit provides probabilistic relational indexes and parallel and distributed processing. The main contribution of this paper is a consolidated view on the methods of the HySpirit system to make PDatalog applicable in real-scale applications that involve a wide range of requirements typical for data (information) management and analysis.

  11. Stochastic model for fatigue crack size and cost effective design decisions. [for aerospace structures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1975-01-01

    This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.

  12. Probabilistic choice models in health-state valuation research: background, theories, assumptions and applications.

    PubMed

    Arons, Alexander M M; Krabbe, Paul F M

    2013-02-01

    Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the area of health evaluation. They increasingly use discrete choice models based on random utility theory to derive values for healthcare goods or services. Recent attempts have been made to use discrete choice models as an alternative method to derive values for health states. In this article, various probabilistic choice models are described according to their underlying theory. A historical overview traces their development and applications in diverse fields. The discussion highlights some theoretical and technical aspects of the choice models and their similarity and dissimilarity. The objective of the article is to elucidate the position of each model and their applications for health-state valuation.
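
    A small illustration of the discrete-choice machinery referred to above: under a random utility model with Gumbel errors, the probability of choosing one health state from a choice set follows the multinomial logit form. The utilities below are made-up numbers, not estimates from any valuation study.

```python
"""Multinomial logit choice probabilities from deterministic utilities
(illustrative example of a probabilistic choice model)."""
import numpy as np

def logit_choice_probabilities(utilities):
    """P(choose j) = exp(V_j) / sum_k exp(V_k), computed stably."""
    v = np.asarray(utilities, dtype=float)
    v = v - v.max()
    expv = np.exp(v)
    return expv / expv.sum()

# Deterministic utilities of three hypothetical health states in one choice task.
print(logit_choice_probabilities([0.8, 0.2, -0.5]))
```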

  13. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site, and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally, probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy, which should be used whenever possible. PMID:28686588
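
    An illustrative contrast, not drawn from the paper's data: a position weight matrix assigns a site probability as a product of per-position base probabilities, while a simple biophysical model maps an additive binding energy to occupancy through a saturating, non-linear function. The matrices, energies and chemical potential mu below are arbitrary.

```python
"""PWM site probability versus a saturating biophysical occupancy
(assumed toy matrices; illustrates the non-linear affinity-probability link)."""
import numpy as np

BASES = "ACGT"
# Assumed 4-position PWM (columns A, C, G, T per position).
pwm = np.array([[0.7, 0.1, 0.1, 0.1],
                [0.1, 0.7, 0.1, 0.1],
                [0.1, 0.1, 0.7, 0.1],
                [0.25, 0.25, 0.25, 0.25]])
# Assumed per-position binding energies (kT), lower = stronger.
energy = np.array([[0.0, 2.0, 2.0, 2.0],
                   [2.0, 0.0, 2.0, 2.0],
                   [2.0, 2.0, 0.0, 2.0],
                   [0.5, 0.5, 0.5, 0.5]])

def pwm_probability(site):
    idx = [BASES.index(b) for b in site]
    return float(np.prod(pwm[np.arange(len(site)), idx]))

def occupancy(site, mu):
    """Fermi-Dirac style occupancy: saturates toward 1 for strong sites / high mu."""
    idx = [BASES.index(b) for b in site]
    e = float(np.sum(energy[np.arange(len(site)), idx]))
    return 1.0 / (1.0 + np.exp(e - mu))

for site in ["ACGA", "ACGT", "TCGA"]:
    print(site, pwm_probability(site), occupancy(site, mu=3.0))
```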

  14. Probabilistic flood damage modelling at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2014-05-01

    Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models including stage-damage functions as well as multi-variate models. On the other hand the results are compared with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.

  15. Probabilistic distributions of pinhole defects in atomic layer deposited films on polymeric substrates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yersak, Alexander S., E-mail: alexander.yersak@colorado.edu; Lee, Yung-Cheng

    Pinhole defects in atomic layer deposition (ALD) coatings were measured in an area of 30 cm² in an ALD reactor, and these defects were represented by a probabilistic cluster model instead of a single defect-density value (number of defects per area). With the probabilistic cluster model, the pinhole defects were simulated over a manufacturing-scale surface area of ∼1 m². Large-area pinhole defect simulations were used to develop an improved and enhanced design method for ALD-based devices. A flexible thermal ground plane (FTGP) device requiring ALD hermetic coatings was used as an example. Using a single defect density value, it was determined that for an application with operation temperatures higher than 60 °C, the FTGP device would not be possible. The new probabilistic cluster model shows that up to 40.3% of the FTGP would be acceptable. With this new approach the manufacturing yield of ALD-enabled or other thin-film-based devices with different design configurations can be determined, which is important for guiding process optimization, control and design for manufacturability.
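
    A sketch of why a clustered defect model changes yield predictions; the cluster parameters and densities are illustrative, not the fitted model from this work. Defects are simulated either as a uniform Poisson process or as a Neyman-Scott cluster process with the same mean density, and the fraction of defect-free device tiles is compared.

```python
"""Compare device-tile yield for uniform versus clustered pinhole defects
(illustrative Monte Carlo, assumed parameter values)."""
import numpy as np

rng = np.random.default_rng(7)
AREA_CM = 100.0          # simulate a 100 cm x 100 cm (1 m^2) panel
TILE_CM = 5.0            # device tile size
MEAN_DENSITY = 0.02      # defects per cm^2

def yield_fraction(points):
    """Fraction of tiles containing no defects."""
    n_tiles = int(AREA_CM / TILE_CM)
    counts = np.zeros((n_tiles, n_tiles), dtype=int)
    ix = np.minimum((points[:, 0] / TILE_CM).astype(int), n_tiles - 1)
    iy = np.minimum((points[:, 1] / TILE_CM).astype(int), n_tiles - 1)
    np.add.at(counts, (ix, iy), 1)
    return np.mean(counts == 0)

# Uniform Poisson defects.
n_uniform = rng.poisson(MEAN_DENSITY * AREA_CM**2)
uniform_pts = rng.uniform(0, AREA_CM, size=(n_uniform, 2))

# Clustered defects: Poisson cluster centres with Gaussian-scattered children
# (on average 10 defects per cluster, same overall mean density).
n_clusters = rng.poisson(MEAN_DENSITY * AREA_CM**2 / 10.0)
centres = rng.uniform(0, AREA_CM, size=(n_clusters, 2))
children = [c + rng.normal(0, 1.0, size=(rng.poisson(10), 2)) for c in centres]
cluster_pts = np.clip(np.vstack(children) if children else np.empty((0, 2)), 0, AREA_CM)

print("yield, uniform model:  ", yield_fraction(uniform_pts))
print("yield, clustered model:", yield_fraction(cluster_pts))
```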

  16. Development and validation of deterioration models for concrete bridge decks - phase 2: mechanics-based degradation models.

    DOT National Transportation Integrated Search

    2013-06-01

    This report summarizes a research project aimed at developing degradation models for bridge decks in the state of Michigan based on durability mechanics. A probabilistic framework to implement local-level mechanistic-based models for predicting the c...

  17. Method and system for dynamic probabilistic risk assessment

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)

    2013-01-01

    The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.

  18. Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data

    NASA Astrophysics Data System (ADS)

    Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei

    2009-03-01

    We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consists of some conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model matches both the experimental data and is consistent with classical probability theory.

  19. Using Transformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions

    NASA Astrophysics Data System (ADS)

    Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.

    2014-12-01

    One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.

  20. Probabilistic population projections with migration uncertainty

    PubMed Central

    Azose, Jonathan J.; Ševčíková, Hana; Raftery, Adrian E.

    2016-01-01

    We produce probabilistic projections of population for all countries based on probabilistic projections of fertility, mortality, and migration. We compare our projections to those from the United Nations’ Probabilistic Population Projections, which uses similar methods for fertility and mortality but deterministic migration projections. We find that uncertainty in migration projection is a substantial contributor to uncertainty in population projections for many countries. Prediction intervals for the populations of Northern America and Europe are over 70% wider, whereas prediction intervals for the populations of Africa, Asia, and the world as a whole are nearly unchanged. Out-of-sample validation shows that the model is reasonably well calibrated. PMID:27217571

  1. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
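
    A sketch of the kind of model compared above (illustrative, not the paper's fitted hierarchical model): an inverted-S probability weighting function is applied to priors and likelihoods before Bayes' rule, with gamma as the individual- and condition-specific distortion parameter.

```python
"""Distorted Bayesian belief revision with a Tversky-Kahneman style
probability weighting function (toy urn-ball example)."""
import numpy as np

def weight(p, gamma):
    """Inverted-S weighting for gamma < 1; identity for gamma = 1."""
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1.0 / gamma)

def distorted_posterior(prior, likelihood, gamma):
    """Bayes' rule applied to distorted probabilities, then renormalized."""
    wp, wl = weight(prior, gamma), weight(likelihood, gamma)
    post = wp * wl
    return post / post.sum()

prior = np.array([0.7, 0.3])        # two urns
likelihood = np.array([0.6, 0.4])   # P(observed ball colour | urn)
for g in (1.0, 0.6):
    print(f"gamma={g}:", distorted_posterior(prior, likelihood, g))
```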

  2. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most of the physical properties of the Earth follow a power law (fractal distribution). Thus, the selection of an initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function which uses the mean, variance and Hurst coefficient of the model space to draw the initial model. Further, this initial model is used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.

  3. The influence of the free space environment on the superlight-weight thermal protection system: conception, methods, and risk analysis

    NASA Astrophysics Data System (ADS)

    Yatsenko, Vitaliy; Falchenko, Iurii; Fedorchuk, Viktor; Petrushynets, Lidiia

    2016-07-01

    This report focuses on the results of the EU project "Superlight-weight thermal protection system for space application (LIGHT-TPS)". The bottom line is an analysis of the influence of the free space environment on the superlight-weight thermal protection system (TPS). This report focuses on new methods that are based on the following models: synergetic, physical, and computational. This report concentrates on four approaches. The first concerns the synergetic approach. The synergetic approach to the solution of problems of self-controlled synthesis of structures and creation of self-organizing technologies is considered in connection with the super-problem of creating materials with new functional properties. Synergetics methods and mathematical design are considered according to actual problems of materials science. The second approach describes how optimization methods can be used to determine material microstructures with optimized or targeted properties. This technique enables one to find unexpected microstructures with exotic behavior (e.g., negative thermal expansion coefficients). The third approach concerns the dynamic probabilistic risk analysis of TPS elements with complex characterizations for damage, using a physical model of the TPS and a predictable level of ionizing radiation and space weather. Focus is given mainly to the TPS model, mathematical models for dynamic probabilistic risk assessment and software for the modeling and prediction of the influence of the free space environment. The probabilistic risk assessment method for the TPS is presented considering some deterministic and stochastic factors. The last approach concerns experimental research on the temperature distribution on the surface of a 150 x 150 x 20 mm honeycomb sandwich panel during diffusion welding in vacuum. Equipment that provides alignment of the temperature fields in a product for the formation of equal-strength welded joints is considered. Many tasks in computational materials science can be posed as optimization problems. The last approach is concerned with the generation of realizations of materials with specified but limited microstructural information: an intriguing inverse problem of both fundamental and practical importance. Computational models based upon the theories of molecular dynamics or quantum mechanics would enable the prediction and modification of fundamental materials properties. This problem is solved using deterministic and stochastic optimization techniques. The main optimization approaches in the frame of the EU project "Superlight-weight thermal protection system for space application" are discussed. An optimization approach to alloys for obtaining materials with the required properties, using modeling techniques and experimental data, will also be considered. This report is supported by the EU project "Superlight-weight thermal protection system for space application (LIGHT-TPS)".

  4. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data

    PubMed Central

    Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel

    2014-01-01

    This work is in line with an on-going effort tending toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work exposes a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198

  5. Modeling marine oily wastewater treatment by a probabilistic agent-based approach.

    PubMed

    Jing, Liang; Chen, Bing; Zhang, Baiyu; Ye, Xudong

    2018-02-01

    This study developed a novel probabilistic agent-based approach for modeling of marine oily wastewater treatment processes. It begins first by constructing a probability-based agent simulation model, followed by a global sensitivity analysis and a genetic algorithm-based calibration. The proposed modeling approach was tested through a case study of the removal of naphthalene from marine oily wastewater using UV irradiation. The removal of naphthalene was described by an agent-based simulation model using 8 types of agents and 11 reactions. Each reaction was governed by a probability parameter to determine its occurrence. The modeling results showed that the root mean square errors between modeled and observed removal rates were 8.73 and 11.03% for calibration and validation runs, respectively. Reaction competition was analyzed by comparing agent-based reaction probabilities, while agents' heterogeneity was visualized by plotting their real-time spatial distribution, showing a strong potential for reactor design and process optimization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.

  7. Integrating statistical and process-based models to produce probabilistic landslide hazard at regional scale

    NASA Astrophysics Data System (ADS)

    Strauch, R. L.; Istanbulluoglu, E.

    2017-12-01

    We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations, we relate the susceptibility index to an empirically-derived probability of landslide impact. This probability is combined with results from a physically-based model to produce an integrated probabilistic map. Slope was key in landslide initiation while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA, and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
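
    A minimal sketch of the frequency-ratio step described above, on synthetic data and with a single attribute for brevity: the FR of a class is the share of observed landslides in that class divided by the share of the study area in that class, and a cell's susceptibility index sums the FRs of its attribute classes.

```python
"""Frequency-ratio susceptibility on a synthetic grid (illustration only)."""
import numpy as np

rng = np.random.default_rng(3)

# Synthetic grid: slope class (0-3) per cell and observed landslide cells.
slope_class = rng.integers(0, 4, size=10_000)
landslide = rng.random(10_000) < (0.01 + 0.03 * slope_class)   # steeper -> more slides

def frequency_ratio(attr_class, landslide, n_classes):
    fr = np.zeros(n_classes)
    for c in range(n_classes):
        area_share = np.mean(attr_class == c)
        slide_share = np.mean(attr_class[landslide] == c)
        fr[c] = slide_share / area_share if area_share > 0 else 0.0
    return fr

fr_slope = frequency_ratio(slope_class, landslide, 4)
susceptibility = fr_slope[slope_class]        # with more attributes, FRs are summed
print("FR per slope class:", np.round(fr_slope, 2))
```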

  8. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    NASA Astrophysics Data System (ADS)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

    Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for big events as used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle and core sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need of regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onward. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.

  9. Reconciling a geophysical model to data using a Markov chain Monte Carlo algorithm: An application to the Yellow Sea-Korean Peninsula region

    NASA Astrophysics Data System (ADS)

    Pasyanos, Michael E.; Franz, Gregory A.; Ramirez, Abelardo L.

    2006-03-01

    In an effort to build seismic models that are the most consistent with multiple data sets we have applied a new probabilistic inverse technique. This method uses a Markov chain Monte Carlo (MCMC) algorithm to sample models from a prior distribution and test them against multiple data types to generate a posterior distribution. While computationally expensive, this approach has several advantages over deterministic models, notably the seamless reconciliation of different data types that constrain the model, the proper handling of both data and model uncertainties, and the ability to easily incorporate a variety of prior information, all in a straightforward, natural fashion. A real advantage of the technique is that it provides a more complete picture of the solution space. By mapping out the posterior probability density function, we can avoid simplistic assumptions about the model space and allow alternative solutions to be identified, compared, and ranked. Here we use this method to determine the crust and upper mantle structure of the Yellow Sea and Korean Peninsula region. The model is parameterized as a series of seven layers in a regular latitude-longitude grid, each of which is characterized by thickness and seismic parameters (Vp, Vs, and density). We use surface wave dispersion and body wave traveltime data to drive the model. We find that when properly tuned (i.e., the Markov chains have had adequate time to fully sample the model space and the inversion has converged), the technique behaves as expected. The posterior model reflects the prior information at the edge of the model where there is little or no data to constrain adjustments, but the range of acceptable models is significantly reduced in data-rich regions, producing values of sediment thickness, crustal thickness, and upper mantle velocities consistent with expectations based on knowledge of the regional tectonic setting.
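
    A toy Metropolis-Hastings sketch of the idea above, with one parameter and two data types, not the actual crust and upper-mantle parameterization: samples are drawn from the prior-weighted posterior, with the likelihood combining two independent observation sets that have their own uncertainties. All forward models and numbers are invented for illustration.

```python
"""One-parameter Metropolis-Hastings sampler reconciling two synthetic data
types with a prior (illustration of the MCMC reconciliation idea)."""
import numpy as np

rng = np.random.default_rng(0)

def forward_a(v):   # e.g. a surface-wave-like observable
    return 2.0 * v

def forward_b(v):   # e.g. a body-wave-like observable
    return 10.0 / v

obs_a, sig_a = 7.0, 0.3
obs_b, sig_b = 2.9, 0.2
prior_mu, prior_sig = 3.0, 1.0      # prior on the velocity-like parameter

def log_post(v):
    if v <= 0:
        return -np.inf
    lp = -0.5 * ((v - prior_mu) / prior_sig) ** 2
    lp += -0.5 * ((forward_a(v) - obs_a) / sig_a) ** 2
    lp += -0.5 * ((forward_b(v) - obs_b) / sig_b) ** 2
    return lp

v, samples = 3.0, []
for _ in range(20_000):
    prop = v + rng.normal(0, 0.1)
    if np.log(rng.random()) < log_post(prop) - log_post(v):
        v = prop
    samples.append(v)

post = np.array(samples[5_000:])    # discard burn-in
print(f"posterior mean {post.mean():.2f}, std {post.std():.2f}")
```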

  10. Probabilistic modeling of percutaneous absorption for risk-based exposure assessments and transdermal drug delivery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Clifford Kuofei

    Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.

  11. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    ERIC Educational Resources Information Center

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  12. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.

  13. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D. S.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-11-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with decoupled direct method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.

  14. Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations

    NASA Astrophysics Data System (ADS)

    Tang, W.; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-07-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2 based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
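
    A hedged single-region sketch of the discrete Kalman filter inversion idea, not the CAMx/DDM implementation described above: the state is an emission scaling factor, a DDM-style sensitivity plays the role of the observation operator, and each observed-minus-modeled NO2 column updates the factor and its uncertainty. All numbers are illustrative.

```python
"""Scalar Kalman filter update of an emission scaling factor from
column-density mismatches (assumed illustrative values)."""
import numpy as np

x, P = 1.0, 0.5**2           # prior scaling factor and its variance
H = 4.0e15                   # sensitivity: d(column)/d(scaling), molec/cm^2 per unit
R = (1.0e15)**2              # observation error variance

observed_minus_base = np.array([2.1e15, 1.4e15, 2.8e15, 1.9e15])  # OMI minus a priori model

for d in observed_minus_base:
    # Innovation uses the column change implied by the current extra scaling (x - 1).
    innov = d - H * (x - 1.0)
    S = H * P * H + R
    K = P * H / S
    x = x + K * innov
    P = (1.0 - K * H) * P

print(f"posterior scaling factor {x:.2f} +/- {np.sqrt(P):.2f}")
```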

  15. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast of persistent contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), together with GOES water vapor channel measurements and surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
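
    As an illustration only, the following sketch fits a logistic model of persistent contrail occurrence to synthetic data with scikit-learn; the predictors stand in for ARPS/RUC humidity, temperature and vertical-velocity fields, and the snippet is not the SURFACE or OUTBREAK model itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# hypothetical predictors: relative humidity w.r.t. ice, temperature, vertical velocity
X = rng.normal(size=(500, 3))
p_true = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))   # synthetic truth
y = rng.random(500) < p_true                                      # persistent contrail yes/no

model = LogisticRegression().fit(X, y)
print(np.round(model.predict_proba(X[:5])[:, 1], 2))   # probabilistic contrail forecast
```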

  16. Bayesian inversions of a dynamic vegetation model in four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; François, L.

    2015-01-01

    Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. In validation, mismatches between measured and simulated data are larger, but still with Nash-Sutcliffe efficiency scores above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we showed that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared to inversions considering a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in overall similar model parameter posterior distributions as those obtained by fixing these variances beforehand, while slightly improving model performance. Although the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides model behaviour, differences between the model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.
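
    A minimal sketch of the error models compared above: a Gaussian log-likelihood whose residual standard deviation grows linearly with the measured value (sigma1 = 0 recovers the homoscedastic case). Parameter names and the toy GPP values are illustrative, not those of the CARAIB inversion.

```python
import numpy as np

def gaussian_loglik(sim, obs, sigma0, sigma1=0.0):
    """Gaussian log-likelihood with a linear heteroscedastic error model,
    sigma_t = sigma0 + sigma1 * |obs_t|; sigma1 = 0 gives the homoscedastic case.
    sigma0 and sigma1 can be fixed a priori or sampled as extra parameters
    alongside the vegetation-model parameters in the MCMC chain."""
    sigma = sigma0 + sigma1 * np.abs(obs)
    res = obs - sim
    return -0.5 * np.sum(np.log(2.0 * np.pi * sigma**2) + (res / sigma) ** 2)

obs = np.array([2.0, 5.5, 7.1, 3.2])   # hypothetical daily GPP observations
sim = np.array([2.3, 5.0, 6.0, 3.5])   # corresponding model simulations
print(gaussian_loglik(sim, obs, sigma0=0.5))               # fixed, homoscedastic
print(gaussian_loglik(sim, obs, sigma0=0.2, sigma1=0.1))   # heteroscedastic
```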

  17. Probabilistic, meso-scale flood loss modelling

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken, on the one hand, via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models; on the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that the uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
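
    The sketch below shows, on synthetic data, how a bagged ensemble of regression trees yields a loss distribution rather than a single value; the predictors and loss ratios are hypothetical, and the snippet is not BT-FLEMO itself.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(1)
# hypothetical predictors per land-use unit: water depth, building quality, precaution
X = rng.random((300, 3))
loss = 0.6 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.05, 300)   # synthetic loss ratio

# BaggingRegressor uses decision trees as its default base learner
bag = BaggingRegressor(n_estimators=100, random_state=0).fit(X, loss)

x_new = np.array([[0.7, 0.4, 0.2]])
# each tree contributes one estimate; together they approximate a predictive distribution
per_tree = np.array([tree.predict(x_new)[0] for tree in bag.estimators_])
print(per_tree.mean(), np.percentile(per_tree, [5, 95]))
```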

  18. MATILDA Version 2: Rough Earth TIALD Model for Laser Probabilistic Risk Assessment in Hilly Terrain - Part I

    DTIC Science & Technology

    2017-03-13

    support of airborne laser designator use during test and training exercises on military ranges. The initial MATILDA tool, MATILDA PRO Version-1.6.1...was based on the 2007 PRA model developed to perform range safety clearances for the UK Thermal Imaging Airborne Laser Designator (TIALD) system...AFRL Technical Reports. This Technical Report, designated Part I, contains documentation of the computational procedures for probabilistic fault

  19. Modeling and Inversion of three-dimensional crustal structures beneath the Pyrenees and their foreland basins based upon geological, gravimetric and seismological data

    NASA Astrophysics Data System (ADS)

    Spangenberg, Hannah; Chevrot, Sébastien; Courrioux, Gabriel; Guillen, Antonio

    2017-04-01

    Our goal is to obtain a three-dimensional (3D) model of mass density and seismic velocities beneath the Pyrenees and their foreland basins (Aquitaine and Ebro basins), which accounts for all the geological and geophysical information available for that region. This model covers the whole mountain range going from the Atlantic Ocean to the Mediterranean Sea, and from the Iberian range to the Massif Central. The model is described by different units: the lower, middle, and upper crusts, the accretionary prism, and the consolidated and unconsolidated sediment layers. Furthermore, a sub-continental, serpentinized European mantle is introduced to describe the exhumed mantle bodies which are responsible for the positive Bouguer gravity anomalies in the western Pyrenees. We build a first 3D model using all the geological information: drill-hole surveys, seismic sections, and the geological map. We use the potential field method implemented in Geomodeler to interpolate these geological data. However, these data are too sparse to build a model that explains seismic travel times or gravimetric data, especially the Labourd and the St. Gaudens Bouguer gravity anomalies. In addition, inconsistencies between the different data sets exist. We thus add by trial and error additional data points, comparing modeled and observed Bouguer gravimetric anomalies. The result of this procedure is a 3D geological model that respects the geological data and explains the measured Bouguer gravimetric anomalies. In a second step, we use this model to determine the average density and seismic velocities inside each geological unit assuming uniform layers. To constrain the seismic velocities we use travel time picks extracted from the bulletin of the Pyrenean seismicity released by the Observatoire Midi Pyrenées. In a third step, we use this 3D a priori model in a Monte Carlo inversion to invert jointly gravimetric data and seismic travel times from the bulletin. This probabilistic approach yields detailed information about the sedimentary foreland basins and the crustal structures beneath the Pyrenees. We will present and discuss different key steps of the construction of the 3D model of the Pyrenees. We will also compare selected cross-sections extracted from this model to the ECORS profiles, as well as CCP stacks of receiver functions along several PYROPE transects. Keywords: Pyrenees, 3D modeling, gravity, seismic tomography, joint inversion

  20. Precursor Analysis for Flight- and Ground-Based Anomaly Risk Significance Determination

    NASA Technical Reports Server (NTRS)

    Groen, Frank

    2010-01-01

    This slide presentation reviews the precursor analysis for flight and ground based anomaly risk significance. It includes information on accident precursor analysis, real models vs. models, and probabilistic analysis.

  1. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences drawn from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference approach in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate or completely redefine its role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of the system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  2. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
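
    The analog ensemble idea reduces to a few lines: find the historical coarse-model states closest to the current one and treat the matching observations as the ensemble. The snippet below is a toy with synthetic data, not the MERRA-based implementation evaluated in the paper.

```python
import numpy as np

def analog_ensemble(coarse_hist, obs_hist, coarse_now, n_analogs=20):
    """Find the past coarse-model states closest to the current one and return
    the matching observations as an ensemble (deterministic mean + members)."""
    dist = np.linalg.norm(coarse_hist - coarse_now, axis=1)
    members = obs_hist[np.argsort(dist)[:n_analogs]]
    return members.mean(), members

rng = np.random.default_rng(2)
hist = rng.normal(size=(1000, 2))                          # e.g. coarse reanalysis wind components
obs = 8.0 + 2.0 * hist[:, 0] + rng.normal(0, 0.5, 1000)    # concurrent observed wind speed
mean, members = analog_ensemble(hist, obs, np.array([0.5, -0.2]))
print(round(mean, 2), np.round(np.percentile(members, [10, 90]), 2))
```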

  3. An analytical probabilistic model of the quality efficiency of a sewer tank

    NASA Astrophysics Data System (ADS)

    Balistrocchi, Matteo; Grossi, Giovanna; Bacchi, Baldassare

    2009-12-01

    The assessment of the efficiency of a storm water storage facility devoted to sewer overflow control in urban areas strictly depends on the ability to model the main features of the rainfall-runoff routing process and the related wet-weather pollution delivery. In this paper, the possibility of applying the analytical probabilistic approach to develop a tank design method, whose potential is similar to that of continuous simulations, is demonstrated. The water quality behaviour of such devices is incorporated in the model derivation. The formulation is based on a Weibull probabilistic model of the main characteristics of the rainfall process and on a power law describing the relationship between the dimensionless storm water cumulative runoff volume and the dimensionless cumulative pollutograph. Following this approach, efficiency indexes were established. The proposed model was verified by comparing its results to those obtained by continuous simulations; satisfactory agreement is shown for the proposed efficiency indexes.
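
    The analytical derivation is not reproduced here, but the Monte Carlo sketch below illustrates the two ingredients named in the abstract, Weibull-distributed event runoff and a power-law dimensionless pollutograph, and how volume- and mass-capture efficiency indexes follow from them; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical parameters: Weibull-distributed event runoff depths and a power-law
# dimensionless pollutograph m(v) = v**beta (beta < 1 mimics a first-flush effect)
shape, scale_mm, beta = 0.8, 10.0, 0.7
storage_mm = 5.0                                  # tank storage as equivalent runoff depth

v = rng.weibull(shape, 100_000) * scale_mm        # event runoff depths
captured_volume = np.minimum(v, storage_mm)
# event pollutant mass taken proportional to volume, times the captured mass fraction
captured_mass = v * (captured_volume / v) ** beta

print("volume efficiency:", round(captured_volume.sum() / v.sum(), 3))
print("mass efficiency:  ", round(captured_mass.sum() / v.sum(), 3))
```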

  4. A Probabilistic Model of Meter Perception: Simulating Enculturation.

    PubMed

    van der Weij, Bastiaan; Pearce, Marcus T; Honing, Henkjan

    2017-01-01

    Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms.

  5. Probability Based hERG Blocker Classifiers.

    PubMed

    Wang, Zhi; Mussa, Hamse Y; Lowe, Robert; Glen, Robert C; Yan, Aixia

    2012-09-01

    The US Food and Drug Administration (FDA) requires in vitro human ether-a-go-go related (hERG) ion channel affinity tests for all drug candidates prior to clinical trials. In this study, probability-based methods were employed to develop prediction models for hERG inhibition, which are different from traditional QSAR models that are mainly based on supervised 'hard point' (HP) classification approaches giving 'yes/no' answers. The obtained models can 'ascertain' whether or not a given set of compounds can block hERG ion channels. The results presented indicate that the proposed probability-based method can be a valuable tool for ranking compounds with respect to their potential cardio-toxicity and will be promising for other toxic property predictions.
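
    The abstract does not name the specific probabilistic classifier, so the sketch below uses Gaussian naive Bayes purely as a stand-in to show the difference from hard-point classification: compounds are ranked by a blocking probability rather than a yes/no label. Descriptors and labels are synthetic.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 4))      # hypothetical molecular descriptors (e.g. logP, MW, ...)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 400)) > 0   # synthetic hERG blocker labels

clf = GaussianNB().fit(X, y)
p_block = clf.predict_proba(X[:5])[:, 1]
# rank compounds by blocking probability instead of a hard yes/no cut-off
print(np.argsort(-p_block), np.round(p_block, 2))
```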

  6. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    NASA Astrophysics Data System (ADS)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.

    2015-10-01

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
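
    A heavily simplified, hypothetical 1-D sketch of the data-fusion idea: the radiograph supplies a per-cell attenuation map, the portal-monitor counts supply a Poisson likelihood, and Bayes' rule gives a posterior over candidate source cells. Geometry, attenuation values and count levels are invented for illustration.

```python
import numpy as np

atten = np.array([0.2, 0.2, 2.0, 2.0, 0.2])   # per-cell attenuation from the radiograph
det_pos = np.array([0.0, 4.0])                # detector positions (cell indices)
counts = np.array([120.0, 40.0])              # measured gross counts

def expected_counts(src_cell, strength):
    """Expected counts at each detector for a source in src_cell."""
    exp = []
    for d in det_pos:
        lo, hi = sorted((src_cell, d))
        path = atten[int(lo):int(hi) + 1].sum()          # attenuation along the path
        exp.append(strength * np.exp(-path) / (1.0 + abs(d - src_cell)))  # crude geometry
    return np.array(exp)

log_post = np.array([np.sum(counts * np.log(lam) - lam)          # Poisson log-likelihood
                     for lam in (expected_counts(c, 500.0) for c in range(len(atten)))])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print(np.round(post, 3))   # posterior probability of each candidate source cell
```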

  7. How might Model-based Probabilities Extracted from Imperfect Models Guide Rational Decisions: The Case for non-probabilistic odds

    NASA Astrophysics Data System (ADS)

    Smith, Leonard A.

    2010-05-01

    This contribution concerns "deep" or "second-order" uncertainty, such as the uncertainty in our probability forecasts themselves. It asks the question: "Is it rational to take (or offer) bets using model-based probabilities as if they were objective probabilities?" If not, what alternative approaches for determining odds, perhaps non-probabilistic odds, might prove useful in practice, given the fact we know our models are imperfect? We consider the case where the aim is to provide sustainable odds: not to produce a profit but merely to rationally expect to break even in the long run, in other words, to run a quantified risk of ruin that is relatively small. Thus the cooperative insurance schemes of coastal villages provide a more appropriate parallel than a casino. A "better" probability forecast would lead to lower premiums charged and less volatile fluctuations in the cash reserves of the village. Note that the Bayesian paradigm does not constrain one to interpret model distributions as subjective probabilities, unless one believes the model to be empirically adequate for the task at hand. In geophysics, this is rarely the case. When a probability forecast is interpreted as the objective probability of an event, the odds on that event can be easily computed as one divided by the probability of the event, and one need not favour taking either side of the wager. (Here we are using "odds-for" not "odds-to", the difference being whether or not the stake is returned; odds of one to one are equivalent to odds of two for one.) The critical question is how to compute sustainable odds based on information from imperfect models. We suggest that this breaks the symmetry between the odds on an event and the odds against it. While a probability distribution can always be translated into odds, interpreting the odds on a set of events might result in "implied probabilities" that sum to more than one, and/or the set of odds may be incomplete, not covering all events. We ask whether or not probabilities based on imperfect models can be expected to yield probabilistic odds which are sustainable. Evidence is provided that suggests this is not the case. Even with very good models (good in a Root-Mean-Square sense), the risk of ruin of probabilistic odds is significantly higher than might be expected. Methods for constructing model-based non-probabilistic odds which are sustainable are discussed. The aim here is to be relevant to real-world decision support, and so unrealistic assumptions of equal knowledge, equal compute power, or equal access to information are avoided. Finally, the use of non-probabilistic odds as a method for communicating deep uncertainty (uncertainty in a probability forecast itself) is discussed in the context of other methods, such as stating one's subjective probability that the models will prove inadequate in each particular instance (that is, the probability of a "Big Surprise").
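
    A two-line illustration of the asymmetry mentioned above: translating a set of offered odds back into "implied probabilities" can give numbers that sum to more than one. The events and odds below are hypothetical.

```python
odds_for = {"flood": 4.0, "no_flood": 1.2}        # hypothetical odds-for on two outcomes
implied = {event: 1.0 / o for event, o in odds_for.items()}
print(implied, "sum =", round(sum(implied.values()), 3))   # sums to ~1.08 > 1
```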

  8. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  9. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  10. Inverse Modeling of Texas NOx Emissions Using Space-Based and Ground-Based NO2 Observations

    NASA Technical Reports Server (NTRS)

    Tang, Wei; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.

    2013-01-01

    Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.

  11. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  12. Exploiting virtual sediment deposits to explore conceptual foundations

    NASA Astrophysics Data System (ADS)

    Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian

    2017-04-01

    Geomorphic concepts and hypotheses are usually formulated based on empirical data from the field or the laboratory (deduction). After translation into models they can be applied to case study scenarios (induction). However, the other way around - expressing hypotheses explicitly as models and testing them against empirical data - is a rarely touched trail. There are several models tailored to investigate the boundary conditions and processes that generate, mobilise, route and eventually deposit sediment in a landscape. However, the last part, sediment deposition, is usually omitted. Essentially, there is no model that explicitly focuses on mapping out the characteristics of sedimentary deposits - the material that is used by many disciplines to reconstruct landscape evolution. This contribution introduces the R-package sandbox, a model framework that allows creating and analysing virtual sediment sections for exploratory, explanatory, forecasting and inverse research questions. The R-package sandbox is a probabilistic and rule-based model framework for a wide range of possible applications. The model framework is used here to discuss a set of conceptual questions revolving around geochemical and geochronological methods, such as: How do sample size and sample volume affect age uncertainty? What determines the robustness of sediment fingerprinting results? How does the prepared grain size of the material of interest affect the analysis outcomes? Most of the concepts used in the geosciences are underpinned by a set of assumptions whose robustness and boundary conditions need to be assessed quantitatively. The R-package sandbox is a universal and flexible tool to engage with this challenge.

  13. New ShakeMaps for Georgia Resulting from Collaboration with EMME

    NASA Astrophysics Data System (ADS)

    Kvavadze, N.; Tsereteli, N. S.; Varazanashvili, O.; Alania, V.

    2015-12-01

    Correct assessment of probabilistic seismic hazard and risk maps is the first step for advance planning and action to reduce seismic risk. Seismic hazard maps for Georgia were calculated based on a modern approach that was developed in the frame of the EMME (Earthquake Model of the Middle East region) project. EMME was one of GEM's successful endeavors at the regional level. With EMME and GEM assistance, regional models were analyzed to identify the information and additional work needed for the preparation of national hazard models. Probabilistic seismic hazard (PSH) maps provide the critical basis for improved building codes and construction. The most serious deficiency in PSH assessment for the territory of Georgia is the lack of high-quality ground motion data. Due to this, an initial hybrid empirical ground motion model was developed for PGA and SA at selected periods. These ground motion model coefficients have been used in the probabilistic seismic hazard assessment. The resulting seismic hazard maps show that there were gaps in earlier seismic hazard assessments and that the present normative seismic hazard map needs a careful recalculation.

  14. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios which best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a 10,000-year set of events consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking into account the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return period error-weight is also assessed. The methodology could shorten the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes, yet requires far less computation power. The authors have used this approach for risk assessment towards identification of the effectiveness-profitability of risk mitigation measures, using an optimization model for resource allocation. Based on the error-computation trade-off, 62 earthquake scenarios are chosen for this purpose.
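
    The paper formulates scenario selection as a mixed-integer linear program; the sketch below substitutes a simple greedy, least-squares re-weighting heuristic to convey the underlying idea of matching the full-catalogue hazard curve with a small re-weighted subset of events. All inputs are synthetic and the heuristic is not the authors' optimization model.

```python
import numpy as np

def select_scenarios(rates, sa_levels, thresholds, n_keep):
    """Greedy stand-in for the mixed-integer selection: pick n_keep events and
    re-fit their weights so the reduced exceedance (hazard) curve stays close
    to the full-catalogue curve."""
    full = np.array([rates[sa_levels >= t].sum() for t in thresholds])
    chosen, best_w = [], None
    for _ in range(n_keep):
        best_i, best_err = None, np.inf
        for i in range(len(rates)):
            if i in chosen:
                continue
            trial = chosen + [i]
            A = np.array([[rates[j] * (sa_levels[j] >= t) for j in trial]
                          for t in thresholds])
            w, *_ = np.linalg.lstsq(A, full, rcond=None)
            err = np.linalg.norm(A @ w - full)
            if err < best_err:
                best_i, best_err, best_w = i, err, w
        chosen.append(best_i)
    return chosen, best_w

rng = np.random.default_rng(5)
rates = rng.exponential(1e-3, 200)       # synthetic annual event rates
sa = rng.lognormal(-1.0, 0.6, 200)       # synthetic ground-motion levels at the site
idx, w = select_scenarios(rates, sa, thresholds=np.array([0.1, 0.2, 0.4, 0.8]), n_keep=5)
print(idx, np.round(w, 4))
```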

  15. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys.
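
    The gap between resubstitution and cross-validation accuracy discussed above is easy to reproduce on synthetic data; the sketch below uses a generic decision tree and hypothetical predictors, not the lichen survey data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 5))                                  # hypothetical site predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1.0, 300)) > 0    # synthetic presence/absence

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
resub = tree.score(X, y)                                       # resubstitution: optimistic
cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()
print(f"resubstitution accuracy: {resub:.2f}, 10-fold CV accuracy: {cv:.2f}")
```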

  16. A ligand prediction tool based on modeling and reasoning with imprecise probabilistic knowledge.

    PubMed

    Liu, Weiru; Yue, Anbu; Timson, David J

    2010-04-01

    Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most of the current available software systems are very complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool.

  17. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.

  18. Electrical conductivity structure of the mantle derived from inversion of geomagnetic observatory data: implications for lateral variations in temperature, composition and water content.

    NASA Astrophysics Data System (ADS)

    Munch, Federico; Grayver, Alexander; Khan, Amir; Kuvshinov, Alexey

    2017-04-01

    As most of Earth's interior remains geochemically unsampled, geophysical techniques based on seismology, geodesy, gravimetry, and electromagnetic studies play prominent roles because of their ability to sense structure at depth. Although seismic tomography maps show a variety of structures, separating thermal and compositional contributions from seismic velocities alone still remains a challenging task. Alternatively, as electrical conductivity is sensitive to temperature, chemical composition, oxygen fugacity, water content, and the presence of melt, it can serve for determining chemistry, mineralogy, and physical structure of the deep mantle. In this work we estimate and invert local C-responses (period range 3-100 days) for a number of worldwide geomagnetic observatories to map lateral variations of electrical conductivity in Earth's mantle (400-1600 km depth). The obtained conductivity profiles are interpreted in terms of basalt fraction in a basalt-harzburgite mixture, temperature structure, and water content variations. Interpretation is based on a self-consistent thermodynamic calculation of mineral phase equilibria, electrical conductivity databases, and probabilistic inverse methods.

  19. A DATA-DRIVEN MODEL FOR SPECTRA: FINDING DOUBLE REDSHIFTS IN THE SLOAN DIGITAL SKY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsalmantza, P.; Hogg, David W., E-mail: vivitsal@mpia.de

    2012-07-10

    We present a data-driven method, heteroscedastic matrix factorization (a kind of probabilistic factor analysis), for modeling or performing dimensionality reduction on observed spectra or other high-dimensional data with known but non-uniform observational uncertainties. The method uses an iterative inverse-variance-weighted least-squares minimization procedure to generate a best set of basis functions. The method is similar to principal components analysis (PCA), but with the substantial advantage that it uses measurement uncertainties in a responsible way and accounts naturally for poorly measured and missing data; it models the variance in the noise-deconvolved data space. A regularization can be applied, in the form of a smoothness prior (inspired by Gaussian processes) or a non-negative constraint, without making the method prohibitively slow. Because the method optimizes a justified scalar (related to the likelihood), the basis provides a better fit to the data in a probabilistic sense than any PCA basis. We test the method on Sloan Digital Sky Survey (SDSS) spectra, concentrating on spectra known to contain two redshift components: these are spectra of gravitational lens candidates and massive black hole binaries. We apply a hypothesis test to compare one-redshift and two-redshift models for these spectra, utilizing the data-driven model trained on a random subset of all SDSS spectra. This test confirms 129 of the 131 lens candidates in our sample and all of the known binary candidates, and turns up very few false positives.
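
    The sketch below gives one plausible reading of the iterative inverse-variance-weighted least-squares factorization: alternating weighted least-squares updates of coefficients and basis, with zero weights marking missing pixels. Dimensions, weights and the small ridge term are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def hmf(data, ivar, n_basis, n_iter=50, seed=0):
    """Sketch of heteroscedastic matrix factorization: alternate
    inverse-variance-weighted least-squares updates of coefficients A and
    basis G so that data is approximated by A @ G, with ivar as per-pixel
    weights (zero weight = missing datum)."""
    rng = np.random.default_rng(seed)
    n_obj, n_pix = data.shape
    G = rng.normal(size=(n_basis, n_pix))
    A = np.zeros((n_obj, n_basis))
    ridge = 1e-9 * np.eye(n_basis)          # tiny ridge keeps the solves stable
    for _ in range(n_iter):
        for i in range(n_obj):              # update coefficients, object by object
            w = ivar[i]
            A[i] = np.linalg.solve((G * w) @ G.T + ridge, (G * w) @ data[i])
        for j in range(n_pix):              # update basis, pixel by pixel
            w = ivar[:, j]
            G[:, j] = np.linalg.solve((A.T * w) @ A + ridge, (A.T * w) @ data[:, j])
    return A, G

# toy spectra with non-uniform uncertainties and some missing pixels
rng = np.random.default_rng(7)
truth = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 100))
ivar = rng.uniform(0.5, 2.0, truth.shape)
ivar[rng.random(truth.shape) < 0.1] = 0.0   # zero weight marks missing data
A, G = hmf(truth + rng.normal(0.0, 0.3, truth.shape), ivar, n_basis=2)
print(np.abs(A @ G - truth).mean())
```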

  20. Correlation between Relatives given Complete Genotypes: from Identity by Descent to Identity by Function

    PubMed Central

    Sverdlov, Serge; Thompson, Elizabeth A.

    2013-01-01

    In classical quantitative genetics, the correlation between the phenotypes of individuals with unknown genotypes and a known pedigree relationship is expressed in terms of probabilities of IBD states. In existing approaches to the inverse problem where genotypes are observed but pedigree relationships are not, dependence between phenotypes is either modeled as Bayesian uncertainty or mapped to an IBD model via inferred relatedness parameters. Neither approach yields a relationship between genotypic similarity and phenotypic similarity with a probabilistic interpretation corresponding to a generative model. We introduce a generative model for diploid allele effect based on the classic infinite allele mutation process. This approach motivates the concept of IBF (Identity by Function). The phenotypic covariance between two individuals given their diploid genotypes is expressed in terms of functional identity states. The IBF parameters define a genetic architecture for a trait without reference to specific alleles or population. Given full genome sequences, we treat a gene-scale functional region, rather than a SNP, as a QTL, modeling patterns of dominance for multiple alleles. Applications demonstrated by simulation include phenotype and effect prediction and association, and estimation of heritability and classical variance components. A simulation case study of the Missing Heritability problem illustrates a decomposition of heritability under the IBF framework into Explained and Unexplained components. PMID:23851163

  1. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.

  2. Wavelet-based 3-D inversion for frequency-domain airborne EM data

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.

    2018-04-01

    In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion is inherently multiresolution. In order to accomplish the sparsity constraint, we minimize an L1-norm measure in the wavelet domain that mostly gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm, and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, having larger vanishing moments and regularity, deliver a more stable inversion process and give better local resolution, while lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test this new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared to the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
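
    The sparsity-promoting step at the heart of such L1-regularised wavelet-domain schemes is soft thresholding. The 1-D sketch below, using PyWavelets, applies that step to a toy blocky model; it is not the 3-D AEM inversion itself, and the wavelet, decomposition level and threshold are arbitrary choices.

```python
import numpy as np
import pywt

def soft_threshold(c, lam):
    """Proximal operator of the L1 norm: the step that enforces sparsity of
    the wavelet coefficients in an L1-regularised inversion."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

rng = np.random.default_rng(8)
# 1-D stand-in for a blocky conductivity model observed through noise
model = np.concatenate([np.full(64, 1.0), np.full(64, 3.0), np.full(128, 2.0)])
noisy = model + rng.normal(0, 0.3, model.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)            # to the wavelet domain
coeffs = [coeffs[0]] + [soft_threshold(c, 0.5) for c in coeffs[1:]]
recovered = pywt.waverec(coeffs, "db4")                 # back to the space domain
print(round(float(np.abs(recovered - model).mean()), 3))
```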

  3. Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs

    NASA Astrophysics Data System (ADS)

    Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan

    2016-04-01

    Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in the numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in decision-making processes [DE06]. The ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to get new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", who are meteorologists working in operational weather forecasting, and "researchers", who work on the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations and uncertainty. A dashboard with small multiples of probabilistic forecasts allows the forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. It provides the user with a more accurate measure of forecast uncertainty that could result in better decision-making. It offers different levels of abstraction to help with the recalibration of the RAR method. It also has an inspection tool that displays the selected analogs, their observations and statistical data. It gives the users access to inner parts of the method, unveiling hidden information. References [GR05] GNEITING T., RAFTERY A. E.: Weather forecasting with ensemble methods. Science 310, 5746, 248-249, 2005. [KAL03] KALNAY E.: Atmospheric modeling, data assimilation and predictability. Cambridge University Press, 2003. [PH06] PALMER T., HAGEDORN R.: Predictability of weather and climate. Cambridge University Press, 2006. [HW06] HAMILL T. M., WHITAKER J. S.: Probabilistic quantitative precipitation forecasts based on reforecast analogs: Theory and application. Monthly Weather Review 134, 11, 3209-3229, 2006. [DE06] DEITRICK S., EDSALL R.: The influence of uncertainty visualization on decision making: An empirical evaluation. Springer, 2006. [KMS08] KEIM D. A., MANSMANN F., SCHNEIDEWIND J., THOMAS J., ZIEGLER H.: Visual analytics: Scope and challenges. Springer, 2008.

  4. Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models

    NASA Astrophysics Data System (ADS)

    Thon, Ingo

    One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximative methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution, which is normally prohibitively slow to compute.

  5. Sensitivity analyses of acoustic impedance inversion with full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Yao, Gang; da Silva, Nuno V.; Wu, Di

    2018-04-01

    Acoustic impedance estimation is of significant importance to seismic exploration. In this paper, we use full-waveform inversion to recover the impedance from seismic data, and analyze the sensitivity of the acoustic impedance with respect to the source-receiver offset of the seismic data and to the initial velocity model. We parameterize the acoustic wave equation with velocity and impedance, and demonstrate three key aspects of acoustic impedance inversion. First, short-offset data are most suitable for acoustic impedance inversion. Second, acoustic impedance inversion is more compatible with data generated by density contrasts than by velocity contrasts. Finally, acoustic impedance inversion requires the starting velocity model to be very accurate in order to achieve a high-quality inversion. Based upon these observations, we propose a workflow for acoustic impedance inversion: (1) building a background velocity model with travel-time tomography or reflection waveform inversion; (2) recovering the intermediate-wavelength components of the velocity model with full-waveform inversion constrained by Gardner's relation; (3) inverting the high-resolution acoustic impedance model with short-offset data through full-waveform inversion. We verify this workflow with synthetic tests based on the Marmousi model.

  6. Semi-active control of magnetorheological elastomer base isolation system utilising learning-based inverse model

    NASA Astrophysics Data System (ADS)

    Gu, Xiaoyu; Yu, Yang; Li, Jianchun; Li, Yancheng

    2017-10-01

    Magnetorheological elastomer (MRE) base isolation systems have attracted considerable attention over the last two decades thanks to their self-adaptability and high-authority controllability in the semi-active control realm. Due to the inherent nonlinearity and hysteresis of the devices, it is challenging to obtain a mathematical model that adequately describes the inverse dynamics of MRE base isolators and hence to realise control synthesis of the MRE base isolation system. Two aims have been achieved in this paper: i) development of an inverse model for the MRE base isolator based on an optimal general regression neural network (GRNN); ii) numerical and experimental validation of a real-time semi-active controlled MRE base isolation system utilising an LQR controller and the GRNN inverse model. The superiority of the GRNN inverse model lies in its requirement for fewer input variables, its faster training process and its prompt calculation response, which make it suitable for online training and real-time control. The control system is integrated with a three-storey shear building model, and the control performance of the MRE base isolation system is compared with the bare building, a passive-on isolation system and a passive-off isolation system. Testing results show that the proposed GRNN inverse model is able to reproduce the desired control force accurately and that the MRE base isolation system can effectively suppress the structural responses when compared to the passive isolation systems.
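
    A GRNN is essentially Nadaraya-Watson kernel regression, so the inverse model can be sketched in a few lines; the input features (desired force, displacement, velocity), the synthetic current data and the bandwidth below are hypothetical, not the trained model from the paper.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.2):
    """General regression neural network (Nadaraya-Watson kernel regression):
    the predicted output is a Gaussian-kernel-weighted average of the targets."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sum(w * y_train) / (np.sum(w) + 1e-12)

rng = np.random.default_rng(9)
# hypothetical training pairs: (desired force, displacement, velocity) -> coil current
X = rng.uniform(-1, 1, size=(500, 3))
current = 2.0 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 500)
print(round(grnn_predict(X, current, np.array([0.3, -0.2, 0.1])), 3))
```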

  7. Probabilistic Scenario-based Seismic Risk Analysis for Critical Infrastructures Method and Application for a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Klügel, J.

    2006-12-01

    Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures like dams, transport infrastructures, chemical plants and nuclear power plants. For many applications beyond the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows a meaningful quantitative risk analysis to be performed. A new method for probabilistic scenario-based seismic risk analysis has been developed, based on a probabilistic extension of proven deterministic methods like the MCE methodology. The input data required for the method are entirely based on the information which is necessary to perform any meaningful seismic hazard analysis. The method is based on the probabilistic risk analysis approach common for applications in nuclear technology, developed originally by Kaplan & Garrick (1981). It consists of (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantage of the method in comparison with traditional PSHA lies in (1) its flexibility, allowing different probabilistic models for earthquake occurrence to be used and advanced physical models to be incorporated into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion to formulate different risk goals. The method was applied for the evaluation of the risk of production interruption losses of a nuclear power plant during its residual lifetime.

  8. Probabilistic grammatical model for helix‐helix contact site classification

    PubMed Central

    2013-01-01

    Background Hidden Markov Models power many state-of-the-art tools in the field of protein bioinformatics. While excelling in their tasks, these methods of protein analysis do not directly convey information on medium- and long-range residue-residue interactions. This requires an expressive power of at least context-free grammars. However, application of more powerful grammar formalisms to protein analysis has been surprisingly limited. Results In this work, we present a probabilistic grammatical framework for problem-specific protein languages and apply it to the classification of transmembrane helix-helix pair configurations. The core of the model consists of a probabilistic context-free grammar, automatically inferred by a genetic algorithm from only a generic set of expert-based rules and positive training samples. The model was applied to produce sequence-based descriptors of four classes of transmembrane helix-helix contact site configurations. The best classifiers reached an AUC ROC of 0.70. The analysis of grammar parse trees revealed their ability to represent structural features of helix-helix contact sites. Conclusions We demonstrated that our probabilistic context-free framework for analysis of protein sequences outperforms the state of the art in the task of helix-helix contact site classification. Notably, this is achieved without necessarily modeling long-range dependencies between interacting residues. A significant feature of our approach is that grammar rules and parse trees are human-readable. Thus they could provide biologically meaningful information to molecular biologists. PMID:24350601

  9. Probabilistic Priority Message Checking Modeling Based on Controller Area Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    Although the probabilistic model checking tool PRISM has been applied to many communication systems, such as wireless local area networks, Bluetooth, and ZigBee, the technique has not been applied to the controller area network (CAN). In this paper, we use PRISM to model the priority-message mechanism of CAN, because this mechanism has allowed CAN to become the leader in serial communication for automotive and industrial control. Modeling CAN makes it easy to analyze its characteristics and thereby further improve the security and efficiency of automobiles. The Markov chain model helps us to model the behaviour of priority messages.

  10. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    PubMed

    Chung, Michael Jae-Yoon; Friesen, Abram L; Fox, Dieter; Meltzoff, Andrew N; Rao, Rajesh P N

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.

  11. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning

    PubMed Central

    Chung, Michael Jae-Yoon; Friesen, Abram L.; Fox, Dieter; Meltzoff, Andrew N.; Rao, Rajesh P. N.

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration. PMID:26536366

  12. Seismic hazard in the Intermountain West

    USGS Publications Warehouse

    Haller, Kathleen; Moschetti, Morgan P.; Mueller, Charles; Rezaeian, Sanaz; Petersen, Mark D.; Zeng, Yuehua

    2015-01-01

    The 2014 national seismic-hazard model for the conterminous United States incorporates new scientific results and important model adjustments. The current model includes updates to the historical catalog, which is spatially smoothed using both fixed-length and adaptive-length smoothing kernels. Fault-source characterization improved by adding faults, revising rates of activity, and incorporating new results from combined inversions of geologic and geodetic data. The update also includes a new suite of published ground motion models. Changes in probabilistic ground motion are generally less than 10% in most of the Intermountain West compared to the prior assessment, and ground-motion hazard in four Intermountain West cities illustrates the range and magnitude of change in the region. Seismic hazard at reference sites in Boise and Reno increased as much as 10%, whereas hazard in Salt Lake City decreased 5–6%. The largest change was in Las Vegas, where hazard increased 32–35%.

  13. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling

    PubMed Central

    Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W. F.; Jeelani, Owase; Dunaway, David J.; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variable correlation analysis assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction underestimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction, and it provides a range of predictions, including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face. PMID:29742139

  14. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    PubMed

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variable correlation analysis assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction underestimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction, and it provides a range of predictions, including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  15. Sensitivity of airborne geophysical data to sublacustrine permafrost thaw

    NASA Astrophysics Data System (ADS)

    Minsley, B. J.; Wellman, T. P.; Walvoord, M. A.; Revil, A.

    2014-12-01

    A coupled hydrogeophysical forward and inverse modeling approach is developed to illustrate the ability of frequency-domain airborne electromagnetic (AEM) data to characterize subsurface physical properties associated with sublacustrine permafrost thaw during lake talik formation. Several scenarios are evaluated that consider the response to variable hydrologic forcing from different lake depths and hydrologic gradients. The model includes a physical property relationship that connects the dynamic distribution of subsurface electrical resistivity based on lithology as well as ice-saturation and temperature outputs from the SUTRA groundwater simulator with freeze/thaw physics. Electrical resistivity models are used to simulate AEM data in order to explore the sensitivity of geophysical observations to permafrost thaw. Simulations of sublacustrine talik formation over a 1000 year period modeled after conditions found in the Yukon Flats, Alaska, are evaluated. Synthetic geophysical data are analyzed with a Bayesian Markov chain Monte Carlo algorithm that provides a probabilistic assessment of geophysical model uncertainty and resolution. Major lithological and permafrost features are well resolved in the examples considered. The subtle geometry of partial ice-saturation beneath lakes during talik formation cannot be resolved using AEM data, but the gross characteristics of sub-lake resistivity models reflect bulk changes in ice content and can be used to determine the presence of a talik. A final example compares AEM and ground-based electromagnetic responses for their ability to resolve shallow permafrost and thaw features in the upper 1-2 m below ground.

  16. Using expected sequence features to improve basecalling accuracy of amplicon pyrosequencing data.

    PubMed

    Rask, Thomas S; Petersen, Bent; Chen, Donald S; Day, Karen P; Pedersen, Anders Gorm

    2016-04-22

    Amplicon pyrosequencing targets a known genetic region and thus inherently produces reads highly anticipated to have certain features, such as conserved nucleotide sequence, and in the case of protein-coding DNA, an open reading frame. Pyrosequencing errors, consisting mainly of nucleotide insertions and deletions, are on the other hand likely to disrupt open reading frames. Such an inverse relationship between errors and expectation based on prior knowledge can be used advantageously to guide the process known as basecalling, i.e. the inference of nucleotide sequence from raw sequencing data. The new basecalling method described here, named Multipass, implements a probabilistic framework for working with the raw flowgrams obtained by pyrosequencing. For each sequence variant, Multipass calculates the likelihood and nucleotide sequence of the several most likely sequences given the flowgram data. This probabilistic approach enables integration of basecalling into a larger model where other parameters can be incorporated, such as the likelihood of observing a full-length open reading frame at the targeted region. We apply the method to 454 amplicon pyrosequencing data obtained from a malaria virulence gene family, where Multipass generates 20% more error-free sequences than current state-of-the-art methods, and provides sequence characteristics that allow generation of a set of high-confidence error-free sequences. This novel method can be used to increase the accuracy of existing and future amplicon sequencing data, particularly where extensive prior knowledge is available about the obtained sequences, for example in analysis of the immunoglobulin VDJ region, where Multipass can be combined with a model for the known recombining germline genes. Multipass is available for Roche 454 data at http://www.cbs.dtu.dk/services/MultiPass-1.0, and the concept can potentially be implemented for other sequencing technologies as well.

  17. Spatial distribution of intrinsic and scattering seismic attenuation in active volcanic islands - I: model and the case of Tenerife Island

    NASA Astrophysics Data System (ADS)

    Prudencio, Janire; Del Pezzo, Edoardo; García-Yeguas, Araceli; Ibáñez, Jesús M.

    2013-12-01

    The complex volcanic system of Tenerife Island is known to have a highly heterogeneous character, as recently confirmed by velocity tomography. We present new information derived from intrinsic quality factor inverse maps (Qi-1), scattering quality factor inverse maps (Qs-1) and total quality factor inverse maps (Qt-1) obtained for the same region. The data set used in this work is the result of the analysis of an active seismic experiment carried out using offshore shots (air guns) recorded at over 85 onshore seismic stations. The estimates of the attenuation parameters are based on the assumption that the seismogram energy envelopes are determined by seismic energy diffusion processes occurring inside the island. Diffusion model parameters, proportional to Qi-1 and to Qs-1, are estimated from the inversion of the energy envelopes for any source-receiver couple. They are then weighted with a new graphical approach based on a Gaussian space probability function, which allowed us to create '2-D probabilistic maps' representing the space distribution of the attenuation parameters. The 2-D images obtained reveal the existence of a zone in the centre of the island characterized by the lowest attenuation effects. This effect is interpreted as highly rigid and cooled rocks. This low-attenuation region is bordered by zones of high attenuation, associated with the recent historical volcanic activity. We calculate the transport mean free path, obtaining a value of around 4 km for the frequency range 6-12 Hz. This result is two orders of magnitude smaller than values calculated for the crust of the Earth. An absorption length between 10 and 14 km is associated with the average intrinsic attenuation parameter. These values, while small in the context of tectonic regions, are greater than those obtained in volcanic regions such as Vesuvius or Merapi. Such differences may be explained by the size of the region of study, over three times larger than the aforementioned study areas. This also implies deeper sampling of the crust, which is evidenced by a change in the values of seismic attenuation. One important observation is that scattering attenuation dominates over the intrinsic effects, Qi being at least twice the value of Qs.

  18. Interacting with an artificial partner: modeling the role of emotional aspects.

    PubMed

    Cattinelli, Isabella; Goldwurm, Massimiliano; Borghese, N Alberto

    2008-12-01

    In this paper we introduce a simple model based on probabilistic finite state automata to describe an emotional interaction between a robot and a human user, or between simulated agents. Based on the agent's personality, attitude, and nature, and on the emotional inputs it receives, the model will determine the next emotional state displayed by the agent itself. The probabilistic and time-varying nature of the model yields rich and dynamic interactions, and an autonomous adaptation to the interlocutor. In addition, a reinforcement learning technique is applied to have one agent drive its partner's behavior toward desired states. The model may also be used as a tool for behavior analysis, by extracting high probability patterns of interaction and by resorting to the ergodic properties of Markov chains.

  19. Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale observations with a simple model

    NASA Astrophysics Data System (ADS)

    Urban, Nathan M.; Keller, Klaus

    2010-10-01

    How has the Atlantic Meridional Overturning Circulation (AMOC) varied over the past centuries and what is the risk of an anthropogenic AMOC collapse? We report probabilistic projections of the future climate which improve on previous AMOC projection studies by (i) greatly expanding the considered observational constraints and (ii) carefully sampling the tail areas of the parameter probability distribution function (pdf). We use a Bayesian inversion to constrain a simple model of the coupled climate, carbon cycle and AMOC systems using observations to derive multicentury hindcasts and projections. Our hindcasts show considerable skill in representing the observational constraints. We show that robust AMOC risk estimates can require carefully sampling the parameter pdfs. We find a low probability of experiencing an AMOC collapse within the 21st century for a business-as-usual emissions scenario. The probability of experiencing an AMOC collapse within two centuries is 1/10. The probability of crossing a forcing threshold and triggering a future AMOC collapse (by 2300) is approximately 1/30 in the 21st century and over 1/3 in the 22nd. Given the simplicity of the model structure and uncertainty in the forcing assumptions, our analysis should be considered a proof of concept and the quantitative conclusions subject to severe caveats.

  20. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based at the most elemental level. A multifactor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled by finite elements. The solution method for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two-rotor engine is about 0.0001 and that for the composite built-up structure is also about 0.0001.

  1. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based at the most elemental level. A multi-factor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled by finite elements. The solution method for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two-rotor engine is about 0.0001 and that for the composite built-up structure is also about 0.0001.

  2. A Copula-Based Conditional Probabilistic Forecast Model for Wind Power Ramps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Krishnan, Venkat K; Zhang, Jie

    Efficient management of wind ramping characteristics can significantly reduce wind integration costs for balancing authorities. By considering the stochastic dependence of wind power ramp (WPR) features, this paper develops a conditional probabilistic wind power ramp forecast (cp-WPRF) model based on Copula theory. The WPRs dataset is constructed by extracting ramps from a large dataset of historical wind power. Each WPR feature (e.g., rate, magnitude, duration, and start-time) is separately forecasted by considering the coupling effects among different ramp features. To accurately model the marginal distributions with a copula, a Gaussian mixture model (GMM) is adopted to characterize the WPR uncertainty and features. The Canonical Maximum Likelihood (CML) method is used to estimate the parameters of the multivariable copula. The optimal copula model is chosen based on the Bayesian information criterion (BIC) from each copula family. Finally, the best condition-based cp-WPRF model is determined by predictive interval (PI) based evaluation metrics. Numerical simulations on publicly available wind power data show that the developed copula-based cp-WPRF model can predict WPRs with a high level of reliability and sharpness.
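
    To make the copula workflow concrete, the sketch below fits a Gaussian mixture to each (synthetic) ramp-feature margin and then estimates a Gaussian-copula correlation matrix from the normal scores of the pseudo-observations, in the spirit of canonical maximum likelihood. The gamma-distributed toy features, the component counts and the choice of a Gaussian copula are assumptions made for illustration; the paper selects the copula family by BIC rather than fixing it.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmm_cdf(gmm, x):
    """CDF of a 1-D Gaussian mixture evaluated at the points x."""
    w = gmm.weights_
    mu = gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_).ravel()
    return np.sum(w * norm.cdf((x[:, None] - mu) / sd), axis=1)

# Hypothetical ramp features: columns = [rate, magnitude, duration]
rng = np.random.default_rng(1)
ramps = np.column_stack([rng.gamma(2, 1, 500),
                         rng.gamma(3, 2, 500),
                         rng.gamma(2, 3, 500)])

# 1) Fit a GMM to each marginal feature
gmms = [GaussianMixture(n_components=2, random_state=0).fit(col.reshape(-1, 1))
        for col in ramps.T]

# 2) Canonical-maximum-likelihood-style estimate for a Gaussian copula:
#    transform margins to uniforms, then to normal scores, and estimate
#    the copula correlation matrix from those scores.
U = np.column_stack([gmm_cdf(g, col) for g, col in zip(gmms, ramps.T)])
Z = norm.ppf(np.clip(U, 1e-6, 1 - 1e-6))
R = np.corrcoef(Z, rowvar=False)                 # Gaussian-copula parameter
print(np.round(R, 2))
```

    Conditional ramp-feature forecasts then follow from the fitted copula by conditioning the joint distribution on the observed features, which is the step the cp-WPRF model performs.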

  3. Simulation investigation of multipactor in metal components for space application with an improved secondary emission model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yun; Cui, Wan-Zhao; Wang, Hong-Guang

    2015-05-15

    Effects of the secondary electron emission (SEE) phenomenon of metal surfaces on the multipactor analysis of microwave components are investigated numerically and experimentally in this paper. Both the secondary electron yield (SEY) and the emitted energy spectrum measurements are performed on silver-plated samples for an accurate description of the SEE phenomenon. A phenomenological probabilistic model based on SEE physics is utilized and fitted accurately to the measured SEY and emitted energy spectrum of the conditioned surface material of microwave components. In particular, the phenomenological probabilistic model is mathematically extended to the low primary-energy end (below 20 eV), since no accurate measurement data can be obtained there. Embedding the phenomenological probabilistic model into the Electromagnetic Particle-In-Cell (EM-PIC) method, the electronic resonant multipacting in microwave components can be tracked and hence the multipactor threshold can be predicted. The threshold prediction errors for the transformer and the coaxial filter are 0.12 dB and 1.5 dB, respectively. Simulation results demonstrate that the discharge threshold is strongly dependent on the SEYs and the energy spectrum in the low-energy end (lower than 50 eV). Multipacting simulation results agree quite well with experiments in practical components, while the phenomenological probabilistic model fits both the SEY and the emission energy spectrum better than the traditionally used model and distribution. The EM-PIC simulation method with the phenomenological probabilistic model for the surface collision simulation has been demonstrated for predicting the multipactor threshold in metal components for space application.

  4. Neural-Based Compensation of Nonlinearities in an Airplane Longitudinal Model with Dynamic-Inversion Control

    PubMed Central

    Li, YuHui; Jin, FeiTeng

    2017-01-01

    The inversion design approach is a very useful tool for achieving the decoupling control goal in complex multiple-input multiple-output nonlinear systems, such as airplane and spacecraft models. In this work, a flight control law is proposed using the neural-based inversion design method associated with nonlinear compensation for a general longitudinal model of the airplane. First, the nonlinear mathematical model is converted to an equivalent linear model based on feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with the neural network and nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects of both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680

  5. Improving Conceptual Models Using AEM Data and Probability Distributions

    NASA Astrophysics Data System (ADS)

    Davis, A. C.; Munday, T. J.; Christensen, N. B.

    2012-12-01

    With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion model result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion satisfies the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, thus the minimum model has all layer parameters distributed about their mean value with well-defined variances. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?" This can be calculated using the erf or erfc functions. The covariance values of the inversion become marginalised in the integration: only the main diagonal is used. Complications arise when we ask more specific questions, such as "What is the probability that the resistivity of layer 2 <= x, given that layer 1 <= y?" The probability then becomes conditional, the calculation includes covariance terms, the integration is taken over many dimensions, and the cross-correlation of parameters becomes important. To illustrate, we examine the inversion results of a Tempest AEM survey over the Uley Basin aquifers in the Eyre Peninsula, South Australia. Key aquifers include the unconfined Bridgewater Formation that overlies the Uley and Wanilla Formations, which contain Tertiary clays and Tertiary sandstone. These Formations overlie weathered basement, which defines the lower bound of the Uley Basin aquifer systems. By correlating the conductivity of the sub-surface Formation types, we pose questions such as: "What is the probability-depth of the Bridgewater Formation in the Uley South Basin?", "What is the thickness of the Uley Formation?" and "What is the most probable depth to basement?" We use these questions to generate improved conceptual hydrogeological models of the Uley Basin in order to develop better estimates of aquifer extent and the available groundwater resource.
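
    The marginal and conditional questions in this abstract reduce to Gaussian CDF evaluations once a linearised posterior mean and covariance are available. The following sketch shows both calculations for a hypothetical three-layer posterior; the means, covariance and cut-off value are invented for illustration and are not taken from the Uley Basin survey.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Hypothetical posterior from a linearised AEM inversion:
# mean log10-resistivity of 3 layers and their posterior covariance.
mean = np.array([1.2, 0.8, 2.0])                  # log10(ohm-m)
cov = np.array([[0.04, 0.02, 0.00],
                [0.02, 0.09, 0.03],
                [0.00, 0.03, 0.16]])
cut = np.log10(30.0)                              # resistivity cut-off, 30 ohm-m

# "What is the probability that each layer is <= the cut-off?"
# Marginal answer: only the diagonal of the covariance is used.
p_marginal = norm.cdf(cut, loc=mean, scale=np.sqrt(np.diag(cov)))

# "What is the probability that *all* layers are <= the cut-off?"
# Joint answer: the full covariance (cross-correlations) matters.
p_joint = multivariate_normal(mean, cov).cdf(np.full(3, cut))

# "P(layer 2 <= cut | layer 1 <= cut)" follows from two joint probabilities.
p_12 = multivariate_normal(mean[:2], cov[:2, :2]).cdf([cut, cut])
p_1 = norm.cdf(cut, mean[0], np.sqrt(cov[0, 0]))
print(p_marginal, p_joint, p_12 / p_1)
```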

  6. A unified probabilistic framework for spontaneous facial action modeling and understanding.

    PubMed

    Tong, Yan; Chen, Jixu; Ji, Qiang

    2010-02-01

    Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.

  7. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.

    PubMed

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir

    2015-09-01

    With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifact. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
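
    The cost function described here, a data-misfit term plus an L1 sparsity term and a total-variation term minimized by gradient descent, can be prototyped in a few lines. The sketch below uses a random matrix as a stand-in for the acoustic forward operator, smooths the non-differentiable terms so plain gradient descent applies, and takes the identity basis as the sparse representation; all sizes, weights and the numerical gradient are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def tv_l1_objective(x, A, y, lam_l1, lam_tv, eps=1e-6):
    """Smoothed TV-L1 model-based cost: data misfit + L1 + total variation.

    x is a flattened n*n image; A is a stand-in acoustic forward operator;
    eps smooths the absolute values so plain gradient descent applies.
    """
    n = int(np.sqrt(x.size))
    img = x.reshape(n, n)
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    misfit = 0.5 * np.sum((A @ x - y) ** 2)
    l1 = lam_l1 * np.sum(np.sqrt(x ** 2 + eps))          # smoothed L1 term
    tv = lam_tv * (np.sum(np.sqrt(dx ** 2 + eps)) + np.sum(np.sqrt(dy ** 2 + eps)))
    return misfit + l1 + tv

def numerical_grad(f, x, h=1e-5):
    """Central-difference gradient, adequate for this tiny example."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Tiny synthetic example: 8x8 image, random forward operator, two point sources
rng = np.random.default_rng(2)
n = 8
A = rng.normal(size=(80, n * n))
x_true = np.zeros(n * n)
x_true[18], x_true[45] = 1.0, 0.5
y = A @ x_true + 0.01 * rng.normal(size=80)

x = np.zeros(n * n)
f = lambda z: tv_l1_objective(z, A, y, lam_l1=0.05, lam_tv=0.05)
for _ in range(200):                                     # plain gradient descent
    x -= 1e-3 * numerical_grad(f, x)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

    In practice the regularisation weights trade off sharpness against suppression of streak artifacts, which is the balance the paper evaluates on numerical and in vivo data.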

  8. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important as the content of the videos varies a lot, especially for tracking implementations. Contrary to the image processing field, the problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While we believe that feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD, the deterministic and probabilistic approaches, have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a two-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches by Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other, simpler detection methods due to its heavy processing requirements. PMID:23202226

  9. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks.

    PubMed

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and in the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty using the closest probabilistic analog to KM called Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to during real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses.
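
    Since the probabilistic step in this pipeline is Fuzzy C-means clustering of the EEG topographies at GFP peaks, a compact illustration of FCM is given below. It is a plain NumPy implementation on random stand-in maps; the channel count, cluster count, fuzzifier m and the omission of microstate-specific details such as polarity invariance are all simplifying assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=4, m=2.0, n_iter=100, seed=0):
    """Plain Fuzzy C-means; X is (n_samples, n_channels) EEG maps at GFP peaks.

    Returns cluster centroids (candidate microstate maps) and the soft
    membership matrix U, whose rows give per-map label probabilities.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(n_clusters), size=n)        # random soft start
    for _ in range(n_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]          # update centroids
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # update memberships
    return C, U

# Hypothetical GFP-peak topographies: 500 maps x 32 channels
rng = np.random.default_rng(3)
maps = rng.normal(size=(500, 32))
centroids, membership = fuzzy_c_means(maps, n_clusters=4)
print(membership[:3].round(2))                            # soft microstate labels
```

    The soft membership rows are what replace the usual hard, correlation-based microstate labels, and they can also serve as training targets for the softmax MLPs described in the abstract.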

  10. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks

    PubMed Central

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and in the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling, to account for these sources of uncertainty using the closest probabilistic analog to KM called Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to during real motor movements. We find that some relationships may be more evident using FCM than using KM and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as with brain computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural network-driven microstate assignment has a number of advantages that we have discussed, which are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural network-driven approach to microstate analysis is likely to better model and reveal details and the variability hidden in current deterministic and binarized microstate assignment and analyses. PMID:29163110

  11. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
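
    The key ingredient borrowed from reliability analysis is FORM: searching for the design point, the most likely combination of input values on the failure surface in standard-normal space, whose distance from the origin gives the reliability index and hence the exceedance probability. The sketch below applies this idea to a deliberately simple rational-formula runoff model; the lognormal marginals, parameter values and flood threshold are invented for illustration and are not the event-based model used in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical event-based runoff model: peak flow from rainfall intensity i
# and a runoff coefficient c, both treated as lognormal random variables.
def peak_flow(i, c, area_km2=20.0):
    return 0.278 * c * i * area_km2               # rational formula, Q in m3/s

def u_to_x(u):
    """Map standard-normal variables to physical ones (lognormal marginals)."""
    i = np.exp(np.log(20.0) + 0.4 * u[0])         # rainfall intensity [mm/h]
    c = np.exp(np.log(0.5) + 0.2 * u[1])          # runoff coefficient [-]
    return i, c

q_design = 150.0                                   # flood threshold [m3/s]
g = lambda u: q_design - peak_flow(*u_to_x(u))     # limit state: failure if g <= 0

# FORM: the design point is the point on g(u) = 0 closest to the origin
# in standard-normal space; its distance is the reliability index beta.
res = minimize(lambda u: u @ u, x0=np.array([1.0, 1.0]),
               constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)
print("design point (i, c):", u_to_x(res.x))
print("reliability index:", beta, " exceedance probability:", norm.cdf(-beta))
```

    The design point itself is what populates the proposed design charts: it is the representative combination of rainfall intensity and other parameters associated with a flood of the chosen return period.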

  12. Modeling analysis of pulsed magnetization process of magnetic core based on inverse Jiles-Atherton model

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang

    2018-05-01

    The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.

  13. GBIS (Geodetic Bayesian Inversion Software): Rapid Inversion of InSAR and GNSS Data to Estimate Surface Deformation Source Parameters and Uncertainties

    NASA Astrophysics Data System (ADS)

    Bagnardi, M.; Hooper, A. J.

    2017-12-01

    Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours of their acquisition. To truly take advantage of these opportunities we must be able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and, by scaling, errors in the model). The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping dike with uniform opening) and for dipping faults with uniform slip, embedded in an isotropic elastic half-space. However, the software architecture allows the user to easily add any other analytical or numerical forward models to calculate displacements at the surface. GBIS is delivered with a detailed user manual and three synthetic datasets for testing and practical training.
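
    The sampling engine described here is a random-walk Metropolis-Hastings chain whose proposal step size is tuned adaptively. The Python sketch below reproduces that idea on a toy one-dimensional "deformation" forward model with two parameters; the forward function, noise level, prior and the simple acceptance-rate-based adaptation rule are illustrative assumptions and are not GBIS code (GBIS itself is written in Matlab).

```python
import numpy as np

# Toy "deformation" forward model: surface displacement from two source
# parameters (a depth-like and a strength-like quantity), purely illustrative.
def forward(theta, x):
    depth, strength = theta
    return strength * depth / (x ** 2 + depth ** 2) ** 1.5

rng = np.random.default_rng(4)
x_obs = np.linspace(-10, 10, 40)
theta_true = np.array([3.0, 50.0])
d_obs = forward(theta_true, x_obs) + 0.05 * rng.normal(size=x_obs.size)

def log_post(theta):
    if theta[0] <= 0:                              # uniform prior, depth > 0
        return -np.inf
    r = d_obs - forward(theta, x_obs)
    return -0.5 * np.sum(r ** 2) / 0.05 ** 2       # Gaussian likelihood

# Random-walk Metropolis-Hastings with a step size adapted during burn-in
# to target roughly 25% acceptance.
theta = np.array([1.0, 10.0])
step = np.array([0.5, 5.0])
lp = log_post(theta)
samples, accepted = [], 0
for it in range(20000):
    prop = theta + step * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
        accepted += 1
    if it < 5000 and (it + 1) % 200 == 0:          # adapt only during burn-in
        rate = accepted / (it + 1)
        step *= 1.1 if rate > 0.25 else 0.9
    samples.append(theta.copy())

post = np.array(samples[5000:])
print("posterior mean:", post.mean(axis=0))
print("2.5-97.5% bounds:", np.percentile(post, [2.5, 97.5], axis=0))
```

    Freezing the step size after burn-in keeps the chain a valid Metropolis-Hastings sampler, which is why the adaptation here is restricted to the discarded portion of the chain.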

  14. Inversion of Robin coefficient by a spectral stochastic finite element approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin Bangti; Zou Jun

    2008-03-01

    This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.

  15. Using meta-information of a posteriori Bayesian solutions of the hypocentre location task for improving accuracy of location error estimation

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2015-06-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the earthquake focus location is relatively simple, a quantitative estimation of the location accuracy is a really challenging task, even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
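
    As a concrete reading of this idea, the Shannon entropy of a gridded a posteriori location pdf is a single number that grows as the solution becomes more diffuse, so it can flag poorly constrained locations. The sketch below computes it for two synthetic 3-D Gaussian posteriors of different spread; the grid, cell size and Gaussian shapes are assumptions made purely for illustration.

```python
import numpy as np

def shannon_entropy(pdf, cell_volume):
    """Differential Shannon entropy of a gridded a posteriori pdf (in nats)."""
    p = pdf / (pdf.sum() * cell_volume)            # normalise to integrate to 1
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask])) * cell_volume

# Hypothetical gridded location posteriors on a 1 km x 1 km x 1 km volume:
# a tight solution and a diffuse one (both isotropic Gaussians here).
grid = np.linspace(-500, 500, 101)                 # metres
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
cell = (grid[1] - grid[0]) ** 3
r2 = X ** 2 + Y ** 2 + Z ** 2
for sigma in (30.0, 120.0):                        # location std-dev in metres
    pdf = np.exp(-0.5 * r2 / sigma ** 2)
    print(f"sigma = {sigma:5.1f} m -> entropy = {shannon_entropy(pdf, cell):.2f} nats")
```

    Comparing such entropy values across events recorded by the same network gives a relative measure of how well each hypocentre is constrained, even when the absolute error statistics are unknown.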

  16. A Proposed Probabilistic Extension of the Halpern and Pearl Definition of ‘Actual Cause’

    PubMed Central

    2017-01-01

    ABSTRACT Joseph Halpern and Judea Pearl ([2005]) draw upon structural equation models to develop an attractive analysis of 'actual cause'. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation. Contents: 1 Introduction; 2 Preemption; 3 Structural Equation Models; 4 The Halpern and Pearl Definition of 'Actual Cause'; 5 Preemption Again; 6 The Probabilistic Case; 7 Probabilistic Causal Models; 8 A Proposed Probabilistic Extension of Halpern and Pearl's Definition; 9 Twardy and Korb's Account; 10 Probabilistic Fizzling; 11 Conclusion. PMID:29593362

  17. A Probabilistic Model of Social Working Memory for Information Retrieval in Social Interactions.

    PubMed

    Li, Liyuan; Xu, Qianli; Gan, Tian; Tan, Cheston; Lim, Joo-Hwee

    2018-05-01

    Social working memory (SWM) plays an important role in navigating social interactions. Inspired by studies in psychology, neuroscience, cognitive science, and machine learning, we propose a probabilistic model of SWM to mimic human social intelligence for personal information retrieval (IR) in social interactions. First, we establish a semantic hierarchy as social long-term memory to encode personal information. Next, we propose a semantic Bayesian network as the SWM, which integrates the cognitive functions of accessibility and self-regulation. One subgraphical model implements the accessibility function to learn the social consensus about IR based on social information concepts, clustering, social context, and similarity between persons. Beyond accessibility, one more layer is added to simulate the function of self-regulation to perform the personal adaptation to the consensus based on human personality. Two learning algorithms are proposed to train the probabilistic SWM model on a raw dataset of high uncertainty and incompleteness. One is an efficient learning algorithm based on Newton's method, and the other is a genetic algorithm. Systematic evaluations show that the proposed SWM model is able to learn human social intelligence effectively and outperforms the baseline Bayesian cognitive model. Toward real-world applications, we implement our model on Google Glass as a wearable assistant for social interaction.

  18. A Model Counting Characterization of Diagnoses

    DTIC Science & Technology

    2002-05-04


  19. Propagating Water Quality Analysis Uncertainty Into Resource Management Decisions Through Probabilistic Modeling

    NASA Astrophysics Data System (ADS)

    Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.

    2007-12-01

    Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different management decisions. Our research results indicate that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our research also indicates that the probability of violating current water quality guidelines at specified true fecal coliform concentrations depends on the laboratory procedure used. As a result, quality-based management decisions, such as opening or closing a shellfishing area, may also depend on the laboratory procedure used.
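
    Since the abstract defines the MPN as the maximum likelihood estimate of the true concentration from counts of positive tubes in a serial dilution, a direct numerical version of that estimator is easy to write down. The sketch below maximises the standard binomial likelihood in which each tube turns positive with probability 1 - exp(-c*v); the dilution scheme and tube counts in the example are hypothetical, and the function name and bounds are choices made here rather than anything from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mpn_mle(volumes_ml, n_tubes, n_positive):
    """Maximum-likelihood fecal coliform concentration (per 100 mL) from a
    serial-dilution tube test, the quantity that MPN tables approximate."""
    v = np.asarray(volumes_ml, float)
    n = np.asarray(n_tubes, float)
    p = np.asarray(n_positive, float)

    def neg_log_lik(c):
        prob_pos = 1.0 - np.exp(-c * v)            # P(tube turbid | conc. c per mL)
        return -np.sum(p * np.log(prob_pos + 1e-300) - (n - p) * c * v)

    res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1e3), method="bounded")
    return 100.0 * res.x                            # convert to per 100 mL

# Standard 3-dilution, 5-tube test: 10, 1 and 0.1 mL aliquots,
# with 4, 2 and 1 positive tubes respectively (hypothetical counts).
print(round(mpn_mle([10.0, 1.0, 0.1], [5, 5, 5], [4, 2, 1]), 1), "MPN/100 mL")
```

    Repeating this estimate over simulated tube outcomes at a fixed true concentration is one way to reproduce the intrinsic variability of the MPN procedure that the paper's probabilistic model describes.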

  20. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
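
    The efficiency claim is easy to see in code: for n uncertain inputs, the two-point estimate method evaluates the model at the 2^n combinations of mean plus or minus one standard deviation instead of tens of thousands of Monte Carlo samples. The sketch below compares the two for a made-up groundwater-head function; the function, parameter values and coefficients of variation are assumptions chosen only to illustrate the mechanics.

```python
import numpy as np
from itertools import product

def two_point_estimate(model, means, cvs):
    """Two-point estimate for uncorrelated, symmetric inputs.

    Evaluates the model at the 2**n combinations of mean +/- one standard
    deviation and returns the estimated mean and standard deviation of the
    output (here, a water-table elevation).
    """
    means = np.asarray(means, float)
    sigmas = means * np.asarray(cvs, float)
    outputs = []
    for signs in product([-1.0, 1.0], repeat=len(means)):
        outputs.append(model(means + np.array(signs) * sigmas))
    outputs = np.array(outputs)                    # each point has weight 2**-n
    return outputs.mean(), outputs.std()

# Hypothetical steady-state head at a point, as a function of hydraulic
# conductivity K, recharge R and a storage-related parameter S (illustrative).
head = lambda p: 50.0 + 10.0 * p[1] / p[0] + 2.0 * np.log(p[2])

mu, sd = two_point_estimate(head, means=[5.0, 1.0, 0.1], cvs=[0.1, 0.1, 0.1])

# Monte Carlo reference with normally distributed inputs
rng = np.random.default_rng(5)
X = rng.normal([5.0, 1.0, 0.1], np.array([5.0, 1.0, 0.1]) * 0.1, size=(100000, 3))
mc = np.array([head(p) for p in X])
print(f"two-point: {mu:.3f} +/- {sd:.3f}   Monte Carlo: {mc.mean():.3f} +/- {mc.std():.3f}")
```

    With 3 uncertain variables the two-point method needs 8 model runs versus 100,000 here, which illustrates why the approach pays off only while the number of uncertain variables stays small.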

  1. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires only prior knowledge of the mean and coefficient of variation of the uncertain variables. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of the uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.

  2. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    USGS Publications Warehouse

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  3. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets.

    PubMed

    Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-05-01

    Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns with that of preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10⁻²⁰) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., "critical care," "pneumonia," "neurologic evaluation"). Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
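
    The following sketch illustrates the core idea with scikit-learn's LatentDirichletAllocation: each admission becomes a bag-of-orders "document", and the fitted topic mixtures summarize the patient context. The order names and the choice of three topics are purely illustrative and do not reproduce the study's data or its 32-topic models.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      # Each "document" is the bag of structured items (orders) from one admission's first 24 h.
      # Order names here are hypothetical.
      admissions = [
          "cbc chest_xray ceftriaxone azithromycin sputum_culture",
          "cbc troponin ecg aspirin heparin",
          "head_ct neuro_checks mri_brain keppra",
          "cbc chest_xray vancomycin blood_culture lactate",
          "troponin ecg metoprolol echo",
          "neuro_checks head_ct eeg",
      ]

      vectorizer = CountVectorizer()
      X = vectorizer.fit_transform(admissions)

      # Fit a small LDA model; n_components plays the role of the "up to 32 topics".
      lda = LatentDirichletAllocation(n_components=3, random_state=0)
      doc_topics = lda.fit_transform(X)          # per-admission topic mixture

      # Top orders per topic give the human-interpretable theme (e.g., "pneumonia").
      terms = vectorizer.get_feature_names_out()
      for k, row in enumerate(lda.components_):
          top = [terms[i] for i in row.argsort()[::-1][:3]]
          print(f"topic {k}: {top}")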

  4. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets

    PubMed Central

    Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-01-01

    Objective: Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns with that of preconstructed order sets. Materials and Methods: The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Results: Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10−20) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., “critical care,” “pneumonia,” “neurologic evaluation”). Discussion: Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Conclusion: Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. PMID:27655861

  5. Probabilistic Modeling of Settlement Risk at Land Disposal Facilities - 12304

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foye, Kevin C.; Soong, Te-Yang

    2012-07-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass (caused by inconsistent compaction, void space distribution, debris-soil mix ratio, waste material stiffness, time-dependent primary compression of the fine-grained soil matrix, long-term creep settlement of the soil matrix and the debris, etc.) at most land disposal facilities. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties which control differential settlement. An alternative, probabilistic solution is to use random fields to model the waste and sub-grade properties. The modeling effort informs the design, construction, operation, and maintenance of land disposal facilities. A probabilistic method to establish design criteria for waste placement and compaction is introduced using the model. Random fields are ideally suited to problems of differential settlement modeling of highly heterogeneous foundations, such as waste. Random fields model the seemingly random spatial distribution of a design parameter, such as compressibility. When used for design, the use of these models prompts the need for probabilistic design criteria. It also allows for a statistical approach to waste placement acceptance criteria. An example design evaluation was performed, illustrating the use of the probabilistic differential settlement simulation methodology to assemble a design guidance chart. The purpose of this design evaluation is to enable the designer to select optimal initial combinations of design slopes and quality control acceptance criteria that yield an acceptable proportion of post-settlement slopes meeting some design minimum. For this specific example, relative density, which can be determined through field measurements, was selected as the field quality control parameter for waste placement. This technique can be extended to include a rigorous performance-based methodology using other parameters (void space criteria, debris-soil mix ratio, pre-loading, etc.). As shown in this example, each parameter range, or sets of parameter ranges can be selected such that they can result in an acceptable, long-term differential settlement according to the probabilistic model. The methodology can also be used to re-evaluate the long-term differential settlement behavior at closed land disposal facilities to identify, if any, problematic facilities so that remedial action (e.g., reinforcement of upper and intermediate waste layers) can be implemented. Considering the inherent spatial variability in waste and earth materials and the need for engineers to apply sound quantitative practices to engineering analysis, it is important to apply the available probabilistic techniques to problems of differential settlement. One such method to implement probability-based differential settlement analyses for the design of landfill final covers has been presented. The design evaluation technique presented is one tool to bridge the gap from deterministic practice to probabilistic practice. (authors)
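
    A minimal sketch of the random-field approach: a spatially correlated compressibility field is sampled along a 1-D cover profile, settlement is computed at each point, and Monte Carlo repetition estimates the probability that every post-settlement slope segment still meets a design minimum. Geometry, correlation length, and compressibility statistics are illustrative assumptions, not the paper's values.

      import numpy as np

      rng = np.random.default_rng(1)

      # 1-D profile of the final cover slope (illustrative geometry and statistics).
      n_pts, dx = 25, 5.0                     # points along the slope, spacing (m)
      x = np.arange(n_pts) * dx
      design_slope = 0.05                     # as-built grade (5 %)
      min_slope = 0.02                        # post-settlement acceptance criterion (2 %)
      waste_thickness = 15.0                  # m of compressible waste below the cover

      # Spatially correlated compressibility field (squared-exponential covariance).
      corr_len, mean_c, cv_c = 25.0, 0.10, 0.15
      cov = (cv_c * mean_c) ** 2 * np.exp(-((x[:, None] - x[None, :]) / corr_len) ** 2)
      L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_pts))

      def slopes_acceptable():
          """Draw one random-field realization and check every settled slope segment."""
          c = np.clip(mean_c + L @ rng.standard_normal(n_pts), 0.0, None)
          settlement = c * waste_thickness                 # settlement at each point (m)
          elevation = design_slope * x - settlement        # settled cover profile
          return np.all(np.diff(elevation) / dx >= min_slope)

      n_sim = 5000
      p_ok = np.mean([slopes_acceptable() for _ in range(n_sim)])
      print(f"P(all post-settlement slopes >= {min_slope:.0%}) ~ {p_ok:.2f}")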

  6. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Li, Weixuan; Zeng, Lingzao

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansions to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and the number of parameters is large, PCKF could be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable in strongly nonlinear and high dimensional problems.
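
    The RAPCKF itself is not reproduced here, but the ensemble Kalman filter analysis step that it is benchmarked against can be sketched in a few lines. The toy observation operator and noise level below are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(2)

      def enkf_update(ensemble, obs, obs_operator, obs_err_std):
          """One EnKF analysis step with perturbed observations.

          ensemble     : (n_ens, n_state) prior samples of states/parameters
          obs          : (n_obs,) observed data (e.g., pressure heads)
          obs_operator : maps one state vector to predicted observations
          obs_err_std  : observation error standard deviation
          """
          n_ens = ensemble.shape[0]
          predicted = np.array([obs_operator(m) for m in ensemble])      # (n_ens, n_obs)

          X = ensemble - ensemble.mean(axis=0)
          Y = predicted - predicted.mean(axis=0)
          C_xy = X.T @ Y / (n_ens - 1)
          C_yy = Y.T @ Y / (n_ens - 1) + obs_err_std**2 * np.eye(len(obs))
          K = C_xy @ np.linalg.inv(C_yy)                                  # Kalman gain

          perturbed_obs = obs + obs_err_std * rng.standard_normal((n_ens, len(obs)))
          return ensemble + (perturbed_obs - predicted) @ K.T

      # Toy usage: recover a scalar parameter from two noisy linear "observations".
      truth = np.array([1.5])
      H = lambda m: np.array([2.0 * m[0], -m[0]])
      obs = H(truth) + 0.1 * rng.standard_normal(2)
      prior = rng.normal(0.0, 1.0, size=(200, 1))
      posterior = enkf_update(prior, obs, H, 0.1)
      print(f"posterior mean ~ {posterior.mean():.2f} (truth {truth[0]})")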

  7. A Joint Gaussian Process Model for Active Visual Recognition with Expertise Estimation in Crowdsourcing

    PubMed Central

    Long, Chengjiang; Hua, Gang; Kapoor, Ashish

    2015-01-01

    We present a noise resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with two levels of flip models. Expectation propagation is adopted for efficient approximate Bayesian inference of our probabilistic model for classification, based on which, a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each individual labeler. The probabilistic nature of our model immediately allows the adoption of the prediction entropy for active selection of data samples to be labeled, and active selection of high quality labelers based on their estimated expertise to label the data. We apply the proposed model for four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from the Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the proposed model with the Predictive Active Set Selection Method to speed up the active learning system, whose efficacy is verified by conducting experiments on the first three datasets. The results show our extended model can not only preserve a higher accuracy, but also achieve a higher efficiency. PMID:26924892

  8. Probabilistic representation in syllogistic reasoning: A theory to integrate mental models and heuristics.

    PubMed

    Hattori, Masasi

    2016-12-01

    This paper presents a new theory of syllogistic reasoning. The proposed model assumes there are probabilistic representations of given signature situations. Instead of conducting an exhaustive search, the model constructs an individual-based "logical" mental representation that expresses the most probable state of affairs, and derives a necessary conclusion that is not inconsistent with the model using heuristics based on informativeness. The model is a unification of previous influential models. Its descriptive validity has been evaluated against existing empirical data and two new experiments, and by qualitative analyses based on previous empirical findings, all of which supported the theory. The model's behavior is also consistent with findings in other areas, including working memory capacity. The results indicate that people assume the probabilities of all target events mentioned in a syllogism to be almost equal, which suggests links between syllogistic reasoning and other areas of cognition. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.

  9. A probabilistic approach to aircraft design emphasizing stability and control uncertainties

    NASA Astrophysics Data System (ADS)

    Delaurentis, Daniel Andrew

    In order to address identified deficiencies in current approaches to aerospace systems design, a new method has been developed. This new method for design is based on the premise that design is a decision making activity, and that deterministic analysis and synthesis can lead to poor or misguided decision making. This is due to a lack of disciplinary knowledge of sufficient fidelity about the product, to the presence of uncertainty at multiple levels of the aircraft design hierarchy, and to a failure to focus on overall affordability metrics as measures of goodness. Design solutions are desired which are robust to uncertainty and are based on the maximum knowledge possible. The new method represents advances in the two following general areas. 1. Design models and uncertainty. The research performed completes a transition from a deterministic design representation to a probabilistic one through a modeling of design uncertainty at multiple levels of the aircraft design hierarchy, including: (1) Consistent, traceable uncertainty classification and representation; (2) Concise mathematical statement of the Probabilistic Robust Design problem; (3) Variants of the Cumulative Distribution Functions (CDFs) as decision functions for Robust Design; (4) Probabilistic Sensitivities which identify the most influential sources of variability. 2. Multidisciplinary analysis and design. Embedded in the probabilistic methodology is a new approach for multidisciplinary design analysis and optimization (MDA/O), employing disciplinary analysis approximations formed through statistical experimentation and regression. These approximation models are a function of design variables common to the system level as well as other disciplines. For aircraft, it is proposed that synthesis/sizing is the proper avenue for integrating multiple disciplines. Research hypotheses are translated into a structured method, which is subsequently tested for validity. Specifically, the implementation involves the study of the relaxed static stability technology for a supersonic commercial transport aircraft. The probabilistic robust design method is exercised resulting in a series of robust design solutions based on different interpretations of "robustness". Insightful results are obtained and the ability of the method to expose trends in the design space is noted as a key advantage.

  10. Probabilistic Solution of Inverse Problems.

    DTIC Science & Technology

    1985-09-01

    This report describes research done within the Laboratory for Information and Decision Systems and the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology ... the analysis of systems endowed with perceptual abilities is the construction of internal representations of the physical structures in the external world.

  11. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio

    2015-09-15

    Purpose: With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired with a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross-sectional geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed better performance than the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifacts. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
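
    A minimal sketch of the kind of cost function described above, minimizing ||Ax - b||^2 + lam_l1*||x||_1 + lam_tv*TV(x) by proximal gradient descent on a tiny 1-D problem. The identity is used as the sparsifying transform and the TV term is smoothed; the actual optoacoustic model matrix and the paper's solver are not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)

      def soft_threshold(v, t):
          """Proximal operator of t*||.||_1 (element-wise soft thresholding)."""
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def tv_grad(x, eps=1e-3):
          """Gradient of a smoothed total-variation term: sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
          d = np.diff(x)
          w = d / np.sqrt(d ** 2 + eps)
          g = np.zeros_like(x)
          g[:-1] -= w
          g[1:] += w
          return g

      def tv_l1_inversion(A, b, lam_l1=0.02, lam_tv=0.05, n_iter=800, eps=1e-3):
          """Proximal-gradient minimization of ||Ax - b||^2 + lam_l1*||x||_1 + lam_tv*TV(x)."""
          x = np.zeros(A.shape[1])
          step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2 + 4 * lam_tv / np.sqrt(eps))
          for _ in range(n_iter):
              grad = 2 * A.T @ (A @ x - b) + lam_tv * tv_grad(x, eps)
              x = soft_threshold(x - step * grad, step * lam_l1)
          return x

      # Toy piecewise-constant "image" observed through a random projection operator.
      x_true = np.zeros(50)
      x_true[15:30] = 1.0
      A = rng.standard_normal((40, 50)) / np.sqrt(40)
      b = A @ x_true + 0.01 * rng.standard_normal(40)
      x_rec = tv_l1_inversion(A, b)
      print(f"relative reconstruction error: {np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true):.3f}")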

  12. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne

    2007-01-01

    A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.

  13. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models was implemented in the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) code, including fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.

  14. The Diagnostic Challenge Competition: Probabilistic Techniques for Fault Diagnosis in Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Ricks, Brian W.; Mengshoel, Ole J.

    2009-01-01

    Reliable systems health management is an important research area of NASA. A health management system that can accurately and quickly diagnose faults in various on-board systems of a vehicle will play a key role in the success of current and future NASA missions. We introduce in this paper the ProDiagnose algorithm, a diagnostic algorithm that uses a probabilistic approach, accomplished with Bayesian Network models compiled to Arithmetic Circuits, to diagnose these systems. We describe the ProDiagnose algorithm, how it works, and the probabilistic models involved. We show by experimentation on two Electrical Power Systems based on the ADAPT testbed, used in the Diagnostic Challenge Competition (DX 09), that ProDiagnose can produce results with over 96% accuracy and less than 1 second mean diagnostic time.

  15. 2D Unstructured Grid Based Constrained Inversion of Magnetic Data Using Fuzzy C Means Clustering and Lithology Classification

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Singh, A.; Sharma, S. P.

    2016-12-01

    Regular grid discretization is often utilized to define complex geological models. However, this subdivision strategy represents the topographic observation surface with lower precision. We have developed a new 2D unstructured grid based inversion of magnetic data for models that include topography. It consolidates prior parametric information into a deterministic inversion scheme to enhance the boundaries between different lithologies based on the magnetic susceptibility distribution recovered from the inversion. The resulting susceptibility model satisfies both the observed magnetic data and the parametric information and can therefore represent the earth better than geophysical inversion models that honor only the observed magnetic data. Geophysical inversion and lithology classification are generally treated as two autonomous methodologies and connected in a serial way. The presented inversion strategy integrates these two parts into a unified scheme. To reduce the storage space and computation time, the conjugate gradient method is used. This makes imaging inversion of magnetic data feasible and practical for a large number of triangular cells. The efficacy of the presented inversion is demonstrated using two synthetic examples and one field data example.

  16. Probabilistic regional climate projection in Japan using a regression model with CMIP5 multi-model ensemble experiments

    NASA Astrophysics Data System (ADS)

    Ishizaki, N. N.; Dairaku, K.; Ueno, G.

    2016-12-01

    We have developed a statistical downscaling method for estimating probabilistic climate projections using multiple CMIP5 general circulation models (GCMs). A regression model was established so that the combination of GCM weights reflects the characteristics of the variation of observations at each grid point. Cross validations were conducted to select GCMs and to evaluate the regression model while avoiding multicollinearity. Using a spatially high-resolution observation system, we produced statistically downscaled probabilistic climate projections with 20-km horizontal grid spacing. Root mean squared errors for monthly mean surface air temperature and precipitation estimated by the regression method were the smallest compared with the results derived from a simple ensemble mean of GCMs and from a cumulative distribution function based bias correction method. Projected changes in mean temperature and precipitation were basically similar to those of the simple ensemble mean of GCMs. Mean precipitation was generally projected to increase, associated with increased temperature and the consequent increase in atmospheric moisture content. Weakening of the winter monsoon may contribute to precipitation decreases in some areas. A temperature increase in excess of 4 K is expected in most areas of Japan by the end of the 21st century under the RCP8.5 scenario. The estimated probability of monthly precipitation exceeding 300 mm would increase on the Pacific side during summer and on the Japan Sea side during winter. This probabilistic climate projection based on the statistical method can be expected to provide useful information for impact studies and risk assessments.
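
    A toy sketch of the regression idea: GCM ensemble members are combined with least-squares weights fitted to observations at one grid point, and leave-one-year-out cross-validation compares the regression against the simple ensemble mean. All series are synthetic; the actual predictor-selection procedure and bias-correction benchmark are not reproduced.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic example: monthly temperature at one 20-km grid point.
      n_months, n_gcms = 240, 8
      truth = 10.0 + 8.0 * np.sin(2 * np.pi * np.arange(n_months) / 12)
      obs = truth + 0.3 * rng.standard_normal(n_months)
      # Each GCM tracks the truth with its own bias, amplitude error and noise.
      gcms = np.stack([truth * rng.uniform(0.8, 1.2) + rng.uniform(-2, 2)
                       + rng.standard_normal(n_months) for _ in range(n_gcms)], axis=1)

      def fit_weights(X, y):
          """Least-squares regression weights (with intercept) combining GCM members."""
          Xb = np.column_stack([np.ones(len(y)), X])
          w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
          return w

      def loo_rmse(X, y):
          """Leave-one-year-out cross-validation error of the regression combination."""
          errs = []
          for yr in range(len(y) // 12):
              test = slice(yr * 12, (yr + 1) * 12)
              train = np.ones(len(y), bool)
              train[test] = False
              w = fit_weights(X[train], y[train])
              pred = np.column_stack([np.ones(12), X[test]]) @ w
              errs.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
          return np.mean(errs)

      print(f"regression CV RMSE : {loo_rmse(gcms, obs):.2f} K")
      print(f"ensemble-mean RMSE : {np.sqrt(np.mean((gcms.mean(axis=1) - obs) ** 2)):.2f} K")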

  17. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.

  18. Probabilistic Neighborhood-Based Data Collection Algorithms for 3D Underwater Acoustic Sensor Networks.

    PubMed

    Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo

    2017-02-08

    Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, the development of three-dimensional underwater acoustic sensor networks (3D UASNs) provides a novel strategy to acquire marine environment information conveniently, efficiently and accurately. However, the specific propagation effects of the acoustic communication channel lead to a decreased probability of successful information delivery with increasing distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic acoustic communication model. An autonomous underwater vehicle (AUV) is employed to traverse along the designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids can allow the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations by constructing a minimum probabilistic neighborhood covering set to reduce data latency. Otherwise, by increasing the transmission rounds, our proposed algorithms can provide a tradeoff between data collection latency and information gain. These algorithms are compared with a basic nearest-neighbor heuristic algorithm via simulations. Simulation analyses show that our proposed algorithms can efficiently reduce the average data collection completion time, corresponding to a decrease in data latency.

  19. Probabilistic and machine learning-based retrieval approaches for biomedical dataset retrieval

    PubMed Central

    Karisani, Payam; Qin, Zhaohui S; Agichtein, Eugene

    2018-01-01

    Abstract The bioCADDIE dataset retrieval challenge brought together different approaches to retrieval of biomedical datasets relevant to a user’s query, expressed as a text description of a needed dataset. We describe experiments in applying a data-driven, machine learning-based approach to biomedical dataset retrieval as part of this challenge. We report on a series of experiments carried out to evaluate the performance of both probabilistic and machine learning-driven techniques from information retrieval, as applied to this challenge. Our experiments with probabilistic information retrieval methods, such as query term weight optimization, automatic query expansion and simulated user relevance feedback, demonstrate that automatically boosting the weights of important keywords in a verbose query is more effective than other methods. We also show that although there is a rich space of potential representations and features available in this domain, machine learning-based re-ranking models are not able to improve on probabilistic information retrieval techniques with the currently available training data. The models and algorithms presented in this paper can serve as a viable implementation of a search engine to provide access to biomedical datasets. The retrieval performance is expected to be further improved by using additional training data that is created by expert annotation, or gathered through usage logs, clicks and other processes during natural operation of the system. Database URL: https://github.com/emory-irlab/biocaddie PMID:29688379

  20. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng

    2017-01-01

    Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage addresses sensor placement to guarantee the needs of both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation of the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which can characterize the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors while satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm. PMID:28587084
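
    The collaborative detection probability at the heart of this formulation can be sketched directly: if sensor i detects the target with probability p_i, the target is detected by at least one sensor with probability 1 - prod(1 - p_i), which is then compared against the coverage threshold ϵ. The exponential distance-decay sensing model below is a common illustrative choice, not necessarily the paper's.

      import numpy as np

      def detection_prob(sensor, target, alpha=0.15):
          """Probabilistic sensing model: detection probability decays with distance.
          (A simple exponential form chosen for illustration.)"""
          d = np.linalg.norm(np.asarray(sensor, float) - np.asarray(target, float))
          return float(np.exp(-alpha * d))

      def collaborative_prob(sensors, target):
          """Probability that at least one sensor detects the target: 1 - prod(1 - p_i)."""
          miss = 1.0
          for s in sensors:
              miss *= 1.0 - detection_prob(s, target)
          return 1.0 - miss

      sensors = [(0.0, 0.0), (12.0, 5.0), (20.0, 0.0)]
      target = (10.0, 2.0)
      eps = 0.9
      p = collaborative_prob(sensors, target)
      print(f"collaborative detection probability = {p:.3f}, epsilon-covered: {p >= eps}")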

  1. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors.

    PubMed

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng

    2017-05-25

    Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage addresses sensor placement to guarantee the needs of both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation of the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which can characterize the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors while satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm.

  2. The analysis of probability task completion; Taxonomy of probabilistic thinking-based across gender in elementary school students

    NASA Astrophysics Data System (ADS)

    Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi

    2017-08-01

    The formulation of mathematical learning goals is now oriented not only toward cognitive products but also toward cognitive processes, such as probabilistic thinking. Probabilistic thinking is needed by students to make decisions. Elementary school students are expected to develop probabilistic thinking as a foundation for learning probability at higher levels. A framework of students' probabilistic thinking had been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected by administering a test of mathematical ability and choosing students with high ability. The subjects were given probability tasks consisting of sample space, probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion. Data credibility was established by time triangulation. The results showed that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level; that is, the boy's level of probabilistic thinking was higher than the girl's. These results could help curriculum developers in formulating probability learning goals for elementary school students. Teachers could also take gender differences into account when teaching probability.

  3. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
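
    A heavily simplified sketch of the data-fusion idea: the radiograph supplies per-cell attenuation estimates, passive detectors record Poisson counts, and a grid search over source cell and strength yields a posterior (here with a flat prior). The 1-D container, two-detector geometry, and all numbers are illustrative assumptions, not the modeled test case of the paper.

      import numpy as np
      from scipy.stats import poisson

      rng = np.random.default_rng(5)

      # 1-D cargo container discretized into cells; one detector at each end.
      n_cells = 20
      mu = np.full(n_cells, 0.05)           # attenuation per cell, nominally low
      mu[8:12] = 0.8                        # a highly shielded region revealed by the radiograph
      background = 5.0                      # expected background counts per detector

      def expected_counts(cell, strength):
          """Expected detector counts for a source of given strength in a given cell,
          attenuated by the cells the gamma rays traverse (geometry factors omitted)."""
          att_left = np.exp(-mu[:cell].sum())          # toward the detector at cell 0
          att_right = np.exp(-mu[cell + 1:].sum())     # toward the detector at cell n-1
          return background + strength * np.array([att_left, att_right])

      # Simulate a measurement from a true source, then score every (cell, strength) hypothesis.
      true_cell, true_strength = 10, 200.0
      counts = rng.poisson(expected_counts(true_cell, true_strength))

      strengths = np.linspace(10, 500, 50)
      log_post = np.array([[poisson.logpmf(counts, expected_counts(c, s)).sum()
                            for s in strengths] for c in range(n_cells)])
      log_post -= log_post.max()
      post = np.exp(log_post)
      post /= post.sum()                               # flat prior over cells and strengths

      best_cell, best_s = np.unravel_index(post.argmax(), post.shape)
      print(f"MAP source cell {best_cell}, strength ~ {strengths[best_s]:.0f}")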

  4. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  5. Global/local methods for probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Wu, Y.-T.

    1993-01-01

    A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.

  6. Global/local methods for probabilistic structural analysis

    NASA Astrophysics Data System (ADS)

    Millwater, H. R.; Wu, Y.-T.

    1993-04-01

    A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.

  7. Modern proposal of methodology for retrieval of characteristic synthetic rainfall hyetographs

    NASA Astrophysics Data System (ADS)

    Licznar, Paweł; Burszta-Adamiak, Ewa; Łomotowski, Janusz; Stańczyk, Justyna

    2017-11-01

    The modern engineering practice of designing and modelling complex drainage systems is based on hydrodynamic modelling and has a probabilistic character. Its practical application requires a change regarding the rainfall models accepted at the input. Previously used artificial rainfall models of simplified form, e.g. block rainfall or Euler type II design storms, are no longer sufficient. There is an urgent need to clarify the methodology for standardized rainfall hyetographs that takes into consideration the specifics of local storm rainfall temporal dynamics. The aim of the paper is to present a proposal for an innovative methodology for determining standardized rainfall hyetographs, based on statistical processing of a collection of actual local precipitation characteristics. The proposed methodology is based on the classification of standardized rainfall hyetographs with the use of cluster analysis. Its application is illustrated with selected rain gauges located in Poland. The synthetic rainfall hyetographs obtained as a final result may be used for hydrodynamic modelling of sewerage systems, including probabilistic determination of the necessary capacity of retention reservoirs.
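
    A minimal sketch of the classification step, assuming each recorded event is reduced to a dimensionless cumulative hyetograph (depth and duration scaled to 0-1) and the standardized curves are grouped with k-means; the cluster centroids then serve as candidate characteristic synthetic hyetographs. The storm events below are synthetic, and k-means stands in for whatever cluster-analysis variant the authors used.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(6)

      def standardized_hyetograph(intensities, n_points=20):
          """Dimensionless cumulative rainfall curve resampled to a fixed number of points."""
          cum = np.cumsum(intensities)
          cum = cum / cum[-1]                            # cumulative depth scaled to 0..1
          t = np.linspace(0.0, 1.0, len(intensities))    # storm duration scaled to 0..1
          return np.interp(np.linspace(0.0, 1.0, n_points), t, cum)

      # Synthetic storm events of varying duration: roughly half front-loaded, half rear-loaded.
      events = []
      for _ in range(60):
          n = int(rng.integers(6, 30))
          profile = np.linspace(0.05, 1.0, n) ** 2       # intensities ramping up (rear-loaded)
          if rng.random() < 0.5:
              profile = profile[::-1]                    # front-loaded storm
          events.append(profile * rng.gamma(2.0, 1.0, size=n))

      X = np.array([standardized_hyetograph(e) for e in events])
      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

      # Cluster centroids serve as candidate characteristic (synthetic) hyetographs.
      for k, centre in enumerate(km.cluster_centers_):
          t_half = np.interp(0.5, centre, np.linspace(0.0, 1.0, len(centre)))
          print(f"cluster {k}: {np.sum(km.labels_ == k)} events, half the depth falls by t* = {t_half:.2f}")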

  8. Spatiotemporal movement planning and rapid adaptation for manual interaction.

    PubMed

    Huber, Markus; Kupferberg, Aleksandra; Lenz, Claus; Knoll, Alois; Brandt, Thomas; Glasauer, Stefan

    2013-01-01

    Many everyday tasks require the ability of two or more individuals to coordinate their actions with others to increase efficiency. Such an increase in efficiency can often be observed even after only very few trials. Previous work suggests that such behavioral adaptation can be explained within a probabilistic framework that integrates sensory input and prior experience. Even though higher cognitive abilities such as intention recognition have been described as probabilistic estimation depending on an internal model of the other agent, it is not clear whether much simpler daily interaction is consistent with a probabilistic framework. Here, we investigate whether the mechanisms underlying efficient coordination during manual interactions can be understood as probabilistic optimization. For this purpose we studied in several experiments a simple manual handover task concentrating on the action of the receiver. We found that the duration until the receiver reacts to the handover decreases over trials, but strongly depends on the position of the handover. We then replaced the human deliverer by different types of robots to further investigate the influence of the delivering movement on the reaction of the receiver. Durations were found to depend on movement kinematics and the robot's joint configuration. Modeling the task was based on the assumption that the receiver's decision to act is based on the accumulated evidence for a specific handover position. The evidence for this handover position is collected from observing the hand movement of the deliverer over time and, if appropriate, by integrating this sensory likelihood with prior expectation that is updated over trials. The close match of model simulations and experimental results shows that the efficiency of handover coordination can be explained by an adaptive probabilistic fusion of a-priori expectation and online estimation.
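
    A minimal sketch of the modeling idea under strong simplifying assumptions: within a trial, a Gaussian prior over the handover position is fused with accumulating noisy observations of the deliverer's hand until the posterior is certain enough to act; across trials, the outcome sharpens the prior, so the reaction duration shrinks. All variances, the acting threshold, and the cross-trial update rule are illustrative choices, not fitted values or the authors' exact model.

      import numpy as np

      rng = np.random.default_rng(8)

      # Prior belief about the handover position (cm), updated across trials.
      prior_mu, prior_var = 0.0, 20.0 ** 2
      sensory_var = 8.0 ** 2                   # variance of one noisy observation of the hand
      threshold_var = 4.0 ** 2                 # act once the posterior is this certain
      true_pos = 12.0                          # deliverer's actual handover position

      for trial in range(5):
          # Within a trial: fuse the prior with accumulating noisy observations of the hand.
          post_mu, post_var = prior_mu, prior_var
          n_obs = 0
          while post_var > threshold_var:
              z = true_pos + np.sqrt(sensory_var) * rng.standard_normal()
              k = post_var / (post_var + sensory_var)      # Kalman-style gain
              post_mu = post_mu + k * (z - post_mu)
              post_var = (1 - k) * post_var
              n_obs += 1
          print(f"trial {trial}: reacted after {n_obs} observations, estimate {post_mu:.1f} cm")
          # Across trials: the trial's outcome sharpens the prior for the next handover
          # (treated here, as a rough assumption, like one more observation).
          new_var = 1.0 / (1.0 / prior_var + 1.0 / sensory_var)
          prior_mu = new_var * (prior_mu / prior_var + post_mu / sensory_var)
          prior_var = new_var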

  9. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

    In this work, an inversion scheme was developed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward model. We use singular value decomposition (SVD) within the inversion scheme to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. SVD was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.

  10. A simulation model for probabilistic analysis of Space Shuttle abort modes

    NASA Technical Reports Server (NTRS)

    Hage, R. T.

    1993-01-01

    A simulation model which was developed to provide a probabilistic analysis tool to study the various space transportation system abort mode situations is presented. The simulation model is based on Monte Carlo simulation of an event-tree diagram which accounts for events during the space transportation system's ascent and its abort modes. The simulation model considers just the propulsion elements of the shuttle system (i.e., external tank, main engines, and solid boosters). The model was developed to provide a better understanding of the probability of occurrence and successful completion of abort modes during the vehicle's ascent. The results of the simulation runs discussed are for demonstration purposes only, they are not official NASA probability estimates.

  11. Joint probabilistic determination of earthquake location and velocity structure: application to local and regional events

    NASA Astrophysics Data System (ADS)

    Beucler, E.; Haugmard, M.; Mocquet, A.

    2016-12-01

    The most widely used inversion schemes to locate earthquakes are based on iterative linearized least-squares algorithms and use a priori knowledge of the propagation medium. When only a small number of observations is available, for instance for moderate events, these methods may lead to large trade-offs between outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Monte Carlo continuous samplings, using Markov chains, generate models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure among others) are computed and explicitly documented. This method reduces the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events in the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
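
    A heavily simplified sketch of the joint sampling idea: a Metropolis-Hastings chain explores epicentre, depth, origin time, and a single homogeneous velocity simultaneously, using straight-ray 1-D travel times as the fast forward problem. Station geometry, priors, step sizes, and noise levels are illustrative assumptions, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic stations (x, y in km), a "true" event, and a homogeneous medium.
      stations = np.array([[0, 40], [35, 30], [50, -10], [-30, -25], [-45, 15]], float)
      true_src = np.array([5.0, -3.0, 12.0])       # x, y, depth (km)
      true_v, true_t0 = 6.0, 0.0                   # P velocity (km/s), origin time (s)
      sigma = 0.05                                 # picking error (s)

      def travel_times(src, v, t0):
          d = np.sqrt(((stations - src[:2]) ** 2).sum(axis=1) + src[2] ** 2)
          return t0 + d / v

      obs = travel_times(true_src, true_v, true_t0) + sigma * rng.standard_normal(len(stations))

      def log_post(m):
          x, y, z, v, t0 = m
          if not (0 < z < 40 and 4 < v < 8):       # broad uniform priors
              return -np.inf
          r = obs - travel_times(np.array([x, y, z]), v, t0)
          return -0.5 * np.sum((r / sigma) ** 2)

      # Metropolis-Hastings over (x, y, z, v, t0).
      m = np.array([0.0, 0.0, 10.0, 5.5, 0.5])
      step = np.array([1.0, 1.0, 1.5, 0.05, 0.05])
      chain, lp = [], log_post(m)
      for _ in range(20000):
          prop = m + step * rng.standard_normal(5)
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              m, lp = prop, lp_prop
          chain.append(m.copy())
      chain = np.array(chain[5000:])               # discard burn-in
      print("posterior mean (x, y, z, v, t0):", np.round(chain.mean(axis=0), 2))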

  12. Probabilistic Estimates of Global Mean Sea Level and its Underlying Processes

    NASA Astrophysics Data System (ADS)

    Hay, C.; Morrow, E.; Kopp, R. E.; Mitrovica, J. X.

    2015-12-01

    Local sea level can vary significantly from the global mean value due to a suite of processes that includes ongoing sea-level changes due to the last ice age, land water storage, ocean circulation changes, and non-uniform sea-level changes that arise when modern-day land ice rapidly melts. Understanding these sources of spatial and temporal variability is critical to estimating past and present sea-level change and projecting future sea-level rise. Using two probabilistic techniques, a multi-model Kalman smoother and Gaussian process regression, we have reanalyzed 20th century tide gauge observations to produce a new estimate of global mean sea level (GMSL). Our methods allow us to extract global information from the sparse tide gauge field by taking advantage of the physics-based and model-derived geometry of the contributing processes. Both methods provide constraints on the sea-level contribution of glacial isostatic adjustment (GIA). The Kalman smoother tests multiple discrete GIA models, probabilistically computing the most likely GIA model given the observations, while the Gaussian process regression characterizes the prior covariance structure of a suite of GIA models and then uses this structure to estimate the posterior distribution of local rates of GIA-induced sea-level change. We present the two methodologies, the model-derived geometries of the underlying processes, and our new probabilistic estimates of GMSL and GIA.

  13. Probabilistic topic modeling for the analysis and classification of genomic sequences

    PubMed Central

    2015-01-01

    Background Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies are focusing on the so-called barcode genes, representing a well defined region of the whole genome. Recently, alignment-free techniques have been gaining importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied to DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method achieves results very similar to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. Under these conditions the proposed method outperforms RDP and SVM on ultra-short sequences and exhibits a smooth decrease of performance, at every taxonomic level, as the sequence length decreases. PMID:25916734

  14. Predicting Rib Fracture Risk With Whole-Body Finite Element Models: Development and Preliminary Evaluation of a Probabilistic Analytical Framework

    PubMed Central

    Forman, Jason L.; Kent, Richard W.; Mroz, Krystoffer; Pipkorn, Bengt; Bostrom, Ola; Segui-Gomez, Maria

    2012-01-01

    This study sought to develop a strain-based probabilistic method to predict rib fracture risk with whole-body finite element (FE) models, and to describe a method to combine the results with collision exposure information to predict injury risk and potential intervention effectiveness in the field. An age-adjusted ultimate strain distribution was used to estimate local rib fracture probabilities within an FE model. These local probabilities were combined to predict injury risk and severity within the whole ribcage. The ultimate strain distribution was developed from a literature dataset of 133 tests. Frontal collision simulations were performed with the THUMS (Total HUman Model for Safety) model with four levels of delta-V and two restraints: a standard 3-point belt and a progressive 3.5–7 kN force-limited, pretensioned (FL+PT) belt. The results of three simulations (29 km/h standard, 48 km/h standard, and 48 km/h FL+PT) were compared to matched cadaver sled tests. The numbers of fractures predicted for the comparison cases were consistent with those observed experimentally. Combining these results with field exposure information (ΔV, NASS-CDS 1992–2002) suggests an 8.9% probability of incurring AIS3+ rib fractures for a 60 year-old restrained by a standard belt in a tow-away frontal collision with this restraint, vehicle, and occupant configuration, compared to 4.6% for the FL+PT belt. This is the first study to describe a probabilistic framework to predict rib fracture risk based on strains observed in human-body FE models. Using this analytical framework, future efforts may incorporate additional subject or collision factors for multi-variable probabilistic injury prediction. PMID:23169122
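
    A minimal sketch of combining local fracture probabilities into a whole-ribcage risk: each rib's peak strain is compared against an assumed lognormal ultimate-strain distribution, and the probability of at least k fractured ribs follows from the Poisson-binomial distribution (computed by dynamic programming, assuming independence between ribs). Strain values and distribution parameters are placeholders, not the paper's age-adjusted data.

      import numpy as np
      from scipy.stats import norm

      # Illustrative per-rib peak strains from an FE simulation and an assumed
      # lognormal ultimate-strain distribution (placeholder parameters).
      peak_strain = np.array([0.8, 1.1, 1.6, 2.1, 1.9, 1.4, 0.9, 0.6, 0.5, 0.4]) / 100
      mu_ln, sigma_ln = np.log(0.018), 0.35

      # P(fracture) for each rib = P(ultimate strain < peak strain).
      p_frac = norm.cdf((np.log(peak_strain) - mu_ln) / sigma_ln)

      def at_least_k(probs, k):
          """P(number of fractures >= k) for independent ribs (Poisson-binomial, via DP)."""
          dist = np.zeros(len(probs) + 1)
          dist[0] = 1.0
          for p in probs:
              dist[1:] = dist[1:] * (1 - p) + dist[:-1] * p
              dist[0] *= 1 - p
          return dist[k:].sum()

      print(f"per-rib fracture probabilities: {np.round(p_frac, 2)}")
      print(f"P(>= 3 fractured ribs) = {at_least_k(p_frac, 3):.3f}")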

  15. Numerical Representation of Wintertime Near-Surface Inversions in the Arctic with a 2.5-km Version of the Global Environmental Multiscale (GEM) Model

    NASA Astrophysics Data System (ADS)

    Dehghan, A.; Mariani, Z.; Gascon, G.; Bélair, S.; Milbrandt, J.; Joe, P. I.; Crawford, R.; Melo, S.

    2017-12-01

    Environment and Climate Change Canada (ECCC) is implementing a 2.5-km resolution version of the Global Environmental Multiscale (GEM) model over the Canadian Arctic. Radiosonde observations were used to evaluate the numerical representation of surface-based temperature inversions, which are a major feature of the Arctic region. Arctic surface-based inversions are often created by an imbalance between radiative cooling processes at the surface and warm-air advection above. This can have a significant effect on the vertical mixing of pollutants and moisture, and ultimately on cloud formation. It is therefore important to correctly predict the existence of surface inversions along with their characteristics (i.e., intensity and depth). Previous climatological studies showed that the frequency and intensity of surface-based inversions are larger during the colder months in the Arctic. Therefore, surface-based inversions were estimated using radiosonde measurements during winter (December 2015 to February 2016) at Iqaluit (Nunavut, Canada). Results show that the inversion intensity can exceed 10 K with depths as large as 1 km. Preliminary evaluation of GEM outputs reveals that the model tends to underestimate the intensity of near-surface inversions, and in some cases the model failed to predict an inversion. This study presents the factors contributing to this bias, including surface temperature and snow cover.

  16. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.

  17. Mode identification using stochastic hybrid models with applications to conflict detection and resolution

    NASA Astrophysics Data System (ADS)

    Naseri Kouzehgarani, Asal

    2009-12-01

    Most models of aircraft trajectories are non-linear and stochastic in nature, and their internal parameters are often poorly defined. The ability to model, simulate and analyze realistic air traffic management conflict detection scenarios in a scalable, composable, multi-aircraft fashion is an extremely difficult endeavor. Accurate techniques for aircraft mode detection are critical in order to enable the precise projection of aircraft conflicts, and for the enactment of altitude separation resolution strategies. Conflict detection is an inherently probabilistic endeavor; our ability to detect conflicts in a timely and accurate manner over a fixed time horizon is traded off against the increased human workload created by false alarms (that is, situations that would not develop into an actual conflict, or that would resolve naturally within the appropriate time horizon), thereby introducing a measure of probabilistic uncertainty in any decision aid fashioned to assist air traffic controllers. The interaction of the continuous dynamics of the aircraft, used for prediction purposes, with the discrete conflict detection logic gives rise to the hybrid nature of the overall system. The introduction of the probabilistic element, common to decision alerting and aiding devices, places the conflict detection and resolution problem in the domain of probabilistic hybrid phenomena. A hidden Markov model (HMM) has two stochastic components: a finite-state Markov chain and a finite set of output probability distributions. In other words, it is an unobservable (hidden) stochastic process that can only be observed through another set of stochastic processes that generate the sequence of observations. The problem of self-separation in distributed air traffic management reduces to the ability of aircraft to communicate state information to neighboring aircraft, as well as to model the evolution of aircraft trajectories between communications, in the presence of probabilistically uncertain dynamics as well as partially observable and uncertain data. We introduce the Hybrid Hidden Markov Modeling (HHMM) formalism to enable the prediction of stochastic aircraft states (and thus, potential conflicts) by combining elements of the probabilistic timed input/output automaton and partially observable Markov decision process frameworks, along with the novel addition of a Markovian scheduler to remove the non-deterministic elements arising when several actions are enabled simultaneously. Comparisons of aircraft in level, climbing/descending and turning flight are performed, and unknown flight track data are evaluated probabilistically against the tuned model in order to assess the effectiveness of the model in detecting switches between multiple flight modes for a given aircraft. This also allows for the generation of a probability distribution over the execution traces of the hybrid hidden Markov model, which then enables the prediction of aircraft states based on partially observable and uncertain data. Based on the composition properties of the HHMM, we study a decentralized air traffic system where aircraft move along streams and can perform cruise, accelerate, climb and turn maneuvers. We develop a common decentralized policy for conflict avoidance with spatially distributed agents (aircraft in the sky) and assure its safety properties via correctness proofs.
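
    As an illustration of the hidden Markov machinery described above, the sketch below scores an observed vertical-rate sequence against a tuned two-mode model with the forward algorithm. The transition matrix and Gaussian emissions are hypothetical stand-ins, not the paper's hybrid HHMM formulation.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical two-mode model: 0 = level flight, 1 = climb (vertical rate in m/s)
    A = np.array([[0.95, 0.05],
                  [0.10, 0.90]])               # mode transition probabilities
    pi = np.array([0.5, 0.5])                  # initial mode distribution
    emission_mean = np.array([0.0, 10.0])
    emission_std = np.array([1.0, 3.0])

    def log_likelihood(obs):
        """Forward algorithm: log P(observations | model), scaled to avoid underflow."""
        alpha = pi * norm.pdf(obs[0], emission_mean, emission_std)
        logL = np.log(alpha.sum()); alpha /= alpha.sum()
        for y in obs[1:]:
            alpha = (alpha @ A) * norm.pdf(y, emission_mean, emission_std)
            logL += np.log(alpha.sum()); alpha /= alpha.sum()
        return logL

    track = np.array([0.3, -0.2, 0.5, 8.7, 9.9, 11.2, 10.4])   # observed vertical rates
    print(log_likelihood(track))   # compare against alternative mode models
    ```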

  18. IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, Frédéric; Bazin, Lucie; Capron, Emilie; Landais, Amaëlle; Lemieux-Dudon, Bénédicte; Masson-Delmotte, Valérie

    2016-04-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age scale uncertainty are essential to interpret the climate and environmental records that they contain. It is however a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model comprises: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice and air dated horizons, ice and air depth intervals with known durations, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links between ice cores. The optimization is formulated as a least squares problem, implying that all probability densities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono1 follows an approach similar to that of the Datice model, which was recently used to produce the AICC2012 chronology for 4 Antarctic ice cores and 1 Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming points of view. The capabilities of IceChrono1 are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both the IceChrono1 and Datice codes. We also test new functionalities with respect to the original version of Datice: observations as ice intervals with known durations, correlated observations, observations as gas intervals with known durations and observations as mixed ice-air stratigraphic links. IceChrono1 is freely available under the GPL v3 open source license.
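
    The least-squares formulation with Gaussian probability densities, solved by Levenberg-Marquardt with a numerically evaluated Jacobian, can be sketched as follows. The toy exponential forward model and the synthetic data are illustrative placeholders, not IceChrono1's actual sedimentation model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    # Toy forward model: an exponential curve standing in for, e.g., a thinning function
    def forward(params, depth):
        a, b = params
        return a * np.exp(-b * depth)

    depth = np.linspace(0.0, 3000.0, 30)                      # m
    obs = forward([1.0, 8e-4], depth) + 0.02 * rng.normal(size=depth.size)
    sigma = 0.02 * np.ones_like(obs)                          # Gaussian observation errors

    def residuals(params):
        return (forward(params, depth) - obs) / sigma         # error-weighted residuals

    # Levenberg-Marquardt with a finite-difference (numerical) Jacobian
    sol = least_squares(residuals, x0=[0.5, 1e-3], method="lm")
    cov = np.linalg.inv(sol.jac.T @ sol.jac)                  # Gaussian parameter covariance estimate
    print(sol.x, np.sqrt(np.diag(cov)))
    ```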

  19. IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, F.; Bazin, L.; Capron, E.; Landais, A.; Lemieux-Dudon, B.; Masson-Delmotte, V.

    2015-05-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age-scale uncertainty are essential to interpret the climate and environmental records that they contain. It is, however, a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the lock-in depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model comprises models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice- and air-dated horizons, ice and air depth intervals with known durations, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links between ice cores. The optimization is formulated as a least squares problem, implying that all probability densities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono1 follows an approach similar to that of the Datice model which was recently used to produce the AICC2012 (Antarctic ice core chronology) for four Antarctic ice cores and one Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming points of view. The capabilities of IceChrono1 are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both the IceChrono1 and Datice codes. We also test new functionalities with respect to the original version of Datice: observations as ice intervals with known durations, correlated observations, observations as air intervals with known durations and observations as mixed ice-air stratigraphic links. IceChrono1 is freely available under the General Public License v3 open source license.

  20. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    PubMed

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

    Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.

  1. Probabilistic forecasting of extreme weather events based on extreme value theory

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert

    2016-04-01

    Extreme events in weather and climate, such as high wind gusts, heavy precipitation or extreme temperatures, are commonly associated with high impacts on both environment and society. Forecasting extreme weather events is difficult, and very high-resolution models are needed to describe extreme weather phenomena explicitly. A prediction system for such events should therefore preferably be probabilistic in nature. Probabilistic forecasts and state estimations are nowadays common in the numerical weather prediction community. In this work, we develop a new probabilistic framework based on extreme value theory that aims to provide early warnings up to several days in advance. We consider pairs (X,Y) of extreme events, where X represents a deterministic forecast and Y the observation variable (for instance wind speed), and study the combined event in which Y exceeds a high threshold y while the corresponding forecast X also exceeds a high forecast threshold. More specifically, two problems are addressed. (1) Given a high forecast X = x_0, what is the probability that Y > y? In other words, provide inference on the conditional probability Pr{Y > y | X = x_0}. (2) Given a probabilistic model for Problem 1, what is the impact on the verification analysis of extreme events? These problems can be solved with bivariate extremes (Coles, 2001) and the verification analysis of Ferro (2007). We apply the Ramos and Ledford (2009) parametric model for bivariate tail estimation of the pair (X,Y). The model accommodates different types of extremal dependence and asymmetry within a parsimonious representation. Results are presented using the ensemble reforecast system of the European Centre for Medium-Range Weather Forecasts (Hagedorn, 2008). References: Coles, S. (2001) An Introduction to Statistical Modelling of Extreme Values. Springer-Verlag. Ferro, C.A.T. (2007) A probability model for verifying deterministic forecasts of extreme events. Wea. Forecasting 22, 1089-1100. Hagedorn, R. (2008) Using the ECMWF reforecast dataset to calibrate EPS forecasts. ECMWF Newsletter 117, 8-13. Ramos, A. and Ledford, A. (2009) A new class of models for bivariate joint tails. J. R. Statist. Soc. B 71, 219-241.
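
    A purely empirical illustration of Problem 1, estimating Pr{Y > y | X = x_0} from synthetic correlated forecast-observation pairs; the bivariate tail model of Ramos and Ledford (2009) used in the study is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    # Synthetic forecast/observation pairs with correlated upper tails (illustrative only)
    x = rng.gamma(shape=2.0, scale=5.0, size=n)             # deterministic forecast, e.g. wind speed
    y = 0.8 * x + rng.gamma(shape=2.0, scale=1.5, size=n)   # corresponding observation

    y_thresh = 25.0           # high observation threshold
    x0, half_width = 22.0, 1.0

    # Empirical estimate of Pr(Y > y | X close to x0) from pairs whose forecast falls near x0
    near_x0 = np.abs(x - x0) < half_width
    p_cond = (y[near_x0] > y_thresh).mean()
    print(f"Pr(Y > {y_thresh} | X ~ {x0}) = {p_cond:.3f}  (from {near_x0.sum()} cases)")
    ```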

  2. Assessing uncertainties in global cropland futures using a conditional probabilistic modelling framework

    NASA Astrophysics Data System (ADS)

    Engström, Kerstin; Olin, Stefan; Rounsevell, Mark D. A.; Brogaard, Sara; van Vuuren, Detlef P.; Alexander, Peter; Murray-Rust, Dave; Arneth, Almut

    2016-11-01

    We present a modelling framework to simulate probabilistic futures of global cropland areas that are conditional on the SSP (shared socio-economic pathway) scenarios. Simulations are based on the Parsimonious Land Use Model (PLUM) linked with the global dynamic vegetation model LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) using socio-economic data from the SSPs and climate data from the RCPs (representative concentration pathways). The simulated range of global cropland is 893-2380 Mha in 2100 (± 1 standard deviation), with the main uncertainties arising from differences in the socio-economic conditions prescribed by the SSP scenarios and the assumptions that underpin the translation of qualitative SSP storylines into quantitative model input parameters. Uncertainties in the assumptions for population growth, technological change and cropland degradation were found to be the most important for global cropland, while uncertainty in food consumption had less influence on the results. The uncertainties arising from climate variability and the differences between climate change scenarios do not strongly affect the range of global cropland futures. Some overlap occurred across all of the conditional probabilistic futures, except for those based on SSP3. We conclude that completely different socio-economic and climate change futures, although sharing low to medium population development, can result in very similar cropland areas on the aggregated global scale.

  3. High-Resolution Underwater Mapping Using Side-Scan Sonar

    PubMed Central

    2016-01-01

    The goal of this study is to generate high-resolution sea-floor maps using a Side-Scan Sonar (SSS). This is achieved by explicitly taking into account the SSS operation as follows. First, the raw sensor data are corrected by means of a physics-based SSS model. Second, the data are projected onto the sea floor; the errors involved in this projection are thoroughly analysed. Third, a probabilistic SSS model is defined and used to estimate the probability of each sea-floor region being observed. This probabilistic information is then used to weight the contribution of each SSS measurement to the map. Thanks to these models, arbitrary map resolutions can be achieved, even beyond the sensor resolution. Finally, a geometric map-building method is presented and combined with the probabilistic approach. The resulting map is composed of two layers. The echo intensity layer holds the most likely echo intensity at each point on the sea floor. The probabilistic layer contains information about how confident the user or higher control layers can be in the echo intensity layer data. Experiments were conducted in a large subsea region. PMID:26821379
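
    A minimal sketch of the probability-weighted gridding idea: each projected SSS return contributes to the echo-intensity layer in proportion to its estimated observation probability, and the accumulated probabilities form the confidence layer. The probability values and grid geometry below are placeholders.

    ```python
    import numpy as np

    def build_map(points, intensities, probs, grid_shape, cell_size):
        """Accumulate probability-weighted echo intensities into a 2-D grid.

        points      : (N, 2) sea-floor coordinates of each SSS return (metres)
        intensities : (N,) echo intensities
        probs       : (N,) probability that the corresponding region was actually observed
        """
        weighted = np.zeros(grid_shape)      # sum of probability-weighted intensities
        confidence = np.zeros(grid_shape)    # accumulated observation probability
        idx = (points / cell_size).astype(int)
        for (i, j), v, p in zip(idx, intensities, probs):
            if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
                weighted[i, j] += p * v
                confidence[i, j] += p
        echo_layer = np.divide(weighted, confidence,
                               out=np.zeros(grid_shape), where=confidence > 0)
        return echo_layer, confidence        # intensity layer + probabilistic layer

    # Toy usage with random returns
    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 50, size=(1000, 2))
    echo, conf = build_map(pts, rng.uniform(0, 1, 1000), rng.uniform(0.2, 1.0, 1000),
                           grid_shape=(100, 100), cell_size=0.5)
    ```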

  4. Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.

    PubMed

    Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong

    2015-11-01

    Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach.

  5. Lebedev acceleration and comparison of different photometric models in the inversion of lightcurves for asteroids

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Ping; Huang, Xiang-Jie; Ip, Wing-Huen; Hsia, Chi-Hao

    2018-04-01

    In the lightcurve inversion process, where an asteroid's physical parameters such as rotational period, pole orientation and overall shape are sought, numerical calculations of the synthetic photometric brightness based on different shape models are frequently performed. Lebedev quadrature is an efficient method to numerically calculate surface integrals on the unit sphere. By transforming the surface integral on the Cellinoid shape model to an integral on the unit sphere, the lightcurve inversion process based on the Cellinoid shape model can be remarkably accelerated. Furthermore, Matlab code for the lightcurve inversion process based on the Cellinoid shape model is freely available on GitHub. The photometric models, i.e., the scattering laws, also play an important role in the lightcurve inversion process, although the shape variations of asteroids dominate the morphologies of the lightcurves. Derived from radiative transfer theory, the Hapke model can describe light reflectance behavior from a physical standpoint, while there are also many empirical models used in numerical applications. Numerical simulations are implemented to compare the Hapke model with three other numerical models: the Lommel-Seeliger, Minnaert, and Kaasalainen models. The results show that the numerical models with simple function expressions fit well with the synthetic lightcurves generated with the Hapke model; this good fit implies that they can be adopted in the lightcurve inversion process for asteroids to improve numerical efficiency and derive results similar to those of the Hapke model.

  6. A building block for hardware belief networks.

    PubMed

    Behin-Aein, Behtash; Diep, Vinh; Datta, Supriyo

    2016-07-21

    Belief networks represent a powerful approach to problems involving probabilistic inference, but much of the work in this area is software based, utilizing standard deterministic hardware built around the transistor, which provides the gain and directionality needed to interconnect billions of devices into useful networks. This paper proposes a transistor-like device that could provide an analogous building block for probabilistic networks. We present two proof-of-concept examples of belief networks, one reciprocal and one non-reciprocal, implemented using the proposed device, which is simulated using experimentally benchmarked models.

  7. Near Real-Time Probabilistic Damage Diagnosis Using Surrogate Modeling and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Zubair, Mohammad; Ranjan, Desh

    2017-01-01

    This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
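
    The general pattern of pairing a cheap surrogate with Bayesian sampling can be sketched as follows; the quadratic surrogate, Gaussian likelihood and random-walk Metropolis sampler are illustrative choices standing in for the paper's trained surrogates and reformulated samplers, and all numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Surrogate for an expensive FE strain response as a function of crack length a (mm)
    # (a simple polynomial fit standing in for a trained surrogate model)
    train_a = np.linspace(1.0, 20.0, 15)
    train_strain = 50.0 + 3.0 * train_a + 0.1 * train_a**2        # pretend FE results
    coeffs = np.polyfit(train_a, train_strain, deg=2)
    surrogate = lambda a: np.polyval(coeffs, a)

    measured, noise_std = 110.0, 5.0                               # measured strain, sensor noise

    def log_post(a):
        if not 1.0 <= a <= 20.0:                                   # uniform prior bounds
            return -np.inf
        return -0.5 * ((measured - surrogate(a)) / noise_std) ** 2 # Gaussian log-likelihood

    # Random-walk Metropolis using the surrogate instead of the FE model
    a, lp, samples = 10.0, log_post(10.0), []
    for _ in range(20_000):
        a_new = a + rng.normal(0.0, 0.5)
        lp_new = log_post(a_new)
        if np.log(rng.uniform()) < lp_new - lp:
            a, lp = a_new, lp_new
        samples.append(a)
    samples = np.array(samples[5000:])                             # discard burn-in
    print(f"crack length: {samples.mean():.2f} +/- {samples.std():.2f} mm")
    ```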

  8. Stochastic Human Exposure and Dose Simulation Model for Pesticides

    EPA Science Inventory

    SHEDS-Pesticides (Stochastic Human Exposure and Dose Simulation Model for Pesticides) is a physically-based stochastic model developed to quantify exposure and dose of humans to multimedia, multipathway pollutants. Probabilistic inputs are combined in physical/mechanistic algorit...

  9. NEXT GENERATION MULTIMEDIA/MULTIPATHWAY EXPOSURE MODELING

    EPA Science Inventory

    The Stochastic Human Exposure and Dose Simulation model for pesticides (SHEDS-Pesticides) supports the efforts of EPA to better understand human exposures and doses to multimedia, multipathway pollutants. It is a physically-based, probabilistic computer model that predicts, for u...

  10. Spatiotemporal monitoring of soil water content profiles in an irrigated field using probabilistic inversion of time-lapse EMI data

    NASA Astrophysics Data System (ADS)

    Moghadas, Davood; Jadoon, Khan Zaib; McCabe, Matthew F.

    2017-12-01

    Monitoring spatiotemporal variations of soil water content (θ) is important across a range of research fields, including agricultural engineering, hydrology, meteorology and climatology. Low-frequency electromagnetic induction (EMI) systems have proven to be useful tools in mapping soil apparent electrical conductivity (σa) and soil moisture. However, obtaining depth-profile water content is an area that has not been fully explored using EMI. To examine this, we performed time-lapse EMI measurements using a CMD Mini-Explorer sensor along a 10 m transect of a maize field over a 6-day period. Reference data were measured at the end of the profile via an excavated pit using 5TE capacitance sensors. In order to derive a time-lapse, depth-specific subsurface image of electrical conductivity (σ), we applied a probabilistic sampling approach, DREAM(ZS), to the measured EMI data. The inversely estimated σ values were subsequently converted to θ using the Rhoades et al. (1976) petrophysical relationship. The uncertainties in measured σa, as well as inaccuracies in the inverted data, introduced some discrepancies between estimated σ and reference values in time and space. Moreover, the disparity between the measurement footprints of the 5TE and CMD Mini-Explorer sensors also led to differences. The obtained θ permitted an accurate monitoring of the spatiotemporal distribution and variation of soil water content due to root water uptake and evaporation. The proposed EMI measurement and modeling technique also allowed for detecting temporal root-zone soil moisture variations. The time-lapse θ monitoring approach developed using DREAM(ZS) thus appears to be a useful technique to understand spatiotemporal patterns of soil water content and provide insights into linked soil moisture-vegetation processes and the dynamics of soil moisture/infiltration processes.

  11. Cost-effectiveness analysis of EGFR mutation testing in patients with non-small cell lung cancer (NSCLC) with gefitinib or carboplatin-paclitaxel.

    PubMed

    Arrieta, Oscar; Anaya, Pablo; Morales-Oyarvide, Vicente; Ramírez-Tirado, Laura Alejandra; Polanco, Ana C

    2016-09-01

    Assess the cost-effectiveness of an EGFR-mutation testing strategy for advanced NSCLC in first-line therapy with either gefitinib or carboplatin-paclitaxel in Mexican institutions. Cost-effectiveness analysis using a discrete event simulation (DES) model to simulate two therapeutic strategies in patients with advanced NSCLC. Strategy one included patients tested for EGFR mutation, with therapy given accordingly. Strategy two included chemotherapy for all patients without testing. All results are presented in 2014 US dollars. The analysis was made with data on the Mexican frequency of EGFR mutation. A univariate sensitivity analysis was conducted on EGFR prevalence. Progression-free survival (PFS) transition probabilities were estimated from IPASS data and simulated with a Weibull distribution, run with parallel trials to calculate a probabilistic sensitivity analysis. PFS of patients in the testing strategy was 6.76 months (95 % CI 6.10-7.44) vs 5.85 months (95 % CI 5.43-6.29) in the non-testing group. The one-way sensitivity analysis showed that PFS has a direct relationship with EGFR-mutation prevalence, while the ICER and testing cost have an inverse relationship with EGFR-mutation prevalence. The probabilistic sensitivity analysis showed that all iterations had incremental costs and incremental PFS for strategy 1 in comparison with strategy 2. There is a direct relationship between the ICER and the cost of EGFR testing, and an inverse relationship with the prevalence of EGFR mutation. When prevalence is >10 %, the ICER remains constant. This study could impact Mexican and Latin American health policies regarding mutation detection testing and treatment for advanced NSCLC.

  12. A Comparison between Model Base Hardconstrain, Bandlimited, and Sparse-Spike Seismic Inversion: New Insights for CBM Reservoir Modelling on Muara Enim Formation, South Sumatra

    NASA Astrophysics Data System (ADS)

    Mohamad Noor, Faris; Adipta, Agra

    2018-03-01

    Coal Bed Methane (CBM), as a newly developed resource in Indonesia, is one of the alternatives for reducing Indonesia's dependence on conventional energy sources. The coal of the Muara Enim Formation is known as one of the prolific reservoirs in the South Sumatra Basin. Seismic inversion and well analysis were carried out to determine the coal seam characteristics of the Muara Enim Formation. This research uses three inversion methods: model-based hard-constraint, band-limited, and sparse-spike inversion. Each type of seismic inversion has its own advantages in displaying the coal seam and its characteristics. Interpretation of the analysed data shows that the Muara Enim coal seam has a gamma-ray value of 20 API, a density of 1–1.4 gr/cc, and a low acoustic impedance (AI) cutoff in the range 5000-6400 (m/s)*(g/cc). The coal seam thins laterally from northwest to southeast. The coal seam appears biased in the model-based hard-constraint inversion and discontinuous in the band-limited inversion, which is not consistent with the geological model. The most appropriate AI inversion is the sparse-spike inversion, which gives the best correlation value (0.884757) in the inversion cross-plot among the chosen methods. Sparse-spike inversion itself produces high amplitudes, making it a proper tool to identify coal-seam continuity, since coal seams commonly appear as thin layers. The sparse-spike inversion cross-sections indicate possible new borehole locations at CDP 3662-3722, CDP 3586-3622, and CDP 4004-4148, where the seismic data show a thick coal seam.

  13. A model-based test for treatment effects with probabilistic classifications.

    PubMed

    Cavagnaro, Daniel R; Davis-Stober, Clintin P

    2018-05-21

    Within modern psychology, computational and statistical models play an important role in describing a wide variety of human behavior. Model selection analyses are typically used to classify individuals according to the model(s) that best describe their behavior. These classifications are inherently probabilistic, which presents challenges for performing group-level analyses, such as quantifying the effect of an experimental manipulation. We answer this challenge by presenting a method for quantifying treatment effects in terms of distributional changes in model-based (i.e., probabilistic) classifications across treatment conditions. The method uses hierarchical Bayesian mixture modeling to incorporate classification uncertainty at the individual level into the test for a treatment effect at the group level. We illustrate the method with several worked examples, including a reanalysis of the data from Kellen, Mata, and Davis-Stober (2017), and analyze its performance more generally through simulation studies. Our simulations show that the method is both more powerful and less prone to type-1 errors than Fisher's exact test when classifications are uncertain. In the special case where classifications are deterministic, we find a near-perfect power-law relationship between the Bayes factor, derived from our method, and the p value obtained from Fisher's exact test. We provide code in an online supplement that allows researchers to apply the method to their own data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  14. Stochastic simulation of predictive space–time scenarios of wind speed using observations and physical model outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai

    We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.

  15. Stochastic simulation of predictive space–time scenarios of wind speed using observations and physical model outputs

    DOE PAGES

    Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai

    2018-03-01

    We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.

  16. Infrared maritime target detection using a probabilistic single Gaussian model of sea clutter in Fourier domain

    NASA Astrophysics Data System (ADS)

    Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei

    2018-02-01

    For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single Gaussian model of the sea background in the Fourier domain is put forward. The amplitude spectrum sequences at each frequency point of pure-seawater images in the Fourier domain, being more stable than the gray-value sequences of individual background pixels in the spatial domain, are modeled as Gaussian. Next, a probability weighting matrix is built, based on the stability of the pure seawater's total energy spectrum in the row direction, to make the Gaussian model more accurate. Then, the foreground frequency points are separated from the background frequency points by the model. Finally, false-alarm points are removed using the ships' shape features. The performance of the proposed method is tested by visual and quantitative comparisons with other methods.
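
    A bare-bones sketch of the core idea, modelling each Fourier-amplitude bin of background frames as a Gaussian and flagging bins in a test frame that deviate strongly; the probability-weighting matrix and shape-based false-alarm removal steps are omitted, and all data here are synthetic.

    ```python
    import numpy as np

    def detect_foreground_bins(frames, test_frame, z_thresh=4.0):
        """Model each row-wise Fourier amplitude bin of background frames as a Gaussian
        and flag bins in the test frame that deviate strongly from that model.

        frames     : (T, H, W) sequence of pure-sea background images
        test_frame : (H, W) image possibly containing a target
        """
        bg_amp = np.abs(np.fft.rfft(frames, axis=2))      # (T, H, Wf) amplitude spectra
        mu = bg_amp.mean(axis=0)                          # per-(row, frequency) mean
        sigma = bg_amp.std(axis=0) + 1e-6                 # per-(row, frequency) std
        test_amp = np.abs(np.fft.rfft(test_frame, axis=1))
        z = (test_amp - mu) / sigma
        return z > z_thresh                               # foreground frequency points

    # Toy usage: 20 background frames plus a test frame with an added bright patch
    rng = np.random.default_rng(3)
    bg = rng.normal(100, 5, size=(20, 64, 64))
    test = rng.normal(100, 5, size=(64, 64))
    test[30:34, 20:40] += 60.0                            # simulated ship signature
    mask = detect_foreground_bins(bg, test)
    print(mask.sum(), "suspicious frequency points")
    ```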

  17. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    ERIC Educational Resources Information Center

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains such as text mining, information retrieval, computer vision and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  18. Probabilistic structural analysis of space propulsion system LOX post

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Rajagopal, K. R.; Ho, H. W.; Cunniff, J. M.

    1990-01-01

    The probabilistic structural analysis program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress; Cruse et al., 1988) is applied to characterize the dynamic loading and response of the Space Shuttle main engine (SSME) LOX post. The design and operation of the SSME are reviewed; the LOX post structure is described; and particular attention is given to the generation of composite load spectra, the finite-element model of the LOX post, and the steps in the NESSUS structural analysis. The results are presented in extensive tables and graphs, and it is shown that NESSUS correctly predicts the structural effects of changes in the temperature loading. The probabilistic approach also facilitates (1) damage assessments for a given failure model (based on gas temperature, heat-shield gap, and material properties) and (2) correlation of the gas temperature with operational parameters such as engine thrust.

  19. Probabilistic Analysis of a Composite Crew Module

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Krishnamurthy, Thiagarajan

    2011-01-01

    An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first-ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.
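
    A compact sketch of the Monte Carlo variant of such an estimate: sample load scale factors and strength allowables from assumed distributions, evaluate a strength-based failure criterion, and count failures. The limit state and coefficients of variation below are hypothetical, and direct Monte Carlo would of course be infeasible for probabilities near 10^-11, which is one reason FORM and conditional sampling are also used.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1_000_000

    # Assumed lognormal load scale factor and normal strength allowable (illustrative CoVs)
    load_scale = rng.lognormal(mean=0.0, sigma=0.08, size=n)      # ~8% CoV about nominal load
    strength = rng.normal(loc=1.0, scale=0.10, size=n)            # normalised strength allowable

    nominal_stress = 0.55                                          # nominal first-ply stress ratio
    failure_index = nominal_stress * load_scale / strength         # FI >= 1 means first-ply failure

    p_fail = (failure_index >= 1.0).mean()
    se = np.sqrt(p_fail * (1.0 - p_fail) / n)                      # Monte Carlo standard error
    print(f"P(first-ply failure) = {p_fail:.2e} +/- {se:.1e}")
    ```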

  20. Constraining LLSVP Buoyancy With Tidal Tomography

    NASA Astrophysics Data System (ADS)

    Lau, H. C. P.; Mitrovica, J. X.; Davis, J. L.; Tromp, J.; Yang, H. Y.; Al-Attar, D.

    2017-12-01

    Using a global GPS data set of high-precision measurements of the Earth's body tide, we perform a tomographic inversion to constrain the integrated buoyancy of the Large Low Shear Velocity Provinces (LLSVPs) at the base of the mantle. As a consequence of the long-wavelength and low-frequency nature of the Earth's body tide, these observations are particularly sensitive to LLSVP buoyancy, a property of Earth's mantle that remains a source of ongoing debate. Using a probabilistic approach, we find that the data are best fit when the bottom two thirds (approximately 700 km) of the LLSVPs have an integrated excess density of 0.60%. The detailed distribution of this buoyancy, for example whether it primarily resides in a thin layer at the base of the mantle, will require further testing and the augmentation of the inversions to include independent data sets (e.g., seismic observations). In any case, our inference of excess density requires the preservation of chemical heterogeneity associated with the enrichment of high-density chemical components, possibly linked to subducted oceanic plates and/or primordial material, in the deep mantle. This conclusion has important implications for the stability of these structures and, in turn, the history and ongoing evolution of the Earth system.

  1. SHEDS-HT: An Integrated Probabilistic Exposure Model for Prioritizing Exposures to Chemicals with Near-Field and Dietary Sources

    EPA Science Inventory

    United States Environmental Protection Agency (USEPA) researchers are developing a strategy for highthroughput (HT) exposure-based prioritization of chemicals under the ExpoCast program. These novel modeling approaches for evaluating chemicals based on their potential for biologi...

  2. Abstract probabilistic CNOT gate model based on double encoding: study of the errors and physical realizability

    NASA Astrophysics Data System (ADS)

    Gueddana, Amor; Attia, Moez; Chatta, Rihab

    2015-03-01

    In this work, we study the error sources behind the imperfect linear-optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits depending on a realistic range of the errors. Finally, we discuss physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.

  3. A robust multi-kernel change detection framework for detecting leaf beetle defoliation using Landsat 7 ETM+ data

    NASA Astrophysics Data System (ADS)

    Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim

    2016-12-01

    A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used framework is to compute change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g. the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers that assume parametric models, e.g. Gaussian functions, for the class conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e. its non-parametric and probabilistic nature, to model the class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically, instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with the Support Vector Machine (SVM) and NB for detection of defoliation caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations in two test areas in Tasmania, Australia, using raw bands and band-combination indices of Landsat 7 ETM+. It was observed that, due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions. Its performance was comparable to SVM, with the added advantages of being probabilistic and of handling multi-class problems naturally in its original formulation.
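
    The LSPC itself is not available in common libraries, but the automatic parameter-tuning idea can be illustrated with 10-fold cross-validation over a grid of RBF-kernel parameters, here using a scikit-learn SVC as a stand-in classifier on synthetic change/no-change difference vectors.

    ```python
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    # Synthetic "difference image" feature vectors: 6 bands, change vs. no-change classes
    n = 600
    no_change = rng.normal(0.0, 0.3, size=(n, 6))
    change = rng.normal(1.0, 0.6, size=(n, 6))
    X = np.vstack([no_change, change])
    y = np.r_[np.zeros(n), np.ones(n)]

    # Automatic tuning of (C, gamma) by 10-fold cross-validation over a grid,
    # instead of hand-picking values from pre-specified lists
    param_grid = {"C": np.logspace(-1, 3, 5), "gamma": np.logspace(-3, 1, 5)}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
    search.fit(X, y)
    print(search.best_params_, f"CV accuracy = {search.best_score_:.3f}")
    ```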

  4. Dietary Iron Bioavailability: Agreement between Estimation Methods and Association with Serum Ferritin Concentrations in Women of Childbearing Age

    PubMed Central

    Dias, Gisele Cristina; Morimoto, Juliana Massami; Marchioni, Dirce Maria Lobo; Colli, Célia

    2018-01-01

    Predictive iron bioavailability (FeBio) methods aimed at evaluating the association between diet and body iron have been proposed, but few studies have explored their validity and practical usefulness in epidemiological studies. In this cross-sectional study involving 127 women (18–42 years) with presumably steady-state body iron balance, correlations were checked among various FeBio estimates (probabilistic approach and meal-based and diet-based algorithms) and serum ferritin (SF) concentrations. Iron deficiency was defined as SF < 15 µg/L. Pearson correlation, the Friedman test, and linear regression were employed. Iron intake and prevalence of iron deficiency were 10.9 mg/day and 12.6%. Algorithm estimates were strongly correlated (0.69 ≤ r ≤ 0.85; p < 0.001), although diet-based models (8.5–8.9%) diverged from meal-based models (11.6–12.8%; p < 0.001). Still, all algorithms underestimated the probabilistic approach (17.2%). No significant association was found between SF and FeBio from the Monsen (1978), Reddy (2000), and Armah (2013) algorithms. Nevertheless, there was a 30–37% difference in SF concentrations between women stratified at extreme tertiles of FeBio from the Hallberg and Hulthén (2000) and Collings’ (2013) models. The results demonstrate discordance between FeBio estimates from the probabilistic approach and the algorithm methods, while suggesting the two models with the best performance for ranking individuals according to their bioavailable iron intakes. PMID:29883384

  5. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency across multiple runs and empirical convergence. Furthermore, due to the complexity of the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
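
    A minimal sketch of the underlying NMF topic decomposition (the plain unsupervised form, not UTOPIAN's semi-supervised variant) using scikit-learn; the tiny corpus is a placeholder.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    docs = [
        "visual analytics of large text collections",
        "topic models summarize document collections",
        "interactive steering of topic modeling results",
        "matrix factorization for document clustering",
    ]

    # Term-document matrix, then factorize into document-topic (W) and topic-term (H) matrices
    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(docs)
    nmf = NMF(n_components=2, init="nndsvda", random_state=0)
    W = nmf.fit_transform(X)          # document-topic weights
    H = nmf.components_               # topic-term weights

    terms = tfidf.get_feature_names_out()
    for k, topic in enumerate(H):
        top = [terms[i] for i in topic.argsort()[::-1][:4]]
        print(f"topic {k}: {top}")
    ```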

  6. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), and those reconstructed from the estimated log-spectra produced lower word recognition error rates because the log-spectra better fit the inputs to the recognizer. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139

  7. Encoding probabilistic brain atlases using Bayesian inference.

    PubMed

    Van Leemput, Koen

    2009-06-01

    This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.
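
    The "averaging" baseline that the paper generalizes, counting the relative frequency of each label at each voxel across co-registered training segmentations, is simple to sketch; the toy 2-D label maps below are synthetic.

    ```python
    import numpy as np

    def count_based_atlas(label_maps, n_labels):
        """Probabilistic atlas as per-voxel relative frequency of each label
        across co-registered training segmentations (the standard 'average' atlas)."""
        label_maps = np.asarray(label_maps)                 # (n_subjects, *volume_shape)
        atlas = np.zeros((n_labels,) + label_maps.shape[1:])
        for lab in range(n_labels):
            atlas[lab] = (label_maps == lab).mean(axis=0)   # frequency of label at each voxel
        return atlas                                        # sums to 1 over labels per voxel

    # Toy usage: five 2-D training segmentations with labels {0: background, 1: structure}
    rng = np.random.default_rng(6)
    train = (rng.uniform(size=(5, 32, 32)) > 0.7).astype(int)
    atlas = count_based_atlas(train, n_labels=2)
    print(atlas.shape, atlas[:, 0, 0])
    ```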

  8. A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS

    EPA Science Inventory

    We have produced a simple two-dimensional (ground-plan) cellular automata model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...

  9. Evaluation of feature-based 3-d registration of probabilistic volumetric scenes

    NASA Astrophysics Data System (ADS)

    Restrepo, Maria I.; Ulusoy, Ali O.; Mundy, Joseph L.

    2014-12-01

    Automatic estimation of the world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time-instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration performance is significantly more accurate when using the PVM.

  10. Development of probabilistic thinking-oriented learning tools for probability materials at junior high school students

    NASA Astrophysics Data System (ADS)

    Sari, Dwi Ivayana; Hermanto, Didik

    2017-08-01

    This is a developmental study of probabilistic-thinking-oriented learning tools for teaching probability to ninth-grade students, aimed at producing a set of high-quality learning tools. The subjects were students of class IX-A at MTs Model Bangkalan. The development followed the 4-D model, modified here into three stages: define, design and develop. The learning tools consist of a lesson plan, student worksheets, learning media and a student achievement test. The research instruments were a learning-tools validation sheet, teacher activity and student activity observation sheets, a student response questionnaire and the achievement test. The results from these instruments were analysed descriptively to answer the research objectives. The outcome was a set of valid probabilistic-thinking-oriented learning tools for teaching probability to ninth-grade students. After the tools were revised based on the validation and tried out in class, the teacher's classroom management was found to be effective, student activities were good, student responses to the learning tools were positive, and the achievement test met the validity, sensitivity and reliability criteria. In summary, these learning tools can be used by teachers to teach probability and to develop students' probabilistic thinking.

  11. Finding models to detect Alzheimer's disease by fusing structural and neuropsychological information

    NASA Astrophysics Data System (ADS)

    Giraldo, Diana L.; García-Arteaga, Juan D.; Velasco, Nelson; Romero, Eduardo

    2015-12-01

    Alzheimer's disease (AD) is a neurodegenerative disease that affects higher brain functions. Initial diagnosis of AD is based on the patient's clinical history and a battery of neuropsychological tests. The accuracy of the diagnosis is highly dependent on the examiner's skills and on the evolution of a variable clinical picture. This work presents an automatic strategy that learns probabilistic brain models for different stages of the disease, reducing the complexity, parameter adjustment and computational costs. The proposed method starts by setting a probabilistic class description using the information stored in the neuropsychological tests, followed by constructing the different structural class models using membership values from the learned probabilistic functions. These models are then used as a reference frame for the classification problem: a new case is assigned to a particular class simply by projecting it onto the different models. The validation was performed using leave-one-out cross-validation with two classes: Normal Control (NC) subjects and patients diagnosed with mild AD. In this experiment it was possible to achieve a sensitivity and specificity of 80% and 79%, respectively.

  12. MODELING HUMAN EXPOSURES AND DOSE USING A 2-DIMENSIONAL MONTE-CARLO MODEL (SHEDS)

    EPA Science Inventory

    Since 1998, US EPA's National Exposure Research Laboratory (NERL) has been developing the Stochastic Human Exposure and Dose Simulation (SHEDS) model for various classes of pollutants. SHEDS is a physically-based probabilistic model intended for improving estimates of human ex...

  13. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  14. Probabilistic dietary exposure assessment taking into account variability in both amount and frequency of consumption.

    PubMed

    Slob, Wout

    2006-07-01

    Probabilistic dietary exposure assessments that are fully based on Monte Carlo sampling from the raw intake data may not be appropriate. This paper shows that the data should first be analysed by using a statistical model that is able to take the various dimensions of food consumption patterns into account. A (parametric) model is discussed that takes into account the interindividual variation in (daily) consumption frequencies, as well as in amounts consumed. Further, the model can be used to include covariates, such as age, sex, or other individual attributes. Some illustrative examples show how this model may be used to estimate the probability of exceeding an (acute or chronic) exposure limit. These results are compared with the results based on directly counting the fraction of observed intakes exceeding the limit value. This comparison shows that the latter method is not adequate, in particular for the acute exposure situation. A two-step approach for probabilistic (acute) exposure assessment is proposed: first analyse the consumption data by a (parametric) statistical model as discussed in this paper, and then use Monte Carlo techniques for combining the variation in concentrations with the variation in consumption (by sampling from the statistical model). This approach results in an estimate of the fraction of the population as a function of the fraction of days at which the exposure limit is exceeded by the individual.
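
    A toy sketch of the two-step approach: assume parametric distributions for individual consumption frequency and amount (standing in for a model fitted to survey data), then combine them with concentration variability by Monte Carlo to estimate how often individuals exceed an acute exposure limit. All distributions and parameter values are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_individuals, n_days = 5_000, 365

    # Step 1: parametric consumption model (assumed already fitted to survey data)
    freq = rng.beta(2, 8, size=n_individuals)                       # daily consumption probability
    mean_amount = rng.lognormal(np.log(100), 0.4, n_individuals)    # individual mean portion (g)

    # Step 2: Monte Carlo combination with residue concentration variability
    limit = 5.0                                                     # acute limit (ug/kg bw/day)
    body_weight = 70.0                                              # kg
    exceed_days = np.zeros(n_individuals)
    for d in range(n_days):
        consumes = rng.uniform(size=n_individuals) < freq
        amount = rng.lognormal(np.log(mean_amount), 0.3) * consumes # g consumed today
        conc = rng.lognormal(np.log(2.0), 0.8, n_individuals)       # residue, ug/g
        exposure = amount * conc / body_weight                      # ug/kg bw/day
        exceed_days += exposure > limit

    frac_days_exceeded = exceed_days / n_days
    print(f"{(frac_days_exceeded > 0.01).mean():.3f} of individuals exceed the limit "
          f"on more than 1% of days")
    ```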

  15. Probabilistic Neighborhood-Based Data Collection Algorithms for 3D Underwater Acoustic Sensor Networks

    PubMed Central

    Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo

    2017-01-01

    Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, the development of three-dimensional underwater acoustic sensor networks (3D UASNs) provides a novel strategy to acquire marine environment information conveniently, efficiently and accurately. However, the specific propagation effects of the acoustic communication channel lead to a decreasing probability of successful information delivery with increasing distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic acoustic communication model. An autonomous underwater vehicle (AUV) is employed to traverse along the designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids allows the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations, found by constructing a minimum probabilistic neighborhood covering set, to reduce data latency. Furthermore, by increasing the number of transmission rounds, our proposed algorithms can provide a tradeoff between data collection latency and information gain. These algorithms are compared with a basic Nearest-neighbor Heuristic algorithm via simulations. Simulation analyses show that our proposed algorithms can efficiently reduce the average data collection completion time, corresponding to a decrease in data latency. PMID:28208735
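
    The Python sketch below illustrates the flavour of the grid-based variant under stated assumptions: the exponential decay of delivery probability with distance is a generic stand-in for the paper's probabilistic acoustic channel model, and the deployment, grid size and number of transmission rounds are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)

      def delivery_prob(distance, p0=0.95, decay=0.02):
          """Assumed monotone model: success probability decays with sensor-AUV distance (m)."""
          return p0 * np.exp(-decay * distance)

      # Random 3D sensor deployment in a 500 m cube (illustrative).
      sensors = rng.uniform(0.0, 500.0, size=(200, 3))

      # Grid-based path: the AUV visits the centre of each cell of a coarse 3D grid.
      edges = np.linspace(0.0, 500.0, 6)                 # 5 cells per axis
      centres = (edges[:-1] + edges[1:]) / 2.0
      stops = np.array(np.meshgrid(centres, centres, centres)).reshape(3, -1).T

      rounds = 3                                          # transmission rounds per stop
      collected = np.zeros(len(sensors), dtype=bool)
      for stop in stops:
          d = np.linalg.norm(sensors - stop, axis=1)
          p = delivery_prob(d)
          # probability of at least one success in `rounds` independent attempts
          p_any = 1.0 - (1.0 - p) ** rounds
          collected |= rng.random(len(sensors)) < p_any

      print(f"fraction of sensors heard from: {collected.mean():.2f}")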

  16. Inclusion of potential vorticity uncertainties into a hydrometeorological forecasting chain: application to a medium size basin of Mediterranean Spain

    NASA Astrophysics Data System (ADS)

    Amengual, A.; Romero, R.; Vich, M.; Alonso, S.

    2009-06-01

    The improvement of the short- and mid-range numerical runoff forecasts over the flood-prone Spanish Mediterranean area is a challenging issue. This work analyses four intense precipitation events which produced floods of different magnitude over the Llobregat river basin, a medium size catchment located in Catalonia, north-eastern Spain. One of them was a devastating flash flood - known as the "Montserrat" event - which produced 5 fatalities and material losses estimated at about 65 million euros. The characterization of the Llobregat basin's hydrological response to these floods is first assessed by using rain-gauge data and the Hydrologic Engineering Center's Hydrological Modeling System (HEC-HMS) runoff model. Second, the non-hydrostatic fifth-generation Pennsylvania State University/NCAR mesoscale model (MM5) is nested within the ECMWF large-scale forecast fields in a set of 54 h simulations to provide quantitative precipitation forecasts (QPFs) for each hydrometeorological episode. The hydrological model is forced with these QPFs to evaluate the reliability of the resulting discharge forecasts, while an ensemble prediction system (EPS) based on perturbed atmospheric initial and boundary conditions has been designed to test the value of a probabilistic strategy versus the previous deterministic approach. Specifically, a Potential Vorticity (PV) inversion technique has been used to perturb the MM5 model initial and boundary states (i.e. the ECMWF forecast fields). For that purpose, a PV error climatology has previously been derived in order to introduce realistic PV perturbations in the EPS. Results show the benefits of using a probabilistic approach in those cases where the deterministic QPF presents significant deficiencies over the Llobregat river basin in terms of the rainfall amounts, timing and localization. These deficiencies in the precipitation fields have a major impact on flood forecasts. Our ensemble strategy has been found useful to reduce the biases at different hydrometric sections along the watershed. Therefore, in an operational context, the devised methodology could be useful to expand the lead times associated with the prediction of similar future floods, helping to alleviate their possible hazardous consequences.

  17. Inclusion of potential vorticity uncertainties into a hydrometeorological forecasting chain: application to a medium size basin of Mediterranean Spain

    NASA Astrophysics Data System (ADS)

    Amengual, A.; Romero, R.; Vich, M.; Alonso, S.

    2009-01-01

    The improvement of the short- and mid-range numerical runoff forecasts over the flood-prone Spanish Mediterranean area is a challenging issue. This work analyses four intense precipitation events which produced floods of different magnitude over the Llobregat river basin, a medium size catchment located in Catalonia, north-eastern Spain. One of them was a devastating flash flood - known as the "Montserrat" event - which produced 5 fatalities and material losses estimated at about 65 million euros. The characterization of the Llobregat basin's hydrological response to these floods is first assessed by using rain-gauge data and the Hydrologic Engineering Center's Hydrological Modeling System (HEC-HMS) runoff model. Second, the non-hydrostatic fifth-generation Pennsylvania State University/NCAR mesoscale model (MM5) is nested within the ECMWF large-scale forecast fields in a set of 54 h simulations to provide quantitative precipitation forecasts (QPFs) for each hydrometeorological episode. The hydrological model is forced with these QPFs to evaluate the reliability of the resulting discharge forecasts, while an ensemble prediction system (EPS) based on perturbed atmospheric initial and boundary conditions has been designed to test the value of a probabilistic strategy versus the previous deterministic approach. Specifically, a Potential Vorticity (PV) inversion technique has been used to perturb the MM5 model initial and boundary states (i.e. the ECMWF forecast fields). For that purpose, a PV error climatology has previously been derived in order to introduce realistic PV perturbations in the EPS. Results show the benefits of using a probabilistic approach in those cases where the deterministic QPF presents significant deficiencies over the Llobregat river basin in terms of the rainfall amounts, timing and localization. These deficiencies in the precipitation fields have a major impact on flood forecasts. Our ensemble strategy has been found useful to reduce the biases at different hydrometric sections along the watershed. Therefore, in an operational context, the devised methodology could be useful to expand the lead times associated with the prediction of similar future floods, helping to alleviate their possible hazardous consequences.

  18. Prediction of Reservoir Properties for Geomechanical Analysis Using 3-D Seismic Data and Rock Physics Modeling in the Vaca Muerta Formation, Neuquen Basin, Argentina

    NASA Astrophysics Data System (ADS)

    Convers-Gomez, Carlos E.

    The Vaca Muerta Formation in the Neuquen Basin has recently received a lot of attention from oil companies interested in developing its shale resources. Early identification of potential zones of good production is extremely important to optimize the return on capital investment. Developing a workflow in shale plays that associates an effective hydraulic fracture response with the presence of hydrocarbons is crucial for economic success. The vertical and lateral heterogeneity of rock properties is a critical factor that impacts production. The integration of 3D seismic and well data is necessary to predict rock properties and identify their distribution in the rock, which can then be integrated with geomechanical properties to model the rock response favorable to hydraulic stimulation. This study includes a 3D seismic survey and six vertical wells with full log suites in each well. The well logs allowed for the computation of a pre-stack model-based inversion which uses seismic data to estimate rock property volumes. An inverse relationship between P-impedance and Total Organic Content (TOC) was observed and quantified. Likewise, a direct relationship between P-impedance and volume of carbonate was observed. The volume of kerogen, type of clay, type of carbonate and fluid pressure all control the geomechanical properties of the formation when subject to hydraulic fracturing. Probabilistic Neural Networks were then used to predict the lateral and vertical heterogeneity of rock properties. TOC and volume of kerogen behaved as adequate indicators of possible zones with a high presence of hydrocarbons, while the volume of carbonate was a valid indicator of brittle versus ductile rock. The predicted density volume was used to estimate geomechanical properties (Young's Modulus and Poisson's Ratio) and to identify the zones that have a better response to hydraulic stimulation. During the analysis of geomechanical properties, Young's Modulus was observed to have a direct relationship with volume of carbonate and an inverse relationship with TOC, enabling the identification of brittle and ductile rock zones. The analysis detected zones with both a good presence of hydrocarbons and brittle rock; this information was integrated with the analysis of geomechanical properties, generating a model of the most likely zones of good production. This model will aid in the future exploration and development of the Vaca Muerta Formation.

  19. Saul: Towards Declarative Learning Based Programming

    PubMed Central

    Kordjamshidi, Parisa; Roth, Dan; Wu, Hao

    2015-01-01

    We present Saul, a new probabilistic programming language designed to address some of the shortcomings of programming languages that aim at advancing and simplifying the development of AI systems. Such languages need to interact with messy, naturally occurring data, to allow a programmer to specify what needs to be done at an appropriate level of abstraction rather than at the data level, to be developed on a solid theory that supports moving to and reasoning at this level of abstraction and, finally, to support flexible integration of these learning and inference models within an application program. Saul is an object-functional programming language written in Scala that facilitates these by (1) allowing a programmer to learn, name and manipulate named abstractions over relational data; (2) supporting seamless incorporation of trainable (probabilistic or discriminative) components into the program, and (3) providing a level of inference over trainable models to support composition and make decisions that respect domain and application constraints. Saul is developed over a declaratively defined relational data model, can use piecewise learned factor graphs with declaratively specified learning and inference objectives, and it supports inference over probabilistic models augmented with declarative knowledge-based constraints. We describe the key constructs of Saul and exemplify its use in developing applications that require relational feature engineering and structured output prediction. PMID:26635465

  20. Saul: Towards Declarative Learning Based Programming.

    PubMed

    Kordjamshidi, Parisa; Roth, Dan; Wu, Hao

    2015-07-01

    We present Saul, a new probabilistic programming language designed to address some of the shortcomings of programming languages that aim at advancing and simplifying the development of AI systems. Such languages need to interact with messy, naturally occurring data, to allow a programmer to specify what needs to be done at an appropriate level of abstraction rather than at the data level, to be developed on a solid theory that supports moving to and reasoning at this level of abstraction and, finally, to support flexible integration of these learning and inference models within an application program. Saul is an object-functional programming language written in Scala that facilitates these by (1) allowing a programmer to learn, name and manipulate named abstractions over relational data; (2) supporting seamless incorporation of trainable (probabilistic or discriminative) components into the program, and (3) providing a level of inference over trainable models to support composition and make decisions that respect domain and application constraints. Saul is developed over a declaratively defined relational data model, can use piecewise learned factor graphs with declaratively specified learning and inference objectives, and it supports inference over probabilistic models augmented with declarative knowledge-based constraints. We describe the key constructs of Saul and exemplify its use in developing applications that require relational feature engineering and structured output prediction.

  1. Probabilistic Climate Scenario Information for Risk Assessment

    NASA Astrophysics Data System (ADS)

    Dairaku, K.; Ueno, G.; Takayabu, I.

    2014-12-01

    Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. In order to develop probabilistic regional climate information that represents the uncertainty in climate scenario experiments in Japan, we compared physics ensemble experiments using the 60 km global atmospheric model of the Meteorological Research Institute (MRI-AGCM) with multi-model ensemble experiments with global atmosphere-ocean coupled models (CMIP3) under the SRES A1b scenario. The MRI-AGCM shows relatively good skill, particularly in the tropics, for temperature and geopotential height. Variability in surface air temperature of the physics ensemble experiments with MRI-AGCM was within the range of one standard deviation of the CMIP3 models in the Asia region. On the other hand, the variability of precipitation was relatively well represented compared with the variation of the CMIP3 models. Models which show similar reproducibility of the present climate show different future climate changes, and we could not find clear relationships between the present climate and future climate change in temperature and precipitation. We develop a new method to produce probabilistic information on climate change scenarios by weighting model ensemble experiments based on a regression model (Krishnamurti et al., Science, 1999). The method can easily be applied to other regions and other physical quantities, and also to downscaling to finer scales, depending on the availability of observation datasets. The prototype of probabilistic information for Japan represents the quantified structural uncertainties of multi-model ensemble experiments of climate change scenarios. Acknowledgments: This study was supported by the SOUSEI Program, funded by the Ministry of Education, Culture, Sports, Science and Technology, Government of Japan.
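
    A small Python sketch of the regression-based weighting idea cited above (a Krishnamurti-style superensemble), using synthetic forecasts and observations; the data, the anomaly formulation and the ordinary least-squares solver are illustrative assumptions rather than the implementation used in the study.

      import numpy as np

      rng = np.random.default_rng(7)
      n_train, n_models = 120, 5

      # Synthetic training period: each model forecast = truth + its own bias and noise (illustrative).
      truth = rng.normal(15.0, 3.0, size=n_train)
      bias = rng.normal(0.0, 1.0, size=n_models)
      forecasts = truth[:, None] + bias + rng.normal(0.0, 1.5, size=(n_train, n_models))

      # Krishnamurti-style weighting: regress observed anomalies on model forecast anomalies.
      f_mean, o_mean = forecasts.mean(axis=0), truth.mean()
      weights, *_ = np.linalg.lstsq(forecasts - f_mean, truth - o_mean, rcond=None)
      print("regression weights:", np.round(weights, 2))

      # Weighted multi-model estimate for a new forecast case.
      new_case = np.array([17.1, 16.4, 18.0, 15.9, 17.5])
      superensemble = o_mean + (new_case - f_mean) @ weights
      print("weighted estimate:", round(float(superensemble), 2))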

  2. Hybrid-dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  3. A Novel Probabilistic Multi-Scale Modeling and Sensing Framework for Fatigue Life Prediction of Aerospace Structures and Materials: DCT Project

    DTIC Science & Technology

    2012-08-25

    [Only fragmentary text from this report's reference list is available, citing M. Anahid, M. K. Samal and S. Ghosh, "Dwell fatigue crack nucleation model based on crystal plasticity finite element simulations," and related accelerated crystal plasticity FEM simulation work (submitted).]

  4. Probabilistic material degradation model for aerospace materials subjected to high temperature, mechanical and thermal fatigue, and creep

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1992-01-01

    A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.
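
    A toy Monte Carlo sketch, in Python, of the kind of calculation PROMISS performs: lifetime strength is sampled as a reference strength multiplied by degradation factors for a few random effects and summarized by its empirical distribution. The multifactor form of the degradation factor and every parameter value below are illustrative assumptions, not the calibrated model.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 50_000

      # Reference strength and three illustrative random effects (temperature, fatigue cycles, creep time).
      s0 = rng.normal(1000.0, 40.0, size=n)                        # MPa, assumed distribution

      def factor(value, ultimate, reference, exponent):
          """Multiplicative degradation factor of a multifactor-interaction form (illustrative)."""
          return ((ultimate - value) / (ultimate - reference)) ** exponent

      temp = factor(rng.normal(650.0, 30.0, n), ultimate=1100.0, reference=20.0, exponent=0.4)
      fatigue = factor(rng.lognormal(9.0, 0.4, n), ultimate=1e6, reference=0.0, exponent=0.2)
      creep = factor(rng.lognormal(6.0, 0.5, n), ultimate=1e4, reference=0.0, exponent=0.1)

      strength = s0 * temp * fatigue * creep

      # Empirical distribution of lifetime strength, in the spirit of PROMISS-style CDF output.
      q = np.percentile(strength, [1, 10, 50, 90, 99])
      print("lifetime strength percentiles (MPa):", np.round(q, 1))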

  5. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  6. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  7. Dustfall Effect on Hyperspectral Inversion of Chlorophyll Content - a Laboratory Experiment

    NASA Astrophysics Data System (ADS)

    Chen, Yuteng; Ma, Baodong; Li, Xuexin; Zhang, Song; Wu, Lixin

    2018-04-01

    Dust pollution is serious in many areas of China. It is of great significance to estimate the chlorophyll content of vegetation accurately by hyperspectral remote sensing for assessing vegetation growth status and monitoring the ecological environment in dusty areas. Using selected vegetation indices, including the Medium Resolution Imaging Spectrometer Terrestrial Chlorophyll Index (MTCI), the Double Difference Index (DD) and the Red Edge Position Index (REP), chlorophyll inversion models were built to study the accuracy of hyperspectral inversion of chlorophyll content in a laboratory experiment. The results show that: (1) the REP exponential model has the most stable accuracy for inversion of chlorophyll content in a dusty environment; when the dustfall amount is less than 80 g/m2, the inversion accuracy based on REP is stable with the variation of dustfall amount, and when the dustfall amount is greater than 80 g/m2, the inversion accuracy fluctuates slightly; (2) the inversion accuracy of DD is the worst among the three models; (3) the MTCI logarithm model has high inversion accuracy when the dustfall amount is less than 80 g/m2, whereas when the dustfall amount is greater than 80 g/m2, the inversion accuracy decreases regularly and the inversion accuracy of the modified MTCI (mMTCI) increases significantly. The results provide an experimental basis and theoretical reference for hyperspectral remote sensing inversion of chlorophyll content.
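
    For illustration, the sketch below computes the Red Edge Position with the common linear-interpolation formulation and plugs it into an exponential inversion model of the form chl = a * exp(b * REP); the reflectance values and the coefficients a and b are placeholders, not the values calibrated in this experiment.

      import numpy as np

      def red_edge_position(r670, r700, r740, r780):
          """Red Edge Position via the common linear-interpolation method (Guyot & Baret)."""
          r_re = (r670 + r780) / 2.0
          return 700.0 + 40.0 * (r_re - r700) / (r740 - r700)

      # Illustrative leaf reflectances at the four wavelengths (unitless, 0-1).
      rep = red_edge_position(r670=0.05, r700=0.12, r740=0.42, r780=0.48)

      # Exponential inversion model chl = a * exp(b * REP); placeholder coefficients only.
      a, b = 1.2e-12, 0.042
      chlorophyll = a * np.exp(b * rep)
      print(f"REP = {rep:.1f} nm, estimated chlorophyll ~ {chlorophyll:.1f} (arbitrary units)")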

  8. The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals: Dietary Module Version 1: Technical Manual

    EPA Pesticide Factsheets

    SHEDS-Multimedia is EPA's premier physically-based, probabilistic model that can simulate cumulative or aggregate exposures for a population across a variety of multimedia, multipathway environmental chemicals.

  9. Constraining ozone-precursor responsiveness using ambient measurements

    EPA Science Inventory

    This study develops probabilistic estimates of ozone (O3) sensitivities to precursor emissions by incorporating uncertainties in photochemical modeling and evaluating model performance based on ground-level observations of O3 and oxides of nitrogen (NOx). Uncertainties in model form...

  10. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Han, Y.; Misra, S.

    2018-04-01

    Multi-frequency measurements of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, are commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases in which different relaxation models are coupled into the inversion scheme, and is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to 3 orders of magnitude variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper irrespective of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the inversion scheme.
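
    As a simplified illustration of fitting one relaxation model to multi-frequency data, the Python sketch below fits a Cole-Cole permittivity model to synthetic complex measurements with a bounded trust-region least-squares solver; this is a generic stand-in, not the paper's bounded Levenberg scheme with jump-out and jump-back-in steps, and all parameter values are assumed.

      import numpy as np
      from scipy.optimize import least_squares

      def cole_cole(freq_hz, eps_inf, d_eps, tau, alpha):
          """Cole-Cole complex permittivity relaxation model."""
          w = 2.0 * np.pi * freq_hz
          return eps_inf + d_eps / (1.0 + (1j * w * tau) ** (1.0 - alpha))

      def residuals(theta, freq_hz, measured):
          model = cole_cole(freq_hz, *theta)
          return np.concatenate([(model - measured).real, (model - measured).imag])

      # Synthetic multi-frequency measurement generated from known parameters plus noise.
      rng = np.random.default_rng(5)
      freqs = np.logspace(2, 8, 40)
      true = (5.0, 30.0, 1e-6, 0.2)                     # eps_inf, delta_eps, tau [s], alpha
      data = cole_cole(freqs, *true) + rng.normal(0, 0.2, freqs.size) * (1 + 1j)

      # Bounded trust-region least squares; parameter bounds keep the estimates physical.
      fit = least_squares(residuals, x0=(2.0, 10.0, 1e-5, 0.5),
                          bounds=([1.0, 0.0, 1e-9, 0.0], [100.0, 100.0, 1.0, 1.0]),
                          args=(freqs, data))
      print("estimated parameters:", fit.x)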

  11. Chapter 8: US geological survey Circum-Arctic Resource Appraisal (CARA): Introduction and summary of organization and methods

    USGS Publications Warehouse

    Charpentier, R.R.; Gautier, D.L.

    2011-01-01

    The USGS has assessed undiscovered petroleum resources in the Arctic through geological mapping, basin analysis and quantitative assessment. The new map compilation provided the base from which geologists subdivided the Arctic for burial history modelling and quantitative assessment. The CARA was a probabilistic, geologically based study that used existing USGS methodology, modified somewhat for the circumstances of the Arctic. The assessment relied heavily on analogue modelling, with numerical input as lognormal distributions of sizes and numbers of undiscovered accumulations. Probabilistic results for individual assessment units were statistically aggregated taking geological dependencies into account. Fourteen papers in this Geological Society volume present summaries of various aspects of the CARA. © 2011 The Geological Society of London.

  12. A probabilistic framework to infer brain functional connectivity from anatomical connections.

    PubMed

    Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel

    2011-01-01

    We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.

  13. Low-Dimensional Statistics of Anatomical Variability via Compact Representation of Image Deformations.

    PubMed

    Zhang, Miaomiao; Wells, William M; Golland, Polina

    2016-10-01

    Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).

  14. Dynamic sensing model for accurate detectability of environmental phenomena using an event wireless sensor network

    NASA Astrophysics Data System (ADS)

    Missif, Lial Raja; Kadhum, Mohammad M.

    2017-09-01

    Wireless Sensor Networks (WSNs) have been widely used for monitoring, where sensors are deployed to operate independently and sense abnormal phenomena. Most of the proposed environmental monitoring systems are designed based on a predetermined sensing range which does not reflect the sensor reliability, event characteristics, and environmental conditions. Measuring the capability of a sensor node to accurately detect an event within a sensing field is of great importance for monitoring applications. This paper presents an efficient mechanism for event detection based on a probabilistic sensing model. Different models are presented theoretically in this paper to examine their adaptability and applicability to real environmental applications. The numerical results of the experimental evaluation show that the probabilistic sensing model provides accurate observation and detectability of an event, and that it can be utilized for different environmental scenarios.
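
    A minimal sketch of a probabilistic (Elfes-type) sensing model in Python, with assumed radii and decay rate; it shows how per-sensor detection probabilities fall off with distance and how they combine into the probability that at least one sensor detects an event.

      import numpy as np

      def detection_probability(d, r_certain=10.0, r_max=40.0, alpha=0.08):
          """Probabilistic sensing model: certain detection up to r_certain, exponentially
          decaying probability out to r_max, zero beyond (assumed parameters)."""
          d = np.asarray(d, dtype=float)
          p = np.where(d <= r_certain, 1.0, np.exp(-alpha * (d - r_certain)))
          return np.where(d > r_max, 0.0, p)

      # Probability that at least one of several sensors detects an event at a given point.
      sensors = np.array([[0.0, 0.0], [30.0, 5.0], [15.0, 25.0]])
      event = np.array([20.0, 10.0])
      p_each = detection_probability(np.linalg.norm(sensors - event, axis=1))
      p_any = 1.0 - np.prod(1.0 - p_each)
      print("per-sensor probabilities:", np.round(p_each, 3), "joint:", round(float(p_any), 3))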

  15. Spatial planning using probabilistic flood maps

    NASA Astrophysics Data System (ADS)

    Alfonso, Leonardo; Mukolwe, Micah; Di Baldassarre, Giuliano

    2015-04-01

    Probabilistic flood maps account for uncertainty in flood inundation modelling and convey a degree of certainty in the outputs. Major sources of uncertainty include input data, topographic data, model structure, observation data and parametric uncertainty. Decision makers prefer less ambiguous information from modellers; this implies that uncertainty is suppressed to yield binary flood maps. However, suppressing information may lead to surprises or misleading decisions. Including uncertain information in the decision-making process is therefore desirable and makes the process more transparent. To this end, we utilise Prospect theory and information from a probabilistic flood map to evaluate potential decisions. Consequences related to the decisions were evaluated using flood risk analysis. Prospect theory explains how choices are made given options for which the probabilities of occurrence are known, and accounts for decision makers' characteristics such as loss aversion and risk seeking. Our results show that decision making is most pronounced when there are high gains and losses, implying higher payoffs and penalties and therefore a higher gamble. Thus the methodology may be appropriately considered when making decisions based on uncertain information.
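
    The sketch below evaluates two hypothetical land-use options with the standard Tversky-Kahneman value and probability-weighting functions, using an inundation probability that could be read from a probabilistic flood map; the payoffs, probabilities and parameter values are illustrative assumptions, not those of the study.

      def value(x, alpha=0.88, beta=0.88, lam=2.25):
          """Tversky-Kahneman value function (commonly cited parameter estimates)."""
          if x >= 0:
              return x ** alpha
          return -lam * (-x) ** beta

      def weight(p, gamma=0.69):
          """Tversky-Kahneman probability weighting function."""
          return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

      # Hypothetical decision with assumed payoffs: develop a cell whose probabilistic flood
      # map gives a 0.15 chance of inundation, versus a safer cell with lower economic gain.
      options = {
          "develop flood-prone cell": [(0.15, -500.0), (0.85, 300.0)],   # (probability, outcome)
          "develop safer cell":       [(0.02, -500.0), (0.98, 150.0)],
      }
      for name, prospects in options.items():
          score = sum(weight(p) * value(x) for p, x in prospects)
          print(f"{name}: prospect value = {score:.1f}")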

  16. Reasoning in Reference Games: Individual- vs. Population-Level Probabilistic Modeling

    PubMed Central

    Franke, Michael; Degen, Judith

    2016-01-01

    Recent advances in probabilistic pragmatics have achieved considerable success in modeling speakers’ and listeners’ pragmatic reasoning as probabilistic inference. However, these models are usually applied to population-level data, and so implicitly suggest a homogeneous population without individual differences. Here we investigate potential individual differences in Theory-of-Mind related depth of pragmatic reasoning in so-called reference games that require drawing ad hoc Quantity implicatures of varying complexity. We show by Bayesian model comparison that a model that assumes a heterogeneous population is a better predictor of our data, especially for comprehension. We discuss the implications for the treatment of individual differences in probabilistic models of language use. PMID:27149675

  17. Template-based protein structure modeling using the RaptorX web server.

    PubMed

    Källberg, Morten; Wang, Haipeng; Wang, Sheng; Peng, Jian; Wang, Zhiyong; Lu, Hui; Xu, Jinbo

    2012-07-19

    A key challenge of modern biology is to uncover the functional role of the protein entities that compose cellular proteomes. To this end, the availability of reliable three-dimensional atomic models of proteins is often crucial. This protocol presents a community-wide web-based method using RaptorX (http://raptorx.uchicago.edu/) for protein secondary structure prediction, template-based tertiary structure modeling, alignment quality assessment and sophisticated probabilistic alignment sampling. RaptorX distinguishes itself from other servers by the quality of the alignment between a target sequence and one or multiple distantly related template proteins (especially those with sparse sequence profiles) and by a novel nonlinear scoring function and a probabilistic-consistency algorithm. Consequently, RaptorX delivers high-quality structural models for many targets with only remote templates. At present, it takes RaptorX ~35 min to finish processing a sequence of 200 amino acids. Since its official release in August 2011, RaptorX has processed ~6,000 sequences submitted by ~1,600 users from around the world.

  18. RaptorX server: a resource for template-based protein structure modeling.

    PubMed

    Källberg, Morten; Margaryan, Gohar; Wang, Sheng; Ma, Jianzhu; Xu, Jinbo

    2014-01-01

    Assigning functional properties to a newly discovered protein is a key challenge in modern biology. To this end, computational modeling of the three-dimensional atomic arrangement of the amino acid chain is often crucial in determining the role of the protein in biological processes. We present a community-wide web-based protocol, RaptorX server (http://raptorx.uchicago.edu), for automated protein secondary structure prediction, template-based tertiary structure modeling, and probabilistic alignment sampling. Given a target sequence, RaptorX server is able to detect even remotely related template sequences by means of a novel nonlinear context-specific alignment potential and probabilistic consistency algorithm. Using the protocol presented here it is thus possible to obtain high-quality structural models for many target protein sequences when only distantly related protein domains have experimentally solved structures. At present, RaptorX server can perform secondary and tertiary structure prediction of a 200 amino acid target sequence in approximately 30 min.

  19. Template-based protein structure modeling using the RaptorX web server

    PubMed Central

    Källberg, Morten; Wang, Haipeng; Wang, Sheng; Peng, Jian; Wang, Zhiyong; Lu, Hui; Xu, Jinbo

    2016-01-01

    A key challenge of modern biology is to uncover the functional role of the protein entities that compose cellular proteomes. To this end, the availability of reliable three-dimensional atomic models of proteins is often crucial. This protocol presents a community-wide web-based method using RaptorX (http://raptorx.uchicago.edu/) for protein secondary structure prediction, template-based tertiary structure modeling, alignment quality assessment and sophisticated probabilistic alignment sampling. RaptorX distinguishes itself from other servers by the quality of the alignment between a target sequence and one or multiple distantly related template proteins (especially those with sparse sequence profiles) and by a novel nonlinear scoring function and a probabilistic-consistency algorithm. Consequently, RaptorX delivers high-quality structural models for many targets with only remote templates. At present, it takes RaptorX ~35 min to finish processing a sequence of 200 amino acids. Since its official release in August 2011, RaptorX has processed ~6,000 sequences submitted by ~1,600 users from around the world. PMID:22814390

  20. Development of Probabilistic Structural Analysis Integrated with Manufacturing Processes

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Nagpal, Vinod K.

    2007-01-01

    An effort has been initiated to integrate manufacturing process simulations with probabilistic structural analyses in order to capture the important impacts of manufacturing uncertainties on component stress levels and life. Two physics-based manufacturing process models (one for powdered metal forging and the other for annular deformation resistance welding) have been linked to the NESSUS structural analysis code. This paper describes the methodology developed to perform this integration including several examples. Although this effort is still underway, particularly for full integration of a probabilistic analysis, the progress to date has been encouraging and a software interface that implements the methodology has been developed. The purpose of this paper is to report this preliminary development.

  1. Probabilistic micromechanics for metal matrix composites

    NASA Astrophysics Data System (ADS)

    Engelstad, S. P.; Reddy, J. N.; Hopkins, Dale A.

    A probabilistic micromechanics-based nonlinear analysis procedure is developed to predict and quantify the variability in the properties of high temperature metal matrix composites. Monte Carlo simulation is used to model the probabilistic distributions of the constituent level properties including fiber, matrix, and interphase properties, volume and void ratios, strengths, fiber misalignment, and nonlinear empirical parameters. The procedure predicts the resultant ply properties and quantifies their statistical scatter. Graphite/copper and Silicon Carbide/Titanium Aluminide (SCS-6/Ti-15) unidirectional plies are considered to demonstrate the predictive capabilities. The procedure is believed to have a high potential for use in material characterization and selection to precede and assist in experimental studies of new high temperature metal matrix composites.

  2. Probabilistic graphs as a conceptual and computational tool in hydrology and water management

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit

    2014-05-01

    Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.

  3. Probabilistic Cellular Automata

    PubMed Central

    Agapie, Alexandru; Giuclea, Marius

    2014-01-01

    Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata, otherwise they are called deterministic. With either type of cellular automaton we are dealing with, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case—connecting the probability of a configuration in the stationary distribution to its number of zero-one borders—the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata. PMID:24999557

  4. Probabilistic cellular automata.

    PubMed

    Agapie, Alexandru; Andreica, Anca; Giuclea, Marius

    2014-09-01

    Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata, otherwise they are called deterministic. With either type of cellular automaton we are dealing with, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case - connecting the probability of a configuration in the stationary distribution to its number of zero-one borders - the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata.
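
    A small Python simulation of a synchronous probabilistic cellular automaton on a ring, with an assumed transition rule that is monotone in the number of ones in the neighbourhood; because the probabilistic rule makes the process an ergodic Markov chain, the long-run behaviour is summarized empirically rather than by an absorbing configuration.

      import numpy as np

      rng = np.random.default_rng(11)

      def step(config, p_table):
          """Synchronous update: each cell becomes 1 with a probability that depends only on
          the number of ones among itself and its two ring neighbours (assumed rule)."""
          ones = config + np.roll(config, 1) + np.roll(config, -1)   # 0..3 ones in neighbourhood
          return (rng.random(config.size) < p_table[ones]).astype(int)

      n_cells, n_steps, n_runs = 64, 500, 200
      p_table = np.array([0.05, 0.3, 0.7, 0.95])   # monotone in the number of ones (illustrative)

      final_density = []
      for _ in range(n_runs):
          config = rng.integers(0, 2, n_cells)
          for _ in range(n_steps):
              config = step(config, p_table)
          final_density.append(config.mean())

      # Empirical summary of the long-run (stationary) behaviour across independent runs.
      print("mean long-run density of ones:", round(float(np.mean(final_density)), 3))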

  5. Balancing the stochastic description of uncertainties as a function of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.

    2016-12-01

    Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.

  6. The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments

    NASA Astrophysics Data System (ADS)

    Chen, Fajing; Jiao, Meiyan; Chen, Jing

    2013-04-01

    Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecast, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
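
    A compact sketch of the meta-Gaussian idea on synthetic data: both the predictand and the deterministic forecast are mapped to standard normal variates with a normal quantile transform, a normal-linear likelihood is fitted in the transformed space, and the normal posterior is mapped back to quantiles in the original units. The synthetic data and the simple empirical transforms are illustrative assumptions, not the operational BPF.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(21)

      # Historical pairs of observed temperature (predictand) and deterministic forecast (predictor).
      obs = rng.gamma(shape=8.0, scale=1.0, size=400)
      fcst = obs + rng.normal(0.0, 1.5, size=obs.size)

      def nqt(x, sample):
          """Normal quantile transform of x using the empirical CDF of `sample`."""
          ranks = np.searchsorted(np.sort(sample), x, side="right")
          u = np.clip(ranks / (len(sample) + 1.0), 1e-3, 1 - 1e-3)
          return norm.ppf(u)

      z_obs, z_fcst = nqt(obs, obs), nqt(fcst, fcst)

      # Likelihood model in Gaussian space: z_fcst = a * z_obs + b + eps, eps ~ N(0, s2).
      a, b = np.polyfit(z_obs, z_fcst, 1)
      s2 = np.var(z_fcst - (a * z_obs + b))

      def posterior(z_f):
          """Normal-linear Bayesian update with a standard normal prior on the transformed predictand."""
          var = 1.0 / (1.0 + a * a / s2)
          mean = var * a * (z_f - b) / s2
          return mean, np.sqrt(var)

      # Turn today's deterministic forecast into a probabilistic one (quantiles in original units).
      z_today = nqt(np.array([12.0]), fcst)[0]
      m, sd = posterior(z_today)
      for q in (0.1, 0.5, 0.9):
          z_q = norm.ppf(q, loc=m, scale=sd)
          print(f"{q:.0%} quantile:", round(float(np.quantile(obs, norm.cdf(z_q))), 2))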

  7. Functional mechanisms of probabilistic inference in feature- and space-based attentional systems.

    PubMed

    Dombert, Pascasie L; Kuhns, Anna; Mengotti, Paola; Fink, Gereon R; Vossel, Simone

    2016-11-15

    Humans flexibly attend to features or locations and these processes are influenced by the probability of sensory events. We combined computational modeling of response times with fMRI to compare the functional correlates of (re-)orienting, and the modulation by probabilistic inference in spatial and feature-based attention systems. Twenty-four volunteers performed two task versions with spatial or color cues. Percentage of cue validity changed unpredictably. A hierarchical Bayesian model was used to derive trial-wise estimates of probability-dependent attention, entering the fMRI analysis as parametric regressors. Attentional orienting activated a dorsal frontoparietal network in both tasks, without significant parametric modulation. Spatially invalid trials activated a bilateral frontoparietal network and the precuneus, while invalid feature trials activated the left intraparietal sulcus (IPS). Probability-dependent attention modulated activity in the precuneus, left posterior IPS, middle occipital gyrus, and right temporoparietal junction for spatial attention, and in the left anterior IPS for feature-based and spatial attention. These findings provide novel insights into the generality and specificity of the functional basis of attentional control. They suggest that probabilistic inference can distinctively affect each attentional subsystem, but that there is an overlap in the left IPS, which responds to both spatial and feature-based expectancy violations. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Modular analysis of the probabilistic genetic interaction network.

    PubMed

    Hou, Lin; Wang, Lin; Qian, Minping; Li, Dong; Tang, Chao; Zhu, Yunping; Deng, Minghua; Li, Fangting

    2011-03-15

    Epistatic Miniarray Profiles (EMAP) has enabled the mapping of large-scale genetic interaction networks; however, the quantitative information gained from EMAP cannot be fully exploited since the data are usually interpreted as a discrete network based on an arbitrary hard threshold. To address such limitations, we adopted a mixture modeling procedure to construct a probabilistic genetic interaction network and then implemented a Bayesian approach to identify densely interacting modules in the probabilistic network. Mixture modeling has been demonstrated as an effective soft-threshold technique for EMAP measures. The Bayesian approach was applied to an EMAP dataset studying the early secretory pathway in Saccharomyces cerevisiae. Twenty-seven modules were identified, and 14 of those were enriched by gold standard functional gene sets. We also conducted a detailed comparison with state-of-the-art algorithms, hierarchical clustering and Markov clustering. The experimental results show that the Bayesian approach outperforms others in efficiently recovering biologically significant modules.
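
    A minimal soft-thresholding sketch in Python: a three-component Gaussian mixture is fitted to synthetic EMAP-like interaction scores, and the posterior probability of not belonging to the neutral component is used as a probabilistic edge weight; the synthetic score distribution and the choice of three components are assumptions for illustration.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(13)

      # Synthetic EMAP-like scores: most gene pairs are neutral (near zero), a minority show
      # negative (aggravating) or positive (alleviating) interactions; illustrative only.
      scores = np.concatenate([
          rng.normal(0.0, 0.5, 9_000),      # neutral pairs
          rng.normal(-3.0, 1.0, 700),       # negative interactions
          rng.normal(2.5, 0.8, 300),        # positive interactions
      ]).reshape(-1, 1)

      gmm = GaussianMixture(n_components=3, random_state=0).fit(scores)

      # Soft thresholding: posterior probability that a pair belongs to a non-neutral component,
      # used as an edge weight in a probabilistic interaction network instead of a hard cutoff.
      neutral = np.argmin(np.abs(gmm.means_.ravel()))
      edge_weight = 1.0 - gmm.predict_proba(scores)[:, neutral]
      print("pairs with edge weight > 0.9:", int((edge_weight > 0.9).sum()))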

  9. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed directly from estimated stain spectra using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimum decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.

  10. Socioeconomic Status and Childhood Cancer Incidence: A Population-Based Multilevel Analysis.

    PubMed

    Kehm, Rebecca D; Spector, Logan G; Poynter, Jenny N; Vock, David M; Osypuk, Theresa L

    2018-05-01

    The etiology of childhood cancers remains largely unknown, especially regarding environmental and behavioral risk factors. Unpacking the association between socioeconomic status (SES) and incidence may offer insight into such etiology. We tested associations between SES and childhood cancer incidence in a population-based case-cohort study (source cohort: Minnesota birth registry, 1989-2014). Cases, ages 0-14 years, were linked from the Minnesota Cancer Surveillance System to birth records through probabilistic record linkage. Controls were 4:1 frequency matched on birth year (2,947 cases and 11,907 controls). We tested associations of individual-level (maternal education) and neighborhood-level (census tract composite index) SES using logistic mixed models. In crude models, maternal education was positively associated with incidence of acute lymphoblastic leukemia (odds ratio (OR) = 1.10, 95% confidence interval (CI): 1.02, 1.19), central nervous system tumors (OR = 1.12, 95% CI: 1.04, 1.21), and neuroblastoma (OR = 1.15, 95% CI: 1.02, 1.30). Adjustment for established risk factors-including race/ethnicity, maternal age, and birth weight-substantially attenuated these positive associations. Similar patterns were observed for neighborhood-level SES. Conversely, higher maternal education was inversely associated with hepatoblastoma incidence (adjusted OR = 0.70, 95% CI: 0.51, 0.98). Overall, beyond the social patterning of established demographic and pregnancy-related exposures, SES is not strongly associated with childhood cancer incidence.

  11. Global assessment of predictability of water availability: A bivariate probabilistic Budyko analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weiguang; Fu, Jianyu

    2018-02-01

    Estimating continental water availability is of great importance for water resources management, in terms of maintaining ecosystem integrity and sustaining societal development. To more accurately quantify the predictability of water availability, a bivariate probabilistic Budyko approach was developed on the basis of the univariate probabilistic Budyko framework, using a copula-based joint distribution model to account for the dependence between the parameter ω of the Wang-Tang equation and the Normalized Difference Vegetation Index (NDVI), and was applied globally. The results indicate that the predictive performance for global water availability is conditional on the climatic condition. In comparison with the simple univariate distribution, the bivariate one produces a lower interquartile range for the same global dataset, especially in regions with higher NDVI values, highlighting the importance of developing the joint distribution by taking into account the dependence structure of the parameter ω and NDVI, which can provide a more accurate probabilistic evaluation of water availability.

  12. Multi-parametric variational data assimilation for hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.

    2017-12-01

    Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to take better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at representing meteorological uncertainty but neglects the uncertainty of the hydrological model as well as of its initial conditions. Complementary approaches use probabilistic data assimilation techniques to obtain a variety of initial states, or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e. observed data as forcing variables) as well as of deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier Skill Score with respect to the deterministic approach. It also improves the skill of the forecast in terms of the rank histogram and produces a narrower ensemble spread.

  13. Development of Response Spectral Ground Motion Prediction Equations from Empirical Models for Fourier Spectra and Duration of Ground Motion

    NASA Astrophysics Data System (ADS)

    Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.

    2014-12-01

    In a probabilistic seismic hazard assessment (PSHA) framework, it remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for developing a response spectral GMPE that is easily adjustable to different seismological conditions and that does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach consists of an empirical FAS (Fourier Amplitude Spectrum) model and a ground motion duration model, which are combined within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS of individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion, as described in Edwards & Faeh (2013). To that end, an empirical duration model is derived that is tuned to optimize the fit between RVT-based and observed response spectral ordinates at each oscillator frequency. Although the main motivation of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of median predicted response spectra with other regional models indicates that the presented approach can also be used as a stand-alone model. Moreover, a significantly lower aleatory variability (σ < 0.5 in log units) at shorter periods, in comparison to other regional models, makes it a potentially viable alternative to classical regression-based GMPEs (on response spectral ordinates) for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled RESORCE-2012 database covering Europe, the Middle East and the Mediterranean region.
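
    A minimal sketch of the RVT step that converts a Fourier amplitude spectrum and a duration into a response spectral ordinate is given below. It uses the textbook asymptotic peak-factor approximation rather than the full Cartwright-Longuet-Higgins formulation, and a toy spectrum, so it illustrates the mechanics only, not the calibrated duration model of the study.

      import numpy as np

      def rvt_psa(freqs, fas, duration, f_osc, damping=0.05):
          """Pseudo-spectral acceleration from a Fourier amplitude spectrum and a
          duration via random vibration theory. Asymptotic peak factor; sketch only."""
          r = freqs / f_osc
          h = 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * damping * r) ** 2)   # SDOF transfer function
          y2 = (fas * h) ** 2                                               # oscillator response power

          # spectral moments m_k = 2 * int (2*pi*f)^k |Y(f)|^2 df
          m0 = 2.0 * np.trapz(y2, freqs)
          m2 = 2.0 * np.trapz((2.0 * np.pi * freqs) ** 2 * y2, freqs)

          rms = np.sqrt(m0 / duration)                             # root-mean-square response
          n_z = max(duration * np.sqrt(m2 / m0) / np.pi, 2.0)      # expected number of zero crossings
          pf = np.sqrt(2.0 * np.log(n_z)) + 0.5772 / np.sqrt(2.0 * np.log(n_z))   # peak factor
          return pf * rms

      freqs = np.linspace(0.1, 50.0, 2000)
      fas = freqs / (1.0 + (freqs / 5.0) ** 2)          # toy acceleration FAS, not a physical model
      psa = [rvt_psa(freqs, fas, duration=10.0, f_osc=f) for f in (0.5, 1.0, 5.0)]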

  14. Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device

    PubMed Central

    He, Xiang; Aloi, Daniel N.; Li, Jia

    2015-01-01

    Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design. PMID:26694387

  15. Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device.

    PubMed

    He, Xiang; Aloi, Daniel N; Li, Jia

    2015-12-14

    Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design.
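
    A generic sketch of the online tracking idea, fusing a dead-reckoning step from the motion sensors with a WiFi RSSI likelihood in a particle filter, is shown below; the toy fingerprint model, noise levels, and access-point layout are invented for illustration and do not reproduce the paper's graph-based model or multimodal filter.

      import numpy as np

      rng = np.random.default_rng(0)

      # toy fingerprint model: predicted RSSI of three access points decays with distance
      AP_XY = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
      def rssi_map(xy):                                     # xy: (n_particles, 2) positions in metres
          d = np.linalg.norm(xy[:, None, :] - AP_XY[None, :, :], axis=2)
          return -40.0 - 20.0 * np.log10(d + 1.0)

      def particle_filter_step(particles, weights, step_vec, rssi_obs,
                               sigma_step=0.3, sigma_rssi=4.0):
          """One predict/update cycle fusing a dead-reckoning step (motion sensors)
          with a WiFi RSSI likelihood (offline radio map)."""
          # predict: move each particle by the measured step plus motion noise
          particles = particles + step_vec + rng.normal(0.0, sigma_step, particles.shape)
          # update: reweight by the Gaussian likelihood of the observed RSSI vector
          loglik = -0.5 * np.sum((rssi_map(particles) - rssi_obs) ** 2, axis=1) / sigma_rssi**2
          weights = weights * np.exp(loglik - loglik.max())
          weights /= weights.sum()
          # resample when the effective sample size collapses
          if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
              idx = rng.choice(len(weights), size=len(weights), p=weights)
              particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
          return particles, weights

      particles = rng.uniform(0.0, 10.0, size=(2000, 2))
      weights = np.full(2000, 1.0 / 2000)
      particles, weights = particle_filter_step(particles, weights,
                                                step_vec=np.array([0.6, 0.1]),
                                                rssi_obs=np.array([-55.0, -60.0, -62.0]))
      position = weights @ particles                        # weighted-mean position estimate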

  16. Probabilistic flood inundation mapping at ungauged streams due to roughness coefficient uncertainty in hydraulic modelling

    NASA Astrophysics Data System (ADS)

    Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.

    2017-04-01

    Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by the roughness coefficient values of hydraulic models in flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM that minimises input data uncertainty and improves the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated for how accurately they represent the estimated roughness values. Finally, Latin Hypercube Sampling is used to generate different sets of Manning roughness values, and flood inundation probability maps are created through Monte Carlo simulation. Flood extent data from an extreme historical flash flood event are used to validate the method. The calibration process is based on binary wet-dry reasoning using the Median Absolute Percentage Error metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
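
    A minimal sketch of the Latin Hypercube step is shown below, assuming (purely for illustration) a lognormal distribution for Manning's n; each sampled value would parameterise one HEC-RAS run in the Monte Carlo ensemble.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      def latin_hypercube_lognormal(n_samples, median_n, sigma_log):
          """Latin Hypercube sample of Manning's n from an assumed lognormal
          distribution (one stratified draw per equal-probability bin, shuffled)."""
          u = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples
          rng.shuffle(u)
          return stats.lognorm.ppf(u, s=sigma_log, scale=median_n)

      manning_sets = latin_hypercube_lognormal(500, median_n=0.035, sigma_log=0.25)
      # each sampled n drives one hydraulic simulation; stacking the resulting wet/dry
      # rasters yields the flood inundation probability map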

  17. Global Monthly CO2 Flux Inversion Based on Results of Terrestrial Ecosystem Modeling

    NASA Astrophysics Data System (ADS)

    Deng, F.; Chen, J.; Peters, W.; Krol, M.

    2008-12-01

    Most of our understanding of the sources and sinks of atmospheric CO2 has come from inverse studies of atmospheric CO2 concentration measurements. However, the number of currently available observation stations and our ability to simulate the diurnal evolution of the planetary boundary layer over continental regions essentially limit the number of regions that can be reliably inverted globally, especially over continental areas. To overcome these restrictions, a nested inverse modeling system was developed based on Bayesian principles to estimate carbon fluxes for 30 regions in North America and 20 regions covering the rest of the globe. Inverse modeling was conducted in monthly steps using CO2 concentration measurements over 5 years (2000-2005) with the following two models: (a) an atmospheric transport model (TM5) is used to generate the transport matrix, in which the diurnal variation of atmospheric CO2 concentration is considered so as to make full use of the afternoon-hour average CO2 concentration measurements at continental sites; and (b) a process-based terrestrial ecosystem model (BEPS) is used to produce hourly carbon fluxes that serve as the background (prior) of our inversion, which mitigates our inability to solve the inverse problem at high resolution. We will present our recent results achieved through a combination of bottom-up modeling with BEPS and top-down modeling based on TM5 driven by offline meteorological fields generated by the European Centre for Medium-Range Weather Forecasts (ECMWF).
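
    The monthly flux estimation can be summarised by the standard Bayesian synthesis-inversion update for a linear transport model y = Hx + e; the sketch below uses random placeholder matrices rather than the TM5 transport operator and BEPS prior fluxes of the study.

      import numpy as np

      rng = np.random.default_rng(0)

      def bayesian_flux_inversion(H, y, x_prior, S_prior, S_obs):
          """Posterior mean and covariance for the linear Gaussian model y = H x + e
          (standard Bayesian synthesis inversion); placeholder matrices only."""
          K = S_prior @ H.T @ np.linalg.inv(H @ S_prior @ H.T + S_obs)   # gain matrix
          x_post = x_prior + K @ (y - H @ x_prior)                        # posterior mean fluxes
          S_post = (np.eye(len(x_prior)) - K @ H) @ S_prior               # posterior covariance
          return x_post, S_post

      n_regions, n_obs = 50, 200
      H = rng.normal(size=(n_obs, n_regions))            # placeholder transport matrix
      x_true = rng.normal(size=n_regions)
      y = H @ x_true + rng.normal(scale=0.5, size=n_obs)
      x_post, S_post = bayesian_flux_inversion(H, y,
                                                x_prior=np.zeros(n_regions),
                                                S_prior=np.eye(n_regions),
                                                S_obs=0.25 * np.eye(n_obs))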

  18. Probabilistic Common Spatial Patterns for Multichannel EEG Analysis

    PubMed Central

    Chen, Zhe; Gao, Xiaorong; Li, Yuanqing; Brown, Emery N.; Gao, Shangkai

    2015-01-01

    Common spatial patterns (CSP) is a well-known spatial filtering algorithm for multichannel electroencephalogram (EEG) analysis. In this paper, we cast the CSP algorithm in a probabilistic modeling setting. Specifically, probabilistic CSP (P-CSP) is proposed as a generic EEG spatio-temporal modeling framework that subsumes the CSP and regularized CSP algorithms. The proposed framework enables us to resolve the overfitting issue of CSP in a principled manner. We derive statistical inference algorithms that can alleviate the issue of local optima. In particular, an efficient algorithm based on eigendecomposition is developed for maximum a posteriori (MAP) estimation in the case of isotropic noise. For more general cases, a variational algorithm is developed for group-wise sparse Bayesian learning for the P-CSP model and for automatically determining the model size. The two proposed algorithms are validated on a simulated data set. Their practical efficacy is also demonstrated by successful applications to single-trial classifications of three motor imagery EEG data sets and by the spatio-temporal pattern analysis of one EEG data set recorded in a Stroop color naming task. PMID:26005228
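
    For reference, the classical (non-probabilistic) CSP baseline that P-CSP generalizes can be written as a generalized eigendecomposition of the two class covariance matrices; the sketch below uses random data for shape checking only and is not the authors' Bayesian formulation.

      import numpy as np
      from scipy.linalg import eigh

      def csp_filters(trials_a, trials_b, n_pairs=3):
          """Classical CSP spatial filters from the generalized eigendecomposition of
          the two class covariance matrices."""
          def mean_cov(trials):                     # trials: (n_trials, n_channels, n_samples)
              return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

          ca, cb = mean_cov(trials_a), mean_cov(trials_b)
          vals, vecs = eigh(ca, ca + cb)            # eigenvalues sorted in ascending order
          picks = np.r_[np.arange(n_pairs), np.arange(len(vals) - n_pairs, len(vals))]
          return vecs[:, picks]                     # (n_channels, 2 * n_pairs) filter matrix

      def csp_features(trial, W):
          """Log-variance features of one spatially filtered trial."""
          return np.log(np.var(W.T @ trial, axis=1))

      rng = np.random.default_rng(0)
      trials_a = rng.normal(size=(30, 8, 256))      # 30 trials, 8 channels, 256 samples
      trials_b = rng.normal(size=(30, 8, 256))
      W = csp_filters(trials_a, trials_b)
      features = np.array([csp_features(t, W) for t in trials_a])   # (30, 6)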

  19. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is then used to obtain a statistical estimate of the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
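
    The core Monte Carlo step can be sketched with an invented limit-state function (capacity minus demand); the variables and distributions below are placeholders, not the Ares I PDA models.

      import numpy as np

      rng = np.random.default_rng(1)

      def failure_probability(n_samples=100_000):
          """Monte Carlo estimate of a failure probability for a toy limit state
          (capacity minus demand); variables and distributions are invented."""
          capacity = rng.normal(loc=1.0, scale=0.08, size=n_samples)        # e.g. structural margin
          demand = rng.lognormal(mean=-0.3, sigma=0.15, size=n_samples)     # e.g. load parameter
          failures = demand >= capacity
          p_fail = failures.mean()
          std_err = np.sqrt(p_fail * (1.0 - p_fail) / n_samples)            # Monte Carlo standard error
          return p_fail, std_err

      p, se = failure_probability()
      print(f"P(failure) = {p:.4f} +/- {se:.4f}")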

  20. Probabilistic structural analysis methods of hot engine structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Hopkins, D. A.

    1989-01-01

    Development of probabilistic structural analysis methods for hot engine structures is a major activity at Lewis Research Center. Recent activities have focused on extending the methods to account for the combined effect of uncertainties in several factors on the structural response. This paper briefly describes recent progress on composite load spectra models, probabilistic finite element structural analysis, and probabilistic strength degradation modeling. Progress is described in terms of fundamental concepts, computer code development, and representative numerical results.

  1. High Throughput Exposure Prioritization of Chemicals Using a Screening-Level Probabilistic SHEDS-Lite Exposure Model

    EPA Science Inventory

    These novel modeling approaches for screening, evaluating and classifying chemicals based on the potential for biologically-relevant human exposures will inform toxicity testing and prioritization for chemical risk assessment. The new modeling approach is derived from the Stocha...

  2. Representing Causation

    ERIC Educational Resources Information Center

    Wolff, Phillip

    2007-01-01

    The dynamics model, which is based on L. Talmy's (1988) theory of force dynamics, characterizes causation as a pattern of forces and a position vector. In contrast to counterfactual and probabilistic models, the dynamics model naturally distinguishes between different cause-related concepts and explains the induction of causal relationships from…

  3. USEPA SHEDS MODEL: METHODOLOGY FOR EXPOSURE ASSESSMENT FOR WOOD PRESERVATIVES

    EPA Science Inventory

    A physically-based, Monte Carlo probabilistic model (SHEDS-Wood: Stochastic Human Exposure and Dose Simulation model for wood preservatives) has been applied to assess the exposure and dose of children to arsenic (As) and chromium (Cr) from contact with chromated copper arsenat...

  4. A Synthetic Study on the Resolution of 2D Elastic Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Cui, C.; Wang, Y.

    2017-12-01

    Gradient-based full waveform inversion is an effective method in seismic studies: it makes full use of the information contained in seismic records and can provide a more accurate model of the Earth's interior at a relatively low computational cost. However, the strong non-linearity of the problem makes it difficult to assess its resolution. Synthetic inversions are therefore helpful before an inversion based on real data is performed. The checkerboard test is a commonly used method, but it is not always reliable because of the significant difference between a checkerboard and the true model. Our study aims to provide a basic understanding of the resolution of 2D elastic inversion by examining three main factors that affect the inversion result: 1. the structural characteristics of the model; 2. the level of similarity between the initial model and the true model; 3. the spatial distribution of sources and receivers. We performed about 150 synthetic inversions to demonstrate how each factor contributes to the quality of the result, and compared the inversion results with those obtained from checkerboard tests. The study can serve as a useful reference for assessing the resolution of an inversion in addition to regular checkerboard tests, or for determining whether the seismic data of a specific region are sufficient for a successful inversion.

  5. Knowledge-based probabilistic representations of branching ratios in chemical networks: The case of dissociative recombinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

    Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the only information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbon ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.

  6. Knowledge-based probabilistic representations of branching ratios in chemical networks: the case of dissociative recombinations.

    PubMed

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

    2010-10-07

    Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the only information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbon ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.
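
    A minimal sketch of the Dirichlet idea: sampled branching-ratio vectors are automatically non-negative and sum to one, so incomplete measurements can be propagated into radical production rates. The channels and concentration parameters below are invented; the paper uses nested Dirichlet-type distributions tailored to the partial information available for each ion.

      import numpy as np

      rng = np.random.default_rng(7)

      # e.g. three hypothetical recombination channels with loosely constrained ratios;
      # every draw is non-negative and sums to one by construction
      draws = rng.dirichlet([4.0, 2.0, 1.0], size=10_000)
      mean_ratios = draws.mean(axis=0)
      ci_90 = np.percentile(draws, [5, 95], axis=0)   # spread to propagate into production rates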

  7. The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals: Residential Module Version 4: User Guide, June 2012

    EPA Pesticide Factsheets

    SHEDS-Multimedia is EPA's premier physically-based, probabilistic model that can simulate cumulative or aggregate exposures for a population across a variety of multimedia, multipathway environmental chemicals.

  8. The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals: Residential Module Version 4: Technical Manual, May 2012

    EPA Pesticide Factsheets

    SHEDS-Multimedia is EPA's premier physically-based, probabilistic model that can simulate cumulative or aggregate exposures for a population across a variety of multimedia, multipathway environmental chemicals.

  9. The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals: Dietary Module Version 1: User Guide, June 2012

    EPA Pesticide Factsheets

    SHEDS-Multimedia is EPA's premier physically-based, probabilistic model that can simulate cumulative or aggregate exposures for a population across a variety of multimedia, multipathway environmental chemicals.

  10. Probabilistic Structural Health Monitoring of the Orbiter Wing Leading Edge

    NASA Technical Reports Server (NTRS)

    Yap, Keng C.; Macias, Jesus; Kaouk, Mohamed; Gafka, Tammy L.; Kerr, Justin H.

    2011-01-01

    A structural health monitoring (SHM) system can contribute to the risk management of a structure operating under hazardous conditions. An example is the Wing Leading Edge Impact Detection System (WLEIDS) that monitors the debris hazards to the Space Shuttle Orbiter's Reinforced Carbon-Carbon (RCC) panels. Since Return-to-Flight (RTF) after the Columbia accident, WLEIDS was developed and subsequently deployed on board the Orbiter to detect ascent and on-orbit debris impacts, so as to support the assessment of wing leading edge structural integrity prior to Orbiter re-entry. As SHM is inherently an inverse problem, the analyses involved, including those performed for WLEIDS, tend to be associated with significant uncertainty. The use of probabilistic approaches to handle the uncertainty has resulted in the successful implementation of many development and application milestones.

  11. Surrogate modeling of joint flood risk across coastal watersheds

    NASA Astrophysics Data System (ADS)

    Bass, Benjamin; Bedient, Philip

    2018-03-01

    This study discusses the development and performance of a rapid prediction system capable of representing the joint rainfall-runoff and storm surge flood response of tropical cyclones (TCs) for probabilistic risk analysis. Due to the computational demand required for accurately representing storm surge with the high-fidelity ADvanced CIRCulation (ADCIRC) hydrodynamic model and its coupling with additional numerical models to represent rainfall-runoff, a surrogate or statistical model was trained to represent the relationship between hurricane wind- and pressure-field characteristics and their peak joint flood response typically determined from physics-based numerical models. This builds upon past studies that have only evaluated surrogate models for predicting peak surge, and provides the first system capable of probabilistically representing joint flood levels from TCs. The utility of this joint flood prediction system is then demonstrated by improving upon probabilistic TC flood risk products, which currently account for storm surge but do not take into account TC-associated rainfall-runoff. Results demonstrate the source apportionment of rainfall-runoff versus storm surge and highlight that slight increases in flood risk levels may occur due to the interaction between rainfall-runoff and storm surge as compared to the Federal Emergency Management Agency's (FEMA's) current practices.
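
    A surrogate of this kind can be sketched with a Gaussian-process regressor mapping storm descriptors to a peak flood level; the descriptors, training data, and kernel below are synthetic placeholders rather than the ADCIRC-based training set of the study.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(3)

      # X: storm descriptors (e.g. pressure deficit, radius of maximum winds, forward
      # speed, landfall position); y: peak flood level at one location of interest.
      # Synthetic placeholders, not ADCIRC/rainfall-runoff model output.
      X = rng.uniform(size=(60, 4))
      y = 2.0 + 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * rng.normal(size=60)

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=np.ones(4)),
                                    normalize_y=True, alpha=1e-3)
      gp.fit(X, y)

      # the cheap surrogate can now be evaluated for thousands of probabilistic TC
      # scenarios instead of rerunning the coupled numerical models
      mean, std = gp.predict(rng.uniform(size=(5, 4)), return_std=True)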

  12. Predictability of short-range forecasting: a multimodel approach

    NASA Astrophysics Data System (ADS)

    García-Moya, Jose-Antonio; Callado, Alfons; Escribà, Pau; Santos, Carlos; Santos-Muñoz, Daniel; Simarro, Juan

    2011-05-01

    Numerical weather prediction (NWP) models (including mesoscale) have limitations when it comes to dealing with severe weather events because extreme weather is highly unpredictable, even in the short range. A probabilistic forecast based on an ensemble of slightly different model runs may help to address this issue. Among other ensemble techniques, Multimodel ensemble prediction systems (EPSs) are proving to be useful for adding probabilistic value to mesoscale deterministic models. A Multimodel Short Range Ensemble Prediction System (SREPS) focused on forecasting the weather up to 72 h has been developed at the Spanish Meteorological Service (AEMET). The system uses five different limited area models (LAMs), namely HIRLAM (HIRLAM Consortium), HRM (DWD), the UM (UKMO), MM5 (PSU/NCAR) and COSMO (COSMO Consortium). These models run with initial and boundary conditions provided by five different global deterministic models, namely IFS (ECMWF), UM (UKMO), GME (DWD), GFS (NCEP) and CMC (MSC). AEMET-SREPS (AE) validation on the large-scale flow, using ECMWF analysis, shows a consistent and slightly underdispersive system. For surface parameters, the system shows high skill forecasting binary events. 24-h precipitation probabilistic forecasts are verified using an up-scaling grid of observations from European high-resolution precipitation networks, and compared with ECMWF-EPS (EC).

  13. Integrated Risk-Informed Decision-Making for an ALMR PRISM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhlheim, Michael David; Belles, Randy; Denning, Richard S.

    Decision-making is the process of identifying decision alternatives, assessing those alternatives based on predefined metrics, selecting an alternative (i.e., making a decision), and then implementing that alternative. The generation of decisions requires a structured, coherent decision-making process. The overall objective for this work is that the generalized framework is adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks rationally, consistently, and reliably. Risk-informed decision-making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The probabilistic portion of the decision-making engine of the supervisory control system is based on the control actions associated with an ALMR PRISM. Newly incorporated into the probabilistic models are the prognostic/diagnostic models developed by Pacific Northwest National Laboratory. These allow decisions to incorporate the health of components into the decision-making process. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic portion of the decision-making engine uses thermal-hydraulic modeling and components for an advanced liquid-metal reactor Power Reactor Inherently Safe Module. The deterministic multi-attribute decision-making framework uses various sensor data (e.g., reactor outlet temperature, steam generator drum level) and calculates its position within the challenge state, its trajectory, and its margin within the controllable domain, using utility functions to evaluate current and projected plant state space for different control decisions. The metrics that are evaluated are based on reactor trip set points. The integration of the deterministic calculations using multi-physics analyses and probabilistic safety calculations allows for the examination and quantification of margin recovery strategies. It also provides validation of the control options identified from the probabilistic assessment; that is, the thermal-hydraulics analyses are used to validate the control options identified from the probabilistic assessment. Future work includes evaluating other possible metrics and computational efficiencies, and developing a user interface to mimic display panels at a modern nuclear power plant.

  14. Probabilistic Modeling of Aircraft Trajectories for Dynamic Separation Volumes

    NASA Technical Reports Server (NTRS)

    Lewis, Timothy A.

    2016-01-01

    With a proliferation of new and unconventional vehicles and operations expected in the future, the ab initio airspace design will require new approaches to trajectory prediction for separation assurance and other air traffic management functions. This paper presents an approach to probabilistic modeling of the trajectory of an aircraft when its intent is unknown. The approach uses a set of feature functions to constrain a maximum entropy probability distribution based on a set of observed aircraft trajectories. This model can be used to sample new aircraft trajectories to form an ensemble reflecting the variability in an aircraft's intent. The model learning process ensures that the variability in this ensemble reflects the behavior observed in the original data set. Computational examples are presented.

  15. Estimating rates of local extinction and colonization in colonial species and an extension to the metapopulation and community levels

    USGS Publications Warehouse

    Barbraud, C.; Nichols, J.D.; Hines, J.E.; Hafner, H.

    2003-01-01

    Coloniality has mainly been studied from an evolutionary perspective, but relatively few studies have developed methods for modelling colony dynamics. Changes in number of colonies over time provide a useful tool for predicting and evaluating the responses of colonial species to management and to environmental disturbance. Probabilistic Markov process models have been recently used to estimate colony site dynamics using presence-absence data when all colonies are detected in sampling efforts. Here, we define and develop two general approaches for the modelling and analysis of colony dynamics for sampling situations in which all colonies are, and are not, detected. For both approaches, we develop a general probabilistic model for the data and then constrain model parameters based on various hypotheses about colony dynamics. We use Akaike's Information Criterion (AIC) to assess the adequacy of the constrained models. The models are parameterised with conditional probabilities of local colony site extinction and colonization. Presence-absence data arising from Pollock's robust capture-recapture design provide the basis for obtaining unbiased estimates of extinction, colonization, and detection probabilities when not all colonies are detected. This second approach should be particularly useful in situations where detection probabilities are heterogeneous among colony sites. The general methodology is illustrated using presence-absence data on two species of herons (Purple Heron, Ardea purpurea and Grey Heron, Ardea cinerea). Estimates of the extinction and colonization rates showed interspecific differences and strong temporal and spatial variations. We were also able to test specific predictions about colony dynamics based on ideas about habitat change and metapopulation dynamics. We recommend estimators based on probabilistic modelling for future work on colony dynamics. We also believe that this methodological framework has wide application to problems in animal ecology concerning metapopulation and community dynamics.

  16. Finite-fault source inversion using adjoint methods in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-04-01

    Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrates how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.

  17. Finite-fault source inversion using adjoint methods in 3-D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-07-01

    Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrates how dense coverage improves the inference of peak-slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.

  18. Global integrated drought monitoring and prediction system

    PubMed Central

    Hao, Zengchao; AghaKouchak, Amir; Nakhjiri, Navid; Farahmand, Alireza

    2014-01-01

    Drought is by far the most costly natural disaster that can lead to widespread impacts, including water and food crises. Here we present data sets available from the Global Integrated Drought Monitoring and Prediction System (GIDMaPS), which provides drought information based on multiple drought indicators. The system provides meteorological and agricultural drought information based on multiple satellite- and model-based precipitation and soil moisture data sets. GIDMaPS includes a near real-time monitoring component and a seasonal probabilistic prediction module. The data sets include historical drought severity data from the monitoring component, and probabilistic seasonal forecasts from the prediction module. The probabilistic forecasts provide essential information for early warning, taking preventive measures, and planning mitigation strategies. GIDMaPS data sets are a significant extension to current capabilities and data sets for global drought assessment and early warning. The presented data sets would be instrumental in reducing drought impacts especially in developing countries. Our results indicate that GIDMaPS data sets reliably captured several major droughts from across the globe. PMID:25977759

  19. Global integrated drought monitoring and prediction system.

    PubMed

    Hao, Zengchao; AghaKouchak, Amir; Nakhjiri, Navid; Farahmand, Alireza

    2014-01-01

    Drought is by far the most costly natural disaster that can lead to widespread impacts, including water and food crises. Here we present data sets available from the Global Integrated Drought Monitoring and Prediction System (GIDMaPS), which provides drought information based on multiple drought indicators. The system provides meteorological and agricultural drought information based on multiple satellite- and model-based precipitation and soil moisture data sets. GIDMaPS includes a near real-time monitoring component and a seasonal probabilistic prediction module. The data sets include historical drought severity data from the monitoring component, and probabilistic seasonal forecasts from the prediction module. The probabilistic forecasts provide essential information for early warning, taking preventive measures, and planning mitigation strategies. GIDMaPS data sets are a significant extension to current capabilities and data sets for global drought assessment and early warning. The presented data sets would be instrumental in reducing drought impacts especially in developing countries. Our results indicate that GIDMaPS data sets reliably captured several major droughts from across the globe.

  20. Computation of probabilistic hazard maps and source parameter estimation for volcanic ash transport and dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu

    Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system such as the windfield are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative, and often over- or underestimate the extent of a plume. This paper presents an end-to-end framework for generating a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed input parameter probability distributions, and a probabilistic spatio-temporal estimate of ash presence, are computed.

  1. Full waveform inversion using envelope-based global correlation norm

    NASA Astrophysics Data System (ADS)

    Oh, Ju-Won; Alkhalifah, Tariq

    2018-05-01

    To increase the feasibility of full waveform inversion on real data, we suggest a new objective function, which is defined as the global correlation of the envelopes of modelled and observed data. The envelope-based global correlation norm has the advantage of the envelope inversion that generates artificial low-frequency information, which provides the possibility to recover long-wavelength structure in an early stage. In addition, the envelope-based global correlation norm maintains the advantage of the global correlation norm, which reduces the sensitivity of the misfit to amplitude errors so that the performance of inversion on real data can be enhanced when the exact source wavelet is not available and more complex physics are ignored. Through the synthetic example for 2-D SEG/EAGE overthrust model with inaccurate source wavelet, we compare the performance of four different approaches, which are the least-squares waveform inversion, least-squares envelope inversion, global correlation norm and envelope-based global correlation norm. Finally, we apply the envelope-based global correlation norm on the 3-D Ocean Bottom Cable (OBC) data from the North Sea. The envelope-based global correlation norm captures the strong reflections from the high-velocity caprock and generates artificial low-frequency reflection energy that helps us recover long-wavelength structure of the model domain in the early stages. From this long-wavelength model, the conventional global correlation norm is sequentially applied to invert for higher-resolution features of the model.
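
    The objective function can be sketched directly: compute the envelope of each trace with a Hilbert transform, normalize, and take the negative zero-lag correlation summed over traces. The sketch below omits the adjoint source and gradient computation needed for an actual inversion, and the test traces are synthetic placeholders.

      import numpy as np
      from scipy.signal import hilbert

      def envelope(trace):
          """Instantaneous amplitude (envelope) of a trace via the Hilbert transform."""
          return np.abs(hilbert(trace))

      def envelope_global_correlation_misfit(d_obs, d_syn):
          """Negative normalized zero-lag correlation between observed and modelled
          envelopes, summed over traces; minimised when the envelopes are parallel."""
          misfit = 0.0
          for obs, syn in zip(d_obs, d_syn):
              e_obs = envelope(obs); e_obs /= np.linalg.norm(e_obs)
              e_syn = envelope(syn); e_syn /= np.linalg.norm(e_syn)
              misfit -= np.dot(e_obs, e_syn)
          return misfit

      t = np.linspace(0.0, 4.0, 1000)
      d_obs = [np.sin(2 * np.pi * 5 * t) * np.exp(-(t - 1.5) ** 2)]   # toy observed trace
      d_syn = [np.sin(2 * np.pi * 5 * t) * np.exp(-(t - 1.7) ** 2)]   # toy modelled trace
      print(envelope_global_correlation_misfit(d_obs, d_syn))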

  2. Application of Probabilistic Analysis to Aircraft Impact Dynamics

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.

    2003-01-01

    Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as: geometrically accurate models; human occupant models; and advanced material models to include nonlinear stress-strain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.

  3. Enhancing Retrieval with Hyperlinks: A General Model Based on Propositional Argumentation Systems.

    ERIC Educational Resources Information Center

    Picard, Justin; Savoy, Jacques

    2003-01-01

    Discusses the use of hyperlinks for improving information retrieval on the World Wide Web and proposes a general model for using hyperlinks based on Probabilistic Argumentation Systems. Topics include propositional logic, knowledge, and uncertainty; assumptions; using hyperlinks to modify document score and rank; and estimating the popularity of a…

  4. SEASONAL NH3 EMISSIONS FOR THE CONTINENTAL UNITED STATES: INVERSE MODEL ESTIMATION AND EVALUATION

    EPA Science Inventory

    An inverse modeling study has been conducted here to evaluate a prior estimate of seasonal ammonia (NH3) emissions. The prior estimates were based on a previous inverse modeling study and two other bottom-up inventory studies. The results suggest that the prior estim...

  5. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
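
    A one-dimensional sketch of evaluating such an expansion in probabilists' Hermite polynomials is shown below; the coefficients are invented, whereas in the study they are obtained by probabilistic collocation and then linked to the meteorological inputs.

      import numpy as np
      from numpy.polynomial.hermite_e import hermeval

      def evaluate_pce(coeffs, xi):
          """Evaluate a 1-D chaos expansion in probabilists' Hermite polynomials,
          y = sum_k c_k He_k(xi), for standard normal xi. Coefficients are invented."""
          return hermeval(xi, coeffs)

      coeffs = np.array([10.0, 2.0, 0.5])                   # c_0, c_1, c_2
      xi = np.random.default_rng(0).standard_normal(50_000)
      samples = evaluate_pce(coeffs, xi)                    # probabilistic forecast ensemble
      # analytic check: mean = c_0 = 10, variance = sum_{k>=1} k! c_k^2 = 4.0 + 0.5 = 4.5
      print(samples.mean(), samples.var())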

  6. Model based inference from microvascular measurements: Combining experimental measurements and model predictions using a Bayesian probabilistic approach

    PubMed Central

    Rasmussen, Peter M.; Smith, Amy F.; Sakadžić, Sava; Boas, David A.; Pries, Axel R.; Secomb, Timothy W.; Østergaard, Leif

    2017-01-01

    Objective In vivo imaging of the microcirculation and network-oriented modeling have emerged as powerful means of studying microvascular function and understanding its physiological significance. Network-oriented modeling may provide the means of summarizing vast amounts of data produced by high-throughput imaging techniques in terms of key, physiological indices. To estimate such indices with sufficient certainty, however, network-oriented analysis must be robust to the inevitable presence of uncertainty due to measurement errors as well as model errors. Methods We propose the Bayesian probabilistic data analysis framework as a means of integrating experimental measurements and network model simulations into a combined and statistically coherent analysis. The framework naturally handles noisy measurements and provides posterior distributions of model parameters as well as physiological indices associated with uncertainty. Results We applied the analysis framework to experimental data from three rat mesentery networks and one mouse brain cortex network. We inferred distributions for more than five hundred unknown pressure and hematocrit boundary conditions. Model predictions were consistent with previous analyses, and remained robust when measurements were omitted from model calibration. Conclusion Our Bayesian probabilistic approach may be suitable for optimizing data acquisition and for analyzing and reporting large datasets acquired as part of microvascular imaging studies. PMID:27987383

  7. Control of Stochastic Master Equation Models of Genetic Regulatory Networks by Approximating Their Average Behavior

    NASA Astrophysics Data System (ADS)

    Umut Caglar, Mehmet; Pal, Ranadip

    2010-10-01

    The central dogma of molecular biology states that "information cannot be transferred back from protein to either protein or nucleic acid." However, this assumption is not exactly correct in most cases. There are many feedback loops and interactions between different levels of the system. These interactions are hard to analyze due to the lack of data at the cellular level and the probabilistic nature of the interactions. Probabilistic models such as the Stochastic Master Equation (SME) or deterministic models such as differential equations (DE) can be used to analyze these types of interactions. SME models based on the chemical master equation (CME) can provide a detailed representation of the genetic regulatory system, but their use is restricted by large data requirements and the computational cost of the calculations. Differential equation models, on the other hand, have low computational cost and are much better suited to deriving control procedures for the system, but they are not adequate for investigating the probabilistic nature of the interactions. In this work the success of the mapping between SME and DE models is analyzed, and the performance of a control policy generated with the DE model is examined with respect to the SME model. Index terms: Stochastic Master Equation models, Differential Equation models, Control Policy Design, Systems Biology.
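
    The SME-versus-DE comparison can be illustrated with the simplest possible example, constitutive production and first-order degradation of one species: a Gillespie stochastic simulation of the master-equation dynamics next to its deterministic rate-equation counterpart. This toy system is only a stand-in for the genetic regulatory networks discussed above.

      import numpy as np

      rng = np.random.default_rng(0)

      def gillespie_birth_death(k_prod=10.0, k_deg=0.5, x0=0, t_end=20.0):
          """Gillespie stochastic simulation of constitutive production and first-order
          degradation of one species; a toy stand-in for the master-equation models."""
          t, x = 0.0, x0
          times, counts = [t], [x]
          while t < t_end:
              a_prod, a_deg = k_prod, k_deg * x
              a_total = a_prod + a_deg
              t += rng.exponential(1.0 / a_total)                  # time to next reaction
              x += 1 if rng.uniform() < a_prod / a_total else -1   # which reaction fires
              times.append(t); counts.append(x)
          return np.array(times), np.array(counts)

      # deterministic (differential equation) counterpart: dx/dt = k_prod - k_deg * x,
      # whose solution tends to the steady state k_prod / k_deg = 20 molecules
      t_grid = np.linspace(0.0, 20.0, 200)
      x_ode = 20.0 * (1.0 - np.exp(-0.5 * t_grid))

      t_ssa, x_ssa = gillespie_birth_death()                       # one stochastic trajectory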

  8. A geomorphic approach to 100-year floodplain mapping for the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Jafarzadegan, Keighobad; Merwade, Venkatesh; Saksena, Siddharth

    2018-06-01

    Floodplain mapping using hydrodynamic models is difficult in data-scarce regions. Additionally, using hydrodynamic models to map floodplains over large stream networks can be computationally challenging. Some of these limitations of floodplain mapping using hydrodynamic modeling can be overcome by developing computationally efficient statistical methods to identify floodplains in large and ungauged watersheds using publicly available data. This paper proposes a geomorphic model to generate probabilistic 100-year floodplain maps for the Conterminous United States (CONUS). The proposed model first categorizes the watersheds in the CONUS into three classes based on the height of the water surface corresponding to the 100-year flood from the streambed. Next, the probability that any watershed in the CONUS belongs to one of these three classes is computed through supervised classification using watershed characteristics related to topography, hydrography, land use and climate. The result of this classification is then fed into a probabilistic threshold binary classifier (PTBC) to generate the probabilistic 100-year floodplain maps. The supervised classification algorithm is trained using the 100-year Flood Insurance Rate Maps (FIRMs) from the U.S. Federal Emergency Management Agency (FEMA). FEMA FIRMs are also used to validate the performance of the proposed model in areas not included in the training. Additionally, HEC-RAS-generated flood inundation extents are used to validate the model performance at fifteen sites that lack FEMA maps. Validation results show that the probabilistic 100-year floodplain maps generated by the proposed model match well with both the FEMA and HEC-RAS maps. On average, the error of predicted flood extents is around 14% across the CONUS. The high accuracy of the validation results shows the reliability of the geomorphic model as an alternative approach for fast and cost-effective delineation of 100-year floodplains for the CONUS.

  9. Simultaneous inversion of seismic velocity and moment tensor using elastic-waveform inversion of microseismic data: Application to the Aneth CO2-EOR field

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Huang, L.

    2017-12-01

    Moment tensors are key parameters for characterizing CO2-injection-induced microseismic events. Elastic-waveform inversion has the potential to provide accurate estimates of moment tensors. Microseismic waveforms contain information about the source moment tensors and the wave propagation velocities along the wavepaths. We develop an elastic-waveform inversion method to jointly invert for the seismic velocity model and the moment tensors. We first use our adaptive moment-tensor joint inversion method to estimate moment tensors of microseismic events. Our adaptive moment-tensor inversion method jointly inverts multiple microseismic events with similar waveforms within a cluster to reduce inversion uncertainty for microseismic data recorded using a single borehole geophone array. We use this inversion result as the initial model for our elastic-waveform inversion to minimize the cross-correlation-based data misfit between observed and synthetic data. We verify our method using synthetic microseismic data and obtain improved results for both the moment tensors and the seismic velocity model. We apply our new inversion method to microseismic data acquired at a CO2-enhanced oil recovery field in Aneth, Utah, using a single borehole geophone array. The results demonstrate that our new inversion method significantly reduces the data misfit compared to the conventional ray-theory-based moment-tensor inversion.

  10. Analysis of the French insurance market exposure to floods: a stochastic model combining river overflow and surface runoff

    NASA Astrophysics Data System (ADS)

    Moncoulon, D.; Labat, D.; Ardon, J.; Onfroy, T.; Leblois, E.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.

    2013-07-01

    The analysis of flood exposure at the national scale for the French insurance market must combine the generation of a probabilistic event set of all possible, but not yet observed, flood situations with hazard and damage modeling. In this study, hazard and damage models are calibrated against a 1995-2012 historical event set, both for hazard results (river flow, flooded areas) and for loss estimations. Thus, uncertainties in the deterministic estimation of a single event's loss are known before a probabilistic event set is simulated. To account for at least 90% of the insured flood losses, the probabilistic event set must combine river overflow (small and large catchments) with surface runoff due to heavy rainfall on the slopes of the watershed. Indeed, internal studies of the CCR claims database have shown that approximately 45% of the insured flood losses are located inside floodplains and 45% outside; the remaining 10% are due to sea-surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: generation of synthetic river flows based on the historical records of the river gauge network, and generation of synthetic rain fields over small catchments calibrated on the 1958-2010 Météo-France rain database SAFRAN. All events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate flood losses at the national scale for an insurance company (MACIF) and to generate flood areas associated with hazard return periods. The flood maps cover river overflow and surface water runoff. Validation of these maps is conducted by comparison with address-located claim data on a small catchment (downstream Argens).

  11. The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Anderson, John R.

    2005-01-01

    A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…

  12. Long-range hazard assessment of volcanic ash dispersal for a Plinian eruptive scenario at Popocatépetl volcano (Mexico): implications for civil aviation safety

    USGS Publications Warehouse

    Bonasia, Rosanna; Scaini, Chiara; Capra, Lucia; Nathenson, Manuel; Siebe, Claus; Arana-Salinas, Lilia; Folch, Arnau

    2013-01-01

    Popocatépetl is one of Mexico’s most active volcanoes threatening a densely populated area that includes Mexico City with more than 20 million inhabitants. The destructive potential of this volcano is demonstrated by its Late Pleistocene–Holocene eruptive activity, which has been characterized by recurrent Plinian eruptions of large magnitude, the last two of which destroyed human settlements in pre-Hispanic times. Popocatépetl’s reawakening in 1994 produced a crisis that culminated with the evacuation of two villages on the northeastern flank of the volcano. Shortly after, a monitoring system and a civil protection contingency plan based on a hazard zone map were implemented. The current volcanic hazards map considers the potential occurrence of different volcanic phenomena, including pyroclastic density currents and lahars. However, no quantitative assessment of the tephra hazard, especially related to atmospheric dispersal, has been performed. The presence of airborne volcanic ash at low and jet-cruise atmospheric levels compromises the safety of aircraft operations and forces re-routing of aircraft to prevent encounters with volcanic ash clouds. Given the high number of important airports in the surroundings of Popocatépetl volcano and considering the potential threat posed to civil aviation in Mexico and adjacent regions in case of a Plinian eruption, a hazard assessment for tephra dispersal is required. In this work, we present the first probabilistic tephra dispersal hazard assessment for Popocatépetl volcano. We compute probabilistic hazard maps for critical thresholds of airborne ash concentrations at different flight levels, corresponding to the situation defined in Europe during 2010, and still under discussion. Tephra dispersal modelling is performed using the FALL3D numerical model. Probabilistic hazard maps are built for a Plinian eruptive scenario defined on the basis of geological field data for the “Ochre Pumice” Plinian eruption (4965 14C yr BP). FALL3D model input eruptive parameters are constrained through an inversion method carried out with the semi-analytical HAZMAP model and are varied by sampling them using probability density functions. We analyze the influence of seasonal variations on ash dispersal and estimate the average persistence of critical ash concentrations at relevant locations and airports. This study assesses the impact that a Plinian eruption similar to the Ochre Pumice eruption would have on the main airports of Mexico and adjacent areas. The hazard maps presented here can support long-term planning that would help minimize the impacts of such an eruption on civil aviation.

  13. Long-range hazard assessment of volcanic ash dispersal for a Plinian eruptive scenario at Popocatépetl volcano (Mexico): implications for civil aviation safety

    NASA Astrophysics Data System (ADS)

    Bonasia, Rosanna; Scaini, Chiara; Capra, Lucia; Nathenson, Manuel; Siebe, Claus; Arana-Salinas, Lilia; Folch, Arnau

    2014-01-01

    Popocatépetl is one of Mexico's most active volcanoes threatening a densely populated area that includes Mexico City with more than 20 million inhabitants. The destructive potential of this volcano is demonstrated by its Late Pleistocene-Holocene eruptive activity, which has been characterized by recurrent Plinian eruptions of large magnitude, the last two of which destroyed human settlements in pre-Hispanic times. Popocatépetl's reawakening in 1994 produced a crisis that culminated with the evacuation of two villages on the northeastern flank of the volcano. Shortly after, a monitoring system and a civil protection contingency plan based on a hazard zone map were implemented. The current volcanic hazards map considers the potential occurrence of different volcanic phenomena, including pyroclastic density currents and lahars. However, no quantitative assessment of the tephra hazard, especially related to atmospheric dispersal, has been performed. The presence of airborne volcanic ash at low and jet-cruise atmospheric levels compromises the safety of aircraft operations and forces re-routing of aircraft to prevent encounters with volcanic ash clouds. Given the high number of important airports in the surroundings of Popocatépetl volcano and considering the potential threat posed to civil aviation in Mexico and adjacent regions in case of a Plinian eruption, a hazard assessment for tephra dispersal is required. In this work, we present the first probabilistic tephra dispersal hazard assessment for Popocatépetl volcano. We compute probabilistic hazard maps for critical thresholds of airborne ash concentrations at different flight levels, corresponding to the situation defined in Europe during 2010, and still under discussion. Tephra dispersal modelling is performed using the FALL3D numerical model. Probabilistic hazard maps are built for a Plinian eruptive scenario defined on the basis of geological field data for the "Ochre Pumice" Plinian eruption (4965 14C yr BP). FALL3D model input eruptive parameters are constrained through an inversion method carried out with the semi-analytical HAZMAP model and are varied by sampling them using probability density functions. We analyze the influence of seasonal variations on ash dispersal and estimate the average persistence of critical ash concentrations at relevant locations and airports. This study assesses the impact that a Plinian eruption similar to the Ochre Pumice eruption would have on the main airports of Mexico and adjacent areas. The hazard maps presented here can support long-term planning that would help minimize the impacts of such an eruption on civil aviation.

  14. Ultra-low velocity zones beneath the Philippine and Tasman Seas revealed by a trans-dimensional Bayesian waveform inversion

    NASA Astrophysics Data System (ADS)

    Pachhai, Surya; Dettmer, Jan; Tkalčić, Hrvoje

    2015-11-01

    Ultra-low velocity zones (ULVZs) are small-scale structures in the Earth's lowermost mantle inferred from the analysis of seismological observations. These structures exhibit a strong decrease in compressional (P)-wave velocity, shear (S)-wave velocity, and an increase in density. Quantifying the elastic properties of ULVZs is crucial for understanding their physical origin, which has been hypothesized either as partial melting, iron enrichment, or a combination of the two. Possible disambiguation of these hypotheses can lead to a better understanding of the dynamic processes of the lowermost mantle, such as percolation, stirring and thermochemical convection. To date, ULVZs have been predominantly studied by forward waveform modelling of seismic waves that sample the core-mantle boundary region. However, ULVZ parameters (i.e. velocity, density, and vertical and lateral extent) obtained through forward modelling are poorly constrained because inferring Earth structure from seismic observations is a non-linear inverse problem with inherent non-uniqueness. To address these issues, we developed a trans-dimensional hierarchical Bayesian inversion that enables rigorous estimation of ULVZ parameter values and their uncertainties, including the effects of model selection. The model selection includes treating the number of layers and the vertical extent of the ULVZ as unknowns. The posterior probability density (solution to the inverse problem) of the ULVZ parameters is estimated by reversible jump Markov chain Monte Carlo sampling that employs parallel tempering to improve efficiency/convergence. First, we apply our method to study the resolution of complex ULVZ structure (including gradually varying structure) by probabilistically inverting simulated noisy waveforms. Then, two data sets sampling the CMB beneath the Philippine and Tasman Seas are considered in the inversion. Our results indicate that both ULVZs are more complex than previously suggested. For the Philippine Sea data, we find a strong decrease in S-wave velocity, which indicates the presence of iron-rich material, although this result is accompanied by larger parameter uncertainties than in a previous study. For the Tasman Sea data, our analysis yields a well-constrained S-wave velocity that gradually decreases with depth. We conclude that this ULVZ represents a partial melt of iron-enriched material with higher melt content near its bottom.

  15. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system structural components

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.

    1987-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
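
    The cumulative probability of exceedance and confidence bounds described above can be illustrated with a small Monte Carlo sketch; the lognormal load and the knockdown factor below are invented stand-ins, not the PSAM stochastic models:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stochastic inputs: load scatter and a geometry/boundary-condition factor.
    n = 20_000
    load = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)   # MPa
    knockdown = rng.normal(loc=1.0, scale=0.05, size=n)
    max_stress = load * knockdown

    # Probability of exceedance curve: P(max_stress > s) over a grid of stress levels s.
    levels = np.linspace(200.0, 450.0, 26)
    p_exceed = np.array([(max_stress > s).mean() for s in levels])

    # Binomial standard error gives rough 95% confidence bounds on each point.
    se = np.sqrt(p_exceed * (1.0 - p_exceed) / n)
    lower, upper = p_exceed - 1.96 * se, p_exceed + 1.96 * se

    for s, p, lo, hi in zip(levels[::5], p_exceed[::5], lower[::5], upper[::5]):
        print(f"P(stress > {s:5.1f} MPa) = {p:.3f}  [{lo:.3f}, {hi:.3f}]")
    ```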

  16. Probabilistic Structural Analysis Methods for select space propulsion system structural components (PSAM)

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.

    1988-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  17. The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals: Dietary Module Version 1: Quick Start Guide, May 2012

    EPA Pesticide Factsheets

    SHEDS - Multimedia is EPA's premier physically-based, probabilistic model that can simulate cumulative or aggregate exposures for a population across a variety of multimedia, multipathway environmental chemicals.

  18. The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals: Residential Module Version 4: Quick Start Guide, April 2012

    EPA Pesticide Factsheets

    SHEDS - Multimedia is EPA's premier physically-based, probabilistic model that can simulate cumulative or aggregate exposures for a population across a variety of multimedia, multipathway environmental chemicals.

  19. Probabilistic machine learning and artificial intelligence.

    PubMed

    Ghahramani, Zoubin

    2015-05-28

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  20. Probabilistic machine learning and artificial intelligence

    NASA Astrophysics Data System (ADS)

    Ghahramani, Zoubin

    2015-05-01

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  1. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter and the effect on model predictions is analyzed.

  2. Using a probabilistic approach in an ecological risk assessment simulation tool: test case for depleted uranium (DU).

    PubMed

    Fan, Ming; Thongsri, Tepwitoon; Axe, Lisa; Tyson, Trevor A

    2005-06-01

    A probabilistic approach was applied in an ecological risk assessment (ERA) to characterize risk and address uncertainty employing Monte Carlo simulations for assessing parameter and risk probabilistic distributions. This simulation tool (ERA) includes a Windows-based interface, an interactive and modifiable database management system (DBMS) that addresses a food web at trophic levels, and a comprehensive evaluation of exposure pathways. To illustrate this model, ecological risks from depleted uranium (DU) exposure at the US Army Yuma Proving Ground (YPG) and Aberdeen Proving Ground (APG) were assessed and characterized. Probabilistic distributions showed that at YPG, a reduction in plant root weight is considered likely to occur (98% likelihood) from exposure to DU; for most terrestrial animals, likelihood for adverse reproduction effects ranges from 0.1% to 44%. However, for the lesser long-nosed bat, the effects are expected to occur (>99% likelihood) through the reduction in size and weight of offspring. Based on available DU data for the firing range at APG, DU uptake will not likely affect survival of aquatic plants and animals (<0.1% likelihood). Based on field and laboratory studies conducted at APG and YPG on pocket mice, kangaroo rat, white-throated woodrat, deer, and milfoil, body burden concentrations observed fall into the distributions simulated at both sites.

  3. The MIT Integrated Global System Model: A facility for Assessing and Communicating Climate Change Uncertainty (Invited)

    NASA Astrophysics Data System (ADS)

    Prinn, R. G.

    2013-12-01

    The world is facing major challenges that create tensions between human development and environmental sustenance. In facing these challenges, computer models are invaluable tools for addressing the need for probabilistic approaches to forecasting. To illustrate this, I use the MIT Integrated Global System Model framework (IGSM; http://globalchange.mit.edu). The IGSM consists of a set of coupled sub-models of global economic and technological development and resultant emissions, and physical, dynamical and chemical processes in the atmosphere, land, ocean and ecosystems (natural and managed). Some of the sub-models have both complex and simplified versions available, with the choice of which version to use being guided by the questions being addressed. Some sub-models (e.g. urban air pollution) are reduced forms of complex ones created by probabilistic collocation with polynomial chaos bases. Given the significant uncertainties in the model components, it is highly desirable that forecasts be probabilistic. We achieve this by running 400-member ensembles (Latin hypercube sampling) with different choices for key uncertain variables and processes within the human and natural system model components (pdfs of inputs estimated by model-observation comparisons, literature surveys, or expert elicitation). The IGSM has recently been used for probabilistic forecasts of climate, each using 400-member ensembles: one ensemble assumes no explicit climate mitigation policy and others assume increasingly stringent policies involving stabilization of greenhouse gases at various levels. These forecasts indicate clearly that the greatest effect of these policies is to lower the probability of extreme changes. The value of such probability analyses for policy decision-making lies in their ability to compare relative (not just absolute) risks of various policies, which are less affected by the earth system model uncertainties. Given the uncertainties in forecasts, it is also clear that we need to evaluate policies based on their ability to lower risk, and to re-evaluate decisions over time as new knowledge is gained. Reference: R. G. Prinn, Development and Application of Earth System Models, Proceedings, National Academy of Science, June 15, 2012, http://www.pnas.org/cgi/doi/10.1073/pnas.1107470109.
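
    A minimal sketch of how a 400-member Latin hypercube ensemble over uncertain inputs can be drawn and propagated; the parameter names, ranges, and toy response function below are illustrative assumptions, not the IGSM's:

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Illustrative uncertain inputs (not the actual IGSM parameters):
    names = ["climate_sensitivity", "aerosol_forcing", "ocean_uptake"]
    lower = np.array([1.5, -1.5, 0.5])
    upper = np.array([6.0, -0.2, 2.5])

    sampler = qmc.LatinHypercube(d=len(names), seed=42)
    unit_samples = sampler.random(n=400)              # 400-member ensemble design
    ensemble = qmc.scale(unit_samples, lower, upper)  # map to physical ranges

    def toy_model(x):
        """Stand-in for a full model run: returns a made-up warming response."""
        s, f, k = x
        return s * (3.7 + f) / (3.7 * (1.0 + k))

    warming = np.apply_along_axis(toy_model, 1, ensemble)
    print("ensemble size:", len(warming))
    print("5th / 50th / 95th percentiles:", np.percentile(warming, [5, 50, 95]).round(2))
    ```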

  4. Is Probabilistic Evidence a Source of Knowledge?

    ERIC Educational Resources Information Center

    Friedman, Ori; Turri, John

    2015-01-01

    We report a series of experiments examining whether people ascribe knowledge for true beliefs based on probabilistic evidence. Participants were less likely to ascribe knowledge for beliefs based on probabilistic evidence than for beliefs based on perceptual evidence (Experiments 1 and 2A) or testimony providing causal information (Experiment 2B).…

  5. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Kurth, R. E.; Ho, H.

    1986-01-01

    A multiyear program is performed with the objective to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. Progress of the first year's effort includes completion of a sufficient portion of each task -- probabilistic models, code development, validation, and an initial operational code. This code has from its inception an expert system philosophy that could be added to throughout the program and in the future. The initial operational code is only applicable to turbine blade type loadings. The probabilistic model included in the operational code has fitting routines for loads that utilize a modified Discrete Probabilistic Distribution termed RASCAL, a barrier crossing method and a Monte Carlo method. An initial load model was developed by Battelle that is currently used for the slowly varying duty cycle type loading. The intent is to use the model and related codes essentially in the current form for all loads that are based on measured or calculated data that have followed a slowly varying profile.

  6. E-Area LLWF Vadose Zone Model: Probabilistic Model for Estimating Subsided-Area Infiltration Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyer, J.; Flach, G.

    A probabilistic model employing a Monte Carlo sampling technique was developed in Python to generate statistical distributions of the upslope-intact-area to subsided-area ratio (Area_UAi/Area_SAi) for closure cap subsidence scenarios that differ in assumed percent subsidence and the total number of intact plus subsided compartments. The plan is to use this model as a component in the probabilistic system model for the E-Area Performance Assessment (PA), contributing uncertainty in infiltration estimates.
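
    Since the abstract notes a Python implementation with Monte Carlo sampling, a toy sketch of generating a distribution of the intact-to-subsided area ratio might look like the following; the compartment count and subsidence fraction are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n_compartments = 100        # hypothetical total compartments under the cap
    pct_subsided = 0.20         # hypothetical assumed percent subsidence
    n_trials = 50_000

    ratios = np.empty(n_trials)
    for i in range(n_trials):
        # Randomly designate which compartments subside in this realization.
        subsided = rng.random(n_compartments) < pct_subsided
        n_sub = subsided.sum()
        if n_sub == 0:          # guard against division by zero
            ratios[i] = np.nan
            continue
        ratios[i] = (n_compartments - n_sub) / n_sub   # upslope-intact / subsided

    ratios = ratios[~np.isnan(ratios)]
    print("median ratio:", np.median(ratios).round(2))
    print("5th-95th percentile:", np.percentile(ratios, [5, 95]).round(2))
    ```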

  7. Probabilistic versus deterministic skill in predicting the western North Pacific-East Asian summer monsoon variability with multimodel ensembles

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Qun; Yang, Dejian; Xie, Qian; Zhang, Yaocun; Ren, Xuejuan; Tang, Youmin

    2017-04-01

    Based on historical forecasts of three quasi-operational multi-model ensemble (MME) systems, this study assesses the superiority of coupled MME over contributing single-model ensembles (SMEs) and over uncoupled atmospheric MME in predicting the Western North Pacific-East Asian summer monsoon variability. The probabilistic and deterministic forecast skills are measured by Brier skill score (BSS) and anomaly correlation (AC), respectively. A forecast-format dependent MME superiority over SMEs is found. The probabilistic forecast skill of the MME is always significantly better than that of each SME, while the deterministic forecast skill of the MME can be lower than that of some SMEs. The MME superiority arises from both the model diversity and the ensemble size increase in the tropics, and primarily from the ensemble size increase in the subtropics. The BSS is composed of reliability and resolution, two attributes characterizing probabilistic forecast skill. The probabilistic skill increase of the MME is dominated by the dramatic improvement in reliability, while resolution is not always improved, similar to AC. A monotonic resolution-AC relationship is further found and qualitatively explained, whereas little relationship can be identified between reliability and AC. It is argued that the MME's success in improving the reliability arises from an effective reduction of the overconfidence in forecast distributions. Moreover, it is examined that the seasonal predictions with coupled MME are more skillful than those with the uncoupled atmospheric MME forced by persisting sea surface temperature (SST) anomalies, since the coupled MME has better predicted the SST anomaly evolution in three key regions.
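
    The Brier skill score and its reliability/resolution attributes mentioned above can be computed with a short sketch based on the standard Murphy decomposition; the synthetic forecasts and events below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic binary events and probabilistic forecasts (illustrative only).
    n = 5000
    p_true = rng.uniform(0.1, 0.9, size=n)
    events = (rng.random(n) < p_true).astype(float)
    forecasts = np.clip(p_true + rng.normal(0, 0.15, n), 0, 1)   # imperfect forecasts

    def brier_decomposition(f, o, n_bins=10):
        """Murphy decomposition: BS = reliability - resolution + uncertainty."""
        bins = np.digitize(f, np.linspace(0, 1, n_bins + 1)[1:-1])
        obar = o.mean()
        rel = res = 0.0
        for k in range(n_bins):
            idx = bins == k
            if not idx.any():
                continue
            w = idx.mean()
            fk, ok = f[idx].mean(), o[idx].mean()
            rel += w * (fk - ok) ** 2
            res += w * (ok - obar) ** 2
        return rel, res, obar * (1 - obar)

    bs = np.mean((forecasts - events) ** 2)
    rel, res, unc = brier_decomposition(forecasts, events)
    bss = 1.0 - bs / unc          # skill relative to the climatology reference
    print(f"BS={bs:.3f}  reliability={rel:.3f}  resolution={res:.3f}  "
          f"uncertainty={unc:.3f}  BSS={bss:.3f}")
    ```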

  8. Bayesian Networks Improve Causal Environmental Assessments for Evidence-Based Policy.

    PubMed

    Carriger, John F; Barron, Mace G; Newman, Michael C

    2016-12-20

    Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, sources of uncertainty in conventional weight of evidence approaches are ignored that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improve the ability of models to incorporate strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on valued ecological resources. These aspects are demonstrated through hypothetical problem scenarios that explore some major benefits of using Bayesian networks for reasoning and making inferences in evidence-based policy.
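
    A minimal sketch of the kind of probabilistic calculus a Bayesian network performs, using exact inference by enumeration on a toy stressor-exposure-effect chain with invented conditional probability tables:

    ```python
    import itertools

    # Toy three-node network: Stressor -> Exposure -> Effect (probabilities are illustrative).
    p_stressor = {True: 0.3, False: 0.7}
    p_exposure_given_s = {True: {True: 0.8, False: 0.2},
                          False: {True: 0.1, False: 0.9}}
    p_effect_given_e = {True: {True: 0.6, False: 0.4},
                        False: {True: 0.05, False: 0.95}}

    def joint(s, e, f):
        """Joint probability of one full assignment of the three variables."""
        return p_stressor[s] * p_exposure_given_s[s][e] * p_effect_given_e[e][f]

    # Exact inference by enumeration: P(Stressor | Effect = True).
    num = sum(joint(True, e, True) for e in (True, False))
    den = sum(joint(s, e, True) for s, e in itertools.product((True, False), repeat=2))
    print(f"P(stressor present | effect observed) = {num / den:.3f}")
    ```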

  9. Inversion of Density Interfaces Using the Pseudo-Backpropagation Neural Network Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaohong; Du, Yukun; Liu, Zhan; Zhao, Wenju; Chen, Xiaocheng

    2018-05-01

    This paper presents a new pseudo-backpropagation (BP) neural network method that can invert multi-density interfaces at one time. The new method is based on the conventional forward modeling and inverse modeling theories in addition to conventional pseudo-BP neural network arithmetic. A 3D inversion model for gravity anomalies of multi-density interfaces using the pseudo-BP neural network method is constructed after analyzing the structure and function of the artificial neural network. The corresponding iterative inverse formula of the space field is presented at the same time. Based on trials of gravity anomalies and density noise, the influence of the two kinds of noise on the inverse result is discussed and the scale of noise requested for the stability of the arithmetic is analyzed. The effects of the initial model on the reduction of the ambiguity of the result and improvement of the precision of inversion are discussed. The correctness and validity of the method were verified by the 3D model of the three interfaces. 3D inversion was performed on the observed gravity anomaly data of the Okinawa trough using the program presented herein. The Tertiary basement and Moho depth were obtained from the inversion results, which also testifies to the adaptability of the method. This study has made a useful attempt at the inversion of gravity density interfaces.

  10. Functionally Graded Designer Viscoelastic Materials Tailored to Perform Prescribed Tasks with Probabilistic Failures and Lifetimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilton, Harry H.

    Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival time.

  11. Applications of Probabilistic Combiners on Linear Feedback Shift Register Sequences

    DTIC Science & Technology

    2016-12-01

    Linear feedback shift register output sequences are combined using a three-variable probabilistic combiner function. Our tests on the resulting output strings show a drastic increase in complexity, while simultaneously passing the stringent randomness tests required. Decryption of a message that has been encrypted using bitwise XOR is quite simple, since each bit is its own additive inverse.
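
    The remark that bitwise XOR decryption is trivial because each bit is its own additive inverse can be demonstrated directly; the plaintext and keystream bytes below are hypothetical:

    ```python
    # Bitwise XOR is its own inverse, so applying the same keystream twice
    # recovers the plaintext: (p XOR k) XOR k == p.
    plaintext = b"ATTACK AT DAWN"
    keystream = bytes([0b10001101, 0b01000010, 0b11101001] * 5)[: len(plaintext)]

    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
    recovered = bytes(c ^ k for c, k in zip(ciphertext, keystream))

    print(ciphertext.hex())
    print(recovered.decode())          # -> ATTACK AT DAWN
    assert recovered == plaintext
    ```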

  12. An object-oriented approach to risk and reliability analysis : methodology and aviation safety applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dandini, Vincent John; Duran, Felicia Angelica; Wyss, Gregory Dane

    2003-09-01

    This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.

  13. Localization of the lumbar discs using machine learning and exact probabilistic inference.

    PubMed

    Oktay, Ayse Betul; Akgul, Yusuf Sinan

    2011-01-01

    We propose a novel fully automatic approach to localize the lumbar intervertebral discs in MR images with a PHOG-based SVM and a probabilistic graphical model. At the local level, our method assigns a score to each pixel in the target image that indicates whether it is a disc center or not. At the global level, we define a chain-like graphical model that represents the lumbar intervertebral discs and we use an exact inference algorithm to localize the discs. Our main contributions are the employment of the SVM with the PHOG-based descriptor, which is robust against variations of the discs, and a graphical model that reflects the linear nature of the vertebral column. Our inference algorithm runs in polynomial time and produces globally optimal results. The developed system is validated on a real spine MRI dataset and the final localization results are favorable compared to the results reported in the literature.

  14. Evaluating probabilistic dengue risk forecasts from a prototype early warning system for Brazil.

    PubMed

    Lowe, Rachel; Coelho, Caio As; Barcellos, Christovam; Carvalho, Marilia Sá; Catão, Rafael De Castro; Coelho, Giovanini E; Ramalho, Walter Massa; Bailey, Trevor C; Stephenson, David B; Rodó, Xavier

    2016-02-24

    Recently, a prototype dengue early warning system was developed to produce probabilistic forecasts of dengue risk three months ahead of the 2014 World Cup in Brazil. Here, we evaluate the categorical dengue forecasts across all microregions in Brazil, using dengue cases reported in June 2014 to validate the model. We also compare the forecast model framework to a null model, based on seasonal averages of previously observed dengue incidence. When considering the ability of the two models to predict high dengue risk across Brazil, the forecast model produced more hits and fewer missed events than the null model, with a hit rate of 57% for the forecast model compared to 33% for the null model. This early warning model framework may be useful to public health services, not only ahead of mass gatherings, but also before the peak dengue season each year, to control potentially explosive dengue epidemics.
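
    The hit-rate comparison reported above is a simple contingency-table calculation; the counts below are hypothetical and merely chosen to echo the 57% versus 33% figures:

    ```python
    def hit_rate(hits, misses):
        """Fraction of observed high-risk events that were forecast: hits / (hits + misses)."""
        return hits / (hits + misses)

    # Hypothetical contingency counts for illustration (not the paper's values).
    forecast_model = {"hits": 57, "misses": 43}
    null_model = {"hits": 33, "misses": 67}

    for name, c in [("forecast model", forecast_model), ("null model", null_model)]:
        print(f"{name}: hit rate = {hit_rate(c['hits'], c['misses']):.0%}")
    ```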

  15. Assessment of flood susceptible areas using spatially explicit, probabilistic multi-criteria decision analysis

    NASA Astrophysics Data System (ADS)

    Tang, Zhongqian; Zhang, Hua; Yi, Shanzhen; Xiao, Yangfan

    2018-03-01

    GIS-based multi-criteria decision analysis (MCDA) is increasingly used to support flood risk assessment. However, conventional GIS-MCDA methods fail to adequately represent spatial variability and are accompanied by considerable uncertainty. It is, thus, important to incorporate spatial variability and uncertainty into GIS-based decision analysis procedures. This research develops a spatially explicit, probabilistic GIS-MCDA approach for the delineation of potentially flood susceptible areas. The approach integrates the probabilistic and the local ordered weighted averaging (OWA) methods via Monte Carlo simulation, to take into account the uncertainty related to criteria weights, spatial heterogeneity of preferences and the risk attitude of the analyst. The approach is applied to a pilot study for the Gucheng County, central China, heavily affected by the hazardous 2012 flood. A GIS database of six geomorphological and hydrometeorological factors for the evaluation of susceptibility was created. Moreover, uncertainty and sensitivity analysis were performed to investigate the robustness of the model. The results indicate that the ensemble method improves the robustness of the model outcomes with respect to variation in criteria weights and identifies which criteria weights are most responsible for the variability of model outcomes. Therefore, the proposed approach is an improvement over the conventional deterministic method and can provide a more rational, objective and unbiased tool for flood susceptibility evaluation.
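
    A compact sketch of an ordered weighted averaging (OWA) evaluation with Monte Carlo perturbation of the order weights, in the spirit of the approach described above; the criterion scores and weight values are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def owa(scores, order_weights):
        """Ordered weighted average: weights are applied to the scores sorted in descending order."""
        return np.sort(scores)[::-1] @ order_weights

    # Hypothetical standardized criterion scores for one map cell (6 factors).
    cell_scores = np.array([0.9, 0.4, 0.7, 0.2, 0.8, 0.5])

    # Risk-averse order weights put most weight on the worst-ranked criteria.
    base_weights = np.array([0.05, 0.05, 0.10, 0.15, 0.25, 0.40])

    # Monte Carlo perturbation of the weights to propagate weight uncertainty.
    n = 10_000
    susceptibility = np.empty(n)
    for i in range(n):
        w = rng.dirichlet(base_weights * 50)    # random weights centered on the base set
        susceptibility[i] = owa(cell_scores, w)

    print("mean susceptibility:", susceptibility.mean().round(3))
    print("90% interval:", np.percentile(susceptibility, [5, 95]).round(3))
    ```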

  16. A probabilistic approach to remote compositional analysis of planetary surfaces

    USGS Publications Warehouse

    Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.

    2017-01-01

    Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.

  17. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches as it provides not only a single (non-unique) solution but also provides uncertainty bounds with it. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add for example different levels of noise and/or change fault plane parameterization and then apply our inversion scheme in the attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real tele-seismic data of a recent large earthquake and comparing those results with deterministically derived kinematic source models provided by other research groups.
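
    The statistical machinery underlying DRAM is Metropolis sampling with adaptive proposals and delayed rejection; a plain (non-adaptive) Metropolis sketch on a toy two-parameter waveform problem conveys the idea, though it is not the QUESO implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy forward model: two "slip" parameters producing a synthetic waveform.
    t = np.linspace(0, 1, 50)
    def forward(m):
        return m[0] * np.sin(2 * np.pi * t) + m[1] * np.cos(2 * np.pi * t)

    m_true = np.array([1.2, -0.4])
    data = forward(m_true) + rng.normal(0, 0.1, t.size)
    sigma = 0.1

    def log_post(m):
        # Flat prior over a broad box, Gaussian likelihood.
        if np.any(np.abs(m) > 10):
            return -np.inf
        r = data - forward(m)
        return -0.5 * np.sum((r / sigma) ** 2)

    # Plain Metropolis random walk (DRAM adds proposal adaptation and delayed rejection).
    m = np.zeros(2)
    lp = log_post(m)
    samples = []
    for _ in range(20_000):
        prop = m + rng.normal(0, 0.05, 2)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            m, lp = prop, lp_prop
        samples.append(m.copy())

    samples = np.array(samples[5000:])          # discard burn-in
    print("posterior mean:", samples.mean(axis=0).round(3))
    print("posterior std :", samples.std(axis=0).round(3))
    ```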

  18. Risk-Informed Safety Assurance and Probabilistic Assessment of Mission-Critical Software-Intensive Systems

    NASA Technical Reports Server (NTRS)

    Guarro, Sergio B.

    2010-01-01

    This report validates and documents the detailed features and practical application of the framework for software-intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioners. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner which is entirely compatible and integrated with the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center.

  19. Probabilistic arithmetic automata and their applications.

    PubMed

    Marschall, Tobias; Herms, Inke; Kaltenbach, Hans-Michael; Rahmann, Sven

    2012-01-01

    We present a comprehensive review on probabilistic arithmetic automata (PAAs), a general model to describe chains of operations whose operands depend on chance, along with two algorithms to numerically compute the distribution of the results of such probabilistic calculations. PAAs provide a unifying framework to approach many problems arising in computational biology and elsewhere. We present five different applications, namely 1) pattern matching statistics on random texts, including the computation of the distribution of occurrence counts, waiting times, and clump sizes under hidden Markov background models; 2) exact analysis of window-based pattern matching algorithms; 3) sensitivity of filtration seeds used to detect candidate sequence alignments; 4) length and mass statistics of peptide fragments resulting from enzymatic cleavage reactions; and 5) read length statistics of 454 and IonTorrent sequencing reads. The diversity of these applications indicates the flexibility and unifying character of the presented framework. While the construction of a PAA depends on the particular application, we single out a frequently applicable construction method: We introduce deterministic arithmetic automata (DAAs) to model deterministic calculations on sequences, and demonstrate how to construct a PAA from a given DAA and a finite-memory random text model. This procedure is used for all five discussed applications and greatly simplifies the construction of PAAs. Implementations are available as part of the MoSDi package. Its application programming interface facilitates the rapid development of new applications based on the PAA framework.

  20. Identifying influences on model uncertainty: an application using a forest carbon budget model

    Treesearch

    James E. Smith; Linda S. Heath

    2001-01-01

    Uncertainty is an important consideration for both developers and users of environmental simulation models. Establishing quantitative estimates of uncertainty for deterministic models can be difficult when the underlying bases for such information are scarce. We demonstrate an application of probabilistic uncertainty analysis that provides for refinements in...

  1. Logical Reasoning versus Information Processing in the Dual-Strategy Model of Reasoning

    ERIC Educational Resources Information Center

    Markovits, Henry; Brisson, Janie; de Chantal, Pier-Luc

    2017-01-01

    One of the major debates concerning the nature of inferential reasoning is between counterexample-based strategies such as mental model theory and statistical strategies underlying probabilistic models. The dual-strategy model, proposed by Verschueren, Schaeken, & d'Ydewalle (2005a, 2005b), which suggests that people might have access to both…

  2. Selecting an Informative/Discriminating Multivariate Response for Inverse Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Edward V.; Lewis, John. R.; Anderson-Cook, Christine Michaela

    Inverse prediction is important in a variety of scientific and engineering applications, such as predicting properties/characteristics of an object by using multiple measurements obtained from it. Inverse prediction can be accomplished by inverting parameterized forward models that relate the measurements (responses) to the properties/characteristics of interest. Sometimes forward models are computational/science based; but often, forward models are empirically based response surface models, obtained by using the results of controlled experimentation. For empirical models, it is important that the experiments provide a sound basis to develop accurate forward models in terms of the properties/characteristics (factors). And while nature dictates the causal relationships between factors and responses, experimenters can control the complexity, accuracy, and precision of forward models constructed via selection of factors, factor levels, and the set of trials that are performed. Recognition of the uncertainty in the estimated forward models leads to an errors-in-variables approach for inverse prediction. The forward models (estimated by experiments or science based) can also be used to analyze how well candidate responses complement one another for inverse prediction over the range of the factor space of interest. Furthermore, one may find that some responses are complementary, redundant, or noninformative. Simple analysis and examples illustrate how an informative and discriminating subset of responses could be selected among candidates in cases where the number of responses that can be acquired during inverse prediction is limited by difficulty, expense, and/or availability of material.

  3. Selecting an Informative/Discriminating Multivariate Response for Inverse Prediction

    DOE PAGES

    Thomas, Edward V.; Lewis, John. R.; Anderson-Cook, Christine Michaela; ...

    2017-07-01

    Inverse prediction is important in a variety of scientific and engineering applications, such as predicting properties/characteristics of an object by using multiple measurements obtained from it. Inverse prediction can be accomplished by inverting parameterized forward models that relate the measurements (responses) to the properties/characteristics of interest. Sometimes forward models are computational/science based; but often, forward models are empirically based response surface models, obtained by using the results of controlled experimentation. For empirical models, it is important that the experiments provide a sound basis to develop accurate forward models in terms of the properties/characteristics (factors). And while nature dictates the causal relationships between factors and responses, experimenters can control the complexity, accuracy, and precision of forward models constructed via selection of factors, factor levels, and the set of trials that are performed. Recognition of the uncertainty in the estimated forward models leads to an errors-in-variables approach for inverse prediction. The forward models (estimated by experiments or science based) can also be used to analyze how well candidate responses complement one another for inverse prediction over the range of the factor space of interest. Furthermore, one may find that some responses are complementary, redundant, or noninformative. Simple analysis and examples illustrate how an informative and discriminating subset of responses could be selected among candidates in cases where the number of responses that can be acquired during inverse prediction is limited by difficulty, expense, and/or availability of material.

  4. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review

    PubMed Central

    McClelland, James L.

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
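
    The statement that logistic/softmax units compute Bayesian posteriors when biases and net inputs equal log probabilities can be checked numerically; the prior and likelihood values below are arbitrary:

    ```python
    import numpy as np

    # Two hypotheses (e.g., candidate words) and one observed input.
    prior = np.array([0.7, 0.3])
    likelihood = np.array([0.2, 0.6])     # P(input | hypothesis), illustrative numbers

    # Direct Bayes rule.
    posterior = prior * likelihood
    posterior /= posterior.sum()

    # Softmax over (log prior + log likelihood): bias = log prior, net input = log likelihood.
    logits = np.log(prior) + np.log(likelihood)
    softmax = np.exp(logits) / np.exp(logits).sum()

    print("Bayes   :", posterior.round(4))
    print("Softmax :", softmax.round(4))
    assert np.allclose(posterior, softmax)
    ```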

  5. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review.

    PubMed

    McClelland, James L

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.

  6. A probabilistic tornado wind hazard model for the continental United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hossain, Q; Kimball, J; Mensing, R

    A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrences of tornadoes within the continental United States (CONUS) are assumed to be a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within the CONUS. The hazard model is an aerial probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
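
    A minimal sketch of the Poisson-occurrence aspect of such a hazard model, thinned by a conditional strike probability; the rate, strike probability, and exposure period are invented, not values from the TORNADO program:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Hypothetical inputs (illustrative only).
    rate_per_year = 2.5e-3          # tornado touchdowns per year in the surrounding region
    p_strike_given_event = 4.0e-3   # chance a touchdown's damage rectangle covers the facility
    years = 50.0
    n_sims = 200_000

    # Poisson occurrences, then thin by the conditional strike probability.
    n_events = rng.poisson(rate_per_year * years, size=n_sims)
    n_strikes = rng.binomial(n_events, p_strike_given_event)
    p_at_least_one_strike = (n_strikes > 0).mean()

    # Analytic check: a thinned Poisson process has rate lambda * p.
    analytic = 1.0 - np.exp(-rate_per_year * p_strike_given_event * years)
    print(f"simulated P(strike in {years:.0f} yr) = {p_at_least_one_strike:.2e}")
    print(f"analytic  P(strike in {years:.0f} yr) = {analytic:.2e}")
    ```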

  7. A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.

    PubMed

    Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang

    2011-07-01

    The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.
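
    A toy illustration of using nonnegative matrix factorization to smooth a sparse tag-image association matrix, as the abstract describes; the matrix, tags, and factor rank are invented, and scikit-learn's NMF is used as a stand-in for the authors' method:

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Tiny toy tag-image association matrix (rows = tags, cols = images; 1 = tag assigned).
    # Real matrices are far larger and much sparser; this is only illustrative.
    tiam = np.array([
        [1, 0, 1, 0, 0, 1],   # "beach"
        [1, 0, 1, 0, 0, 0],   # "sea"
        [0, 1, 0, 1, 0, 0],   # "mountain"
        [0, 1, 0, 1, 1, 0],   # "snow"
        [0, 0, 0, 0, 1, 1],   # "sky"
    ], dtype=float)

    model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(tiam)      # tag factors
    H = model.components_              # image factors

    # The low-rank reconstruction smooths over sparsity and suggests missing tags.
    reconstruction = W @ H
    tags = ["beach", "sea", "mountain", "snow", "sky"]
    print("predicted affinity of each tag for image 0:")
    for tag, score in zip(tags, reconstruction[:, 0].round(2)):
        print(f"  {tag:9s} {score}")
    ```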

  8. The Objective Borderline Method: A Probabilistic Method for Standard Setting

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim

    2015-01-01

    A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…

  9. Probabilistic Modeling and Visualization of the Flexibility in Morphable Models

    NASA Astrophysics Data System (ADS)

    Lüthi, M.; Albrecht, T.; Vetter, T.

    Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility, when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.

  10. The virtual enhancements - solar proton event radiation (VESPER) model

    NASA Astrophysics Data System (ADS)

    Aminalragia-Giamini, Sigiava; Sandberg, Ingmar; Papadimitriou, Constantinos; Daglis, Ioannis A.; Jiggens, Piers

    2018-02-01

    A new probabilistic model introducing a novel paradigm for the modelling of the solar proton environment at 1 AU is presented. The virtual enhancements - solar proton event radiation model (VESPER) uses the European Space Agency's Solar Energetic Particle Environment Modelling (SEPEM) Reference Dataset and produces virtual time-series of proton differential fluxes. In this regard it fundamentally diverges from the approach of existing SPE models that are based on probabilistic descriptions of SPE macroscopic characteristics such as peak flux and cumulative fluence. It is shown that VESPER reproduces well the dataset characteristics it uses, and further comparisons with existing models are made with respect to their results. The production of time-series as the main output of the model opens a straightforward way for the calculation of solar proton radiation effects in terms of time-series and the pairing with effects caused by trapped radiation and galactic cosmic rays.

  11. Nonlinear adaptive inverse control via the unified model neural network

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control via a unified model neural network. In order to overcome nonsystematic design and long training time in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method can use less training time to get an inverse model. Finally, we apply this proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides a greater flexibility and better performance in controlling magnetic bearing systems.

  12. Geostatistical regularization of inverse models for the retrieval of vegetation biophysical variables

    NASA Astrophysics Data System (ADS)

    Atzberger, C.; Richter, K.

    2009-09-01

    The robust and accurate retrieval of vegetation biophysical variables using radiative transfer models (RTM) is seriously hampered by the ill-posedness of the inverse problem. With this research we further develop our previously published (object-based) inversion approach [Atzberger (2004)]. The object-based RTM inversion takes advantage of the geostatistical fact that the biophysical characteristics of nearby pixels are generally more similar than those at a larger distance. A two-step inversion based on PROSPECT+SAIL generated look-up-tables is presented that can be easily implemented and adapted to other radiative transfer models. The approach takes into account the spectral signatures of neighboring pixels and optimizes a common value of the average leaf angle (ALA) for all pixels of a given image object, such as an agricultural field. Using a large set of leaf area index (LAI) measurements (n = 58) acquired over six different crops at the Barrax test site (Spain), we demonstrate that the proposed geostatistical regularization yields in most cases more accurate and spatially consistent results compared to the traditional (pixel-based) inversion. Pros and cons of the approach are discussed and possible future extensions presented.
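
    A minimal look-up-table inversion sketch in the spirit of the two-step, object-based approach: a shared average leaf angle (ALA) is fixed per object before pixel-wise LAI retrieval. The forward model below is a crude stand-in, not PROSPECT+SAIL:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Stand-in forward model (NOT PROSPECT+SAIL): reflectance in 4 bands as a
    # smooth function of LAI and average leaf angle (ALA, in degrees).
    def toy_rtm(lai, ala):
        base = np.array([0.05, 0.08, 0.35, 0.45])
        return base * (1 - np.exp(-0.4 * lai)) * (1 - 0.003 * (ala - 50))

    # Build a look-up table over the parameter grid.
    lai_grid = np.linspace(0.5, 6.0, 56)
    ala_grid = np.linspace(30, 70, 41)
    lut_params = np.array([(l, a) for l in lai_grid for a in ala_grid])
    lut_spectra = np.array([toy_rtm(l, a) for l, a in lut_params])

    # Simulated "observed" pixels of one field object: common ALA, varying LAI.
    true_ala = 55.0
    true_lai = rng.uniform(2.0, 4.0, size=25)
    obs = np.array([toy_rtm(l, true_ala) for l in true_lai])
    obs += rng.normal(0, 0.005, obs.shape)                     # measurement noise

    # Step 1 (object level): choose the ALA minimizing the summed misfit over all pixels.
    cost_per_ala = []
    for ala in ala_grid:
        lut_sub = lut_spectra[lut_params[:, 1] == ala]
        d = ((obs[:, None, :] - lut_sub[None, :, :]) ** 2).sum(-1)
        cost_per_ala.append(d.min(axis=1).sum())
    best_ala = ala_grid[int(np.argmin(cost_per_ala))]

    # Step 2 (pixel level): invert LAI per pixel with ALA fixed to the object value.
    subset = lut_params[:, 1] == best_ala
    lut_sub = lut_spectra[subset]
    d = ((obs[:, None, :] - lut_sub[None, :, :]) ** 2).sum(-1)
    lai_est = lut_params[subset][d.argmin(axis=1), 0]

    print("object ALA estimate:", best_ala)
    print("mean absolute LAI error:", np.abs(lai_est - true_lai).mean().round(2))
    ```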

  13. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computational cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analyses by considering the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, in which an explicit performance function is developed from the results of numerical simulations and then evaluated with FORM. The implementation of the proposed methodology is demonstrated for a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function in representing the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with that of the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while its accuracy decreases, with an error of 24%.
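
    A compact sketch of the response-surface/FORM coupling under invented inputs: a handful of "numerical simulations" (here an analytical stand-in for the slope model) are fitted with a quadratic response surface, and the Hasofer-Lind reliability index is then found by constrained minimization in standard normal space. The distributions and the factor-of-safety function are assumptions, not the Sumela case-study values.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Stand-in for the numerical slope model: factor of safety as a function of
    # cohesion c (kPa) and friction angle phi (deg). Purely illustrative.
    def fos(c, phi):
        return 0.02 * c + 0.025 * phi - 0.1

    mu = np.array([30.0, 35.0])    # assumed means of c and phi
    sd = np.array([6.0, 3.0])      # assumed standard deviations (independent normals)

    # "Numerical simulations" on a small experimental design around the means.
    design = mu + sd * np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1],
                                 [1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
    g_obs = np.array([fos(c, p) - 1.0 for c, p in design])   # limit state g = FoS - 1

    # Quadratic response surface (no cross term, for brevity): g ~ a0 + a.x + b.x^2
    X = np.column_stack([np.ones(len(design)), design, design ** 2])
    coef, *_ = np.linalg.lstsq(X, g_obs, rcond=None)

    def g_hat(x):
        return coef[0] + coef[1:3] @ x + coef[3:5] @ (x ** 2)

    # FORM: shortest distance from the origin to g_hat = 0 in standard normal space.
    res = minimize(lambda uu: uu @ uu, x0=np.zeros(2),
                   constraints={"type": "eq", "fun": lambda uu: g_hat(mu + sd * uu)})
    beta = np.sqrt(res.fun)
    print(f"reliability index beta = {beta:.2f}, Pf ~ {norm.cdf(-beta):.2e}")
    ```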

  14. Bayesian networks improve causal environmental ...

    EPA Pesticide Factsheets

    Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, sources of uncertainty in conventional weight of evidence approaches are ignored that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improve the ability of models to incorporate strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on value
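
    A minimal illustration of the probabilistic calculus behind combining lines of evidence: two conditionally independent lines of evidence about a causal hypothesis are combined with Bayes' rule. All probabilities are invented, and a real assessment would use a full Bayesian network tool rather than this two-node shortcut.

    ```python
    # Minimal Bayes-rule combination of two conditionally independent lines of
    # evidence about a causal hypothesis C ("stressor X caused the impairment").
    # All probabilities below are invented for illustration only.

    p_c = 0.30                     # prior that the stressor is the cause
    lines = {                      # (P(evidence | C), P(evidence | not C))
        "upstream_gradient": (0.80, 0.20),
        "lab_toxicity_test": (0.70, 0.30),
    }

    def posterior(prior, lines_of_evidence):
        """Sequentially update P(C) with each line of evidence (naive-Bayes style)."""
        odds = prior / (1.0 - prior)
        for p_e_given_c, p_e_given_not_c in lines_of_evidence.values():
            odds *= p_e_given_c / p_e_given_not_c    # Bayes factor of this line
        return odds / (1.0 + odds)

    print(f"P(cause | both lines of evidence) = {posterior(p_c, lines):.2f}")
    ```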

  15. Shear-wave velocity-based probabilistic and deterministic assessment of seismic soil liquefaction potential

    USGS Publications Warehouse

    Kayen, R.; Moss, R.E.S.; Thompson, E.M.; Seed, R.B.; Cetin, K.O.; Der Kiureghian, A.; Tanaka, Y.; Tokimatsu, K.

    2013-01-01

    Shear-wave velocity (Vs) offers a means to determine the seismic resistance of soil to liquefaction by a fundamental soil property. This paper presents the results of an 11-year international project to gather new Vs site data and develop probabilistic correlations for seismic soil liquefaction occurrence. Toward that objective, shear-wave velocity test sites were identified, and measurements made for 301 new liquefaction field case histories in China, Japan, Taiwan, Greece, and the United States over a decade. The majority of these new case histories reoccupy those previously investigated by penetration testing. These new data are combined with previously published case histories to build a global catalog of 422 case histories of Vs liquefaction performance. Bayesian regression and structural reliability methods facilitate a probabilistic treatment of the Vs catalog for performance-based engineering applications. Where possible, uncertainties of the variables comprising both the seismic demand and the soil capacity were estimated and included in the analysis, resulting in greatly reduced overall model uncertainty relative to previous studies. The presented data set and probabilistic analysis also help resolve the ancillary issues of adjustment for soil fines content and magnitude scaling factors.

  16. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
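
    The sketch below is far simpler than TORNADO's super-grammar: a Nussinov-style dynamic program that scores nested base pairs with invented log-odds values. It only illustrates the kind of recursion that grammar-based probabilistic folding methods parameterize much more richly.

    ```python
    import numpy as np

    PAIR_LOGODDS = {("A", "U"): 1.1, ("U", "A"): 1.1,
                    ("G", "C"): 2.0, ("C", "G"): 2.0,
                    ("G", "U"): 0.5, ("U", "G"): 0.5}   # invented log-odds scores
    MIN_LOOP = 3                                        # minimum hairpin loop length

    def best_structure_score(seq):
        """Nussinov-style DP: maximum total log-odds of a nested set of base pairs."""
        n = len(seq)
        S = np.zeros((n, n))
        for span in range(MIN_LOOP + 1, n):
            for i in range(n - span):
                j = i + span
                best = S[i + 1, j]                            # i left unpaired
                pair = PAIR_LOGODDS.get((seq[i], seq[j]))
                if pair is not None:
                    best = max(best, S[i + 1, j - 1] + pair)  # i pairs with j
                for k in range(i + 1, j):                     # bifurcation
                    best = max(best, S[i, k] + S[k + 1, j])
                S[i, j] = best
        return S[0, n - 1]

    print(best_structure_score("GGGAAAUCCC"))   # three G-C pairs -> score 6.0
    ```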

  17. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more

    PubMed Central

    Rivas, Elena; Lang, Raymond; Eddy, Sean R.

    2012-01-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308

  18. 75 FR 13610 - Office of New Reactors; Interim Staff Guidance on Implementation of a Seismic Margin Analysis for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... Staff Guidance on Implementation of a Seismic Margin Analysis for New Reactors Based on Probabilistic... Seismic Margin Analysis for New Reactors Based on Probabilistic Risk Assessment,'' (Agencywide Documents.../COL-ISG-020 ``Implementation of a Seismic Margin Analysis for New Reactors Based on Probabilistic Risk...

  19. Time-lapse ERT interpretation methodology for leachate injection monitoring based on multiple inversions and a clustering strategy (MICS)

    NASA Astrophysics Data System (ADS)

    Audebert, M.; Clément, R.; Touze-Foltz, N.; Günther, T.; Moreau, S.; Duquennoi, C.

    2014-12-01

    Leachate recirculation is a key process in municipal waste landfills functioning as bioreactors. To quantify the water content and to assess the leachate injection system, in-situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). This geophysical method is based on the inversion process, which presents two major problems in terms of delimiting the infiltration area. First, it is difficult for ERT users to choose an appropriate inversion parameter set. Indeed, it might not be sufficient to interpret only the optimum model (i.e. the model with the chosen regularisation strength) because it is not necessarily the model which best represents the physical process studied. Second, it is difficult to delineate the infiltration front based on resistivity models because of the smoothness of the inversion results. This paper proposes a new methodology called MICS (multiple inversions and clustering strategy), which allows ERT users to improve the delimitation of the infiltration area in leachate injection monitoring. The MICS methodology is based on (i) a multiple inversion step by varying the inversion parameter values to take a wide range of resistivity models into account and (ii) a clustering strategy to improve the delineation of the infiltration front. In this paper, MICS was assessed on two types of data. First, a numerical assessment allows us to optimise and test MICS for different infiltration area sizes, contrasts and shapes. Second, MICS was applied to a field data set gathered during leachate recirculation on a bioreactor.
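
    A sketch of the clustering step of a MICS-like workflow, assuming scikit-learn is available: several resistivity models of the same grid (stand-ins for inversions run with different parameter sets) are stacked, each pixel is described by its log-resistivity across the inversions, and k-means with two clusters separates the infiltrated area from the background. The synthetic models and thresholds are illustrative only, not the authors' exact workflow.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)

    # Stand-in for the "multiple inversion" step: several resistivity models of
    # the same 20 x 30 grid, here synthetic (a conductive infiltration bulb plus
    # inversion-to-inversion variability).
    ny, nx, n_inv = 20, 30, 8
    yy, xx = np.mgrid[0:ny, 0:nx]
    infiltration = ((yy - 6) ** 2 + (xx - 12) ** 2) < 25          # true infiltrated zone
    models = np.empty((n_inv, ny, nx))
    for m in range(n_inv):
        rho = np.where(infiltration, 15.0, 80.0)                  # ohm.m
        models[m] = rho * rng.lognormal(0.0, 0.15, size=rho.shape)

    # MICS-like clustering: each pixel described by its log-resistivity across inversions.
    features = np.log10(models.reshape(n_inv, -1).T)              # (n_pixels, n_inv)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

    # Identify the conductive cluster as the infiltration area.
    mean_logrho = [features[labels == k].mean() for k in (0, 1)]
    infiltrated = (labels == int(np.argmin(mean_logrho))).reshape(ny, nx)
    agreement = (infiltrated == infiltration).mean()
    print(f"pixel agreement with the true infiltration area: {agreement:.2%}")
    ```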

  20. Durability reliability analysis for corroding concrete structures under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Hao

    2018-02-01

    This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.
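
    A short sketch of the purely probabilistic treatment of epistemic uncertainty: an imprecisely known corrosion-model parameter is sampled in an outer loop and aleatory variability in an inner loop, so the output is a distribution of failure probabilities rather than a single value. The limit state and every distribution below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Illustrative limit state: corrosion initiates when chloride at the rebar
    # exceeds a critical threshold. All distributions below are invented.
    def p_failure(d_mean, n_aleatory=20_000):
        """Inner (aleatory) loop for a fixed value of the epistemic parameter."""
        D = rng.lognormal(np.log(d_mean), 0.3, n_aleatory)       # diffusion coefficient
        c_crit = rng.normal(0.6, 0.1, n_aleatory)                # critical chloride content
        c_surface = rng.normal(1.2, 0.2, n_aleatory)             # surface chloride
        c_rebar = c_surface * np.exp(-1.0 / D)                   # toy attenuation model
        return np.mean(c_rebar > c_crit)

    # Outer (epistemic) loop: the mean diffusion coefficient is only known
    # imprecisely, so it is itself modeled as a random variable.
    d_means = rng.uniform(0.8, 1.6, 200)
    pf_samples = np.array([p_failure(d) for d in d_means])

    print(f"P_f spread due to epistemic uncertainty: median {np.median(pf_samples):.3f}, "
          f"5-95% [{np.percentile(pf_samples, 5):.3f}, {np.percentile(pf_samples, 95):.3f}]")
    ```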

  1. Feasibility of waveform inversion of Rayleigh waves for shallow shear-wave velocity using a genetic algorithm

    USGS Publications Warehouse

    Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.

    2011-01-01

    Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to laterally homogeneous (or very smooth laterally heterogeneous) earth models. Waveform inversion directly fits the waveforms on seismograms and hence does not have such a limitation. Waveforms of Rayleigh waves are highly related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions can be found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that the S-wave velocities can be recovered successfully, with errors of no more than 10%, for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of initial models. For earth models containing lateral heterogeneity, for which conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.
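
    The toy below follows the spirit of the abstract (selection, crossover, mutation and iterative waveform fitting) but replaces the finite-difference forward solver with a cheap stand-in that maps layer velocities to a pulse-train "waveform"; all velocities, noise levels and GA settings are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 1.0, 400)

    def forward(vs):
        # Stand-in forward model: one pulse per layer, arrival time inversely
        # proportional to the layer S-wave velocity. A real implementation would
        # use finite-difference elastic modeling of Rayleigh waves.
        arrivals = 60.0 / np.asarray(vs)
        return sum(np.exp(-((t - ta) / 0.05) ** 2) for ta in arrivals)

    true_vs = np.array([180.0, 300.0, 450.0])            # m/s, three layers
    observed = forward(true_vs) + 0.02 * rng.normal(size=t.size)

    def misfit(vs):
        return np.sum((forward(vs) - observed) ** 2)

    bounds = (100.0, 600.0)
    pop = rng.uniform(*bounds, size=(60, 3))             # initial population
    best, best_fit = None, np.inf
    for generation in range(200):
        fit = np.array([misfit(ind) for ind in pop])
        if fit.min() < best_fit:                         # track the best model so far
            best, best_fit = pop[fit.argmin()].copy(), fit.min()
        # tournament selection
        idx = rng.integers(0, len(pop), size=(len(pop), 2))
        parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # per-gene blend crossover with a shuffled copy of the parents
        mates = parents[rng.permutation(len(parents))]
        alpha = rng.uniform(0.0, 1.0, size=parents.shape)
        children = alpha * parents + (1.0 - alpha) * mates
        # Gaussian mutation, clipped to the search bounds, plus elitism
        children += rng.normal(0.0, 8.0, size=children.shape)
        pop = np.clip(children, *bounds)
        pop[0] = best

    print("true Vs      (m/s):", true_vs)
    print("recovered Vs (m/s):", np.round(np.sort(best), 1))
    ```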

  2. Quantum-like Modeling of Cognition

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2015-09-01

    This paper begins with a historical review of the mutual influence of physics and psychology, from Freud's invention of psychic energy, inspired by Boltzmann's thermodynamics, to the enrichment quantum physics gained from psychology through the notion of complementarity (the invention of Niels Bohr, who was inspired by William James); we also consider the resonance of the correspondence between Wolfgang Pauli and Carl Jung in both physics and psychology. We then turn to the problem of developing mathematical models for the laws of thought, starting with Boolean logic and progressing towards the foundations of classical probability theory. Interestingly, the laws of classical logic and probability are routinely violated not only by quantum statistical phenomena but by cognitive phenomena as well. This is yet another common feature between quantum physics and psychology. In particular, cognitive data can exhibit a kind of probabilistic interference effect. This similarity with quantum physics convinced a multi-disciplinary group of scientists (physicists, psychologists, economists, sociologists) to apply the mathematical apparatus of quantum mechanics to the modeling of cognition. We illustrate this activity by considering a few concrete phenomena: the order and disjunction effects, recognition of ambiguous figures, and categorization-decision making. In Appendix 1 we briefly present the essentials of the theory of contextual probability and a method for representing contextual probabilities by complex probability amplitudes (solution of the ``inverse Born's problem'') based on a quantum-like representation algorithm (QLRA).

  3. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis are presented. Instead of dealing with channel radiances, the new approach fits the principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses the radiances into a much smaller dimension, making both the forward modeling and the inversion algorithm more efficient.
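
    A brief sketch of the compression idea: simulated "channel radiances" (synthetic here) are reduced to a handful of principal component scores via an SVD, and the reconstruction error indicates how little information the reduced representation loses. The real model works with physically simulated hyperspectral radiances rather than these synthetic spectra.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic "channel radiances": 2000 spectra x 800 channels driven by a few
    # smooth physical modes, standing in for real hyperspectral simulations.
    n_spec, n_chan, n_modes = 2000, 800, 6
    wav = np.linspace(0, 1, n_chan)
    modes = np.array([np.sin((k + 1) * np.pi * wav) for k in range(n_modes)])
    radiances = rng.normal(size=(n_spec, n_modes)) @ modes + 0.01 * rng.normal(size=(n_spec, n_chan))

    # PCA via SVD of the mean-removed radiances.
    mean = radiances.mean(axis=0)
    U, s, Vt = np.linalg.svd(radiances - mean, full_matrices=False)
    n_pc = 10                                   # retain a handful of components
    scores = (radiances - mean) @ Vt[:n_pc].T   # PC scores: the compressed representation

    # Reconstruction error shows how little information the compression loses.
    recon = scores @ Vt[:n_pc] + mean
    rel_err = np.linalg.norm(recon - radiances) / np.linalg.norm(radiances)
    print(f"compression {n_chan} -> {n_pc} scores, relative reconstruction error {rel_err:.2%}")
    ```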

  4. The influence of air temperature inversions on snowmelt and glacier mass-balance simulations, Ammassalik island, SE Greenland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mernild, Sebastian Haugard; Liston, Glen

    2009-01-01

    In many applications, a realistic description of air temperature inversions is essential for accurate snow and glacier ice melt and glacier mass-balance simulations. A physically based snow-evolution modeling system (SnowModel) was used to simulate eight years (1998/99 to 2005/06) of snow accumulation and snow and glacier ice ablation from numerous small coastal marginal glaciers on the SW part of Ammassalik Island in SE Greenland. These glaciers are regularly influenced by inversions and sea breezes associated with the adjacent, relatively cold and frequently ice-choked fjords and ocean. To account for the influence of these inversions on the spatiotemporal variation of air temperature and snow and glacier melt rates, temperature inversion routines were added to MicroMet, the meteorological distribution sub-model used in SnowModel. The inversions were observed and modeled to occur during 84% of the simulation period. Modeled inversions were defined not to occur during days with strong winds and high precipitation rates due to the potential of inversion break-up. Field observations showed inversions to extend from sea level to approximately 300 m a.s.l., and this inversion level was prescribed in the model simulations. Simulations with and without the inversion routines were compared. The inversion model produced air temperature distributions with warmer lower-elevation areas and cooler higher-elevation areas than without inversion routines, due to the use of cold sea-breeze base temperature data from underneath the inversion. This yielded snowmelt up to 2 weeks earlier in the lower areas and 1 to 3 weeks later in the higher-elevation areas of the simulation domain. The average mean annual modeled surface mass-balance for all glaciers (mainly located above the inversion layer) was -720 ± 620 mm w.eq. y⁻¹ for inversion simulations and -880 ± 620 mm w.eq. y⁻¹ without the inversion routines, a difference of 160 mm w.eq. y⁻¹. The annual glacier loss for the two simulations was 50.7 × 10⁶ m³ y⁻¹ and 64.4 × 10⁶ m³ y⁻¹ for all glaciers, a difference of ~21%. The average equilibrium line altitude (ELA) for all glaciers in the simulation domain was located at 875 m a.s.l. and 900 m a.s.l. for simulations with and without inversion routines, respectively.

  5. A Hough Transform Global Probabilistic Approach to Multiple-Subject Diffusion MRI Tractography

    DTIC Science & Technology

    2010-04-01

    In this work, a global probabilistic fiber tracking approach based on the voting process provided by the Hough transform is introduced. (Only fragments of the report abstract are preserved in this record.)

  6. Modality, probability, and mental models.

    PubMed

    Hinterecker, Thomas; Knauff, Markus; Johnson-Laird, P N

    2016-10-01

    We report 3 experiments investigating novel sorts of inference, such as: A or B or both. Therefore, possibly (A and B). The contents were sensible assertions, for example: Space tourism will achieve widespread popularity in the next 50 years, or advances in material science will lead to the development of antigravity materials in the next 50 years, or both. Most participants accepted the inferences as valid, though they are invalid in modal logic and in probabilistic logic too. But the theory of mental models predicts that individuals should accept them. In contrast, inferences of this sort—A or B but not both. Therefore, A or B or both—are both logically valid and probabilistically valid. Yet, as the model theory also predicts, most reasoners rejected them. The participants' estimates of probabilities showed that their inferences tended not to be based on probabilistic validity, but they did rate acceptable conclusions as more probable than unacceptable conclusions. We discuss the implications of the results for current theories of reasoning. PsycINFO Database Record (c) 2016 APA, all rights reserved

  7. Word-level language modeling for P300 spellers based on discriminative graphical models

    NASA Astrophysics Data System (ADS)

    Delgado Saa, Jaime F.; de Pesters, Adriana; McFarland, Dennis; Çetin, Müjdat

    2015-04-01

    Objective. In this work we propose a probabilistic graphical model framework that uses language priors at the level of words as a mechanism to increase the performance of P300-based spellers. Approach. This paper is concerned with brain-computer interfaces based on P300 spellers. Motivated by P300 spelling scenarios involving communication based on a limited vocabulary, we propose a probabilistic graphical model framework and an associated classification algorithm that uses learned statistical models of language at the level of words. Exploiting such high-level contextual information helps reduce the error rate of the speller. Main results. Our experimental results demonstrate that the proposed approach offers several advantages over existing methods. Most importantly, it increases the classification accuracy while reducing the number of times the letters need to be flashed, increasing the communication rate of the system. Significance. The proposed approach models all the variables in the P300 speller in a unified framework and has the capability to correct errors in previous letters in a word, given the data for the current one. The structure of the model we propose allows the use of efficient inference algorithms, which in turn makes it possible to use this approach in real-time applications.
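
    A toy version of the word-level idea: per-letter classifier likelihoods (random stand-ins here) are combined with a prior over a small closed vocabulary by Bayes' rule, so an error in one letter can be corrected by the word context. This is not the paper's discriminative graphical model, only the underlying probabilistic intuition.

    ```python
    import numpy as np

    VOCAB = ["CAT", "CAR", "CAN", "DOG"]        # toy closed vocabulary
    PRIOR = {"CAT": 0.4, "CAR": 0.3, "CAN": 0.2, "DOG": 0.1}
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    rng = np.random.default_rng(7)

    def letter_likelihoods(true_letter, strength=2.0):
        """Stand-in for P300 classifier scores: a noisy bump at the true letter."""
        scores = rng.normal(0.0, 1.0, len(ALPHABET))
        scores[ALPHABET.index(true_letter)] += strength
        e = np.exp(scores - scores.max())
        return dict(zip(ALPHABET, e / e.sum()))

    def decode_word(spelled):
        """Posterior over vocabulary words given per-letter likelihoods."""
        liks = [letter_likelihoods(ch) for ch in spelled]
        post = {w: PRIOR[w] * np.prod([liks[i][w[i]] for i in range(len(w))]) for w in VOCAB}
        z = sum(post.values())
        return {w: p / z for w, p in post.items()}

    print(decode_word("CAR"))   # a noisy letter can be corrected by the word prior
    ```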

  8. Modeling the Effects of Meteorological Conditions on the Neutron Flux

    DTIC Science & Technology

    2017-05-22

    A statistical model is presented that predicts the environmental neutron background as a function of five meteorological variables, including inverse barometric pressure. The most important variable of the model was inverse barometric pressure, with a contribution an order of magnitude larger than that of any other variable. (Only fragments of the report abstract are preserved in this record.)

  9. Decision making generalized by a cumulative probability weighting function

    NASA Astrophysics Data System (ADS)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

    Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller but more immediate reward and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in their probability of being received. In economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are the traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments have confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk-seeking and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed into decision weights by means of a (cumulative) probability weighting function, w(p). In this article we obtain a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already well established in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision-making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and it is supported by phenomenological models.
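
    Two standard one-parameter probability weighting functions, the Tversky-Kahneman (1992) and Prelec (1998) forms, are concrete members of the w(p) family that the article generalizes; the article's own generalized expression is not reproduced here.

    ```python
    import numpy as np

    def w_tversky_kahneman(p, gamma=0.61):
        """Tversky & Kahneman (1992): w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
        p = np.asarray(p, dtype=float)
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    def w_prelec(p, alpha=0.65, beta=1.0):
        """Prelec (1998): w(p) = exp(-beta * (-ln p)**alpha)."""
        p = np.asarray(p, dtype=float)
        return np.exp(-beta * (-np.log(p)) ** alpha)

    p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
    print("p     :", p)
    print("TK1992:", np.round(w_tversky_kahneman(p), 3))   # overweights small p
    print("Prelec:", np.round(w_prelec(p), 3))             # underweights large p
    ```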

  10. The 3D Recognition, Generation, Fusion, Update and Refinement (RG4) Concept

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Cheeseman, Peter; Smelyanskyi, Vadim N.; Kuehnel, Frank; Morris, Robin D.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes an active (real-time) recognition strategy whereby information is inferred iteratively across several viewpoints in descent imagery. We show how we use inverse theory within the context of parametric model generation, namely height and spectral reflection functions, to generate model assertions. Using this strategy in an active context implies that, from every viewpoint, the proposed system must refine its hypotheses taking into account the image and the effect of uncertainties as well. The proposed system employs probabilistic solutions to the problem of iteratively merging information (images) from several viewpoints. This involves feeding the posterior distribution from all previous images as a prior for the next view. New approaches are developed to accelerate the inversion search using novel statistical implementations and to reduce the model complexity using foveated vision. Foveated vision refers to imagery where the resolution varies across the image. In this paper, we allow the model to be foveated, where the highest-resolution region is called the foveation region. Typically, the images will have dynamic control of the location of the foveation region. For descent imagery in the Entry, Descent, and Landing (EDL) process, it is possible to have more than one foveation region. This research initiative is directed towards descent imagery in connection with NASA's EDL applications. Three-Dimensional Model Recognition, Generation, Fusion, Update, and Refinement (RGFUR or RG4) for height and spectral reflection characteristics is in focus for various reasons, one of which is the prospect that their interpretation will provide real-time active vision for automated EDL.
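
    A one-parameter Gaussian caricature of the fusion step: the posterior obtained from previous views is fed in as the prior for the next view, which is the iterative merging idea described above. Terrain-height fields, reflectance functions and foveation are not modeled; all numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    true_height = 12.0                   # the scalar "model parameter" to recover
    prior_mu, prior_var = 0.0, 100.0     # broad initial prior

    for view in range(1, 6):
        sigma_obs = 2.0                  # per-view measurement noise (assumed known)
        z = true_height + rng.normal(0.0, sigma_obs)      # noisy observation from this view
        # Conjugate Gaussian update: yesterday's posterior is today's prior.
        post_var = 1.0 / (1.0 / prior_var + 1.0 / sigma_obs ** 2)
        post_mu = post_var * (prior_mu / prior_var + z / sigma_obs ** 2)
        prior_mu, prior_var = post_mu, post_var
        print(f"view {view}: estimate {post_mu:6.2f} +/- {np.sqrt(post_var):.2f}")
    ```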

  11. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though the current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, the so-called moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to an overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
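
    A sketch of the classical moisture maximization the study builds on: each storm's precipitation is scaled by the ratio of the climatological maximum precipitable water to the storm's precipitable water, and the PMP is taken as the largest maximized value. The bootstrap at the end only hints at the uncertainty; the paper's bivariate extreme-value model is not reproduced, and the storm catalogue is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Invented storm catalogue: observed storm depth (mm) and precipitable water (mm).
    storm_precip = rng.gamma(shape=2.0, scale=40.0, size=50)
    storm_pw = rng.uniform(20.0, 55.0, size=50)
    pw_max = 60.0                                  # climatological maximum precipitable water

    # Classical moisture maximization: scale each storm to maximum moisture availability.
    maximized = storm_precip * (pw_max / storm_pw)
    print(f"deterministic PMP estimate: {maximized.max():.0f} mm")

    # A crude probabilistic flavour: bootstrap the catalogue to express how uncertain
    # that single maximum is (the paper goes further, with a bivariate extremes model).
    boot = [rng.choice(maximized, size=maximized.size, replace=True).max() for _ in range(2000)]
    print(f"bootstrap 5-95% range: {np.percentile(boot, 5):.0f}-{np.percentile(boot, 95):.0f} mm")
    ```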

  12. A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, T.; Luo, H.

    2013-12-01

    On 1 Apr 2007 the megathrust Mw 8.1 Solomon Islands earthquake occurred in the southwest Pacific along the New Britain subduction zone. 102 vertical displacement measurements over the southeastern end of the rupture zone, from two field surveys after this event, provide a unique constraint for slip distribution inversion. In conventional inversion methods (such as bounded variable least squares), the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often subjectively selected at the bend of the trade-off curve. Here a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of the distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 Apr 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than previous studies and highly variable dip slip both along-strike and down-dip.
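
    A drastically reduced, one-dimensional analogue of hierarchical Bayesian slip inversion: Metropolis sampling of slip on a few patches together with the smoothing weight, under a Laplacian smoothness prior. The Green's functions, noise level and step sizes are stand-ins; a real application would use fault-specific Green's functions and a more efficient sampler.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    n_patch, n_obs, sigma_d = 12, 40, 0.05
    G = rng.normal(0.0, 0.1, size=(n_obs, n_patch))       # stand-in Green's functions
    true_slip = np.exp(-0.5 * ((np.arange(n_patch) - 6) / 2.0) ** 2)   # smooth slip patch
    d = G @ true_slip + rng.normal(0.0, sigma_d, n_obs)   # "observed" displacements

    L = np.diff(np.eye(n_patch), 2, axis=0)               # 1-D Laplacian smoothing operator

    def log_post(slip, log_alpha):
        alpha = np.exp(log_alpha)
        misfit = -0.5 * np.sum((d - G @ slip) ** 2) / sigma_d ** 2
        # Gaussian smoothness prior with weight alpha; the log-determinant term keeps
        # alpha identifiable so the data can choose the smoothing strength.
        smooth = -0.5 * alpha * np.sum((L @ slip) ** 2) + 0.5 * (n_patch - 2) * np.log(alpha)
        return misfit + smooth

    slip, log_alpha = np.zeros(n_patch), 0.0
    lp, samples = log_post(slip, log_alpha), []
    for it in range(20_000):
        prop_slip = slip + rng.normal(0.0, 0.01, n_patch)      # joint random-walk proposal
        prop_log_alpha = log_alpha + rng.normal(0.0, 0.1)
        lp_new = log_post(prop_slip, prop_log_alpha)
        if np.log(rng.uniform()) < lp_new - lp:                # Metropolis acceptance
            slip, log_alpha, lp = prop_slip, prop_log_alpha, lp_new
        if it > 5_000 and it % 20 == 0:
            samples.append(slip.copy())

    print("true slip      :", np.round(true_slip, 2))
    print("posterior mean :", np.round(np.mean(samples, axis=0), 2))
    ```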

  13. The composite load spectra project

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H.; Kurth, R. E.

    1990-01-01

    Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.

  14. Meta-Analysis of Lead (Pb) in Multiple Environmental Media in the United States

    EPA Science Inventory

    Introduction: The U.S. Environmental Protection Agency, Office of Research and Development, conducts probabilistic multimedia lead (Pb) exposure modeling to inform the development of health-based benchmarks for Pb in the environment. For this modeling, robust Pb concentration dat...

  15. MODELING AGGREGATE CHLORPYRIFOS EXPOSURE AND DOSE TO CHILDREN

    EPA Science Inventory

    To help address the aggregate exposure assessment needs of the Food Quality Protection Act, a physically-based probabilistic model (SHEDS-Pesticides, version 3) has been applied to estimate aggregate chlorpyrifos exposure and dose to children. Two age groups (0-4, 5-9 years) a...

  16. Thermal conductivity of heterogeneous mixtures and lunar soils

    NASA Technical Reports Server (NTRS)

    Vachon, R. I.; Prakouras, A. G.; Crane, R.; Khader, M. S.

    1973-01-01

    The theoretical evaluation of the effective thermal conductivity of granular materials is discussed with emphasis upon the heat transport properties of lunar soil. The following types of models are compared: probabilistic, parallel isotherm, stochastic, lunar, and a model based on nonlinear heat flow system synthesis.

  17. A probabilistic method for streamflow projection and associated uncertainty analysis in a data sparse alpine region

    NASA Astrophysics Data System (ADS)

    Ren, Weiwei; Yang, Tao; Shi, Pengfei; Xu, Chong-yu; Zhang, Ke; Zhou, Xudong; Shao, Quanxi; Ciais, Philippe

    2018-06-01

    Climate change exerts a profound influence on the regional hydrological cycle and water security in many alpine regions worldwide. Investigating regional climate impacts using watershed-scale hydrological models requires a large amount of input data, such as topography, meteorological and hydrological data. However, data scarcity in alpine regions seriously restricts the evaluation of climate change impacts on the water cycle using conventional approaches based on global or regional climate models, statistical downscaling methods and hydrological models. Therefore, this study is dedicated to the development of a probabilistic model to replace the conventional approaches for streamflow projection. The probabilistic model was built upon an advanced Bayesian Neural Network (BNN) approach directly fed by large-scale climate predictor variables and tested in a typical data-sparse alpine region, the Kaidu River basin in Central Asia. Results show that the BNN model performs better than the general methods across a number of statistical measures. The BNN method, with flexible model structures provided by active indicator functions that reduce the dependence on the initial specification of the input variables and the number of hidden units, can work well in a data-limited region. Moreover, it can provide more reliable streamflow projections with a robust generalization ability. Forced by the latest bias-corrected GCM scenarios, streamflow projections for the 21st century under three RCP emission pathways were constructed and analyzed. In short, the proposed probabilistic projection approach improves runoff predictive ability over conventional methods, provides better support for water resources planning and management under data-limited conditions, and enables a facilitated analysis of climate change impacts on runoff and water resources in alpine regions worldwide.

  18. Development of a Probabilistic Decision-Support Model to Forecast Coastal Resilience

    NASA Astrophysics Data System (ADS)

    Wilson, K.; Safak, I.; Brenner, O.; Lentz, E. E.; Hapke, C. J.

    2016-02-01

    Site-specific forecasts of coastal change are a valuable management tool in preparing for and assessing storm-driven impacts in coastal areas. More specifically, understanding the likelihood of storm impacts, the recovery following events, and the alongshore variability of both is central to evaluating the vulnerability and resiliency of barrier islands. We introduce a probabilistic modeling framework that integrates the hydrodynamic, anthropogenic, and morphologic components of the barrier system to evaluate coastal change at Fire Island, New York. The model is structured on a Bayesian network (BN), which utilizes observations to learn statistical relationships between system variables. In addition to predictive ability, probabilistic models convey the level of confidence associated with a prediction, an important consideration for coastal managers. Our model predicts the likelihood of morphologic change on the upper beach based on several decades of beach monitoring data. A coupled hydrodynamic BN combines probabilistic and deterministic modeling approaches; by querying nearly two decades of nested-grid wave simulations that account for both distant swells and local seas, we produce scenarios of event and seasonal wave climates. The wave scenarios of total water level - the sum of run-up, surge and tide - and anthropogenic modification are the primary drivers of morphologic change in our model structure. Preliminary results show that the hydrodynamic BN is able to reproduce time series of total water levels, a critical validation step before generating scenarios, and that forecasts of geomorphic change over three-month intervals are up to 70% accurate. Predictions of storm-induced change and recovery are linked to evaluate zones of persistent vulnerability or resilience and will help managers target restoration efforts, identify areas most vulnerable to habitat degradation, and highlight resilient zones that may best support the relocation of critical infrastructure.

  19. A minimalist probabilistic description of root zone soil water

    USGS Publications Warehouse

    Milly, P.C.D.

    2001-01-01

    The probabilistic response of depth‐integrated soil water to given climatic forcing can be described readily using an existing supply‐demand‐storage model. An apparently complex interaction of numerous soil, climate, and plant controls can be reduced to a relatively simple expression for the equilibrium probability density function of soil water as a function of only two dimensionless parameters. These are the index of dryness (ratio of mean potential evaporation to mean precipitation) and a dimensionless storage capacity (active root zone soil water capacity divided by mean storm depth). The first parameter is mainly controlled by climate, with surface albedo playing a subsidiary role in determining net radiation. The second is a composite of soil (through moisture retention characteristics), vegetation (through rooting characteristics), and climate (mean storm depth). This minimalist analysis captures many essential features of a more general probabilistic analysis, but with a considerable reduction in complexity and consequent elucidation of the critical controls on soil water variability. In particular, it is shown that (1) the dependence of mean soil water on the index of dryness approaches a step function in the limit of large soil water capacity; (2) soil water variance is usually maximized when the index of dryness equals 1, and the width of the peak varies inversely with dimensionless storage capacity; (3) soil water has a uniform probability density function when the index of dryness is 1 and the dimensionless storage capacity is large; and (4) the soil water probability density function is bimodal if and only if the index of dryness is <1, but this bimodality is pronounced only for artificially small values of the dimensionless storage capacity.
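
    For concreteness, the two dimensionless controls named in the abstract can be evaluated for a hypothetical site as below; the derivation of the equilibrium probability density function itself is not reproduced, and all numbers are illustrative.

    ```python
    # The two dimensionless controls identified in the abstract, evaluated for a
    # hypothetical site (all numbers are illustrative).
    mean_potential_evaporation = 1100.0   # mm / yr
    mean_precipitation = 800.0            # mm / yr
    root_zone_capacity = 120.0            # mm of plant-available water
    mean_storm_depth = 10.0               # mm per storm

    index_of_dryness = mean_potential_evaporation / mean_precipitation
    dimensionless_storage = root_zone_capacity / mean_storm_depth

    print(f"index of dryness       R  = {index_of_dryness:.2f}")
    print(f"dimensionless storage  w0 = {dimensionless_storage:.1f}")
    # R > 1 with a large w0 places this site in the regime where mean soil water is
    # low and its variance comparatively small (cf. points (1)-(2) of the abstract).
    ```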

  20. BN-FLEMOps pluvial - A probabilistic multi-variable loss estimation model for pluvial floods

    NASA Astrophysics Data System (ADS)

    Roezer, V.; Kreibich, H.; Schroeter, K.; Doss-Gollin, J.; Lall, U.; Merz, B.

    2017-12-01

    Pluvial flood events, such as in Copenhagen (Denmark) in 2011, Beijing (China) in 2012 or Houston (USA) in 2016, have caused severe losses to urban dwellings in recent years. These floods are caused by storm events with rainfall rates well above the design levels of urban drainage systems, which lead to inundation of streets and buildings. A projected increase in the frequency and intensity of heavy rainfall events in many areas and ongoing urbanization may increase pluvial flood losses in the future. For an efficient risk assessment and adaptation to pluvial floods, a quantification of the flood risk is needed. Few loss models have been developed specifically for pluvial floods. These models usually use simple water-level- or rainfall-based loss functions and come with very high uncertainties. To account for these uncertainties and improve the loss estimation, we present a probabilistic multi-variable loss estimation model for pluvial floods based on empirical data. The model was developed in a two-step process using a machine learning approach and a comprehensive database comprising 783 records of direct building and content damage of private households. The data were gathered through surveys after four different pluvial flood events in Germany between 2005 and 2014. In a first step, linear and non-linear machine learning algorithms, such as tree-based and penalized regression models, were used to identify the most important loss-influencing factors among a set of 55 candidate variables. These variables comprise hydrological and hydraulic aspects, early warning, precaution, building characteristics and the socio-economic status of the household. In a second step, the most important loss-influencing variables were used to derive a probabilistic multi-variable pluvial flood loss estimation model based on Bayesian networks. Two different networks were tested: a score-based network learned from the data and a network based on expert knowledge. Loss predictions are made through Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. With the ability to cope with incomplete information and use expert knowledge, as well as inherently providing quantitative uncertainty information, it is shown that loss models based on BNs are superior to deterministic approaches for pluvial flood risk assessment.
