Science.gov

Sample records for probabilistic choice models

  1. A Probabilistic, Dynamic, and Attribute-wise Model of Intertemporal Choice

    PubMed Central

    Dai, Junyi; Busemeyer, Jerome R.

    2014-01-01

    Most theoretical and empirical research on intertemporal choice assumes a deterministic and static perspective, leading to the widely adopted delay discounting models. As a form of preferential choice, however, intertemporal choice may be generated by a stochastic process that requires some deliberation time to reach a decision. We conducted three experiments to investigate how choice and decision time varied as a function of manipulations designed to examine the delay duration effect, the common difference effect, and the magnitude effect in intertemporal choice. The results, especially those associated with the delay duration effect, challenged the traditional deterministic and static view and called for alternative approaches. Consequently, various static or dynamic stochastic choice models were explored and fit to the choice data, including alternative-wise models derived from the traditional exponential or hyperbolic discount function and attribute-wise models built upon comparisons of direct or relative differences in money and delay. Furthermore, for the first time, dynamic diffusion models, such as those based on decision field theory, were also fit to the choice and response time data simultaneously. The results revealed that the attribute-wise diffusion model with direct differences, power transformations of objective value and time, and varied diffusion parameter performed the best and could account for all three intertemporal effects. In addition, the empirical relationship between choice proportions and response times was consistent with the prediction of diffusion models and thus favored a stochastic choice process for intertemporal choice that requires some deliberation time to make a decision. PMID:24635188
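
    A rough sketch of the best-performing model class described above: an attribute-wise diffusion process whose drift is built from direct differences of power-transformed money and delay. All function names and parameter values below are illustrative assumptions, not the estimates reported by Dai and Busemeyer.

      import numpy as np

      def simulate_trial(x_ss, t_ss, x_ll, t_ll, w=0.8, a=0.8, b=0.6,
                         sigma=1.0, threshold=1.5, dt=0.01, rng=None):
          # Attribute-wise drift: weighted direct difference of the power-transformed
          # amounts minus the power-transformed delay difference (all values assumed).
          rng = np.random.default_rng() if rng is None else rng
          drift = w * (x_ll**a - x_ss**a) - (1 - w) * (t_ll**b - t_ss**b)
          evidence, t = 0.0, 0.0
          while abs(evidence) < threshold:
              evidence += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return ("larger-later" if evidence > 0 else "smaller-sooner"), t

      # Example: $15 now versus $20 in 30 days, repeated to estimate a choice proportion
      rng = np.random.default_rng(0)
      choices = [simulate_trial(15, 0, 20, 30, rng=rng)[0] for _ in range(500)]
      print(sum(c == "larger-later" for c in choices) / 500)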

  2. The Probabilistic Nature of Preferential Choice

    ERIC Educational Resources Information Center

    Rieskamp, Jorg

    2008-01-01

    Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes…

  3. Probabilistic microcell prediction model

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2002-06-01

    A microcell is a cell with a radius of 1 km or less, suitable for heavily urbanized areas such as metropolitan cities. This paper deals with a microcell prediction model of propagation loss that uses probabilistic techniques. The RSL (Receive Signal Level) is the factor used to evaluate the performance of a microcell, and the LOS (Line-Of-Sight) component and the blockage loss directly affect the RSL. We combine probabilistic methods to obtain these performance factors. The mathematical methods include the CLT (Central Limit Theorem) and SPC (Statistical Process Control) to estimate the parameters of the distribution. This probabilistic solution gives better estimates of the performance factors. In addition, it supports probabilistic optimization of strategies such as the number of cells, cell location, cell capacity, cell range and so on. In particular, the probabilistic optimization technique by itself can be applied to real-world problems such as computer networking, human resources and manufacturing processes.
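
    A minimal sketch of the kind of probabilistic treatment described, assuming entirely made-up link parameters: the RSL is obtained by subtracting randomly drawn LOS path loss and blockage loss from the transmit power, and the resulting sample distribution summarizes microcell performance.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      tx_power_dbm = 30.0                        # hypothetical transmit power
      path_loss_db = rng.normal(110.0, 6.0, n)   # LOS path loss with assumed shadowing (dB)
      blockage_db = rng.exponential(3.0, n)      # hypothetical blockage loss (dB)
      rsl_dbm = tx_power_dbm - path_loss_db - blockage_db

      # By the CLT the sample mean is approximately normal; SPC-style control
      # limits could then be placed at the mean +/- 3 standard errors.
      print(rsl_dbm.mean(), rsl_dbm.std(), np.percentile(rsl_dbm, 5))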

  4. Learning probabilistic document template models via interaction

    NASA Astrophysics Data System (ADS)

    Ahmadullin, Ildus; Damera-Venkata, Niranjan

    2013-03-01

    Document aesthetics measures are key to automated document composition. Recently we presented a probabilistic document model (PDM), a micro-model for document aesthetics based on probabilistic modeling of designer choice in document design. The PDM comes with efficient layout synthesis algorithms once the aesthetic model is defined. A key element of this approach is an aesthetic prior on the parameters of a template, encoding aesthetic preferences for those parameters. Previously, parameters of the prior had to be chosen empirically by designers. In this work we show how probabilistic template models (and hence the PDM cost function) can be learnt directly by observing a designer making design choices while composing sample documents. From such training data, the approach learns a quality measure that can mimic some of the design tradeoffs a designer makes in practice.

  5. Probabilistic Mesomechanical Fatigue Model

    NASA Technical Reports Server (NTRS)

    Tryon, Robert G.

    1997-01-01

    A probabilistic mesomechanical fatigue life model is proposed to link the microstructural material heterogeneities to the statistical scatter in the macrostructural response. The macrostructure is modeled as an ensemble of microelements. Cracks nucleate within the microelements and grow from the microelements to final fracture. Variations of the microelement properties are defined using statistical parameters. A micromechanical slip band decohesion model is used to determine the crack nucleation life and size. A crack tip opening displacement model is used to determine the small crack growth life and size. The Paris law is used to determine the long crack growth life. The models are combined in a Monte Carlo simulation to determine the statistical distribution of total fatigue life for the macrostructure. The modeled response is compared to trends in experimental observations from the literature.
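
    The combination of the three life stages in a Monte Carlo loop can be sketched as below; the distributions, material constants and the closed-form Paris-law integration are illustrative assumptions, not the values used in the report.

      import numpy as np

      rng = np.random.default_rng(1)

      def paris_life(C, m, dS, a0, ac):
          # Closed-form integration of the Paris law da/dN = C*(dK)**m with dK = dS*sqrt(pi*a)
          return (ac**(1 - m / 2) - a0**(1 - m / 2)) / (C * (dS * np.sqrt(np.pi))**m * (1 - m / 2))

      def total_life():
          # Hypothetical distributions standing in for microelement property variation
          n_nucleation = rng.lognormal(mean=11.0, sigma=0.5)   # slip-band decohesion stage
          n_small = rng.lognormal(mean=10.0, sigma=0.6)        # CTOD-based small-crack stage
          dS = rng.normal(200.0, 20.0)                         # stress range in MPa (assumed)
          n_long = paris_life(C=1e-11, m=3.0, dS=dS, a0=1e-4, ac=5e-3)
          return n_nucleation + n_small + n_long

      lives = np.array([total_life() for _ in range(5000)])
      print(np.percentile(lives, [5, 50, 95]))   # statistical scatter in total fatigue life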

  6. Firm competition in a probabilistic framework of consumer choice

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Xiao, Rui; Chen, Duanbing; Medo, Matúš; Zhang, Yi-Cheng

    2014-04-01

    We develop a probabilistic consumer choice framework based on information asymmetry between consumers and firms. This framework makes it possible to study market competition of several firms by both quality and price of their products. We find Nash market equilibria and other optimal strategies in various situations ranging from competition of two identical firms to firms of different sizes and firms which improve their efficiency.

  7. A Discounting Framework for Choice With Delayed and Probabilistic Rewards

    PubMed Central

    Green, Leonard; Myerson, Joel

    2005-01-01

    When choosing between delayed or uncertain outcomes, individuals discount the value of such outcomes on the basis of the expected time to or the likelihood of their occurrence. In an integrative review of the expanding experimental literature on discounting, the authors show that although the same form of hyperbola-like function describes discounting of both delayed and probabilistic outcomes, a variety of recent findings are inconsistent with a single-process account. The authors also review studies that compare discounting in different populations and discuss the theoretical and practical implications of the findings. The present effort illustrates the value of studying choice involving both delayed and probabilistic outcomes within a general discounting framework that uses similar experimental procedures and a common analytical approach. PMID:15367080
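
    The hyperbola-like form discussed in the review is compact enough to state directly; the sketch below uses the commonly cited parameterization, with purely illustrative parameter values, for both delayed and probabilistic outcomes (the latter via the odds against receiving the reward).

      def delay_discounted_value(amount, delay, k, s=1.0):
          # Hyperbola-like delay discounting: V = A / (1 + k*D)**s
          return amount / (1.0 + k * delay) ** s

      def probability_discounted_value(amount, p, h, s=1.0):
          # Same functional form with the odds against, theta = (1 - p) / p
          theta = (1.0 - p) / p
          return amount / (1.0 + h * theta) ** s

      # Illustrative (hypothetical) parameters: $100 delayed 30 days vs. $100 with p = .5
      print(delay_discounted_value(100, 30, k=0.05),
            probability_discounted_value(100, 0.5, h=1.0))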

  8. Effects of Time between Trials on Rats' and Pigeons' Choices with Probabilistic Delayed Reinforcers

    PubMed Central

    Mazur, James E; Biondi, Dawn R

    2011-01-01

    Parallel experiments with rats and pigeons examined reasons for previous findings that in choices with probabilistic delayed reinforcers, rats' choices were affected by the time between trials whereas pigeons' choices were not. In both experiments, the animals chose between a standard alternative and an adjusting alternative. A choice of the standard alternative led to a short delay (1 s or 3 s), and then food might or might not be delivered. If food was not delivered, there was an “interlink interval,” and then the animal was forced to continue to select the standard alternative until food was delivered. A choice of the adjusting alternative always led to food after a delay that was systematically increased and decreased over trials to estimate an indifference point—a delay at which the two alternatives were chosen about equally often. Under these conditions, the indifference points for both rats and pigeons increased as the interlink interval increased from 0 s to 20 s, indicating decreased preference for the probabilistic reinforcer with longer time between trials. The indifference points from both rats and pigeons were well described by the hyperbolic-decay model. In the last phase of each experiment, the animals were not forced to continue selecting the standard alternative if food was not delivered. Under these conditions, rats' choices were affected by the time between trials whereas pigeons' choices were not, replicating results of previous studies. The differences between the behavior of rats and pigeons appear to be the result of procedural details, not a fundamental difference in how these two species make choices with probabilistic delayed reinforcers. PMID:21541170

  9. Effects of time between trials on rats' and pigeons' choices with probabilistic delayed reinforcers.

    PubMed

    Mazur, James E; Biondi, Dawn R

    2011-01-01

    Parallel experiments with rats and pigeons examined reasons for previous findings that in choices with probabilistic delayed reinforcers, rats' choices were affected by the time between trials whereas pigeons' choices were not. In both experiments, the animals chose between a standard alternative and an adjusting alternative. A choice of the standard alternative led to a short delay (1 s or 3 s), and then food might or might not be delivered. If food was not delivered, there was an "interlink interval," and then the animal was forced to continue to select the standard alternative until food was delivered. A choice of the adjusting alternative always led to food after a delay that was systematically increased and decreased over trials to estimate an indifference point--a delay at which the two alternatives were chosen about equally often. Under these conditions, the indifference points for both rats and pigeons increased as the interlink interval increased from 0 s to 20 s, indicating decreased preference for the probabilistic reinforcer with longer time between trials. The indifference points from both rats and pigeons were well described by the hyperbolic-decay model. In the last phase of each experiment, the animals were not forced to continue selecting the standard alternative if food was not delivered. Under these conditions, rats' choices were affected by the time between trials whereas pigeons' choices were not, replicating results of previous studies. The differences between the behavior of rats and pigeons appear to be the result of procedural details, not a fundamental difference in how these two species make choices with probabilistic delayed reinforcers. PMID:21541170

  10. Probabilistic choice between symmetric disparities in motion stereo matching for a lateral navigation system

    NASA Astrophysics Data System (ADS)

    Ershov, Egor; Karnaukhov, Victor; Mozerov, Mikhail

    2016-02-01

    Two consecutive frames of a lateral navigation camera video sequence can be considered as an appropriate approximation to epipolar stereo. To overcome edge-aware inaccuracy caused by occlusion, we propose a model that matches the current frame to the next and to the previous one. The positive disparity of a match to the previous frame has a symmetric negative disparity to the next frame. The proposed algorithm performs a probabilistic choice, for each matched pixel, between the positive disparity and its symmetric disparity cost. A disparity map obtained by optimization over the cost volume composed of the proposed probabilistic choices is more accurate than the traditional left-to-right and right-to-left disparity-map cross-check. Our algorithm also requires about half as many computational operations per pixel as the cross-check technique. The effectiveness of the approach is demonstrated on synthetic data and real video sequences with ground-truth values.

  11. Effects of Time between Trials on Rats' and Pigeons' Choices with Probabilistic Delayed Reinforcers

    ERIC Educational Resources Information Center

    Mazur, James E.; Biondi, Dawn R.

    2011-01-01

    Parallel experiments with rats and pigeons examined reasons for previous findings that in choices with probabilistic delayed reinforcers, rats' choices were affected by the time between trials whereas pigeons' choices were not. In both experiments, the animals chose between a standard alternative and an adjusting alternative. A choice of the…

  12. Relative gains, losses, and reference points in probabilistic choice in rats.

    PubMed

    Marshall, Andrew T; Kirkpatrick, Kimberly

    2015-01-01

    Theoretical reference points have been proposed to differentiate probabilistic gains from probabilistic losses in humans, but such a phenomenon in non-human animals has yet to be thoroughly elucidated. Three experiments evaluated the effect of reward magnitude on probabilistic choice in rats, seeking to determine reference point use by examining the effect of previous outcome magnitude(s) on subsequent choice behavior. Rats were trained to choose between an outcome that always delivered reward (low-uncertainty choice) and one that probabilistically delivered reward (high-uncertainty). The probability of high-uncertainty outcome receipt and the magnitudes of low-uncertainty and high-uncertainty outcomes were manipulated within and between experiments. Both the low- and high-uncertainty outcomes involved variable reward magnitudes, so that either a smaller or larger magnitude was probabilistically delivered, as well as reward omission following high-uncertainty choices. In Experiments 1 and 2, the between groups factor was the magnitude of the high-uncertainty-smaller (H-S) and high-uncertainty-larger (H-L) outcome, respectively. The H-S magnitude manipulation differentiated the groups, while the H-L magnitude manipulation did not. Experiment 3 showed that manipulating the probability of differential losses as well as the expected value of the low-uncertainty choice produced systematic effects on choice behavior. The results suggest that the reference point for probabilistic gains and losses was the expected value of the low-uncertainty choice. Current theories of probabilistic choice behavior have difficulty accounting for the present results, so an integrated theoretical framework is proposed. Overall, the present results have implications for understanding individual differences and corresponding underlying mechanisms of probabilistic choice behavior. PMID:25658448

  13. Relative Gains, Losses, and Reference Points in Probabilistic Choice in Rats

    PubMed Central

    Marshall, Andrew T.; Kirkpatrick, Kimberly

    2015-01-01

    Theoretical reference points have been proposed to differentiate probabilistic gains from probabilistic losses in humans, but such a phenomenon in non-human animals has yet to be thoroughly elucidated. Three experiments evaluated the effect of reward magnitude on probabilistic choice in rats, seeking to determine reference point use by examining the effect of previous outcome magnitude(s) on subsequent choice behavior. Rats were trained to choose between an outcome that always delivered reward (low-uncertainty choice) and one that probabilistically delivered reward (high-uncertainty). The probability of high-uncertainty outcome receipt and the magnitudes of low-uncertainty and high-uncertainty outcomes were manipulated within and between experiments. Both the low- and high-uncertainty outcomes involved variable reward magnitudes, so that either a smaller or larger magnitude was probabilistically delivered, as well as reward omission following high-uncertainty choices. In Experiments 1 and 2, the between groups factor was the magnitude of the high-uncertainty-smaller (H-S) and high-uncertainty-larger (H-L) outcome, respectively. The H-S magnitude manipulation differentiated the groups, while the H-L magnitude manipulation did not. Experiment 3 showed that manipulating the probability of differential losses as well as the expected value of the low-uncertainty choice produced systematic effects on choice behavior. The results suggest that the reference point for probabilistic gains and losses was the expected value of the low-uncertainty choice. Current theories of probabilistic choice behavior have difficulty accounting for the present results, so an integrated theoretical framework is proposed. Overall, the present results have implications for understanding individual differences and corresponding underlying mechanisms of probabilistic choice behavior. PMID:25658448

  14. Probabilistic Modeling of Rosette Formation

    PubMed Central

    Long, Mian; Chen, Juan; Jiang, Ning; Selvaraj, Periasamy; McEver, Rodger P.; Zhu, Cheng

    2006-01-01

    Rosetting, or forming a cell aggregate between a single target nucleated cell and a number of red blood cells (RBCs), is a simple assay for cell adhesion mediated by specific receptor-ligand interaction. For example, rosette formation between sheep RBC and human lymphocytes has been used to differentiate T cells from B cells. Rosetting assay is commonly used to determine the interaction of Fc γ-receptors (FcγR) expressed on inflammatory cells and IgG coated on RBCs. Despite its wide use in measuring cell adhesion, the biophysical parameters of rosette formation have not been well characterized. Here we developed a probabilistic model to describe the distribution of rosette sizes, which is Poissonian. The average rosette size is predicted to be proportional to the apparent two-dimensional binding affinity of the interacting receptor-ligand pair and their site densities. The model has been supported by experiments of rosettes mediated by four molecular interactions: FcγRIII interacting with IgG, T cell receptor and coreceptor CD8 interacting with antigen peptide presented by major histocompatibility molecule, P-selectin interacting with P-selectin glycoprotein ligand 1 (PSGL-1), and L-selectin interacting with PSGL-1. The latter two are structurally similar and are different from the former two. Fitting the model to data enabled us to evaluate the apparent effective two-dimensional binding affinity of the interacting molecular pairs: 7.19 × 10^-5 μm^4 for FcγRIII-IgG interaction, 4.66 × 10^-3 μm^4 for P-selectin-PSGL-1 interaction, and 0.94 × 10^-3 μm^4 for L-selectin-PSGL-1 interaction. These results elucidate the biophysical mechanism of rosette formation and enable it to become a semiquantitative assay that relates the rosette size to the effective affinity for receptor-ligand binding. PMID:16603493
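
    Reading the stated prediction literally, a small sketch of the size distribution might look as follows; the proportionality constant and the site densities are hypothetical, and only the example affinity value is taken from the abstract.

      import numpy as np
      from scipy.stats import poisson

      def expected_rosette_size(affinity_um4, receptor_density, ligand_density, scale=10.0):
          # The abstract's prediction taken at face value: average rosette size proportional
          # to the apparent 2-D affinity (um^4) times the two site densities (um^-2).
          # 'scale' is a hypothetical proportionality constant, not a fitted quantity.
          return scale * affinity_um4 * receptor_density * ligand_density

      mu = expected_rosette_size(7.19e-5, 30.0, 50.0)    # FcγRIII-IgG example affinity
      sizes = np.arange(11)
      print(dict(zip(sizes.tolist(), poisson.pmf(sizes, mu).round(3))))  # Poissonian size distribution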

  15. Probabilistic Survivability Versus Time Modeling

    NASA Technical Reports Server (NTRS)

    Joyner, James J., Sr.

    2015-01-01

    This technical paper documents the Kennedy Space Center Independent Assessment team's work completed on three assessments for the Ground Systems Development and Operations (GSDO) Program to assist the Chief Safety and Mission Assurance Officer (CSO) and GSDO management during key programmatic reviews. The assessments provided the GSDO Program with an analysis of how egress time affects the likelihood of astronaut and worker survival during an emergency. For each assessment, the team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine if a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building (VAB). Based on the composite survivability versus time graphs from the first two assessments, there was a soft knee in the Figure of Merit graphs at eight minutes (ten minutes after egress ordered). Thus, the graphs illustrated to the decision makers that the final emergency egress design selected should have the capability of transporting the flight crew from the top of LC-39B to a safe location in eight minutes or less. Results for the third assessment were dominated by hazards that were classified as instantaneous in nature (e.g. stacking mishaps) and therefore had no effect on survivability vs time to egress the VAB. VAB emergency scenarios that degraded over time (e.g. fire) produced survivability vs time graphs that were in line with aerospace industry norms.

  16. Choice Behavior in Pigeons Maintained with Probabilistic Schedules of Reinforcement

    ERIC Educational Resources Information Center

    Moore, Jay; Friedlen, Karen E.

    2007-01-01

    Pigeons were trained in three experiments with a two-key, concurrent-chains choice procedure. The initial links were equal variable-interval schedules, and the terminal links were random-time schedules with equal average interreinforcement intervals. Across the three experiments, the pigeons either stayed in a terminal link until a reinforcer was…

  17. Modelling structured data with Probabilistic Graphical Models

    NASA Astrophysics Data System (ADS)

    Forbes, F.

    2016-05-01

    Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption not realistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph not necessarily regular as a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and Hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations are given associated with some practical work.

  18. The Repeated Insertion Model for Rankings: Missing Link between Two Subset Choice Models

    ERIC Educational Resources Information Center

    Doignon, Jean-Paul; Pekec, Aleksandar; Regenwetter, Michel

    2004-01-01

    Several probabilistic models for subset choice have been proposed in the literature, for example, to explain approval voting data. We show that Marley et al.'s latent scale model is subsumed by Falmagne and Regenwetter's size-independent model, in the sense that every choice probability distribution generated by the former can also be explained by…

  19. A Probabilistic Model of Melody Perception

    ERIC Educational Resources Information Center

    Temperley, David

    2008-01-01

    This study presents a probabilistic model of melody perception, which infers the key of a melody and also judges the probability of the melody itself. The model uses Bayesian reasoning: For any "surface" pattern and underlying "structure," we can infer the structure maximizing P(structure [vertical bar] surface) based on knowledge of P(surface,…
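
    The Bayesian step described can be illustrated with a toy key-inference example; the two pitch-class profiles and the prior below are placeholders, not Temperley's actual probabilities.

      import numpy as np

      # Toy Bayesian key inference: infer the key maximizing P(key | notes),
      # proportional to P(key) * prod_i P(note_i | key).
      profiles = {
          "C major": np.array([.18, .01, .12, .01, .14, .10, .02, .16, .01, .12, .02, .11]),
          "A minor": np.array([.13, .01, .11, .02, .14, .10, .02, .12, .02, .18, .02, .13]),
      }
      prior = {"C major": 0.5, "A minor": 0.5}
      melody = [0, 4, 7, 4, 0]   # pitch classes of a short melody (C E G E C)

      posts = {k: prior[k] * np.prod(p[melody]) for k, p in profiles.items()}
      total = sum(posts.values())
      print({k: v / total for k, v in posts.items()})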

  20. Probabilistic Seismic Risk Model for Western Balkans

    NASA Astrophysics Data System (ADS)

    Stejskal, Vladimir; Lorenzo, Francisco; Pousse, Guillaume; Radovanovic, Slavica; Pekevski, Lazo; Dojcinovski, Dragi; Lokin, Petar; Petronijevic, Mira; Sipka, Vesna

    2010-05-01

    A probabilistic seismic risk model for insurance and reinsurance purposes is presented for an area of Western Balkans, covering former Yugoslavia and Albania. This territory experienced many severe earthquakes during past centuries producing significant damage to many population centres in the region. The highest hazard is related to external Dinarides, namely to the collision zone of the Adriatic plate. The model is based on a unified catalogue for the region and a seismic source model consisting of more than 30 zones covering all the three main structural units - Southern Alps, Dinarides and the south-western margin of the Pannonian Basin. A probabilistic methodology using Monte Carlo simulation was applied to generate the hazard component of the model. Unique set of damage functions based on both loss experience and engineering assessments is used to convert the modelled ground motion severity into the monetary loss.

  1. Probabilistic Models for Solar Particle Events

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.

    2009-01-01

    Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst case spectrum as a function of confidence level. The spectral representation that best fits these worst case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.

  2. Probabilistic drought classification using gamma mixture models

    NASA Astrophysics Data System (ADS)

    Mallya, Ganeshchandra; Tripathi, Shivam; Govindaraju, Rao S.

    2015-07-01

    Drought severity is commonly reported using drought classes obtained by assigning pre-defined thresholds on drought indices. Current drought classification methods ignore modeling uncertainties and provide discrete drought classification. However, the users of drought classification are often interested in knowing inherent uncertainties in classification so that they can make informed decisions. Recent studies have used hidden Markov models (HMM) for quantifying uncertainties in drought classification. The HMM method conceptualizes drought classes as distinct hydrological states that are not observed (hidden) but affect observed hydrological variables. The number of drought classes or hidden states in the model is pre-specified, which can sometimes result in a model over-specification problem. This study proposes an alternate method for probabilistic drought classification where the number of states in the model is determined by the data. The proposed method adapts the Standardized Precipitation Index (SPI) methodology of drought classification by employing a gamma mixture model (Gamma-MM) in a Bayesian framework. The method alleviates the problem of choosing a suitable distribution for fitting data in SPI analysis, quantifies modeling uncertainties, and propagates them for probabilistic drought classification. The method is tested on rainfall data over India. Comparison of the results with standard SPI shows important differences, particularly when SPI assumptions on data distribution are violated. Further, the new method is simpler and more parsimonious than the HMM-based drought classification method and can be a viable alternative for probabilistic drought classification.
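
    For orientation, the conventional SPI step that the proposed Gamma-MM generalizes can be sketched as follows (synthetic data and a single gamma fit rather than the Bayesian mixture of the paper).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      precip = rng.gamma(shape=2.0, scale=40.0, size=360)   # synthetic monthly precipitation

      # Standard SPI step: fit a single gamma distribution, then map each observation
      # through its CDF to a standard-normal quantile.
      shape, loc, scale = stats.gamma.fit(precip, floc=0)
      spi = stats.norm.ppf(stats.gamma.cdf(precip, shape, loc=loc, scale=scale))

      # Probabilistic drought classes would instead come from a gamma *mixture*;
      # here we only tabulate the conventional discrete classes for comparison.
      print(np.histogram(spi, bins=[-np.inf, -2, -1.5, -1, np.inf])[0])  # extreme/severe/moderate/none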

  3. Transitions in a probabilistic interface growth model

    NASA Astrophysics Data System (ADS)

    Alves, S. G.; Moreira, J. G.

    2011-04-01

    We study a generalization of the Wolf-Villain (WV) interface growth model based on a probabilistic growth rule. In the WV model, particles are randomly deposited onto a substrate and subsequently move to a nearby position where the binding is strongest. We introduce a growth probability that is proportional to a power ν of the number n_i of bindings at site i, p_i ∝ n_i^ν. …

  4. Probabilistic, meso-scale flood loss modelling

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during recent years, they are still not standard practice for flood risk assessments and even more for flood loss modelling. State of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood of the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models including stage-damage functions as well as multi-variate models. On the other hand the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A.; Kreibich, H.; Merz, B.; Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
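
    A toy sketch of how bagging decision trees yield a loss distribution rather than a point estimate; the synthetic predictors and the scikit-learn calls below are illustrative and are not the BT-FLEMO implementation.

      import numpy as np
      from sklearn.ensemble import BaggingRegressor
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(3)
      # Synthetic stand-ins for loss predictors (e.g. water depth, duration, building value)
      X = rng.uniform(size=(500, 3))
      y = 0.6 * X[:, 0] + 0.2 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 500)  # relative loss

      model = BaggingRegressor(DecisionTreeRegressor(max_depth=4),
                               n_estimators=100, random_state=0).fit(X, y)

      # The spread across the individual trees acts as a rough predictive distribution
      # for a new case, which is the kind of probabilistic output described above.
      x_new = np.array([[0.8, 0.5, 0.4]])
      tree_preds = np.array([tree.predict(x_new)[0] for tree in model.estimators_])
      print(tree_preds.mean(), np.percentile(tree_preds, [5, 95]))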

  5. The probabilistic structure of planetary contamination models

    NASA Technical Reports Server (NTRS)

    Harrison, J. M.; North, W. D.

    1973-01-01

    The analytical basis for planetary quarantine standards and procedures is presented. The hierarchy of planetary quarantine decisions is explained and emphasis is placed on the determination of mission specifications to include sterilization. The influence of the Sagan-Coleman probabilistic model of planetary contamination on current standards and procedures is analyzed. A classical problem in probability theory which provides a close conceptual parallel to the type of dependence present in the contamination problem is presented.

  6. Analytic gain in probabilistic decompression sickness models.

    PubMed

    Howle, Laurens E

    2013-11-01

    Decompression sickness (DCS) is a disease known to be related to inert gas bubble formation originating from gases dissolved in body tissues. Probabilistic DCS models, which employ survival and hazard functions, are optimized by fitting model parameters to experimental dive data. In the work reported here, I develop methods to find the survival function gain parameter analytically, thus removing it from the fitting process. I show that the number of iterations required for model optimization is significantly reduced. The analytic gain method substantially improves the condition number of the Hessian matrix which reduces the model confidence intervals by more than an order of magnitude. PMID:24209920

  7. Modelling default and likelihood reasoning as probabilistic

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  8. Probabilistic Solar Energetic Particle Models

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Dietrich, William F.; Xapsos, Michael A.

    2011-01-01

    To plan and design safe and reliable space missions, it is necessary to take into account the effects of the space radiation environment. This is done by setting the goal of achieving safety and reliability with some desired level of confidence. To achieve this goal, a worst-case space radiation environment at the required confidence level must be obtained. Planning and designing then proceeds, taking into account the effects of this worst-case environment. The result will be a mission that is reliable against the effects of the space radiation environment at the desired confidence level. In this paper we will describe progress toward developing a model that provides worst-case space radiation environments at user-specified confidence levels. We will present a model for worst-case event-integrated solar proton environments that provide the worst-case differential proton spectrum. This model is based on data from IMP-8 and GOES spacecraft that provide a data base extending from 1974 to the present. We will discuss extending this work to create worst-case models for peak flux and mission-integrated fluence for protons. We will also describe plans for similar models for helium and heavier ions.

  9. A General Model for Preferential and Triadic Choice in Terms of Central F Distribution Functions.

    ERIC Educational Resources Information Center

    Ennis, Daniel M; Johnson, Norman L.

    1994-01-01

    A model for preferential and triadic choice is derived in terms of weighted sums of central F distribution functions. It is a probabilistic generalization of Coombs' (1964) unfolding model from which special cases can be derived easily. This model for binary choice can be easily related to preference ratio judgments. (SLD)

  10. Probabilistic models for feedback systems.

    SciTech Connect

    Grace, Matthew D.; Boggs, Paul T.

    2011-02-01

    In previous work, we developed a Bayesian-based methodology to analyze the reliability of hierarchical systems. The output of the procedure is a statistical distribution of the reliability, thus allowing many questions to be answered. The principal advantage of the approach is that along with an estimate of the reliability, we also can provide statements of confidence in the results. The model is quite general in that it allows general representations of all of the distributions involved, it incorporates prior knowledge into the models, it allows errors in the 'engineered' nodes of a system to be determined by the data, and leads to the ability to determine optimal testing strategies. In this report, we provide the preliminary steps necessary to extend this approach to systems with feedback. Feedback is an essential component of 'complexity' and provides interesting challenges in modeling the time-dependent action of a feedback loop. We provide a mechanism for doing this and analyze a simple case. We then consider some extensions to more interesting examples with local control affecting the entire system. Finally, a discussion of the status of the research is also included.

  11. Is probability matching smart? Associations between probabilistic choices and cognitive ability.

    PubMed

    Stanovich, Keith E

    2003-03-01

    In three experiments involving over 1,500 university students (n = 1,557) and two different probabilistic choice tasks, we found that the utility-maximizing strategy of choosing the most probable alternative was not the majority response. In a story problem version of a probabilistic choice task in which participants chose from among five different strategies, the maximizing response and the probability-matching response were each selected by a similar number of students (roughly 35% of the sample selected each). In a more continuous, or trial-by-trial, task, the utility-maximizing response was chosen by only half as many students as the probability-matching response. More important, in both versions of the task, the participants preferring the utility-maximizing response were significantly higher in cognitive ability than were the participants showing a probability-matching tendency. Critiques of the traditional interpretation of probability matching as nonoptimal may well help explain why some humans are drawn to the nonmaximizing behavior of probability matching, but the traditional heuristics and biases interpretation can most easily accommodate the finding that participants high in computational ability are more likely to carry out the rule-based cognitive procedures that lead to maximizing behavior. PMID:12749466
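
    The expected-accuracy gap between the two strategies is easy to make concrete; the calculation below assumes a task in which one alternative is correct with probability p on each trial.

      # Expected accuracy of the two strategies (illustrative calculation).
      p = 0.7
      maximizing = p                         # always pick the more probable alternative
      matching = p * p + (1 - p) * (1 - p)   # pick each alternative as often as it occurs
      print(maximizing, matching)            # 0.70 vs 0.58: matching is non-optimal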

  12. Probabilistic computer model of optimal runway turnoffs

    NASA Technical Reports Server (NTRS)

    Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.

    1985-01-01

    Landing delays are currently a problem at major air carrier airports and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category is defined. The model includes an algorithm for lateral ride comfort limits.

  13. Opportunities of probabilistic flood loss models

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno

    2016-04-01

    Oftentimes, traditional uni-variate damage models such as depth-damage curves fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of the model results. Innovative multi-variate probabilistic modelling approaches promise to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage-damage functions. For model evaluation we use empirical damage data which are available from computer-aided telephone interviews that were respectively compiled after the floods in 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), as well as in terms of the sharpness of the predictions and their reliability, which is represented by the proportion of observations that fall within the 5-quantile to 95-quantile predictive interval. The comparison of the uni-variable stage-damage function and the multi-variable model approach emphasises the importance of quantifying predictive uncertainty. With each explanatory variable, the multi-variable model reveals an additional source of uncertainty. However, the predictive performance in terms of bias (mbe), precision (mae) and reliability (HR) is clearly improved.

  14. Modeling Spanish Mood Choice in Belief Statements

    ERIC Educational Resources Information Center

    Robinson, Jason R.

    2013-01-01

    This work develops a computational methodology new to linguistics that empirically evaluates competing linguistic theories on Spanish verbal mood choice through the use of computational techniques to learn mood and other hidden linguistic features from Spanish belief statements found in corpora. The machine learned probabilistic linguistic models…

  15. A probabilistic graphical model based stochastic input model construction

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2014-09-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media.

  16. Probabilistic Resilience in Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Panerati, Jacopo; Beltrame, Giovanni; Schwind, Nicolas; Zeltner, Stefan; Inoue, Katsumi

    2016-05-01

    Originally defined in the context of ecological systems and environmental sciences, resilience has grown to be a property of major interest for the design and analysis of many other complex systems: resilient networks and robotic systems offer the desirable capability of absorbing disruption and transforming in response to external shocks, while still providing the services they were designed for. Starting from an existing formalization of resilience for constraint-based systems, we develop a probabilistic framework based on hidden Markov models. In doing so, we introduce two new important features: stochastic evolution and partial observability. Using our framework, we formalize a methodology for the evaluation of probabilities associated with generic properties, we describe an efficient algorithm for the computation of its essential inference step, and show that its complexity is comparable to other state-of-the-art inference algorithms.
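
    The essential inference step of a hidden Markov model can be sketched with the standard forward recursion; the transition, emission and initial probabilities below are made up and unrelated to the paper's case studies.

      import numpy as np

      # Minimal forward algorithm for a 2-state HMM with 2 possible observations.
      A = np.array([[0.9, 0.1],    # state-transition probabilities
                    [0.3, 0.7]])
      B = np.array([[0.8, 0.2],    # P(observation | state)
                    [0.1, 0.9]])
      pi = np.array([0.5, 0.5])
      obs = [0, 0, 1, 1, 0]

      alpha = pi * B[:, obs[0]]
      for o in obs[1:]:
          alpha = (alpha @ A) * B[:, o]
      print(alpha.sum())   # likelihood of the observation sequence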

  17. Multivariate Probabilistic Analysis of an Hydrological Model

    NASA Astrophysics Data System (ADS)

    Franceschini, Samuela; Marani, Marco

    2010-05-01

    Model predictions based on rainfall measurements and hydrological model results are often limited by the systematic error of measuring instruments, by the intrinsic variability of the natural processes and by the uncertainty of the mathematical representation. We propose a means to identify such sources of uncertainty and to quantify their effects based on point-estimate approaches, as a valid alternative to cumbersome Monte Carlo methods. We present uncertainty analyses on the hydrologic response to selected meteorological events, in the mountain streamflow-generating portion of the Brenta basin at Bassano del Grappa, Italy. The Brenta river catchment has a relatively uniform morphology and quite a heterogeneous rainfall pattern. In the present work, we evaluate two sources of uncertainty: data uncertainty (the uncertainty due to data handling and analysis) and model uncertainty (the uncertainty related to the formulation of the model). We thus evaluate the effects of the measurement error of tipping-bucket rain gauges, the uncertainty in estimating spatially-distributed rainfall through block kriging, and the uncertainty associated with estimated model parameters. To this end, we coupled a deterministic model based on the geomorphological theory of the hydrologic response to probabilistic methods. In particular we compare the results of Monte Carlo Simulations (MCS) to the results obtained, in the same conditions, using Li's Point Estimate Method (LiM). The LiM is a probabilistic technique that approximates the continuous probability distribution function of the considered stochastic variables by means of discrete points and associated weights. This makes it possible to reproduce results satisfactorily with only a few evaluations of the model function. The comparison between the LiM and MCS results highlights the pros and cons of using an approximating method. LiM is less computationally demanding than MCS, but has limited applicability especially when the model…

  18. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.
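
    For comparison with the classification approach, the discrete choice baseline is typically a multinomial logit; a minimal sketch with hypothetical part-worths follows.

      import numpy as np

      def choice_probabilities(part_worths, options):
          # Multinomial-logit discrete choice: P(option) = exp(u) / sum(exp(u)),
          # with utility u as the sum of part-worths of the option's levels.
          utilities = np.array([sum(part_worths[level] for level in opt) for opt in options])
          expu = np.exp(utilities - utilities.max())
          return expu / expu.sum()

      # Hypothetical part-worths for a two-attribute example (price level, brand)
      pw = {"low price": 1.0, "high price": -0.5, "brand A": 0.8, "brand B": 0.2}
      print(choice_probabilities(pw, [("low price", "brand B"), ("high price", "brand A")]))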

  19. scoringRules - A software package for probabilistic model evaluation

    NASA Astrophysics Data System (ADS)

    Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian

    2016-04-01

    Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. Two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, Bayesian forecasts produced via Markov Chain Monte Carlo take this form. The scoringRules package thereby provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
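
    The scoringRules package itself is written in R; as a language-neutral illustration of what such a scoring rule computes, the sketch below implements the standard sample-based CRPS estimator (this is not the package's API).

      import numpy as np

      def crps_sample(sample, y):
          # Sample-based CRPS estimator: CRPS = E|X - y| - 0.5 * E|X - X'|,
          # with X, X' drawn independently from the forecast distribution.
          sample = np.asarray(sample)
          term1 = np.mean(np.abs(sample - y))
          term2 = 0.5 * np.mean(np.abs(sample[:, None] - sample[None, :]))
          return term1 - term2

      rng = np.random.default_rng(4)
      forecast_draws = rng.normal(1.0, 2.0, 1000)   # e.g. MCMC draws from a predictive distribution
      print(crps_sample(forecast_draws, y=0.3))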

  20. A Probabilistic Approach to Model Update

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.

    2001-01-01

    Finite element models are often developed for load validation, structural certification, response prediction, and the study of alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.

  1. EFFECTS OF CORRELATED PROBABILISTIC EXPOSURE MODEL INPUTS ON SIMULATED RESULTS

    EPA Science Inventory

    In recent years, more probabilistic models have been developed to quantify aggregate human exposures to environmental pollutants. The impact of correlation among inputs in these models is an important issue, which has not been resolved. Obtaining correlated data and implementi...

  2. Approaches to implementing deterministic models in a probabilistic framework

    SciTech Connect

    Talbott, D.V.

    1995-04-01

    The increasing use of results from probabilistic risk assessments in the decision-making process makes it ever more important to eliminate simplifications in probabilistic models that might lead to conservative results. One area in which conservative simplifications are often made is modeling the physical interactions that occur during the progression of an accident sequence. This paper demonstrates and compares different approaches for incorporating deterministic models of physical parameters into probabilistic models; parameter range binning, response curves, and integral deterministic models. An example that combines all three approaches in a probabilistic model for the handling of an energetic material (i.e. high explosive, rocket propellant,...) is then presented using a directed graph model.

  3. Probabilistic model better defines development well risks

    SciTech Connect

    Connolly, M.R.

    1996-10-14

    Probabilistic techniques to compare and rank projects, such as the drilling of development wells, often are more representative than decision tree or deterministic approaches. As opposed to traditional deterministic methods, probabilistic analysis gives decision-makers ranges of outcomes with associated probabilities of occurrence. This article analyzes the drilling of a hypothetical development well with actual field data (such as stabilized initial rates, production declines, and gas/oil ratios) to calculate probabilistic reserves, and production flow streams. Analog operating data were included to build distributions for capital and operating costs. Economics from the Monte Carlo simulation include probabilistic production flow streams and cost distributions. Results include single parameter distributions (reserves, net present value, and profitability index) and time function distributions (annual production and net cash flow).
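
    A compact sketch of the kind of Monte Carlo economics described; every distribution and constant below is illustrative rather than taken from the article's field data.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 10_000
      initial_rate = rng.lognormal(np.log(300), 0.3, n)   # stabilized initial rate, bbl/d
      decline = rng.uniform(0.15, 0.35, n)                 # exponential decline, 1/yr
      opex_per_bbl = rng.normal(12.0, 2.0, n)              # operating cost, $/bbl
      price, capex, disc = 60.0, 3.0e6, 0.10               # assumed price, drilling cost, discount rate

      years = np.arange(1, 11)
      # Crude annual volumes from the end-of-year rate (a simplification for the sketch)
      production = initial_rate[:, None] * 365 * np.exp(-decline[:, None] * years)
      cash_flow = production * (price - opex_per_bbl[:, None])
      npv = (cash_flow / (1 + disc) ** years).sum(axis=1) - capex
      print(np.mean(npv > 0), np.percentile(npv, [10, 50, 90]))   # P(NPV > 0) and P10/P50/P90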

  4. A probabilistic model for binaural sound localization.

    PubMed

    Willert, Volker; Eggert, Julian; Adamy, Jürgen; Stahl, Raphael; Körner, Edgar

    2006-10-01

    This paper proposes a biologically inspired and technically implemented sound localization system to robustly estimate the position of a sound source in the frontal azimuthal half-plane. For localization, binaural cues are extracted using cochleagrams generated by a cochlear model that serve as input to the system. The basic idea of the model is to separately measure interaural time differences and interaural level differences for a number of frequencies and process these measurements as a whole. This leads to two-dimensional frequency versus time-delay representations of binaural cues, so-called activity maps. A probabilistic evaluation is presented to estimate the position of a sound source over time based on these activity maps. Learned reference maps for different azimuthal positions are integrated into the computation to gain time-dependent discrete conditional probabilities. At every timestep these probabilities are combined over frequencies and binaural cues to estimate the sound source position. In addition, they are propagated over time to improve position estimation. This leads to a system that is able to localize audible signals, for example human speech signals, even in reverberating environments. PMID:17036807

  5. Probabilistic constitutive relationships for cyclic material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, L.; Chamis, C. C.

    1988-01-01

    A methodology is developed that provides a probabilistic treatment for the lifetime of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs.

  6. Modeling one-choice and two-choice driving tasks

    PubMed Central

    Ratcliff, Roger

    2015-01-01

    An experiment is presented in which subjects were tested on both one-choice and two-choice driving tasks and on non-driving versions of them. Diffusion models for one- and two-choice tasks were successful in extracting model-based measures from the response time and accuracy data. These include measures of the quality of the information from the stimuli that drove the decision process (drift rate in the model), the time taken up by processes outside the decision process and, for the two-choice model, the speed/accuracy decision criteria that subjects set. Drift rates were only marginally different between the driving and non-driving tasks, indicating that nearly the same information was used in the two kinds of tasks. The tasks differed in the time taken up by other processes, reflecting the difference between them in response processing demands. Drift rates were significantly correlated across the two two-choice tasks showing that subjects that performed well on one task also performed well on the other task. Nondecision times were correlated across the two driving tasks, showing common abilities on motor processes across the two tasks. These results show the feasibility of using diffusion modeling to examine decision making in driving and so provide for a theoretical examination of factors that might impair driving, such as extreme aging, distraction, sleep deprivation, and so on. PMID:25944448

  7. Probabilistic Modeling of Imaging, Genetics and Diagnosis.

    PubMed

    Batmanghelich, Nematollah K; Dalca, Adrian; Quon, Gerald; Sabuncu, Mert; Golland, Polina

    2016-07-01

    We propose a unified Bayesian framework for detecting genetic variants associated with disease by exploiting image-based features as an intermediate phenotype. The use of imaging data for examining genetic associations promises new directions of analysis, but currently the most widely used methods make sub-optimal use of the richness that these data types can offer. Currently, image features are most commonly selected based on their relevance to the disease phenotype. Then, in a separate step, a set of genetic variants is identified to explain the selected features. In contrast, our method performs these tasks simultaneously in order to jointly exploit information in both data types. The analysis yields probabilistic measures of clinical relevance for both imaging and genetic markers. We derive an efficient approximate inference algorithm that handles the high dimensionality of image and genetic data. We evaluate the algorithm on synthetic data and demonstrate that it outperforms traditional models. We also illustrate our method on Alzheimer's Disease Neuroimaging Initiative data. PMID:26886973

  8. A PROBABILISTIC MODELING FRAMEWORK FOR PREDICTING POPULATION EXPOSURES TO BENZENE

    EPA Science Inventory

    The US Environmental Protection Agency (EPA) is modifying its probabilistic Stochastic Human Exposure Dose Simulation (SHEDS) model to assess aggregate exposures to air toxics. Air toxics include urban Hazardous Air Pollutants (HAPS) such as benzene from mobile sources, part...

  9. Application of a stochastic snowmelt model for probabilistic decisionmaking

    NASA Technical Reports Server (NTRS)

    Mccuen, R. H.

    1983-01-01

    A stochastic form of the snowmelt runoff model that can be used for probabilistic decision-making was developed. The use of probabilistic streamflow predictions instead of single valued deterministic predictions leads to greater accuracy in decisions. While the accuracy of the output function is important in decisionmaking, it is also important to understand the relative importance of the coefficients. Therefore, a sensitivity analysis was made for each of the coefficients.

  10. A probabilistic model for snow avalanche occurrence

    NASA Astrophysics Data System (ADS)

    Perona, P.; Miescher, A.; Porporato, A.

    2009-04-01

    Avalanche hazard forecasting is an important issue in relation to the protection of urbanized environments, ski resorts and ski-touring alpinists. A critical point is to predict the conditions that trigger the snow mass instability determining the onset and the size of avalanches. On steep terrain the risk of avalanches is known to be related to preceding consistent snowfall events and to subsequent changes in the local climatic conditions. Regression analysis has shown that avalanche occurrence indeed correlates with the amount of snow fallen over three consecutive snowing days and with the state of the settled snow on the ground. Moreover, since different types of avalanches may occur as a result of the interactions of different factors, the process of snow avalanche formation is inherently complex and has some degree of unpredictability. For this reason, although several models assess the risk of avalanche by accounting for all the involved processes in great detail, a high margin of uncertainty invariably remains. In this work, we explicitly describe such unpredictable behaviour with an intrinsic noise affecting the processes leading to snow instability. Ultimately, this sets the basis for a minimalist stochastic model, which allows us to investigate the avalanche dynamics and its statistical properties. We employ a continuous-time process with stochastic jumps (snowfalls), deterministic decay (snowmelt and compaction) and state-dependent avalanche occurrence (renewals) as a minimalist model for the determination of avalanche size and related inter-occurrence times. The physics leading to avalanches is simplified to the extent where only meteorological data and terrain data are necessary to estimate avalanche danger. We explore the analytical formulation of the process and the properties of the probability density function of the avalanche process variables. We also discuss the probabilistic link between avalanche size and the preceding snowfall event and
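
    The minimalist jump-decay process outlined above lends itself to a very small simulation. The sketch below is a hypothetical stand-in, not the authors' model or parameterisation: snowfalls arrive as random jumps, the pack decays deterministically, and an avalanche renewal is triggered by a simple fixed depth threshold rather than a state-dependent occurrence rule.

```python
import numpy as np

def simulate_snowpack(days=3650, p_snow=0.15, mean_fall=0.2,
                      decay=0.02, threshold=1.0, rng=None):
    """Minimalist jump-decay sketch of snowpack depth h(t): stochastic snowfall
    jumps (exponential sizes), deterministic exponential decay (melt/compaction),
    and an avalanche 'renewal' that resets the pack when depth exceeds a threshold."""
    rng = np.random.default_rng(rng)
    h = 0.0
    avalanche_sizes, intertimes = [], []
    last_event = 0
    for t in range(days):
        h *= np.exp(-decay)                  # melt / compaction
        if rng.random() < p_snow:            # a snowfall occurs on this day
            h += rng.exponential(mean_fall)  # jump in snowpack depth
        if h > threshold:                    # instability -> avalanche (renewal)
            avalanche_sizes.append(h)
            intertimes.append(t - last_event)
            last_event = t
            h = 0.0
    return np.array(avalanche_sizes), np.array(intertimes)

sizes, gaps = simulate_snowpack(rng=1)
print(f"{sizes.size} avalanches, mean size {sizes.mean():.2f}, mean inter-time {gaps.mean():.1f} days")
```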

  11. Identification of thermal degradation using probabilistic models in reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Criner, A. K.; Cherry, A. J.; Cooney, A. T.; Katter, T. D.; Banks, H. T.; Hu, Shuhua; Catenacci, Jared

    2015-03-01

    Different probabilistic models of molecular vibration modes are considered to model the reflectance spectra of chemical species through the dielectric constant. We discuss probability measure estimators in parametric and nonparametric models. Analyses of ceramic matrix composite samples that have been heat treated for different amounts of times are compared. We finally compare these results with the analysis of vitreous silica using nonparametric models.

  12. Bayesian non-parametrics and the probabilistic approach to modelling

    PubMed Central

    Ghahramani, Zoubin

    2013-01-01

    Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609

  13. A Probabilistic Model of Phonological Relationships from Contrast to Allophony

    ERIC Educational Resources Information Center

    Hall, Kathleen Currie

    2009-01-01

    This dissertation proposes a model of phonological relationships, the Probabilistic Phonological Relationship Model (PPRM), that quantifies how predictably distributed two sounds in a relationship are. It builds on a core premise of traditional phonological analysis, that the ability to define phonological relationships such as contrast and…

  14. Exploring Term Dependences in Probabilistic Information Retrieval Model.

    ERIC Educational Resources Information Center

    Cho, Bong-Hyun; Lee, Changki; Lee, Gary Geunbae

    2003-01-01

    Describes a theoretic process to apply Bahadur-Lazarsfeld expansion (BLE) to general probabilistic models and the state-of-the-art 2-Poisson model. Through experiments on two standard document collections, one in Korean and one in English, it is demonstrated that incorporation of term dependences using BLE significantly contributes to performance…

  15. What do we gain with Probabilistic Flood Loss Models?

    NASA Astrophysics Data System (ADS)

    Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.

    2015-12-01

    The reliability of flood loss models is a prerequisite for their practical usefulness. Traditional univariate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage-damage functions cast in a probabilistic framework. For model evaluation we use empirical damage data from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by subsetting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, represented by the proportion of observations that fall within the 5%-95% quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
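
    A hedged, self-contained sketch of the kind of split-sample evaluation described above is given below. It uses synthetic damage records and scikit-learn's BaggingRegressor as a stand-in for the authors' Bagging Decision Trees implementation; the variables, coefficients and sample sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 800
# Hypothetical damage drivers: water depth (m), building value proxy, precaution index
X = np.column_stack([rng.uniform(0, 3, n), rng.uniform(0.5, 2.0, n), rng.integers(0, 2, n)])
# Synthetic relative loss with noise (stand-in for survey damage records)
y = np.clip(0.2 * X[:, 0] / X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 0.05, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

bagged = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200, random_state=1).fit(X_tr, y_tr)

# Mean prediction, bias and MAE on the hold-out sample
pred = bagged.predict(X_te)
print(f"mean bias = {np.mean(pred - y_te):+.3f}, MAE = {np.mean(np.abs(pred - y_te)):.3f}")

# Predictive 5%-95% interval from the ensemble members and its empirical coverage
member_preds = np.stack([est.predict(X_te) for est in bagged.estimators_])
lo, hi = np.percentile(member_preds, [5, 95], axis=0)
coverage = np.mean((y_te >= lo) & (y_te <= hi))
print(f"coverage of the 5%-95% interval = {coverage:.2f}")
```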

  16. Modelling default and likelihood reasoning as probabilistic reasoning

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. Likely and by default are in fact treated as duals in the same sense as possibility and necessity. To model these four forms probabilistically, a qualitative default probabilistic (QDP) logic and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequent results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  17. Probabilistic Usage of the Multi-Factor Interaction Model

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A Multi-Factor Interaction Model (MFIM) is used to predict the insulating foam mass expulsion during the ascent of a space vehicle. The exponents in the MFIM are evaluated by an available approach which consists of least squares and an optimization algorithm. These results were subsequently used to probabilistically evaluate the effects of the uncertainties in each participating factor in the mass expulsion. The probabilistic results show that the surface temperature dominates at high probabilities and the pressure which causes the mass expulsion at low probabil

  18. A probabilistic model of semantic plausibility in sentence processing.

    PubMed

    Padó, Ulrike; Crocker, Matthew W; Keller, Frank

    2009-07-01

    Experimental research shows that human sentence processing uses information from different levels of linguistic analysis, for example, lexical and syntactic preferences as well as semantic plausibility. Existing computational models of human sentence processing, however, have focused primarily on lexico-syntactic factors. Those models that do account for semantic plausibility effects lack a general model of human plausibility intuitions at the sentence level. Within a probabilistic framework, we propose a wide-coverage model that both assigns thematic roles to verb-argument pairs and determines a preferred interpretation by evaluating the plausibility of the resulting (verb, role, argument) triples. The model is trained on a corpus of role-annotated language data. We also present a transparent integration of the semantic model with an incremental probabilistic parser. We demonstrate that both the semantic plausibility model and the combined syntax/semantics model predict judgment and reading time data from the experimental literature. PMID:21585487

  19. Multivariate probabilistic projections using imperfect climate models part I: outline of methodology

    NASA Astrophysics Data System (ADS)

    Sexton, David M. H.; Murphy, James M.; Collins, Mat; Webb, Mark J.

    2012-06-01

    We demonstrate a method for making probabilistic projections of climate change at global and regional scales, using examples consisting of the equilibrium response to doubled CO2 concentrations of global annual mean temperature and regional climate changes in summer and winter temperature and precipitation over Northern Europe and England-Wales. This method combines information from a perturbed physics ensemble, a set of international climate models, and observations. Our approach is based on a multivariate Bayesian framework which enables the prediction of a joint probability distribution for several variables constrained by more than one observational metric. This is important if different sets of impacts scientists are to use these probabilistic projections to make coherent forecasts for the impacts of climate change, by inputting several uncertain climate variables into their impacts models. Unlike a single metric, multiple metrics reduce the risk of rewarding a model variant which scores well due to a fortuitous compensation of errors rather than because it is providing a realistic simulation of the observed quantity. We provide some physical interpretation of how the key metrics constrain our probabilistic projections. The method also has a quantity, called discrepancy, which represents the degree of imperfection in the climate model i.e. it measures the extent to which missing processes, choices of parameterisation schemes and approximations in the climate model affect our ability to use outputs from climate models to make inferences about the real system. Other studies have, sometimes without realising it, treated the climate model as if it had no model error. We show that omission of discrepancy increases the risk of making over-confident predictions. Discrepancy also provides a transparent way of incorporating improvements in subsequent generations of climate models into probabilistic assessments. The set of international climate models is used to derive

  20. Probabilistic predictive modelling of carbon nanocomposites for medical implants design.

    PubMed

    Chua, Matthew; Chui, Chee-Kong

    2015-04-01

    Modelling of the mechanical properties of carbon nanocomposites based on input variables like percentage weight of Carbon Nanotube (CNT) inclusions is important for the design of medical implants and other structural scaffolds. Current constitutive models for the mechanical properties of nanocomposites may not predict well due to differences in conditions, fabrication techniques and inconsistencies in reagent properties used across industries and laboratories. Furthermore, the mechanical properties of the designed products are not deterministic, but exist as a probabilistic range. A predictive model based on a modified probabilistic surface response algorithm is proposed in this paper to address this issue. Tensile testing of three groups of carbon nanocomposite samples with different CNT weight fractions displays scattered stress-strain curves, with the instantaneous stresses assumed to vary according to a normal distribution at a specific strain. From the probability density function of the experimental data, a two-factor Central Composite Design (CCD) experimental matrix based on strain and CNT weight fraction inputs with their corresponding stress distributions was established. Monte Carlo simulation was carried out on this design matrix to generate a predictive probabilistic polynomial equation. The equation and method were subsequently validated with more tensile experiments and Finite Element (FE) studies. The method was then demonstrated in the design of an artificial tracheal implant. Our algorithm provides an effective way to accurately model the mechanical properties in implants of various compositions based on experimental data of samples. PMID:25658876
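
    The Monte Carlo step described above can be illustrated once a response-surface polynomial is available. The sketch below assumes a hypothetical quadratic stress surface in strain and CNT weight fraction (the coefficients are placeholders, not the paper's fitted values) and propagates assumed input uncertainty through it.

```python
import numpy as np

# Hypothetical quadratic response surface sigma = f(strain, wt) of the kind a CCD fit
# would produce; the coefficients below are illustrative placeholders only.
coeffs = dict(b0=2.0, b1=30.0, b2=5.0, b11=-40.0, b22=-0.3, b12=4.0)

def stress_surface(strain, wt, c=coeffs):
    return (c["b0"] + c["b1"] * strain + c["b2"] * wt
            + c["b11"] * strain**2 + c["b22"] * wt**2 + c["b12"] * strain * wt)

rng = np.random.default_rng(42)
n = 100_000
strain = 0.05                       # evaluate the surface at a fixed strain level
wt = rng.normal(1.0, 0.05, n)       # assumed CNT weight fraction uncertainty (%)
noise = rng.normal(0.0, 0.1, n)     # scatter between nominally identical samples

stress = stress_surface(strain, wt) + noise
print(f"stress at strain={strain}: mean {stress.mean():.2f} MPa, "
      f"5th-95th percentile [{np.percentile(stress, 5):.2f}, {np.percentile(stress, 95):.2f}] MPa")
```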

  1. ISSUES ASSOCIATED WITH PROBABILISTIC FAILURE MODELING OF DIGITAL SYSTEMS

    SciTech Connect

    CHU,T.L.; MARTINEZ-GURIDI,G.; LEHNER,J.; OVERLAND,D.

    2004-09-19

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process of instrumentation and control (I&C) systems is based on deterministic requirements, e.g., single failure criteria, and defense in depth and diversity. Probabilistic considerations can be used as supplements to the deterministic process. The National Research Council has recommended development of methods for estimating failure probabilities of digital systems, including commercial off-the-shelf (COTS) equipment, for use in probabilistic risk assessment (PRA). NRC staff has developed informal qualitative and quantitative requirements for PRA modeling of digital systems. Brookhaven National Laboratory (BNL) has performed a review of the state of the art of the methods and tools that can potentially be used to model digital systems. The objectives of this paper are to summarize the review, discuss the issues associated with probabilistic modeling of digital systems, and identify potential areas of research that would enhance the state of the art toward a satisfactory modeling method that could be integrated with a typical probabilistic risk assessment.

  2. Model choice for decision making under uncertainty

    NASA Astrophysics Data System (ADS)

    Bárdossy, András

    2015-04-01

    Present and future water management decisions are often supported by modelling. The choice of the appropriate model and model parameters depend on the decision related question, the quality of the model and the available information. While spatially detailed physics based models might seem very transferable, the uncertainty of the parametrization and of the input may lead to highly diverging results, which are of no use for decision making. The optimal model choice requires a quantification of the input/natural parameter uncertainty. As a next step the influence of this uncertainty on predictions using models with different complexity has to be quantified. Finally the influence of this prediction uncertainty on the decisions to be taken has to be assessed. Different data/information availability and modelling questions thus might require different modelling approaches. A framework for this model choice and parametrization problem will be presented together with examples from regions with very different data availability and data quality.

  3. Probabilistic, Multidimensional Unfolding Analysis

    ERIC Educational Resources Information Center

    Zinnes, Joseph L.; Griggs, Richard A.

    1974-01-01

    Probabilistic assumptions are added to single and multidimensional versions of the Coombs unfolding model for preferential choice (Coombs, 1950) and practical ways of obtaining maximum likelihood estimates of the scale parameters and goodness-of-fit tests of the model are presented. A Monte Carlo experiment is discussed. (Author/RC)

  4. A probabilistic model of insolation for the Mojave Desert area

    NASA Technical Reports Server (NTRS)

    Hester, O. V.; Reid, M. S.

    1978-01-01

    A discussion of mathematical models of insolation characteristics suitable for use in analysis of solar energy systems is presented and shows why such models are essential for solar energy system design. A model of solar radiation for the Mojave Desert area is presented with probabilistic and deterministic components which reflect the occurrence and density of clouds and haze, and mimic their effects on both direct and indirect radiation. Multiple comparisons were made between measured total energy received per day and the corresponding simulated totals. The simulated totals were all within 11 percent of the measured total. The conclusion is that a useful probabilistic model of solar radiation for the Goldstone, California, area of the Mojave Desert has been constructed.

  5. Understanding Rasch Measurement: The Rasch Model, Additive Conjoint Measurement, and New Models of Probabilistic Measurement Theory.

    ERIC Educational Resources Information Center

    Karabatsos, George

    2001-01-01

    Describes similarities and differences between additive conjoint measurement and the Rasch model, and formalizes some new nonparametric item response models that are, in a sense, probabilistic measurement theory models. Applies these new models to published and simulated data. (SLD)

  6. Alternative fuels and vehicles choice model

    SciTech Connect

    Greene, D.L.

    1994-10-01

    This report describes the theory and implementation of a model of alternative fuel and vehicle choice (AFVC), designed for use with the US Department of Energy's Alternative Fuels Trade Model (AFTM). The AFTM is a static equilibrium model of the world supply and demand for liquid fuels, encompassing resource production, conversion processes, transportation, and consumption. The AFTM also includes fuel-switching behavior by incorporating multinomial logit-type equations for choice of alternative fuel vehicles and alternative fuels. This allows the model to solve for market shares of vehicles and fuels, as well as for fuel prices and quantities. The AFVC model includes fuel-flexible, bi-fuel, and dedicated fuel vehicles. For multi-fuel vehicles, the choice of fuel is subsumed within the vehicle choice framework, resulting in a nested multinomial logit design. The nesting is shown to be required by the different price elasticities of fuel and vehicle choice. A unique feature of the AFVC is that its parameters are derived directly from the characteristics of alternative fuels and vehicle technologies, together with a few key assumptions about consumer behavior. This not only establishes a direct link between assumptions and model predictions, but facilitates sensitivity testing, as well. The implementation of the AFVC model as a spreadsheet is also described.
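
    The nested multinomial logit structure described above can be sketched compactly. The toy example below uses hypothetical utilities and a single nesting parameter; it mirrors the idea that fuel choice (lower nest) is more elastic than vehicle choice (upper nest), but it is not the AFVC's calibrated specification.

```python
import numpy as np

def nested_logit_shares(vehicle_utils, fuel_utils, mu=0.5):
    """Toy nested multinomial logit: fuel choice is nested within vehicle choice.

    vehicle_utils: dict vehicle -> systematic utility of the vehicle itself
    fuel_utils:    dict vehicle -> dict fuel -> utility of that fuel in that vehicle
    mu:            nesting parameter (0 < mu <= 1); fuel choice is more price-elastic
                   than vehicle choice when mu < 1.
    """
    # Inclusive value (logsum) of each vehicle's fuel nest
    inclusive = {v: mu * np.log(sum(np.exp(u / mu) for u in fuels.values()))
                 for v, fuels in fuel_utils.items()}
    expv = {v: np.exp(vehicle_utils[v] + inclusive[v]) for v in vehicle_utils}
    denom = sum(expv.values())
    veh_share = {v: expv[v] / denom for v in expv}
    fuel_share = {v: {f: np.exp(u / mu) / sum(np.exp(u2 / mu) for u2 in fuels.values())
                      for f, u in fuels.items()}
                  for v, fuels in fuel_utils.items()}
    return veh_share, fuel_share

# Hypothetical utilities (price, range, etc. folded into single numbers)
veh_u = {"gasoline": 1.0, "flex_fuel": 0.8}
fuel_u = {"gasoline": {"gasoline": 0.0}, "flex_fuel": {"gasoline": 0.0, "ethanol": -0.2}}
print(nested_logit_shares(veh_u, fuel_u))
```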

  7. Futuristic Models for Educational Choice

    ERIC Educational Resources Information Center

    Tanner, C. Kenneth

    1973-01-01

    Discusses eight computer assisted planning models that are amenable to the improvement of decisionmaking; explains the feasibility of involving systems analysis and operations research in educational decisions; and suggests a minimal program designed to prepare educational planners with knowledge of computer assisted planning models. (Author/JF)

  8. Consumer Vehicle Choice Model Documentation

    SciTech Connect

    Liu, Changzheng; Greene, David L

    2012-08-01

    In response to the Fuel Economy and Greenhouse Gas (GHG) emissions standards, automobile manufacturers will need to adopt new technologies to improve the fuel economy of their vehicles and to reduce the overall GHG emissions of their fleets. The U.S. Environmental Protection Agency (EPA) has developed the Optimization Model for reducing GHGs from Automobiles (OMEGA) to estimate the costs and benefits of meeting GHG emission standards through different technology packages. However, the model does not simulate the impact that increased technology costs will have on vehicle sales or on consumer surplus. As the model documentation states, “While OMEGA incorporates functions which generally minimize the cost of meeting a specified carbon dioxide (CO2) target, it is not an economic simulation model which adjusts vehicle sales in response to the cost of the technology added to each vehicle.” Changes in the mix of vehicles sold, caused by the costs and benefits of added fuel economy technologies, could make it easier or more difficult for manufacturers to meet fuel economy and emissions standards, and impacts on consumer surplus could raise the costs or augment the benefits of the standards. Because the OMEGA model does not presently estimate such impacts, the EPA is investigating the feasibility of developing an adjunct to the OMEGA model to make such estimates. This project is an effort to develop and test a candidate model. The project statement of work spells out the key functional requirements for the new model.

  9. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  10. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559

  11. Probabilistic graphical model representation in phylogenetics.

    PubMed

    Höhna, Sebastian; Heath, Tracy A; Boussau, Bastien; Landis, Michael J; Ronquist, Fredrik; Huelsenbeck, John P

    2014-09-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis-Hastings or Gibbs sampling of the posterior distribution. PMID:24951559

  12. Probabilistic grammatical model for helix‐helix contact site classification

    PubMed Central

    2013-01-01

    Background Hidden Markov Models power many state‐of‐the‐art tools in the field of protein bioinformatics. While excelling in their tasks, these methods of protein analysis do not convey directly information on medium‐ and long‐range residue‐residue interactions. This requires an expressive power of at least context‐free grammars. However, application of more powerful grammar formalisms to protein analysis has been surprisingly limited. Results In this work, we present a probabilistic grammatical framework for problem‐specific protein languages and apply it to classification of transmembrane helix‐helix pairs configurations. The core of the model consists of a probabilistic context‐free grammar, automatically inferred by a genetic algorithm from only a generic set of expert‐based rules and positive training samples. The model was applied to produce sequence based descriptors of four classes of transmembrane helix‐helix contact site configurations. The highest performance of the classifiers reached AUCROC of 0.70. The analysis of grammar parse trees revealed the ability of representing structural features of helix‐helix contact sites. Conclusions We demonstrated that our probabilistic context‐free framework for analysis of protein sequences outperforms the state of the art in the task of helix‐helix contact site classification. However, this is achieved without necessarily requiring modeling long range dependencies between interacting residues. A significant feature of our approach is that grammar rules and parse trees are human‐readable. Thus they could provide biologically meaningful information for molecular biologists. PMID:24350601

  13. Influential input classification in probabilistic multimedia models

    SciTech Connect

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.; Geng, Shu

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
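
    A minimal sketch of this kind of Monte Carlo importance screening is shown below. It ranks inputs by their squared rank (Spearman) correlation with a stand-in model outcome; the input distributions and the algebraic "model" are illustrative assumptions, not the multimedia fate model used in the paper.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical multimedia-fate inputs sampled from assumed distributions
inputs = {
    "emission_rate":   rng.lognormal(0.0, 1.0, n),
    "half_life":       rng.lognormal(2.0, 0.5, n),
    "partition_coeff": rng.lognormal(1.0, 0.3, n),
    "intake_rate":     rng.normal(1.0, 0.05, n),
}

# Stand-in model outcome (e.g. exposure concentration); a real study would run the fate model
outcome = (inputs["emission_rate"] * inputs["half_life"]
           / inputs["partition_coeff"] * inputs["intake_rate"])

# Rank inputs by the share of explained rank variance (squared Spearman correlation)
rho2 = {name: spearmanr(values, outcome)[0] ** 2 for name, values in inputs.items()}
total = sum(rho2.values())
for name, r2 in sorted(rho2.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} ~{100 * r2 / total:4.1f}% of explained rank variance")
```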

  14. Probabilistic graphic models applied to identification of diseases.

    PubMed

    Sato, Renato Cesar; Sato, Graziela Tiemy Kajita

    2015-01-01

    Decision-making is fundamental when making diagnosis or choosing treatment. The broad dissemination of computed systems and databases allows systematization of part of decisions through artificial intelligence. In this text, we present basic use of probabilistic graphic models as tools to analyze causality in health conditions. This method has been used to make diagnosis of Alzheimer's disease, sleep apnea and heart diseases. PMID:26154555

  15. Probabilistic graphic models applied to identification of diseases

    PubMed Central

    Sato, Renato Cesar; Sato, Graziela Tiemy Kajita

    2015-01-01

    Decision-making is fundamental when making diagnosis or choosing treatment. The broad dissemination of computed systems and databases allows systematization of part of decisions through artificial intelligence. In this text, we present basic use of probabilistic graphic models as tools to analyze causality in health conditions. This method has been used to make diagnosis of Alzheimer's disease, sleep apnea and heart diseases. PMID:26154555

  16. Probabilistic Independence Networks for Hidden Markov Probability Models

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic; Heckerman, David; Jordan, Michael I.

    1996-01-01

    In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs.
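
    The forward-backward (F-B) recursion referenced above can be written in a few lines. The following numpy sketch computes per-step posterior state probabilities for a discrete HMM with toy parameters; it is a minimal illustration, not a general PIN inference engine.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Forward-backward smoothing for a discrete HMM.

    pi:  (S,) initial state probabilities
    A:   (S, S) transition matrix, A[i, j] = P(state j | state i)
    B:   (S, O) emission matrix,   B[i, k] = P(symbol k | state i)
    obs: sequence of observation indices
    Returns per-step posterior state probabilities P(state_t | obs_1..T).
    """
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()                 # scale to avoid underflow
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Two hidden states, two symbols, toy parameters
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
print(forward_backward(pi, A, B, [0, 0, 1, 1, 1]))
```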

  17. Economic Dispatch for Microgrid Containing Electric Vehicles via Probabilistic Modelling

    SciTech Connect

    Yao, Yin; Gao, Wenzhong; Momoh, James; Muljadi, Eduard

    2015-10-06

    In this paper, an economic dispatch model with probabilistic modeling is developed for a microgrid. Electric power supply in a microgrid consists of conventional power plants and renewable energy power plants, such as wind and solar power plants. Due to the fluctuation of solar and wind plants' output, an empirical probabilistic model is developed to predict their hourly output. According to the different characteristics of wind and solar plants, the parameters for the probabilistic distribution are further adjusted individually for both power plants. On the other hand, with the growing trend of the Plug-in Electric Vehicle (PHEV), an integrated microgrid system must also consider the impact of PHEVs. Not only the charging loads from PHEVs, but also the discharging output via the Vehicle-to-Grid (V2G) method can greatly affect the economic dispatch for all the micro energy sources in the microgrid. This paper presents an optimization method for economic dispatch in a microgrid considering conventional and renewable power plants as well as PHEVs. The simulation results reveal that PHEVs with V2G capability can be an indispensable supplement in a modern microgrid.

  18. Nonlinear probabilistic finite element models of laminated composite shells

    NASA Technical Reports Server (NTRS)

    Engelstad, S. P.; Reddy, J. N.

    1993-01-01

    A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed and results are presented in the form of mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation with the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data is compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.

  19. A probabilistic approach to aggregate induction machine modeling

    SciTech Connect

    Stankovic, A.M.; Lesieutre, B.C.

    1996-11-01

    In this paper the authors pursue probabilistic aggregate dynamical models for n identical induction machines connected to a bus, capturing the effect of different mechanical inputs to the individual machines. The authors explore model averaging and review in detail four procedures for linear models. They describe linear systems depending upon stochastic parameters, and develop a theoretical justification for a very simple and reasonably accurate averaging method. They then extend this to the nonlinear model. Finally, they use a recently introduced notion of the stochastic norm to describe a cluster of induction machines undergoing multiple simultaneous parametric variations, and obtain useful and very mildly conservative bounds on eigenstructure perturbations under multiple simultaneous parametric variations.

  20. Recent advances and applications of probabilistic topic models

    NASA Astrophysics Data System (ADS)

    Wood, Ian

    2014-12-01

    I present here an overview of recent advances in probabilistic topic modelling and related Bayesian graphical models as well as some of their more atypical applications outside of their home: text analysis. These techniques allow the modelling of high-dimensional count vectors with strong correlations. With such data, simply calculating a correlation matrix is infeasible. Probabilistic topic models address this using mixtures of multinomials estimated via Bayesian inference with Dirichlet priors. The use of conjugate priors allows for efficient inference, and these techniques scale well to data sets with many millions of vectors. The first of these techniques to attract significant attention was Latent Dirichlet Allocation (LDA) [1, 2]. Numerous extensions and adaptations of LDA have been proposed: non-parametric models; assorted models incorporating authors, sentiment and other features; models regularised through the use of extra metadata or extra priors on topic structure, and many more [3]. They have become widely used in the text analysis and population genetics communities, with a number of compelling applications. These techniques are not restricted to text analysis, however, and can be applied to other types of data which can be sensibly discretised and represented as counts of labels/properties/etc. LDA and its variants have been used to find patterns in data from diverse areas of inquiry, including genetics, plant physiology, image analysis, social network analysis, remote sensing and astrophysics. Nonetheless, it is relatively recently that probabilistic topic models have found applications outside of text analysis, and to date few such applications have been considered. I suggest that there is substantial untapped potential for topic models and models inspired by or incorporating topic models to be fruitfully applied, and outline the characteristics of systems and data for which this may be the case.
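
    As a minimal illustration of LDA in practice, the sketch below fits a small topic model with scikit-learn's LatentDirichletAllocation on a toy corpus; real applications use corpora with millions of documents, and the corpus and topic count here are purely illustrative.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# A tiny illustrative corpus; real applications use millions of documents
docs = [
    "galaxy redshift survey telescope spectra",
    "gene expression genome sequencing variants",
    "telescope imaging galaxy cluster astronomy",
    "genome variants population genetics sequencing",
    "soil moisture remote sensing satellite imaging",
    "satellite remote sensing vegetation index",
]

# LDA operates on word counts, not tf-idf weights
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Fit a 3-topic LDA model; Dirichlet priors on topic-word and document-topic distributions
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)   # per-document topic proportions

vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
print(doc_topics.round(2))          # rows sum to 1: mixture weights per document
```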

  1. Probabilistic prediction models for aggregate quarry siting

    USGS Publications Warehouse

    Robinson, G.R., Jr.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
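
    A hedged sketch of the logistic-regression half of such a prospectivity analysis is given below, using synthetic per-cell evidence layers (geology, distance to roads, population density) as stand-ins for the study's GIS variables; the coefficients and the generated data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical evidence layers sampled per map cell (stand-ins for the study's GIS variables)
suitable_geology = rng.integers(0, 2, n)            # 1 = durable rock unit present
dist_to_road_km  = rng.exponential(5.0, n)          # distance to transportation network
pop_density      = rng.lognormal(3.0, 1.0, n)       # persons per km^2

# Synthetic quarry presence/absence generated from an assumed relationship
logit = -2.0 + 1.5 * suitable_geology - 0.2 * dist_to_road_km + 0.3 * np.log(pop_density)
quarry = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([suitable_geology, dist_to_road_km, np.log(pop_density)])
model = LogisticRegression().fit(X, quarry)

# Posterior probability (prospectivity) for a new cell: good geology, 2 km to road, moderate density
cell = np.array([[1, 2.0, np.log(50.0)]])
print(f"predicted quarry prospectivity: {model.predict_proba(cell)[0, 1]:.2f}")
```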

  2. A Probabilistic Model of Cross-Categorization

    ERIC Educational Resources Information Center

    Shafto, Patrick; Kemp, Charles; Mansinghka, Vikash; Tenenbaum, Joshua B.

    2011-01-01

    Most natural domains can be represented in multiple ways: we can categorize foods in terms of their nutritional content or social role, animals in terms of their taxonomic groupings or their ecological niches, and musical instruments in terms of their taxonomic categories or social uses. Previous approaches to modeling human categorization have…

  3. Parental role models, gender and educational choice.

    PubMed

    Dryler, H

    1998-09-01

    Parental role models are often put forward as an explanation for the choice of gender-atypical educational routes. This paper aims to test such explanations by examining the impact of family background variables like parental education and occupation, on choice of educational programme at upper secondary school. Using a sample of around 73,000 Swedish teenagers born between 1972 and 1976, girls' and boys' gender-atypical as well as gender-typical educational choices are analysed by means of logistic regression. Parents working or educated within a specific field increase the probability that a child will make a similar choice of educational programme at upper secondary school. This same-sector effect appeared to be somewhat stronger for fathers and sons, while no such same-sex influence was confirmed for girls. No evidence was found that, in addition to a same-sector effect, it matters whether parents' occupations represent gender-traditional or non-traditional models. Parents of the service classes or highly educated parents--expected to be the most gender egalitarian in attitudes and behaviours--have a positive influence upon children's choice of gender-atypical education. PMID:9867028

  4. Probabilistic flood damage modelling at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2014-05-01

    Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.

  5. Probabilistic Modeling of Settlement Risk at Land Disposal Facilities - 12304

    SciTech Connect

    Foye, Kevin C.; Soong, Te-Yang

    2012-07-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass (caused by inconsistent compaction, void space distribution, debris-soil mix ratio, waste material stiffness, time-dependent primary compression of the fine-grained soil matrix, long-term creep settlement of the soil matrix and the debris, etc.) at most land disposal facilities. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties which control differential settlement. An alternative, probabilistic solution is to use random fields to model the waste and sub-grade properties. The modeling effort informs the design, construction, operation, and maintenance of land disposal facilities. A probabilistic method to establish design criteria for waste placement and compaction is introduced using the model. Random fields are ideally suited to problems of differential settlement modeling of highly heterogeneous foundations, such as waste. Random fields model the seemingly random spatial distribution of a design parameter, such as compressibility. When used for design, the use of these models prompts the need for probabilistic design criteria. It also allows for a statistical approach to waste placement acceptance criteria. An example design evaluation was performed, illustrating the use of the probabilistic differential settlement simulation methodology to assemble a design guidance chart. The purpose of this design evaluation is to enable the designer to select optimal initial combinations of design slopes and quality control acceptance criteria that yield an acceptable proportion of post-settlement slopes meeting some design minimum. For this specific
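
    A minimal sketch of the random-field idea is shown below, assuming a one-dimensional lognormal settlement field with an exponential correlation structure along a cover profile; all parameter values are illustrative, not design values, and the distortion criterion is a simple stand-in for a slope-loss limit.

```python
import numpy as np

def settlement_distortion_prob(n_sims=2000, n_pts=50, spacing=5.0,
                               corr_len=20.0, mean_s=0.5, cov=0.4,
                               design_slope=0.03, rng=None):
    """Monte Carlo sketch of post-settlement cover distortion over a waste mass.

    Settlement at points along a cover profile is modelled as a lognormal random
    field with an exponential correlation structure; the function returns the
    probability that the worst point-to-point distortion (differential settlement
    divided by point spacing) exceeds an allowable design slope loss.
    All parameter values here are illustrative, not design values.
    """
    rng = np.random.default_rng(rng)
    x = np.arange(n_pts) * spacing
    corr = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(corr + 1e-10 * np.eye(n_pts))
    sigma_ln = np.sqrt(np.log(1 + cov**2))
    mu_ln = np.log(mean_s) - 0.5 * sigma_ln**2
    exceed = 0
    for _ in range(n_sims):
        field = np.exp(mu_ln + sigma_ln * (L @ rng.standard_normal(n_pts)))
        distortion = np.abs(np.diff(field)) / spacing
        exceed += distortion.max() > design_slope
    return exceed / n_sims

print(f"P(max distortion > design limit) ~ {settlement_distortion_prob(rng=1):.3f}")
```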

  6. Probabilistic Modeling of the Renal Stone Formation Module

    NASA Technical Reports Server (NTRS)

    Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.

    2013-01-01

    The Integrated Medical Model (IMM) is a probabilistic tool, used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment) astronauts are at a considerable elevated risk for developing renal calculi (nephrolithiasis) while in space. Lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously

  7. Modeling Choice and Valuation in Decision Experiments

    ERIC Educational Resources Information Center

    Loomes, Graham

    2010-01-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for…

  8. A probabilistic gastrointestinal tract dosimetry model

    NASA Astrophysics Data System (ADS)

    Huh, Chulhaeng

    In internal dosimetry, the tissues of the gastrointestinal (GI) tract represent, along with the hematopoietic bone marrow, one of the most radiosensitive organs of the body. Endoscopic ultrasound is a unique tool to acquire in-vivo data on GI tract wall thicknesses of sufficient resolution needed in radiation dosimetry studies. Through their different echo texture and intensity, five layers of differing echo patterns for superficial mucosa, deep mucosa, submucosa, muscularis propria and serosa exist within the walls of organs composing the alimentary tract. Thicknesses for stomach mucosa ranged from 620 ± 150 μm to 1320 ± 80 μm (total stomach wall thicknesses from 2.56 ± 0.12 to 4.12 ± 0.11 mm). Measurements made for the rectal images revealed rectal mucosal thicknesses from 150 ± 90 μm to 670 ± 110 μm (total rectal wall thicknesses from 2.01 ± 0.06 to 3.35 ± 0.46 mm). The mucosa thus accounted for 28 ± 3% and 16 ± 6% of the total thickness of the stomach and rectal wall, respectively. Radiation transport simulations were then performed using the Monte Carlo N-Particle (MCNP) 4C transport code to calculate S values (Gy/Bq-s) for penetrating and nonpenetrating radiations such as photons, beta particles, conversion electrons and Auger electrons of selected nuclides, I-123, I-131, Tc-99m and Y-90, under two source conditions: content and mucosa sources, respectively. The results of this study demonstrate generally good agreement with published data for the stomach mucosa wall. The rectal mucosa data are consistently higher than published data for the large intestine due to different radiosensitive cell thicknesses (350 μm vs. a range spanning from 149 μm to 729 μm) and different geometry when a rectal content source is considered. Generally, the ICRP models have been designed to predict the amount of radiation dose in the human body for a "typical" or "reference" individual in a given population. The study has been performed to

  9. A simple probabilistic model of multibody interactions in proteins.

    PubMed

    Johansson, Kristoffer Enøe; Hamelryck, Thomas

    2013-08-01

    Protein structure prediction methods typically use statistical potentials, which rely on statistics derived from a database of known protein structures. In the vast majority of cases, these potentials involve pairwise distances or contacts between amino acids or atoms. Although some potentials beyond pairwise interactions have been described, the formulation of a general multibody potential is seen as intractable due to the perceived limited amount of data. In this article, we show that it is possible to formulate a probabilistic model of higher order interactions in proteins, without arbitrarily limiting the number of contacts. The success of this approach is based on replacing a naive table-based approach with a simple hierarchical model involving suitable probability distributions and conditional independence assumptions. The model captures the joint probability distribution of an amino acid and its neighbors, local structure and solvent exposure. We show that this model can be used to approximate the conditional probability distribution of an amino acid sequence given a structure using a pseudo-likelihood approach. We verify the model by decoy recognition and site-specific amino acid predictions. Our coarse-grained model is compared to state-of-the-art methods that use full atomic detail. This article illustrates how the use of simple probabilistic models can lead to new opportunities in the treatment of nonlocal interactions in knowledge-based protein structure prediction and design. PMID:23468247

  10. Probabilistic Life Cycle Cost Model for Repairable System

    NASA Astrophysics Data System (ADS)

    Nasir, Meseret; Chong, H. Y.; Osman, Sabtuni

    2015-04-01

    Traditionally, life cycle cost (LCC) has been predicted with a deterministic approach; however, such a method cannot account for the uncertainties in the input variables. In this paper, a probabilistic approach using an adaptive network-based fuzzy inference system (ANFIS) is proposed to estimate the LCC of repairable systems. The developed model can handle the uncertainties of the input variables in the estimation of LCC. The numerical analysis shows that the acquisition and downtime costs can have a larger effect on the LCC than the repair cost. The developed model also provides more precise quantitative information for the decision-making process.
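
    The contrast drawn above is between a single deterministic LCC figure and an estimate that propagates input uncertainty. A minimal Monte Carlo sketch of the latter idea follows; the ANFIS surrogate is not reproduced, and the cost distributions and their parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples

# Hypothetical cost components of a repairable system (all figures invented):
acquisition = rng.normal(120_000, 10_000, n)               # purchase cost
repair      = rng.gamma(shape=2.0, scale=3_000, size=n)    # lifetime repair cost
downtime    = rng.lognormal(mean=10.0, sigma=0.4, size=n)  # lost-production cost

lcc = acquisition + repair + downtime
print(f"mean LCC = {lcc.mean():,.0f}")
print(f"5th/95th percentiles = {np.percentile(lcc, 5):,.0f} / {np.percentile(lcc, 95):,.0f}")
```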

  11. Probabilistic Cross-matching of Radio Catalogs with Geometric Models

    NASA Astrophysics Data System (ADS)

    Fan, D.; Budavári, T.

    2014-05-01

    Cross-matching radio catalogs is different from cross-matching in the optical. Radio sources can have multiple corresponding detections, the core and its lobes, which makes identification and cross-identification to other catalogs much more difficult. Traditionally, these cases have been handled manually, with researchers looking at the possible candidates; this will not be possible for the upcoming radio surveys ultimately leading to the Square Kilometer Array. We present a probabilistic method that can automatically associate radio sources by explicitly modeling their morphology. Our preliminary results based on a simple straight-line model seem to be on par with the manual associations.

  12. Probabilistic modeling of financial exposure to flood in France

    NASA Astrophysics Data System (ADS)

    Moncoulon, David; Quantin, Antoine; Leblois, Etienne

    2014-05-01

    CCR is a French reinsurance company which offers natural catastrophe covers with the State guarantee. Within this framework, CCR develops its own models to assess its financial exposure to floods, droughts, earthquakes and other perils, and thus the exposure of insurers and the French State. A probabilistic flood model has been developed in order to estimate the financial exposure of the Nat Cat insurance market to flood events as a function of their annual occurrence probability. This presentation is organized in two parts. The first part is dedicated to the development of a flood hazard and damage model (ARTEMIS); the model calibration and validation on historical events are then described. In the second part, ARTEMIS is coupled with two generators of probabilistic events: a stochastic flow generator and a stochastic spatialized precipitation generator adapted from the SAMPO model developed by IRSTEA. An analysis of the complementary nature of these two generators is proposed: the first allows floods to be generated on the French hydrological station network; the second allows surface water runoff and small-river floods to be simulated, even on ungauged rivers. Thus, the simulation of thousands of non-occurred but possible events allows us to provide, for the first time, an estimate of the financial exposure to flooding in France at different scales (commune, department, country) and from different points of view (hazard, vulnerability and damages).
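
    Once a catalogue of simulated events with associated losses is available, the exposure at a given annual occurrence probability is typically read off a loss exceedance curve. A minimal sketch of that post-processing step follows; the event losses and annual rates are invented and do not represent CCR's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulated flood events: each has an annual occurrence rate and
# a loss (EUR). Both arrays are invented for illustration only.
n_events = 5_000
rates  = rng.uniform(1e-4, 5e-2, n_events)            # expected occurrences per year
losses = rng.lognormal(mean=15.0, sigma=1.2, size=n_events)

# Annual exceedance rate: sum of rates of all events with loss >= threshold
order = np.argsort(losses)[::-1]
exceed_rate = np.cumsum(rates[order])
sorted_losses = losses[order]

def loss_at_return_period(years):
    """Loss threshold whose annual exceedance rate is roughly 1/years."""
    idx = np.searchsorted(exceed_rate, 1.0 / years)
    return sorted_losses[min(idx, n_events - 1)]

for rp in (10, 100, 1000):
    print(f"{rp:>4}-year loss ~ {loss_at_return_period(rp):,.0f} EUR")
```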

  13. Probabilistic updating of building models using incomplete modal data

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Büyüköztürk, Oral

    2016-06-01

    This paper investigates a new probabilistic strategy for Bayesian model updating using incomplete modal data. Direct mode matching between the measured and the predicted modal quantities is not required in the updating process, which is realized through model reduction. A Markov chain Monte Carlo technique with adaptive random-walk steps is proposed to draw the samples for model parameter uncertainty quantification. The iterated improved reduced system technique is employed to update the prediction error as well as to calculate the likelihood function in the sampling process. Since modal quantities are used in the model updating, modal identification is first carried out to extract the natural frequencies and mode shapes from the acceleration measurements of the structural system. The proposed algorithm is finally validated by both numerical and experimental examples: a 10-storey building with synthetic data and an 8-storey building with shaking table test data. Results illustrate that the proposed algorithm is effective and robust for parameter uncertainty quantification in probabilistic model updating of buildings.
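
    The sampler described above is an adaptive random-walk Markov chain Monte Carlo scheme. A minimal, non-adaptive, one-parameter random-walk Metropolis sketch follows for a generic stiffness-scaling parameter; the toy forward model and the observation stand in for the reduced-order structural model and measured modal data, and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measured" natural frequency and a toy forward model: the real
# method would map a stiffness scaling theta to predicted modal quantities.
f_measured = 1.8    # Hz, invented
sigma_obs  = 0.05   # observation noise std, invented

def predicted_frequency(theta):
    return 2.0 * np.sqrt(theta)   # stand-in for the reduced structural model

def log_posterior(theta):
    if theta <= 0:
        return -np.inf
    resid = f_measured - predicted_frequency(theta)
    return -0.5 * (resid / sigma_obs) ** 2   # flat prior on theta > 0

# Random-walk Metropolis
theta, samples, step = 1.0, [], 0.05
for _ in range(20_000):
    prop = theta + step * rng.normal()
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[5_000:])   # discard burn-in
print(f"posterior mean {post.mean():.3f}, std {post.std():.3f}")
```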

  14. Probabilistic/Fracture-Mechanics Model For Service Life

    NASA Technical Reports Server (NTRS)

    Watkins, T., Jr.; Annis, C. G., Jr.

    1991-01-01

    Computer program makes probabilistic estimates of lifetime of engine and components thereof. Developed to fill need for more accurate life-assessment technique that avoids errors in estimated lives and provides for statistical assessment of levels of risk created by engineering decisions in designing system. Implements mathematical model combining techniques of statistics, fatigue, fracture mechanics, nondestructive analysis, life-cycle cost analysis, and management of engine parts. Used to investigate effects of such engine-component life-controlling parameters as return-to-service intervals, stresses, capabilities for nondestructive evaluation, and qualities of materials.

  15. A probabilistic model to liquefaction assessment of dams

    SciTech Connect

    Simos, N.; Costantino, C.J.; Reich, M.

    1995-03-01

    In an effort to evaluate the earthquake liquefaction potential of soil media, several statistical models ranging from purely empirical to mathematically sophisticated have been devised. While deterministic methods define the susceptibility of a soil structure to liquefaction for a given seismic event in the sense that the site does or does not liquefy, probabilistic approaches incorporate statistical properties associated with both the earthquake and the site characterization. In this study a stochastic model is formulated to assess the liquefaction potential of soil structures in general, and earth dams in particular, induced by earthquakes. Such earthquakes are realizations of a random process expressed in the form of a power spectral density. Uncertainties in the soil resistance to liquefaction are also introduced with probability density functions around in-situ measurements of parameters associated with the soil strength. The aim of this study is to devise a procedure that leads to a continuous probability of liquefaction at a given site. Monte Carlo simulations are employed for the probabilistic model. In addition, a stochastic model is presented. The dynamic response of the two-phase medium is obtained with the help of the POROSLAM code and is expressed in the form of a transfer function (Unit Response).

  16. A Probabilistic Model for Simulating Magnetoacoustic Emission Responses in Ferromagnets

    NASA Technical Reports Server (NTRS)

    Namkung, M.; Fulton, J. P.; Wincheski, B.

    1993-01-01

    Magnetoacoustic emission (MAE) is a phenomenon where acoustic noise is generated due to the motion of non-180 magnetic domain walls in a ferromagnet with non-zero magnetostrictive constants. MAE has been studied extensively for many years and has even been applied as an NDE tool for characterizing the heat treatment of high-yield low carbon steels. A complete theory which fully accounts for the magnetoacoustic response, however, has not yet emerged. The motion of the domain walls appears to be a totally random process, however, it does exhibit features of regularity which have been identified by studying phenomena such as 1/f flicker noise and self-organized criticality (SOC). In this paper, a probabilistic model incorporating the effects of SOC has been developed to help explain the MAE response. The model uses many simplifying assumptions yet yields good qualitative agreement with observed experimental results and also provides some insight into the possible underlying mechanisms responsible for MAE. We begin by providing a brief overview of magnetoacoustic emission and the experimental set-up used to obtain the MAE signal. We then describe a pseudo-probabilistic model used to predict the MAE response and give an example of the predicted result. Finally, the model is modified to account for SOC and the new predictions are shown and compared with experiment.

  17. Probabilistic climate change predictions applying Bayesian model averaging.

    PubMed

    Min, Seung-Ki; Simonis, Daniel; Hense, Andreas

    2007-08-15

    This study explores the sensitivity of probabilistic predictions of the twenty-first century surface air temperature (SAT) changes to different multi-model averaging methods using available simulations from the Intergovernmental Panel on Climate Change fourth assessment report. A way of observationally constrained prediction is provided by training multi-model simulations for the second half of the twentieth century with respect to long-term components. The Bayesian model averaging (BMA) produces weighted probability density functions (PDFs) and we compare two methods of estimating weighting factors: Bayes factor and expectation-maximization algorithm. It is shown that Bayesian-weighted PDFs for the global mean SAT changes are characterized by multi-modal structures from the middle of the twenty-first century onward, which are not clearly seen in arithmetic ensemble mean (AEM). This occurs because BMA tends to select a few high-skilled models and down-weight the others. Additionally, Bayesian results exhibit larger means and broader PDFs in the global mean predictions than the unweighted AEM. Multi-modality is more pronounced in the continental analysis using 30-year mean (2070-2099) SATs while there is only a little effect of Bayesian weighting on the 5-95% range. These results indicate that this approach to observationally constrained probabilistic predictions can be highly sensitive to the method of training, particularly for the latter half of the twenty-first century, and that a more comprehensive approach combining different regions and/or variables is required. PMID:17569647
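
    Bayesian model averaging forms a weighted mixture of per-model predictive densities, and multi-modality appears when a few models receive most of the weight. A minimal sketch follows; the per-model means, spreads and weights are invented rather than derived from the CMIP ensemble or the Bayes factor/EM estimation in the paper.

```python
import numpy as np

# Hypothetical per-model projections of global-mean SAT change (deg C)
model_means  = np.array([1.8, 2.4, 3.6, 4.1])
model_sigmas = np.array([0.3, 0.3, 0.4, 0.5])
bma_weights  = np.array([0.05, 0.10, 0.60, 0.25])   # e.g. as an EM fit might yield

def bma_pdf(x):
    """Weighted mixture of Gaussian predictive densities."""
    comps = np.exp(-0.5 * ((x[:, None] - model_means) / model_sigmas) ** 2)
    comps /= model_sigmas * np.sqrt(2 * np.pi)
    return comps @ bma_weights

x = np.linspace(0.0, 6.0, 601)
pdf = bma_pdf(x)
aem = model_means.mean()   # unweighted arithmetic ensemble mean
print(f"BMA mean {np.trapz(x * pdf, x):.2f} deg C vs AEM {aem:.2f} deg C")
```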

  18. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.

  19. Probabilistic assessment of agricultural droughts using graphical models

    NASA Astrophysics Data System (ADS)

    Ramadas, Meenu; Govindaraju, Rao S.

    2015-07-01

    Agricultural droughts are often characterized by soil moisture in the root zone of the soil, but crop needs are rarely factored into the analysis. Since water needs vary with crops, agricultural drought incidences in a region can be characterized better if crop responses to soil water deficits are also accounted for in the drought index. This study investigates agricultural droughts driven by plant stress due to soil moisture deficits using crop stress functions available in the literature. Crop water stress is assumed to begin at the soil moisture level corresponding to incipient stomatal closure, and reaches its maximum at the crop's wilting point. Using available location-specific crop acreage data, a weighted crop water stress function is computed. A new probabilistic agricultural drought index is then developed within a hidden Markov model (HMM) framework that provides model uncertainty in drought classification and accounts for time dependence between drought states. The proposed index allows probabilistic classification of the drought states and takes due cognizance of the stress experienced by the crop due to soil moisture deficit. The capabilities of HMM model formulations for assessing agricultural droughts are compared to those of current drought indices such as standardized precipitation evapotranspiration index (SPEI) and self-calibrating Palmer drought severity index (SC-PDSI). The HMM model identified critical drought events and several drought occurrences that are not detected by either SPEI or SC-PDSI, and shows promise as a tool for agricultural drought studies.
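
    The hidden Markov model framework above yields a probability for each drought state at every time step given the observed crop-stress series. A minimal forward-filter sketch follows, assuming a two-state (normal/drought) model and a discretized stress observation; the transition, emission and initial probabilities are invented for illustration.

```python
import numpy as np

# States: 0 = normal, 1 = drought. Observations: discretized crop water
# stress, 0 = low, 1 = medium, 2 = high. All probabilities are invented.
A  = np.array([[0.90, 0.10],        # state transition matrix
               [0.20, 0.80]])
B  = np.array([[0.70, 0.25, 0.05],  # emission probabilities per state
               [0.10, 0.30, 0.60]])
pi = np.array([0.8, 0.2])           # initial state distribution

obs = [0, 1, 2, 2, 1, 2, 2]         # example observation sequence

def forward_filter(obs):
    """Return P(state_t | obs_1..t) for each t (normalized forward pass)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

probs = forward_filter(obs)
print(np.round(probs[:, 1], 3))     # P(drought) at each time step
```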

  20. Probabilistic logic modeling of network reliability for hybrid network architectures

    SciTech Connect

    Wyss, G.D.; Schriner, H.K.; Gaylor, T.R.

    1996-10-01

    Sandia National Laboratories has found that the reliability and failure modes of current-generation network technologies can be effectively modeled using fault-tree-based probabilistic logic modeling (PLM) techniques. We have developed fault tree models that include various hierarchical networking technologies and classes of components interconnected in a wide variety of typical and atypical configurations. In this paper we discuss the types of results that can be obtained from PLMs and why these results are of great practical value to network designers and analysts. After providing some mathematical background, we describe the "plug-and-play" fault tree analysis methodology that we have developed for modeling connectivity and the provision of network services in several current-generation network architectures. Finally, we demonstrate the flexibility of the method by modeling the reliability of a hybrid example network that contains several interconnected ethernet, FDDI, and token ring segments. 11 refs., 3 figs., 1 tab.
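
    In fault-tree-based PLM of this kind, the unavailability of a network service is computed by combining basic-event failure probabilities through AND/OR gates, usually under an independence assumption. A minimal sketch follows for a hypothetical host with two redundant paths; the component names and failure probabilities are invented and do not correspond to the paper's example network.

```python
# Fault-tree gate algebra under the usual independence assumption.
# Component failure probabilities below are invented for illustration.

def p_and(*ps):    # gate fails only if all inputs fail
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):     # gate fails if any input fails
    ok = 1.0
    for p in ps:
        ok *= (1.0 - p)
    return 1.0 - ok

p_nic, p_switch, p_router_a = 1e-3, 5e-4, 1e-3   # ethernet path components
p_fddi_ring, p_router_b     = 2e-4, 1e-3         # FDDI path components

# Connectivity is lost if the host NIC fails OR both redundant paths fail.
path_a = p_or(p_switch, p_router_a)
path_b = p_or(p_fddi_ring, p_router_b)
p_loss = p_or(p_nic, p_and(path_a, path_b))
print(f"probability of losing connectivity = {p_loss:.2e}")
```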

  1. Efficient diagnosis of multiprocessor systems under probabilistic models

    NASA Technical Reports Server (NTRS)

    Blough, Douglas M.; Sullivan, Gregory F.; Masson, Gerald M.

    1989-01-01

    The problem of fault diagnosis in multiprocessor systems is considered under a probabilistic fault model. The focus is on minimizing the number of tests that must be conducted in order to correctly diagnose the state of every processor in the system with high probability. A diagnosis algorithm that can correctly diagnose the state of every processor with probability approaching one in a class of systems performing slightly greater than a linear number of tests is presented. A nearly matching lower bound on the number of tests required to achieve correct diagnosis in arbitrary systems is also proven. Lower and upper bounds on the number of tests required for regular systems are also presented. A class of regular systems which includes hypercubes is shown to be correctly diagnosable with high probability. In all cases, the number of tests required under this probabilistic model is shown to be significantly less than under a bounded-size fault set model. Because the number of tests that must be conducted is a measure of the diagnosis overhead, these results represent a dramatic improvement in the performance of system-level diagnosis techniques.

  2. Modeling of human artery tissue with probabilistic approach.

    PubMed

    Xiong, Linfei; Chui, Chee-Kong; Fu, Yabo; Teo, Chee-Leong; Li, Yao

    2015-04-01

    Accurate modeling of biological soft tissue properties is vital for realistic medical simulation. Mechanical response of biological soft tissue always exhibits a strong variability due to the complex microstructure and different loading conditions. The inhomogeneity in human artery tissue is modeled with a computational probabilistic approach by assuming that the instantaneous stress at a specific strain varies according to normal distribution. Material parameters of the artery tissue which are modeled with a combined logarithmic and polynomial energy equation are represented by a statistical function with normal distribution. Mean and standard deviation of the material parameters are determined using genetic algorithm (GA) and inverse mean-value first-order second-moment (IMVFOSM) method, respectively. This nondeterministic approach was verified using computer simulation based on the Monte-Carlo (MC) method. Cumulative distribution function (CDF) of the MC simulation corresponds well with that of the experimental stress-strain data and the probabilistic approach is further validated using data from other studies. By taking into account the inhomogeneous mechanical properties of human biological tissue, the proposed method is suitable for realistic virtual simulation as well as an accurate computational approach for medical device validation. PMID:25748681

  3. Probabilistic delay differential equation modeling of event-related potentials.

    PubMed

    Ostwald, Dirk; Starke, Ludger

    2016-08-01

    "Dynamic causal models" (DCMs) are a promising approach in the analysis of functional neuroimaging data due to their biophysical interpretability and their consolidation of functional-segregative and functional-integrative propositions. In this theoretical note we are concerned with the DCM framework for electroencephalographically recorded event-related potentials (ERP-DCM). Intuitively, ERP-DCM combines deterministic dynamical neural mass models with dipole-based EEG forward models to describe the event-related scalp potential time-series over the entire electrode space. Since its inception, ERP-DCM has been successfully employed to capture the neural underpinnings of a wide range of neurocognitive phenomena. However, in spite of its empirical popularity, the technical literature on ERP-DCM remains somewhat patchy. A number of previous communications have detailed certain aspects of the approach, but no unified and coherent documentation exists. With this technical note, we aim to close this gap and to increase the technical accessibility of ERP-DCM. Specifically, this note makes the following novel contributions: firstly, we provide a unified and coherent review of the mathematical machinery of the latent and forward models constituting ERP-DCM by formulating the approach as a probabilistic latent delay differential equation model. Secondly, we emphasize the probabilistic nature of the model and its variational Bayesian inversion scheme by explicitly deriving the variational free energy function in terms of both the likelihood expectation and variance parameters. Thirdly, we detail and validate the estimation of the model with a special focus on the explicit form of the variational free energy function and introduce a conventional nonlinear optimization scheme for its maximization. Finally, we identify and discuss a number of computational issues which may be addressed in the future development of the approach. PMID:27114057

  4. Probabilistic multi-scale modeling of pathogen dynamics in rivers

    NASA Astrophysics Data System (ADS)

    Packman, A. I.; Drummond, J. D.; Aubeneau, A. F.

    2014-12-01

    Most parameterizations of microbial dynamics and pathogen transport in surface waters rely on classic assumptions of advection-diffusion behavior in the water column and limited interactions between the water column and sediments. However, recent studies have shown that strong surface-subsurface interactions produce a wide range of transport timescales in rivers and greatly increase the opportunity for long-term retention of pathogens in sediment beds and benthic biofilms. We present a stochastic model for pathogen dynamics, based on continuous-time random walk theory, that properly accounts for such diverse transport timescales, along with the remobilization and inactivation of pathogens in storage reservoirs. By representing pathogen dynamics probabilistically, the model framework enables diverse local-scale processes to be incorporated in system-scale models. We illustrate the application of the model to microbial dynamics in rivers based on the results of a tracer injection experiment. In-stream transport and surface-subsurface interactions are parameterized based on observations of conservative tracer transport, while E. coli retention and inactivation in sediments are parameterized based on direct local-scale experiments. The results indicate that sediments are an important reservoir of enteric organisms in rivers, and slow remobilization from sediments represents a long-term source of bacteria to streams. Current capabilities, potential advances, and limitations of this model framework for assessing pathogen transmission risks will be discussed. Because the transport model is probabilistic, it is amenable to incorporation into risk models, but a lack of characterization of key microbial processes in sediments and benthic biofilms hinders current application.
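
    Continuous-time random walk models of this kind draw a random waiting time between downstream displacements, often heavy-tailed to reflect storage in sediments and biofilms, with a chance of inactivation while immobile. A minimal particle-tracking sketch follows; the hop length, waiting-time exponent and inactivation rate are invented, not fitted to the tracer experiment.

```python
import numpy as np

rng = np.random.default_rng(3)

def ctrw_travel_times(n_particles=10_000, reach_length=1_000.0,
                      hop=50.0, alpha=1.5, t_min=10.0, k_inact=1e-6):
    """Time for each surviving particle to traverse the reach (seconds).

    Waiting times follow a Pareto (power-law) distribution with exponent
    `alpha`; particles may be inactivated while immobile at rate `k_inact`.
    All parameter values are illustrative, not taken from the cited study.
    """
    times = np.zeros(n_particles)
    alive = np.ones(n_particles, dtype=bool)
    for _ in range(int(reach_length / hop)):
        wait = t_min * (1.0 - rng.uniform(size=n_particles)) ** (-1.0 / alpha)
        alive &= rng.uniform(size=n_particles) < np.exp(-k_inact * wait)
        times += wait
    return times[alive]

t = ctrw_travel_times()
print(f"surviving fraction {len(t)/10_000:.2f}, "
      f"median travel time {np.median(t)/3600:.1f} h")
```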

  5. Probabilistic Modeling of Aircraft Trajectories for Dynamic Separation Volumes

    NASA Technical Reports Server (NTRS)

    Lewis, Timothy A.

    2016-01-01

    With a proliferation of new and unconventional vehicles and operations expected in the future, the ab initio airspace design will require new approaches to trajectory prediction for separation assurance and other air traffic management functions. This paper presents an approach to probabilistic modeling of the trajectory of an aircraft when its intent is unknown. The approach uses a set of feature functions to constrain a maximum entropy probability distribution based on a set of observed aircraft trajectories. This model can be used to sample new aircraft trajectories to form an ensemble reflecting the variability in an aircraft's intent. The model learning process ensures that the variability in this ensemble reflects the behavior observed in the original data set. Computational examples are presented.

  6. Coverage in Wireless Sensor Network Based on Probabilistic Sensing Model

    NASA Astrophysics Data System (ADS)

    Li, Fen; Deng, Kai; Meng, Fanzhi; Weiyan, Zhang

    One of the major problems to consider in designing a wireless sensor network is how to extend the network lifetime while providing the desired quality of service. A broadly used method to achieve this is topology control. This paper studies the problem of how to ensure that the network is fully connected without nodes' location information. A coverage control model based on a probabilistic sensing model is proposed in this paper. With random sensor deployment, the numbers of sensing and communicating nodes can be calculated from the size of the sensing region and the performance parameters of the nodes (e.g., node sensing radius, communication radius, and so on). Simulation results show that the actual coverage quality provided by sensing nodes scheduled with the proposed coverage control model is higher than the coverage quality threshold.
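
    In probabilistic sensing models, the detection probability typically decays with distance: certain detection inside an inner radius, an exponential fall-off between the inner and outer radii, and no detection beyond. Point coverage is then the probability that at least one deployed node detects the point. A minimal sketch follows with assumed model parameters and a random deployment; none of the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

R1, R2, LAM, BETA = 5.0, 15.0, 0.3, 1.0   # assumed sensing-model parameters

def detect_prob(d):
    """Probabilistic sensing: certain detection inside R1, exponential decay
    between R1 and R2, zero beyond R2 (d is an array of distances)."""
    p = np.exp(-LAM * np.clip(d - R1, 0.0, None) ** BETA)
    p[d <= R1] = 1.0
    p[d >= R2] = 0.0
    return p

# Random deployment of 200 sensors over a 100 m x 100 m region
sensors = rng.uniform(0, 100, size=(200, 2))

def coverage_at(point):
    d = np.linalg.norm(sensors - point, axis=1)
    return 1.0 - np.prod(1.0 - detect_prob(d))   # at least one node detects

grid = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                            np.linspace(0, 100, 50)), axis=-1).reshape(-1, 2)
cov = np.array([coverage_at(p) for p in grid])
print(f"fraction of points with coverage >= 0.9: {(cov >= 0.9).mean():.2f}")
```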

  7. Binary Encoded-Prototype Tree for Probabilistic Model Building GP

    NASA Astrophysics Data System (ADS)

    Yanase, Toshihiko; Hasegawa, Yoshihiko; Iba, Hitoshi

    In recent years, program evolution algorithms based on the estimation of distribution algorithm (EDA) have been proposed to improve the search ability of genetic programming (GP) and to overcome GP-hard problems. One such method is the probabilistic prototype tree (PPT) based algorithm. The PPT based method explores the optimal tree structure by using the full tree whose number of child nodes is the maximum among possible trees. This algorithm, however, suffers from problems arising from function nodes having different numbers of child nodes. These function nodes cause intron nodes, which do not affect the fitness function. Moreover, function nodes having many child nodes increase the search space and the number of samples necessary for properly constructing the probabilistic model. In order to solve this problem, we propose a binary encoding for the PPT. In this article, we convert each function node to a subtree of binary nodes such that the converted tree is grammatically correct. Our method reduces the ineffective search space, and the binary encoded tree is able to express the same tree structures as the original method. The effectiveness of the proposed method is demonstrated through two computational experiments.

  8. Choice.

    PubMed

    Greenberg, Jay

    2008-09-01

    Understanding how and why analysands make the choices they do is central to both the clinical and the theoretical projects of psychoanalysis. And yet we know very little about the process of choice or about the relationship between choices and motives. A striking parallel is to be found between the ways choice is narrated in ancient Greek texts and the experience of analysts as they observe patients making choices in everyday clinical work. Pursuing this convergence of classical and contemporary sensibilities will illuminate crucial elements of the various meanings of choice, and of the way that these meanings change over the course of psychoanalytic treatment. PMID:18802123

  9. Model for understanding consumer textural food choice.

    PubMed

    Jeltema, Melissa; Beckley, Jacqueline; Vahalik, Jennifer

    2015-05-01

    The current paradigm for developing products that will match the marketing messaging is flawed because the drivers of product choice and satisfaction based on texture are misunderstood. Qualitative research across 10 years has led to the thesis explored in this research that individuals have a preferred way to manipulate food in their mouths (i.e., mouth behavior) and that this behavior is a major driver of food choice, satisfaction, and the desire to repurchase. Texture, which is currently thought to be a major driver of product choice, is a secondary factor, and is important only in that it supports the primary driver-mouth behavior. A model for mouth behavior is proposed and the qualitative research supporting the identification of different mouth behaviors is presented. The development of a trademarked typing tool for characterizing mouth behavior is described along with quantitative substantiation of the tool's ability to group individuals by mouth behavior. The use of these four groups to understand textural preferences and the implications for a variety of areas including product design and weight management are explored. PMID:25987995

  10. Model for understanding consumer textural food choice

    PubMed Central

    Jeltema, Melissa; Beckley, Jacqueline; Vahalik, Jennifer

    2015-01-01

    The current paradigm for developing products that will match the marketing messaging is flawed because the drivers of product choice and satisfaction based on texture are misunderstood. Qualitative research across 10 years has led to the thesis explored in this research that individuals have a preferred way to manipulate food in their mouths (i.e., mouth behavior) and that this behavior is a major driver of food choice, satisfaction, and the desire to repurchase. Texture, which is currently thought to be a major driver of product choice, is a secondary factor, and is important only in that it supports the primary driver—mouth behavior. A model for mouth behavior is proposed and the qualitative research supporting the identification of different mouth behaviors is presented. The development of a trademarked typing tool for characterizing mouth behavior is described along with quantitative substantiation of the tool's ability to group individuals by mouth behavior. The use of these four groups to understand textural preferences and the implications for a variety of areas including product design and weight management are explored. PMID:25987995

  11. A probabilistic model of a porous heat exchanger

    NASA Technical Reports Server (NTRS)

    Agrawal, O. P.; Lin, X. A.

    1995-01-01

    This paper presents a probabilistic one-dimensional finite element model for heat transfer processes in porous heat exchangers. The Galerkin approach is used to develop the finite element matrices. Some of the submatrices are asymmetric due to the presence of the flow term. The Neumann expansion is used to write the temperature distribution as a series of random variables, and the expectation operator is applied to obtain the mean and deviation statistics. To demonstrate the feasibility of the formulation, a one-dimensional model of heat transfer phenomenon in superfluid flow through a porous media is considered. Results of this formulation agree well with the Monte-Carlo simulations and the analytical solutions. Although the numerical experiments are confined to parametric random variables, a formulation is presented to account for the random spatial variations.

  12. Applications of the International Space Station Probabilistic Risk Assessment Model

    NASA Technical Reports Server (NTRS)

    Grant, Warren; Lutomski, Michael G.

    2011-01-01

    Recently the International Space Station (ISS) has incorporated more Probabilistic Risk Assessments (PRAs) in the decision making process for significant issues. Future PRAs will have major impact to ISS and future spacecraft development and operations. These PRAs will have their foundation in the current complete ISS PRA model and the current PRA trade studies that are being analyzed as requested by ISS Program stakeholders. ISS PRAs have recently helped in the decision making process for determining reliability requirements for future NASA spacecraft and commercial spacecraft, making crew rescue decisions, as well as making operational requirements for ISS orbital orientation, planning Extravehicular activities (EVAs) and robotic operations. This paper will describe some applications of the ISS PRA model and how they impacted the final decision. This paper will discuss future analysis topics such as life extension, requirements of new commercial vehicles visiting ISS.

  13. Use of probabilistic inversion to model qualitative expert input when selecting a new nuclear reactor technology

    NASA Astrophysics Data System (ADS)

    Merritt, Charles R., Jr.

    Complex investment decisions by corporate executives often require the comparison of dissimilar attributes and competing technologies. A technique to evaluate qualitative input from experts using a Multi-Criteria Decision Method (MCDM) is described to select a new reactor technology for a merchant nuclear generator. The high capital cost, risks from design, licensing and construction, reactor safety and security considerations are some of the diverse considerations when choosing a reactor design. Three next generation reactor technologies are examined: the Advanced Pressurized-1000 (AP-1000) from Westinghouse, Economic Simplified Boiling Water Reactor (ESBWR) from General Electric, and the U.S. Evolutionary Power Reactor (U.S. EPR) from AREVA. Recent developments in MCDM and decision support systems are described. The uncertainty inherent in experts' opinions for the attribute weighting in the MCDM is modeled through the use of probabilistic inversion. In probabilistic inversion, a function is inverted into a random variable within a defined range. Once the distribution is created, random samples based on the distribution are used to perform a sensitivity analysis on the decision results to verify the "strength" of the results. The decision results for the pool of experts identified the U.S. EPR as the optimal choice.

  14. De novo protein conformational sampling using a probabilistic graphical model

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-11-01

    Efficient exploration of protein conformational space remains challenging, especially for large proteins, when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/.

  15. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    PubMed

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-01-01

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches. PMID:27347956

  16. De novo protein conformational sampling using a probabilistic graphical model

    PubMed Central

    Bhattacharya, Debswapna; Cheng, Jianlin

    2015-01-01

    Efficient exploration of protein conformational space remains challenging, especially for large proteins, when assembling discretized structural fragments extracted from a protein structure database. We propose a fragment-free probabilistic graphical model, FUSION, for conformational sampling in continuous space and assess its accuracy using ‘blind’ protein targets with a length up to 250 residues from the CASP11 structure prediction exercise. The method reduces sampling bottlenecks, exhibits strong convergence, and demonstrates better performance than the popular fragment assembly method, ROSETTA, on relatively larger proteins with a length of more than 150 residues in our benchmark set. FUSION is freely available through a web server at http://protein.rnet.missouri.edu/FUSION/. PMID:26541939

  17. Modeling Characteristics of an Operational Probabilistic Safety Assessment (PSA)

    SciTech Connect

    Anoba, Richard C.; Khalil, Yehia; Fluehr, J.J. III; Kellogg, Richard; Hackerott, Alan

    2002-07-01

    Probabilistic Safety Assessments (PSAs) are increasingly being used as a tool for supporting the acceptability of design, procurement, construction, operation, and maintenance activities at nuclear power plants. Since the issuance of Generic Letter 88-20 and the subsequent Individual Plant Examinations (IPEs)/Individual Plant Examinations for External Events (IPEEEs), the NRC has issued several Regulatory Guides, such as RG 1.182, to describe the use of PSA in risk-informed regulation activities. The PSA models developed for the IPEs were typically based on a 'snapshot' of the risk profile at the nuclear power plant. The IPE models contain implicit assumptions and simplifications that limit the ability to realistically assess current issues. For example, IPE modeling assumptions related to plant configuration limit the ability to perform online equipment out-of-service assessments. The lack of model symmetry results in skewed risk results. IPE model simplifications related to initiating events have resulted in non-conservative estimates of risk impacts when equipment is removed from service. The IPE models also do not explicitly address all external events that are potentially risk significant as equipment is removed from service. (authors)

  18. Probabilistic models for creep-fatigue in a steel alloy

    NASA Astrophysics Data System (ADS)

    Ibisoglu, Fatmagul

    In high temperature components subjected to long term cyclic operation, simultaneous creep and fatigue damage occur. A new methodology for creep-fatigue life assessment has been adopted without the need to separate creep and fatigue damage or expended life. Probabilistic models, described by hold times in tension and total strain range at temperature, have been derived based on the creep rupture behavior of a steel alloy. These models have been validated with the observed creep-fatigue life of the material with a scatter band close to a factor of 2. Uncertainties of the creep-fatigue model parameters have been estimated with WinBUGS which is an open source Bayesian analysis software tool that uses Markov Chain Monte Carlo method to fit statistical models. Secondly, creep deformation in stress relaxation data has been analyzed. Well performing creep equations have been validated with the observed data. The creep model with the highest goodness of fit among the validated models has been used to estimate probability of exceedance at 0.6% strain level for the steel alloy.

  19. Probabilistic models for reactive behaviour in heterogeneous condensed phase media

    NASA Astrophysics Data System (ADS)

    Baer, M. R.; Gartling, D. K.; DesJardin, P. E.

    2012-02-01

    This work presents statistically-based models to describe reactive behaviour in heterogeneous energetic materials. Mesoscale effects are incorporated in continuum-level reactive flow descriptions using probability density functions (pdfs) that are associated with thermodynamic and mechanical states. A generalised approach is presented that includes multimaterial behaviour by treating the volume fraction as a random kinematic variable. Model simplifications are then sought to reduce the complexity of the description without compromising the statistical approach. Reactive behaviour is first considered for non-deformable media having a random temperature field as an initial state. A pdf transport relationship is derived and an approximate moment approach is incorporated in finite element analysis to model an example application whereby a heated fragment impacts a reactive heterogeneous material which leads to a delayed cook-off event. Modelling is then extended to include deformation effects associated with shock loading of a heterogeneous medium whereby random variables of strain, strain-rate and temperature are considered. A demonstrative mesoscale simulation of a non-ideal explosive is discussed that illustrates the joint statistical nature of the strain and temperature fields during shock loading to motivate the probabilistic approach. This modelling is derived in a Lagrangian framework that can be incorporated in continuum-level shock physics analysis. Future work will consider particle-based methods for a numerical implementation of this modelling approach.

  20. Probabilistic modeling of scene dynamics for applications in visual surveillance.

    PubMed

    Saleemi, Imran; Shafique, Khurram; Shah, Mubarak

    2009-08-01

    We propose a novel method to model and learn the scene activity, observed by a static camera. The proposed model is very general and can be applied for solution of a variety of problems. The motion patterns of objects in the scene are modeled in the form of a multivariate nonparametric probability density function of spatiotemporal variables (object locations and transition times between them). Kernel Density Estimation is used to learn this model in a completely unsupervised fashion. Learning is accomplished by observing the trajectories of objects by a static camera over extended periods of time. It encodes the probabilistic nature of the behavior of moving objects in the scene and is useful for activity analysis applications, such as persistent tracking and anomalous motion detection. In addition, the model also captures salient scene features, such as the areas of occlusion and most likely paths. Once the model is learned, we use a unified Markov Chain Monte Carlo (MCMC)-based framework for generating the most likely paths in the scene, improving foreground detection, persistent labeling of objects during tracking, and deciding whether a given trajectory represents an anomaly to the observed motion patterns. Experiments with real-world videos are reported which validate the proposed approach. PMID:19542580
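
    The motion-pattern model above is a kernel density estimate over spatiotemporal transition features (start location, end location, transit time), which can then score how typical a new transition is. A minimal KDE sketch follows; the synthetic trajectory data and the per-dimension bandwidths are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Training data: observed transitions (x1, y1, x2, y2, dt), synthetic here.
walk = rng.normal(0, 1, size=(500, 2)).cumsum(axis=0) + 50
transitions = np.hstack([walk[:-1], walk[1:],
                         rng.uniform(0.5, 2.0, size=(499, 1))])

bandwidth = np.array([2.0, 2.0, 2.0, 2.0, 0.5])   # assumed kernel widths

def kde_score(z):
    """Average product-Gaussian kernel over all training transitions."""
    u = (transitions - z) / bandwidth
    k = np.exp(-0.5 * np.sum(u ** 2, axis=1))
    k /= np.prod(bandwidth) * (2 * np.pi) ** (len(bandwidth) / 2)
    return k.mean()

typical   = np.array([50.0, 50.0, 51.0, 50.5, 1.0])
anomalous = np.array([50.0, 50.0, 90.0, 10.0, 1.0])
print(f"typical {kde_score(typical):.2e}  anomalous {kde_score(anomalous):.2e}")
```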

  1. Detection and characterization of regulatory elements using probabilistic conditional random field and hidden Markov models.

    PubMed

    Wang, Hongyan; Zhou, Xiaobo

    2013-04-01

    By altering the electrostatic charge of histones or providing binding sites for protein recognition molecules, chromatin marks have been proposed to regulate gene expression, a property that has motivated researchers to link these marks to cis-regulatory elements. With the help of next-generation sequencing technologies, we can now correlate one specific chromatin mark with regulatory elements (e.g. enhancers or promoters) and also build tools, such as hidden Markov models, to gain insight into mark combinations. However, hidden Markov models are limited by their generative character and by the assumption that a current observation depends only on the current hidden state in the chain. Here, we employed two graphical probabilistic models, namely the linear conditional random field model and the multivariate hidden Markov model, to mark gene regions with different states based on the recurrent and spatially coherent character of eight chromatin marks. Both models revealed chromatin states that may correspond to enhancers and promoters, transcribed regions, transcriptional elongation, and low-signal regions. We also found that the linear conditional random field model was more effective than the hidden Markov model in recognizing regulatory elements, such as promoter-, enhancer-, and transcriptional elongation-associated regions, making it the better choice. PMID:23237214

  2. Probabilistic model for quick detection of dissimilar binary images

    NASA Astrophysics Data System (ADS)

    Mustafa, Adnan A. Y.

    2015-09-01

    We present a quick method to detect dissimilar binary images. The method is based on a "probabilistic matching model" for image matching. The matching model is used to predict the probability of occurrence of distinct-dissimilar image pairs (completely different images) when matching one image to another. Based on this model, distinct-dissimilar images can be detected with high confidence by matching only a few points between the two images, namely 11 points for a 99.9% successful detection rate. For image pairs that are dissimilar but not distinct-dissimilar, more points need to be mapped. The number of points required to attain a certain successful detection rate or confidence depends on the amount of similarity between the compared images. As this similarity increases, more points are required. For example, images that differ by 1% can be detected by mapping fewer than 70 points on average. More importantly, the model is image-size invariant, so images of any size will produce high confidence levels with a limited number of matched points. As a result, this method does not suffer from the image size handicap that impedes current methods. We report on extensive tests conducted on real images of different sizes.

  3. Probabilistic modeling of solidification grain structure in investment castings

    SciTech Connect

    Upadhya, G.K.; Yu, K.O.; Layton, M.A.; Paul, A.J.

    1995-12-31

    A probabilistic approach for modeling the evolution of grain structure in investment castings has been developed. The approach differs from the classical Monte Carlo simulations of microstructural evolution in that it uses the results from a heat transfer simulation of the investment casting process for determining the probabilities of nucleation and growth. The model was used to predict the solidification grain structure in castings. The model is quasi-3D, since it uses the information from a 3D simulation of heat transfer to predict the grain structure developed in any 2D-section of the casting. Structural transitions such as columnar/equiaxed transition can also be predicted, using suitable transition criteria. Results from the model have been validated by comparison with actual micrographs from experimental investment castings. In the first case, simulations were performed for a simple plate shaped casting of superalloy Rene 77. The effects of mold insulation as well as metal pour and mold preheat temperatures on the grain size of the casting were studied. In the second example, the casting of a complex-shaped jet engine component made of superalloy IN718 was simulated. Simulation results were seen to match very well with experiments.

  4. Modelling circumplanetary ejecta clouds at low altitudes: A probabilistic approach

    NASA Astrophysics Data System (ADS)

    Christou, Apostolos A.

    2015-04-01

    A model is presented of a ballistic, collisionless, steady state population of ejecta launched at randomly distributed times and velocities and moving under constant gravity above the surface of an airless planetary body. Within a probabilistic framework, closed form solutions are derived for the probability density functions of the altitude distribution of particles, the distribution of their speeds in a rest frame both at the surface and at altitude and with respect to a moving platform such as an orbiting spacecraft. These expressions are validated against numerically-generated synthetic populations of ejecta under lunar surface gravity. The model is applied to the cases where the ejection speed distribution is (a) uniform (b) a power law. For the latter law, it is found that the effective scale height of the ejecta envelope directly depends on the exponent of the power law and increases with altitude. The same holds for the speed distribution of particles near the surface. Ejection model parameters can, therefore, be constrained through orbital and surface measurements. The scope of the model is then extended to include size-dependency of the ejection speed and an example worked through for a deterministic power law relation. The result suggests that the height distribution of ejecta is a sensitive proxy for this dependency.

  5. Probabilistic models to describe the dynamics of migrating microbial communities.

    PubMed

    Schroeder, Joanna L; Lunn, Mary; Pinto, Ameet J; Raskin, Lutgarde; Sloan, William T

    2015-01-01

    In all but the most sterile environments bacteria will reside in fluid being transported through conduits and some of these will attach and grow as biofilms on the conduit walls. The concentration and diversity of bacteria in the fluid at the point of delivery will be a mix of those when it entered the conduit and those that have become entrained into the flow due to seeding from biofilms. Examples include fluids through conduits such as drinking water pipe networks, endotracheal tubes, catheters and ventilation systems. Here we present two probabilistic models to describe changes in the composition of bulk fluid microbial communities as they are transported through a conduit whilst exposed to biofilm communities. The first (discrete) model simulates absolute numbers of individual cells, whereas the other (continuous) model simulates the relative abundance of taxa in the bulk fluid. The discrete model is founded on a birth-death process whereby the community changes one individual at a time and the numbers of cells in the system can vary. The continuous model is a stochastic differential equation derived from the discrete model and can also accommodate changes in the carrying capacity of the bulk fluid. These models provide a novel Lagrangian framework to investigate and predict the dynamics of migrating microbial communities. In this paper we compare the two models, discuss their merits, possible applications and present simulation results in the context of drinking water distribution systems. Our results provide novel insight into the effects of stochastic dynamics on the composition of non-stationary microbial communities that are exposed to biofilms and provides a new avenue for modelling microbial dynamics in systems where fluids are being transported. PMID:25803866
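
    The discrete model described above is a birth-death process in which the bulk-fluid community changes one cell at a time, with the replacement drawn either from growth within the bulk or from detachment off the biofilm. A minimal two-taxon sketch of that update rule follows; the community size, biofilm composition and seeding probability are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_bulk_community(n_cells=1_000, n_steps=50_000,
                            p_biofilm=0.02, biofilm_freqs=(0.9, 0.1)):
    """One-cell-at-a-time update of a two-taxon bulk-fluid community.

    At each step a random cell washes out and is replaced either by a cell
    detached from the biofilm (probability `p_biofilm`, composition
    `biofilm_freqs`) or by reproduction of a random cell already in the bulk.
    All rates are illustrative, not fitted values from the cited study.
    """
    community = rng.integers(0, 2, size=n_cells)   # start near 50/50 of taxa 0 and 1
    freqs = []
    for _ in range(n_steps):
        dead = rng.integers(n_cells)
        if rng.uniform() < p_biofilm:
            community[dead] = rng.choice(2, p=biofilm_freqs)
        else:
            community[dead] = community[rng.integers(n_cells)]
        freqs.append(community.mean())             # frequency of taxon 1
    return np.array(freqs)

f = simulate_bulk_community()
print(f"final frequency of taxon 1: {f[-1]:.2f} (biofilm supplies 0.10)")
```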

  6. The Gain-Loss Model: A Probabilistic Skill Multimap Model for Assessing Learning Processes

    ERIC Educational Resources Information Center

    Robusto, Egidio; Stefanutti, Luca; Anselmi, Pasquale

    2010-01-01

    Within the theoretical framework of knowledge space theory, a probabilistic skill multimap model for assessing learning processes is proposed. The learning process of a student is modeled as a function of the student's knowledge and of an educational intervention on the attainment of specific skills required to solve problems in a knowledge…

  7. Probabilistic constitutive relationships for material strength degradation models

    NASA Technical Reports Server (NTRS)

    Boyce, L.; Chamis, C. C.

    1989-01-01

    In the present probabilistic methodology for the strength of aerospace propulsion system structural components subjected to such environmentally-induced primitive variables as loading stresses, high temperature, chemical corrosion, and radiation, time is encompassed as an interacting element, allowing the projection of creep and fatigue effects. A probabilistic constitutive equation is postulated to account for the degradation of strength due to these primitive variables which may be calibrated by an appropriately curve-fitted least-squares multiple regression of experimental data. The resulting probabilistic constitutive equation is embodied in the PROMISS code for aerospace propulsion component random strength determination.

  8. A Probabilistic Palimpsest Model of Visual Short-term Memory

    PubMed Central

    Matthey, Loic; Bays, Paul M.; Dayan, Peter

    2015-01-01

    Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ. PMID:25611204

  9. Probabilistic consequence model of accidental or intentional chemical releases.

    SciTech Connect

    Chang, Y.-S.; Samsa, M. E.; Folga, S. M.; Hartmann, H. M.

    2008-06-02

    In this work, general methodologies for evaluating the impacts of large-scale toxic chemical releases are proposed. The potential numbers of injuries and fatalities, the numbers of hospital beds, and the geographical areas rendered unusable during and some time after the occurrence and passage of a toxic plume are estimated on a probabilistic basis. To arrive at these estimates, historical accidental release data, maximum stored volumes, and meteorological data were used as inputs into the SLAB accidental chemical release model. Toxic gas footprints from the model were overlaid onto detailed population and hospital distribution data for a given region to estimate potential impacts. Output results are in the form of a generic statistical distribution of injuries and fatalities associated with specific toxic chemicals and regions of the United States. In addition, indoor hazards were estimated, so the model can provide contingency plans for either shelter-in-place or evacuation when an accident occurs. The stochastic distributions of injuries and fatalities are being used in a U.S. Department of Homeland Security-sponsored decision support system as source terms for a Monte Carlo simulation that evaluates potential measures for mitigating terrorist threats. This information can also be used to support the formulation of evacuation plans and to estimate damage and cleanup costs.

  10. Spatial polychaeta habitat potential mapping using probabilistic models

    NASA Astrophysics Data System (ADS)

    Choi, Jong-Kuk; Oh, Hyun-Joo; Koo, Bon Joo; Ryu, Joo-Hyung; Lee, Saro

    2011-06-01

    The purpose of this study was to apply probabilistic models to the mapping of the potential polychaeta habitat area in the Hwangdo tidal flat, Korea. Remote sensing techniques were used to construct spatial datasets of ecological environments and field observations were carried out to determine the distribution of macrobenthos. Habitat potential mapping was achieved for two polychaeta species, Prionospio japonica and Prionospio pulchra, and eight control factors relating to the tidal macrobenthos distribution were selected. These included the intertidal digital elevation model (DEM), slope, aspect, tidal exposure duration, distance from tidal channels, tidal channel density, spectral reflectance of the near infrared (NIR) bands and surface sedimentary facies from satellite imagery. The spatial relationships between the polychaeta species and each control factor were calculated using a frequency ratio and weights-of-evidence combined with geographic information system (GIS) data. The species were randomly divided into a training set (70%) to analyze habitat potential using frequency ratio and weights-of-evidence, and a test set (30%) to verify the predicted habitat potential map. The relationships were overlaid to produce a habitat potential map with a polychaeta habitat potential (PHP) index value. These maps were verified by comparing them to surveyed habitat locations such as the verification data set. For the verification results, the frequency ratio model showed prediction accuracies of 77.71% and 74.87% for P. japonica and P. pulchra, respectively, while those for the weights-of-evidence model were 64.05% and 62.95%. Thus, the frequency ratio model provided a more accurate prediction than the weights-of-evidence model. Our data demonstrate that the frequency ratio and weights-of-evidence models based upon GIS analysis are effective for generating habitat potential maps of polychaeta species in a tidal flat. The results of this study can be applied towards
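
    The frequency ratio used above is, for each class of a control factor, the share of species occurrences falling in that class divided by the share of the study area occupied by that class; the habitat potential index at a location sums the ratios of its classes. A minimal sketch follows for a single categorical factor on a raster grid; the factor map and presence points are synthetic, so the ratios hover near 1 by construction.

```python
import numpy as np

rng = np.random.default_rng(7)

# One categorical control factor (e.g. a sedimentary facies map) on a raster
# grid, plus species presence cells. All data below are synthetic.
n_classes = 4
factor = rng.integers(0, n_classes, size=(200, 200))
presence = (rng.integers(0, 200, 300), rng.integers(0, 200, 300))

def frequency_ratio(factor, presence, n_classes):
    """FR(class) = (% of presences in class) / (% of area in class)."""
    area_share = np.bincount(factor.ravel(), minlength=n_classes) / factor.size
    pres_share = np.bincount(factor[presence], minlength=n_classes) / len(presence[0])
    return pres_share / area_share

fr = frequency_ratio(factor, presence, n_classes)
habitat_index = fr[factor]      # map each cell to its class's ratio
print("FR per class:", np.round(fr, 2))
print("mean habitat potential index:", habitat_index.mean().round(2))
```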

  11. Predicting coastal cliff erosion using a Bayesian probabilistic model

    USGS Publications Warehouse

    Hapke, C.; Plant, N.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70-90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale. © 2010.

  12. Performance and Probabilistic Verification of Regional Parameter Estimates for Conceptual Rainfall-runoff Models

    NASA Astrophysics Data System (ADS)

    Franz, K.; Hogue, T.; Barco, J.

    2007-12-01

    Identification of appropriate parameter sets for simulation of streamflow in ungauged basins has become a significant challenge for both operational and research hydrologists. This is especially difficult in the case of conceptual models, when model parameters typically must be "calibrated" or adjusted to match streamflow conditions in specific systems (i.e. some of the parameters are not directly observable). This paper addresses the performance and uncertainty associated with transferring conceptual rainfall-runoff model parameters between basins within large-scale ecoregions. We use the National Weather Service's (NWS) operational hydrologic model, the SACramento Soil Moisture Accounting (SAC-SMA) model. A Multi-Step Automatic Calibration Scheme (MACS), using the Shuffled Complex Evolution (SCE), is used to optimize SAC-SMA parameters for a group of watersheds with extensive hydrologic records from the Model Parameter Estimation Experiment (MOPEX) database. We then explore "hydroclimatic" relationships between basins to facilitate regionalization of parameters for an established ecoregion in the southeastern United States. The impact of regionalized parameters is evaluated via standard model performance statistics as well as through generation of hindcasts and probabilistic verification procedures to evaluate streamflow forecast skill. Preliminary results show climatology ("climate neighbor") to be a better indicator of transferability than physical similarities or proximity ("nearest neighbor"). The mean and median of all the parameters within the ecoregion are the poorest choice for the ungauged basin. The choice of regionalized parameter set affected the skill of the ensemble streamflow hindcasts; however, all parameter sets show little skill in forecasts after five weeks (i.e. climatology is as good an indicator of future streamflows). In addition, the optimum parameter set changed seasonally, with the "nearest neighbor" showing the highest skill in the

  13. The Stay/Switch Model of Concurrent Choice

    ERIC Educational Resources Information Center

    MacDonall, James S.

    2009-01-01

    This experiment compared descriptions of concurrent choice by the stay/switch model, which says choice is a function of the reinforcers obtained for staying at and for switching from each alternative, and the generalized matching law, which says choice is a function of the total reinforcers obtained at each alternative. For the stay/switch model…

  14. A probabilistic model of intergranular stress corrosion cracking

    SciTech Connect

    Bourcier, R.J.; Jones, W.B.; Scully, J.R.

    1991-01-01

    We have developed a model which utilizes a probabilistic failure criterion to describe intergranular stress corrosion cracking (IGSCC). A two-dimensional array of elements representing a section of a pipe wall is analyzed, with each element in the array representing a segment of grain boundary. The failure criterion is applied repetitively to each element of the array that is exposed to the interior of the pipe (i.e. the corrosive fluid) until that element dissolves, thereby exposing the next element. A number of environmental, mechanical, and materials factors have been incorporated into the model, including: (1) the macroscopic applied stress profile, (2) the stress history, (3) the extent and grain-to-grain distribution of carbide sensitization levels, which can be applied to a subset of elements comprising a grain boundary, and (4) a data set containing IGSCC crack growth rates as a function of applied stress intensity and sensitization level averaged over a large population of grains. The latter information was obtained from the literature for AISI 304 stainless steel under light water nuclear reactor primary coolant environmental conditions. The resulting crack growth simulations are presented and discussed. 14 refs., 10 figs.
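
    The element-dissolution scheme described above can be caricatured with a short simulation. The sketch below is not the reported model; it replaces the stress- and sensitization-dependent failure criterion with an invented per-step failure probability scaled by a random grain-to-grain sensitization factor, purely to show how a crack front advances through a 2D array of grain-boundary elements.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 2D array of grain-boundary elements through the pipe wall.  Column 0 faces the
# corrosive fluid; an element can dissolve only after the element nearer the surface
# in its row has dissolved.
width, depth = 30, 40                                        # rows x elements through-wall
sensitization = rng.uniform(0.2, 1.0, size=(width, depth))   # grain-to-grain variation
p_base = 0.02                                                # baseline failure prob. per step

dissolved = np.zeros((width, depth), dtype=bool)
for step in range(2000):
    fully_cracked = dissolved.all(axis=1)
    # index of the first intact (exposed) element in each row
    front = np.where(fully_cracked, depth - 1, dissolved.argmin(axis=1))
    p_fail = p_base * sensitization[np.arange(width), front]
    fails = ~fully_cracked & (rng.random(width) < p_fail)
    dissolved[np.arange(width)[fails], front[fails]] = True
    if dissolved.all():
        break

depths = dissolved.sum(axis=1)
print(f"after {step + 1} steps: mean crack depth {depths.mean():.1f}, max {depths.max()} of {depth}")
```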

  15. Applications of the International Space Station Probabilistic Risk Assessment Model

    NASA Astrophysics Data System (ADS)

    Grant, W.; Lutomski, M.

    2012-01-01

    The International Space Station (ISS) program is continuing to expand the use of Probabilistic Risk Assessments (PRAs). The use of PRAs in the ISS decision making process has proven very successful over the past 8 years. PRAs are used in the decision making process to address significant operational and design issues as well as to identify, communicate, and mitigate risks. Future PRAs are expected to have major impacts on not only the ISS, but also future NASA programs and projects. Many of these PRAs will have their foundation in the current ISS PRA model and in PRA trade studies that are being developed for the ISS Program. ISS PRAs have supported: -Development of reliability requirements for future NASA and commercial spacecraft, -Determination of inherent risk for visiting vehicles, -Evaluation of potential crew rescue scenarios, -Operational requirements and alternatives, -Planning of Extravehicular activities (EV As) and, -Evaluation of robotics operations. This paper will describe some applications of the ISS PRA model and how they impacted the final decisions that were made.

  16. Probabilistic clustering and shape modelling of white matter fibre bundles using regression mixtures.

    PubMed

    Ratnarajah, Nagulan; Simmons, Andy; Hojjatoleslami, Ali

    2011-01-01

    We present a novel approach for probabilistic clustering of white matter fibre pathways using curve-based regression mixture modelling techniques in 3D curve space. The clustering algorithm is based on a principled method for probabilistic modelling of a set of fibre trajectories as individual sequences of points generated from a finite mixture model consisting of multivariate polynomial regression model components. Unsupervised learning is carried out using maximum likelihood principles. Specifically, conditional mixture is used together with an EM algorithm to estimate cluster membership. The result of clustering is a probabilistic assignment of fibre trajectories to each cluster and an estimate of cluster parameters. A statistical shape model is calculated for each clustered fibre bundle using fitted parameters of the probabilistic clustering. We illustrate the potential of our clustering approach on synthetic and real data. PMID:21995009

  17. Nested Logit Models for Multiple-Choice Item Response Data

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Bolt, Daniel M.

    2010-01-01

    Nested logit item response models for multiple-choice data are presented. Relative to previous models, the new models are suggested to provide a better approximation to multiple-choice items where the application of a solution strategy precedes consideration of response options. In practice, the models also accommodate collapsibility across all…

  18. Probabilistic approaches to the modelling of fluvial processes

    NASA Astrophysics Data System (ADS)

    Molnar, Peter

    2013-04-01

    Fluvial systems generally exhibit sediment dynamics that are strongly stochastic. This stochasticity comes basically from three sources: (a) the variability and randomness in sediment supply due to surface properties and topography; (b) from the multitude of pathways that sediment may take on hillslopes and in channels, and the uncertainty in travel times and sediment storage along those pathways; and (c) from the stochasticity which is inherent in mobilizing sediment, either by heavy rain, landslides, debris flows, slope erosion, channel avulsions, etc. Fully deterministic models of fluvial systems, even if they are physically realistic and very complex, are likely going to be unable to capture this stochasticity and as a result will fail to reproduce long-term sediment dynamics. In this paper I will review another approach to modelling fluvial processes, which grossly simplifies the systems itself, but allows for stochasticity in sediment supply, mobilization and transport. I will demonstrate the benefits and limitations of this probabilistic approach to fluvial processes on three examples. The first example is a probabilistic sediment cascade which we developed for the Illgraben, a debris flow basin in the Rhone catchment. In this example it will be shown how the probability distribution of landslides generating sediment input into the channel system is transposed into that of sediment yield out of the basin by debris flows. The key role of transient sediment storage in the channel system, which limits the size of potential debris flows, is highlighted together with the influence of the landslide triggering mechanisms and climate stochasticity. The second example focuses on the river reach scale in the Maggia River, a braided gravel-bed stream where the exposed sediment on gravel bars is colonised by riparian vegetation in periods without floods. A simple autoregressive model with a disturbance and colonization term is used to simulate the growth and decline in

  19. The Terrestrial Investigation Model: A probabilistic risk assessment model for birds exposed to pesticides

    EPA Science Inventory

    One of the major recommendations of the National Academy of Science to the USEPA, NMFS and USFWS was to utilize probabilistic methods when assessing the risks of pesticides to federally listed endangered and threatened species. The Terrestrial Investigation Model (TIM, version 3....

  20. Dynamic Probabilistic Modeling of Environmental Emissions of Engineered Nanomaterials.

    PubMed

    Sun, Tian Yin; Bornhöft, Nikolaus A; Hungerbühler, Konrad; Nowack, Bernd

    2016-05-01

    The need for an environmental risk assessment for engineered nanomaterials (ENM) necessitates knowledge of their environmental concentrations. Despite significant advances in analytical methods, it is still not possible to measure the concentrations of ENM in natural systems. Material flow and environmental fate models have been used to provide predicted environmental concentrations. However, almost all current models are static and consider neither the rapid development of ENM production nor the fact that many ENM are entering an in-use stock and are released with a lag phase. Here we use dynamic probabilistic material flow modeling to predict the flows of four ENM (nano-TiO2, nano-ZnO, nano-Ag and CNT) to the environment and to quantify their amounts in (temporary) sinks such as the in-use stock and ("final") environmental sinks such as soil and sediment. Owing to the increase in production, the concentrations of all ENM in all compartments are increasing. Nano-TiO2 had far higher concentrations than the other three ENM. In our worst-case scenario, sediment showed concentrations ranging from 6.7 μg/kg (CNT) to about 40 000 μg/kg (nano-TiO2). In most cases the concentrations in waste incineration residues are at the "mg/kg" level. The flows to the environment that we provide will constitute the most accurate and reliable input of masses for environmental fate models which are using process-based descriptions of the fate and behavior of ENM in natural systems and rely on accurate mass input parameters. PMID:27043743

  1. Probabilistic model-based approach for heart beat detection.

    PubMed

    Chen, Hugh; Erol, Yusuf; Shen, Eric; Russell, Stuart

    2016-09-01

    Nowadays, hospitals are ubiquitous and integral to modern society. Patients flow in and out of a veritable whirlwind of paperwork, consultations, and potential inpatient admissions, through an abstracted system that is not without flaws. One of the biggest flaws in the medical system is perhaps an unexpected one: the patient alarm system. One longitudinal study reported an 88.8% rate of false alarms, with other studies reporting numbers of similar magnitudes. These false alarm rates lead to deleterious effects that manifest in a lower standard of care across clinics. This paper discusses a model-based probabilistic inference approach to estimate physiological variables at a detection level. We design a generative model that complies with a layman's understanding of human physiology and perform approximate Bayesian inference. One primary goal of this paper is to justify a Bayesian modeling approach to increasing robustness in a physiological domain. In order to evaluate our algorithm we look at the application of heart beat detection using four datasets provided by PhysioNet, a research resource for complex physiological signals, in the form of the PhysioNet 2014 Challenge set-p1 and set-p2, the MIT-BIH Polysomnographic Database, and the MGH/MF Waveform Database. On these data sets our algorithm performs on par with the other top six submissions to the PhysioNet 2014 challenge. The overall evaluation scores in terms of sensitivity and positive predictivity values obtained were as follows: set-p1 (99.72%), set-p2 (93.51%), MIT-BIH (99.66%), and MGH/MF (95.53%). These scores are based on the averaging of gross sensitivity, gross positive predictivity, average sensitivity, and average positive predictivity. PMID:27480267

  2. Forecasting the duration of volcanic eruptions: an empirical probabilistic model

    NASA Astrophysics Data System (ADS)

    Gunn, L. S.; Blake, S.; Jones, M. C.; Rymer, H.

    2014-01-01

    The ability to forecast future volcanic eruption durations would greatly benefit emergency response planning prior to and during a volcanic crisis. This paper introduces a probabilistic model to forecast the duration of future and on-going eruptions. The model fits theoretical distributions to observed duration data and relies on past eruptions being a good indicator of future activity. A dataset of historical Mt. Etna flank eruptions is presented and used to demonstrate the model. The data have been compiled through critical examination of existing literature along with careful consideration of uncertainties on reported eruption start and end dates between the years 1300 AD and 2010. Data following 1600 are considered to be reliable and free of reporting biases. The distribution of eruption duration between the years 1600 and 1669 is found to be statistically different from that following it and the forecasting model is run on two datasets of Mt. Etna flank eruption durations: 1600-2010 and 1670-2010. Each dataset is modelled using a log-logistic distribution with parameter values found by maximum likelihood estimation. Survivor function statistics are applied to the model distributions to forecast (a) the probability of an eruption exceeding a given duration, (b) the probability of an eruption that has already lasted a particular number of days exceeding a given total duration and (c) the duration with a given probability of being exceeded. Results show that excluding the 1600-1670 data has little effect on the forecasting model result, especially where short durations are involved. By assigning the terms 'likely' and 'unlikely' to probabilities of 66 % or more and 33 % or less, respectively, the forecasting model based on the 1600-2010 dataset indicates that a future flank eruption on Mt. Etna would be likely to exceed 20 days (± 7 days) but unlikely to exceed 86 days (± 29 days). This approach can easily be adapted for use on other highly active, well
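
    A minimal sketch of the survivor-function calculations (a)-(c) is given below, using SciPy's log-logistic (Fisk) distribution fitted by maximum likelihood; the duration sample is simulated here, since the Etna catalogue itself is not reproduced in this record.

```python
import numpy as np
from scipy import stats

# Hypothetical eruption durations in days (not the real Etna catalogue).
rng = np.random.default_rng(2)
durations = stats.fisk.rvs(c=1.5, scale=30, size=60, random_state=rng)

# Maximum-likelihood fit of a log-logistic (Fisk) distribution, location fixed at zero.
c, loc, scale = stats.fisk.fit(durations, floc=0)

# (a) probability an eruption exceeds 20 days
p_exceed_20 = stats.fisk.sf(20, c, loc, scale)

# (b) probability an eruption that has already lasted 10 days exceeds 30 days in total
p_30_given_10 = stats.fisk.sf(30, c, loc, scale) / stats.fisk.sf(10, c, loc, scale)

# (c) duration with a 33% probability of being exceeded
d_33 = stats.fisk.isf(0.33, c, loc, scale)

print(f"P(D>20d)={p_exceed_20:.2f}, P(D>30d|D>10d)={p_30_given_10:.2f}, d(33%)={d_33:.1f} d")
```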

  3. Hierarchical Diffusion Models for Two-Choice Response Times

    ERIC Educational Resources Information Center

    Vandekerckhove, Joachim; Tuerlinckx, Francis; Lee, Michael D.

    2011-01-01

    Two-choice response times are a common type of data, and much research has been devoted to the development of process models for such data. However, the practical application of these models is notoriously complicated, and flexible methods are largely nonexistent. We combine a popular model for choice response times--the Wiener diffusion…

  4. An empirical model for probabilistic decadal prediction: A global analysis

    NASA Astrophysics Data System (ADS)

    Suckling, Emma; Hawkins, Ed; Eden, Jonathan; van Oldenborgh, Geert Jan

    2016-04-01

    Empirical models, designed to predict land-based surface variables over seasons to decades ahead, provide useful benchmarks for comparison against the performance of dynamical forecast systems; they may also be employable as predictive tools for use by climate services in their own right. A new global empirical decadal prediction system is presented, based on a multiple linear regression approach designed to produce probabilistic output for comparison against dynamical models. Its performance is evaluated for surface air temperature over a set of historical hindcast experiments under a series of different prediction `modes'. The modes include a real-time setting, a scenario in which future volcanic forcings are prescribed during the hindcasts, and an approach which exploits knowledge of the forced trend. A two-tier prediction system, which uses knowledge of future sea surface temperatures in the Pacific and Atlantic Oceans, is also tested, but within a perfect knowledge framework. Each mode is designed to identify sources of predictability and uncertainty, as well as investigate different approaches to the design of decadal prediction systems for operational use. It is found that the empirical model shows skill above that of persistence hindcasts for annual means at lead times of up to ten years ahead in all of the prediction modes investigated. Small improvements in skill are found at all lead times when including future volcanic forcings in the hindcasts. It is also suggested that hindcasts which exploit full knowledge of the forced trend due to increasing greenhouse gases throughout the hindcast period can provide more robust estimates of model bias for the calibration of the empirical model in an operational setting. The two-tier system shows potential for improved real-time prediction, given the assumption that skilful predictions of large-scale modes of variability are available. The empirical model framework has been designed with enough flexibility to

  5. Integration of fuzzy analytic hierarchy process and probabilistic dynamic programming in formulating an optimal fleet management model

    NASA Astrophysics Data System (ADS)

    Teoh, Lay Eng; Khoo, Hooi Ling

    2013-09-01

    This study deals with two major aspects of airlines, i.e. supply and demand management. The aspect of supply focuses on the mathematical formulation of an optimal fleet management model to maximize operational profit of the airlines, while the aspect of demand focuses on the incorporation of mode choice modeling as part of the developed model. The proposed methodology is outlined in two stages: first, Fuzzy Analytic Hierarchy Process is adopted to capture mode choice modeling in order to quantify the probability of probable phenomena (for the aircraft acquisition/leasing decision). Then, an optimization model is developed as a probabilistic dynamic programming model to determine the optimal number and types of aircraft to be acquired and/or leased in order to meet stochastic demand during the planning horizon. The findings of an illustrative case study show that the proposed methodology is viable. The results demonstrate that the incorporation of mode choice modeling could affect the operational profit and fleet management decision of the airlines to varying degrees.

  6. A Survey of Probabilistic Models for Relational Data

    SciTech Connect

    Koutsourelakis, P S

    2006-10-13

    Traditional data mining methodologies have focused on "flat" data, i.e. a collection of identically structured entities, assumed to be independent and identically distributed. However, many real-world datasets are innately relational in that they consist of multi-modal entities and multi-relational links (where each entity- or link-type is characterized by a different set of attributes). Link structure is an important characteristic of a dataset and should not be ignored in modeling efforts, especially when statistical dependencies exist between related entities. These dependencies can in fact significantly improve the accuracy of inference and prediction results, if the relational structure is appropriately leveraged (Figure 1). The need for models that can incorporate relational structure has been accentuated by new technological developments which allow us to easily track, store, and make accessible large amounts of data. Recently, there has been a surge of interest in statistical models for dealing with richly interconnected, heterogeneous data, fueled largely by information mining of web/hypertext data, social networks, bibliographic citation data, epidemiological data and communication networks. Graphical models have a natural formalism for representing complex relational data and for predicting the underlying evolving system in a dynamic framework. The present survey provides an overview of probabilistic methods and techniques that have been developed over the last few years for dealing with relational data. Particular emphasis is paid to approaches pertinent to the research areas of pattern recognition, group discovery, entity/node classification, and anomaly detection. We start with supervised learning tasks, where two basic modeling approaches are discussed, i.e. discriminative and generative. Several discriminative techniques are reviewed and performance results are presented. Generative methods are discussed in a separate survey. A special section is

  7. Some Proposed Modifications to the 1996 California Probabilistic Hazard Model

    NASA Astrophysics Data System (ADS)

    Cao, T.; Bryant, W. A.; Rowshandel, B.; Toppozada, T.; Reichle, M. S.; Petersen, M. D.; Frankel, A. D.

    2001-12-01

    The California Department of Conservation, Division of Mines and Geology and U. S. Geological Survey are working on the revision of the 1996 California Probabilistic hazard model. Since the release of this hazard model, some of the new seismological and geological studies and observations in this area have provided the basis for the revision. Important considerations of model modifications include the following: 1. using a new bilinear fault area-magnitude relation to replace the Wells and Coppersmith (1994) relation for M greater than or equal to 7.0; 2. using the Gaussian function to replace the Dirac Delta function for characteristic magnitude; 3. updating the earthquake catalog with the new M greater than or equal to 5.5 catalog from 1800 to 1999 by Toppozada et al. (2000) and the Berkeley and Caltech catalogs for 1996-2001; 4. balancing the moment release for some major A type faults; 5. adding Abrahamson and Silva attenuation relation with new hanging wall term; 6. considering different ratios between characteristic and Gutenberg-Richter magnitude-frequency distributions other than 50 percent and 50 percent; 7. using Monte Carlo method to sample the logic tree to produce uncertainty map of coefficient of variation (COV); 8. separating background seismicity in the vicinity of faults from other areas for different smoothing process or no smoothing at all, especially for the creeping section of the San Andreas fault and the Brawley seismic zone; 9. using near-fault variability of attenuation relations to mimic directivity; 10. modifying slip-rates for the Concord-Green Valley, Sierra Madre, and Raymond faults, adding or modifying blind thrust faults mainly in the Los Angeles Basin. These possible changes were selected with input received during several workshops that included participation of geologists and seismologists familiar with the area of concern. With the above revisions and other changes, we expect that the new model should not differ greatly from the

  8. Probabilistic modelling of sea surges in coastal urban areas

    NASA Astrophysics Data System (ADS)

    Georgiadis, Stylianos; Jomo Danielsen Sørup, Hjalte; Arnbjerg-Nielsen, Karsten; Nielsen, Bo Friis

    2016-04-01

    Urban floods are a major issue for coastal cities with severe impacts on economy, society and environment. A main cause of floods is sea surges stemming from extreme weather conditions. In the context of urban flooding, certain standards have to be met by critical infrastructures in order to protect them from floods. These standards can be so strict that no empirical data is available. For instance, protection plans for sub-surface railways against floods are established with 10,000-year return levels. Furthermore, the long technical lifetime of such infrastructures is a critical issue that should be considered, along with the associated climate change effects in this lifetime. We present a case study of Copenhagen where the metro system is being expanded at present with several stations close to the sea. The current critical sea levels for the metro have never been exceeded and Copenhagen has only been severely flooded from pluvial events in the time where measurements have been conducted. However, due to the very high return period that the metro has to be able to withstand and due to the expectations of sea-level rise due to climate change, reliable estimates of the occurrence rate and magnitude of sea surges have to be established as the current protection is expected to be insufficient at some point within the technical lifetime of the metro. The objective of this study is to probabilistically model sea level in Copenhagen as opposed to extrapolating the extreme statistics as is often the practice. A better understanding and more realistic description of the phenomena leading to sea surges can then be given. The application of hidden Markov models to high-resolution data of sea level for different meteorological stations in and around Copenhagen is an effective tool to address uncertainty. For sea surge studies, the hidden states of the model may reflect the hydrological processes that contribute to coastal floods. Also, the states of the hidden Markov
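
    To make the hidden Markov idea concrete, the following sketch runs a normalised forward filter for a two-state (calm/surge) Gaussian HMM over simulated hourly sea levels; the transition matrix, emission parameters, and data are invented for illustration and are not the Copenhagen records.

```python
import numpy as np
from scipy import stats

# Two-state Gaussian hidden Markov model for hourly sea level anomaly (m).
A = np.array([[0.98, 0.02],          # transition matrix: calm -> calm/surge
              [0.10, 0.90]])         #                    surge -> calm/surge
pi = np.array([0.95, 0.05])          # initial state distribution
means, sds = np.array([0.0, 1.2]), np.array([0.25, 0.5])

rng = np.random.default_rng(3)
obs = np.concatenate([rng.normal(0.0, 0.25, 200), rng.normal(1.2, 0.5, 30)])

def forward_filter(obs, A, pi, means, sds):
    """Normalised forward algorithm: filtered P(state_t | obs_1..t) and log-likelihood."""
    alpha = np.zeros((len(obs), len(pi)))
    loglik = 0.0
    prev = pi
    for t, y in enumerate(obs):
        emit = stats.norm.pdf(y, means, sds)
        unnorm = (prev @ A if t else prev) * emit
        norm = unnorm.sum()
        alpha[t] = unnorm / norm
        loglik += np.log(norm)
        prev = alpha[t]
    return alpha, loglik

alpha, loglik = forward_filter(obs, A, pi, means, sds)
print("log-likelihood:", round(loglik, 1), " P(surge) at last hour:", round(alpha[-1, 1], 3))
```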

  9. A Neurocomputational Model of Altruistic Choice and Its Implications.

    PubMed

    Hutcherson, Cendri A; Bushong, Benjamin; Rangel, Antonio

    2015-07-15

    We propose a neurocomputational model of altruistic choice and test it using behavioral and fMRI data from a task in which subjects make choices between real monetary prizes for themselves and another. We show that a multi-attribute drift-diffusion model, in which choice results from accumulation of a relative value signal that linearly weights payoffs for self and other, captures key patterns of choice, reaction time, and neural response in ventral striatum, temporoparietal junction, and ventromedial prefrontal cortex. The model generates several novel insights into the nature of altruism. It explains when and why generous choices are slower or faster than selfish choices, and why they produce greater response in TPJ and vmPFC, without invoking competition between automatic and deliberative processes or reward value for generosity. It also predicts that when one's own payoffs are valued more than others', some generous acts may reflect mistakes rather than genuinely pro-social preferences. PMID:26182424
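
    A toy version of the multi-attribute drift-diffusion mechanism can be simulated in a few lines: the drift is a linear weighting of the self and other payoff differences between a generous and a selfish option, and the first boundary crossing yields the choice and reaction time. The weights, threshold, and payoffs below are illustrative, not the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def ddm_choice(self_gen, other_gen, self_selfish, other_selfish,
               w_self=0.05, w_other=0.03, threshold=1.5, noise=1.0, dt=0.01):
    """One trial of a drift-diffusion process between a generous and a selfish option.

    Drift = linear weighting of the payoff differences for self and other; positive
    drift favours the generous option.  Returns (chose_generous, reaction_time_s)."""
    drift = w_self * (self_gen - self_selfish) + w_other * (other_gen - other_selfish)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return x > 0, t

# Generous option: give up $20 of one's own payoff so the other person gains $50.
trials = [ddm_choice(self_gen=30, other_gen=50, self_selfish=50, other_selfish=0)
          for _ in range(2000)]
chose_gen = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
print(f"P(generous)={chose_gen.mean():.2f}, "
      f"mean RT generous={rts[chose_gen].mean():.2f}s, selfish={rts[~chose_gen].mean():.2f}s")
```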

  10. Data-directed RNA secondary structure prediction using probabilistic modeling.

    PubMed

    Deng, Fei; Ledda, Mirko; Vaziri, Sana; Aviran, Sharon

    2016-08-01

    Structure dictates the function of many RNAs, but secondary RNA structure analysis is either labor intensive and costly or relies on computational predictions that are often inaccurate. These limitations are alleviated by integration of structure probing data into prediction algorithms. However, existing algorithms are optimized for a specific type of probing data. Recently, new chemistries combined with advances in sequencing have facilitated structure probing at unprecedented scale and sensitivity. These novel technologies and anticipated wealth of data highlight a need for algorithms that readily accommodate more complex and diverse input sources. We implemented and investigated a recently outlined probabilistic framework for RNA secondary structure prediction and extended it to accommodate further refinement of structural information. This framework utilizes direct likelihood-based calculations of pseudo-energy terms per considered structural context and can readily accommodate diverse data types and complex data dependencies. We use real data in conjunction with simulations to evaluate performances of several implementations and to show that proper integration of structural contexts can lead to improvements. Our tests also reveal discrepancies between real data and simulations, which we show can be alleviated by refined modeling. We then propose statistical preprocessing approaches to standardize data interpretation and integration into such a generic framework. We further systematically quantify the information content of data subsets, demonstrating that high reactivities are major drivers of SHAPE-directed predictions and that better understanding of less informative reactivities is key to further improvements. Finally, we provide evidence for the adaptive capability of our framework using mock probe simulations. PMID:27251549

  11. Building a Probabilistic Denitrification Model for an Oregon Salt Marsh

    NASA Astrophysics Data System (ADS)

    Moon, J. B.; Stecher, H. A.; DeWitt, T.; Nahlik, A.; Regutti, R.; Michael, L.; Fennessy, M. S.; Brown, L.; Mckane, R.; Naithani, K. J.

    2015-12-01

    Despite abundant work starting in the 1950s on the drivers of denitrification (DeN), mechanistic complexity and methodological challenges of direct DeN measurements have resulted in a lack of reliable rate estimates across landscapes, and a lack of operationally valid, robust models. Measuring and modeling DeN are particularly challenging in tidal systems, which play a vital role in buffering adjacent coastal waters from nitrogen inputs. These systems are hydrologically and biogeochemically complex, varying on fine temporal and spatial scales. We assessed the spatial and temporal variability of soil nitrate (NO3-) levels and O2 availability, two primary drivers of DeN, in surface soils of Winant salt marsh located in Yaquina estuary, OR during the summers of 2013 and 2014. We found low temporal variability in soil NO3- concentrations across years, tide series, and tide cycles, but high spatial variability linked to elevation gradients (i.e., habitat types); spatial variability within the high marsh habitat (0 - 68 μg N g-1 dry soil) was correlated with distance to major tide creek channels and connectivity to upslope N-fixing red alder. Soil O2 measurements collected at 5 cm below ground across three locations on two spring tide series showed that O2 drawdown rates were also spatially variable. Depending on the marsh location, O2 draw down ranged from sub-optimal for DeN (> 80 % O2 saturation) across an entire tide series (i.e., across days) to optimum (i.e., ~ 0 % O2 saturation) within one overtopping tide event (i.e., within hours). We are using these results, along with empirical relationships created between DeN and soil NO3- concentrations for Winant to improve on a pre-existing tidal DeN model. We will develop the first version of a fully probabilistic hierarchical Bayesian tidal DeN model to quantify parameter and prediction uncertainties, which are as important as determining mean predictions in order to distinguish measurable differences across the marsh.

  12. Estimation of an Occupational Choice Model when Occupations Are Misclassified

    ERIC Educational Resources Information Center

    Sullivan, Paul

    2009-01-01

    This paper develops an empirical occupational choice model that corrects for misclassification in occupational choices and measurement error in occupation-specific work experience. The model is used to estimate the extent of measurement error in occupation data and quantify the bias that results from ignoring measurement error in occupation codes…

  13. Children's Conceptions of Career Choice and Attainment: Model Development

    ERIC Educational Resources Information Center

    Howard, Kimberly A. S.; Walsh, Mary E.

    2011-01-01

    This article describes a model of children's conceptions of two key career development processes: career choice and career attainment. The model of children's understanding of career choice and attainment was constructed with developmental research and theory into children's understanding of allied phenomena such as their understanding of illness,…

  14. Opponent actor learning (OpAL): modeling interactive effects of striatal dopamine on reinforcement learning and choice incentive.

    PubMed

    Collins, Anne G E; Frank, Michael J

    2014-07-01

    The striatal dopaminergic system has been implicated in reinforcement learning (RL), motor performance, and incentive motivation. Various computational models have been proposed to account for each of these effects individually, but a formal analysis of their interactions is lacking. Here we present a novel algorithmic model expanding the classical actor-critic architecture to include fundamental interactive properties of neural circuit models, incorporating both incentive and learning effects into a single theoretical framework. The standard actor is replaced by a dual opponent actor system representing distinct striatal populations, which come to differentially specialize in discriminating positive and negative action values. Dopamine modulates the degree to which each actor component contributes to both learning and choice discriminations. In contrast to standard frameworks, this model simultaneously captures documented effects of dopamine on both learning and choice incentive-and their interactions-across a variety of studies, including probabilistic RL, effort-based choice, and motor skill learning. PMID:25090423

  15. Hybrid discrete choice models: Gained insights versus increasing effort.

    PubMed

    Mariel, Petr; Meyerhoff, Jürgen

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often significantly increase. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to their suite of models routinely estimated. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The use of one of the two proposed approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. PMID:27310534

  16. A probabilistic graphical model approach to stochastic multiscale partial differential equations

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas; Center for Applied Mathematics, Cornell University, 657 Frank H.T. Rhodes Hall, Ithaca, NY 14853

    2013-10-01

    We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.

  17. A fully probabilistic approach to extreme rainfall modeling

    NASA Astrophysics Data System (ADS)

    Coles, Stuart; Pericchi, Luis Raúl; Sisson, Scott

    2003-03-01

    It is an embarrassingly frequent experience that statistical practice fails to foresee historical disasters. It is all too easy to blame global trends or some sort of external intervention, but in this article we argue that statistical methods that do not take comprehensive account of the uncertainties involved in both model and predictions are bound to produce an over-optimistic appraisal of future extremes that is often contradicted by observed hydrological events. Based on the annual and daily rainfall data on the central coast of Venezuela, different modeling strategies and inference approaches show that the 1999 rainfall which caused the worst environmentally related tragedy in Venezuelan history was extreme, but not implausible given the historical evidence. We follow in turn a classical likelihood and Bayesian approach, arguing that the latter is the most natural approach for taking into account all uncertainties. In each case we emphasize the importance of making inference on predicted levels of the process rather than model parameters. Our most detailed model comprises seasons with unknown starting points and durations for the extremes of daily rainfall whose behavior is described using a standard threshold model. Based on a Bayesian analysis of this model, so that both prediction uncertainty and process heterogeneity are properly modeled, we find that the 1999 event has a sizeable probability which implies that such an occurrence within a reasonably short time horizon could have been anticipated. Finally, since accumulation of extreme rainfall over several days is an additional difficulty—and indeed, the catastrophe of 1999 was exacerbated by heavy rainfall on successive days—we examine the effect of timescale on our broad conclusions, finding results to be broadly similar across different choices.
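
    The classical-likelihood side of the threshold model mentioned above can be sketched as a standard peaks-over-threshold analysis: fit a generalized Pareto distribution to excesses over a high threshold and invert it for return levels. The rainfall series below is simulated, so the numbers bear no relation to the Venezuelan record.

```python
import numpy as np
from scipy import stats

# Simulated daily rainfall (mm/day) standing in for an observed record.
rng = np.random.default_rng(5)
daily_rain = stats.gamma.rvs(a=0.8, scale=12, size=30 * 365, random_state=rng)

threshold = np.quantile(daily_rain, 0.98)
excesses = daily_rain[daily_rain > threshold] - threshold
rate = len(excesses) / (len(daily_rain) / 365.0)          # exceedances per year

# Generalized Pareto fit to the excesses (maximum likelihood, location fixed at 0).
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)

def return_level(T_years):
    """Daily rainfall level exceeded on average once every T_years."""
    p = 1.0 / (rate * T_years)           # prob. that a given exceedance surpasses the level
    return threshold + stats.genpareto.isf(p, shape, loc, scale)

for T in (10, 50, 100):
    print(f"{T:>4}-year daily rainfall: {return_level(T):.0f} mm")
```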

  18. Probabilistic modeling of flood characterizations with parametric and minimum information pair-copula model

    NASA Astrophysics Data System (ADS)

    Daneshkhah, Alireza; Remesan, Renji; Chatrabgoun, Omid; Holman, Ian P.

    2016-09-01

    This paper highlights the usefulness of the minimum information and parametric pair-copula construction (PCC) to model the joint distribution of flood event properties. Both of these models outperform other standard multivariate copulas in modeling multivariate flood data that exhibit complex patterns of dependence, particularly in the tails. In particular, the minimum information pair-copula model shows greater flexibility and produces a better approximation of the joint probability density, and the corresponding measures are suitable for effective hazard assessment. The study demonstrates that any multivariate density can be approximated to any degree of desired precision using the minimum information pair-copula model and can be practically used for probabilistic flood hazard assessment.

  19. Educational Choice: A Privately Funded Model. School Choice with a Bite.

    ERIC Educational Resources Information Center

    Aguirre, Robert B.; Steiger, Fritz S., Ed.

    A privately funded educational choice model is presented in this document, which is based on the experiences of four privately funded programs in Indianapolis (Indiana), San Antonio (Texas), Milwaukee (Wisconsin), and Atlanta (Georgia). The model is designed to help interested persons or organizations establish a privately funded educational…

  20. Characterizing the International Migration Barriers with a Probabilistic Multilateral Migration Model

    PubMed Central

    Li, Xiaomeng; Xu, Hongzhong; Chen, Jiawei; Chen, Qinghua; Zhang, Jiang; Di, Zengru

    2016-01-01

    Human migration is responsible for forming modern civilization and has had an important influence on the development of various countries. There are many issues worth researching, and “the reason to move” is the most basic one. The concept of migration cost in the classical self-selection theory, which was introduced by Roy and Borjas, is useful. However, migration cost cannot address global migration because of the limitations of deterministic and bilateral choice. Following the idea of migration cost, this paper developed a new probabilistic multilateral migration model by introducing the Boltzmann factor from statistical physics. After characterizing the underlying mechanism or driving force of human mobility, we reveal some interesting facts that have provided a deeper understanding of international migration, such as the negative correlation between migration costs for emigrants and immigrants and a global classification with clear regional and economic characteristics, based on clustering of migration cost vectors. In addition, we deconstruct the migration barriers using regression analysis and find that the influencing factors are complicated but can be partly (12.5%) described by several macro indexes, such as the GDP growth of the destination country, the GNI per capita and the HDI of both the source and destination countries. PMID:27597319
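
    The Boltzmann-factor choice rule at the heart of the model can be written compactly: the probability of choosing destination j from origin i is proportional to exp(-cost_ij / T). The sketch below uses an invented cost matrix and temperature purely to show the normalisation over destinations.

```python
import numpy as np

# Minimal sketch of a Boltzmann-factor choice rule over destinations.
# cost[i, j] is an invented migration cost from country i to country j;
# lower cost gives an exponentially higher probability of choosing that destination.
rng = np.random.default_rng(6)
n_countries = 5
cost = rng.uniform(1.0, 5.0, size=(n_countries, n_countries))
np.fill_diagonal(cost, np.inf)              # staying home is not a "migration" choice here

def destination_probs(cost, temperature=1.0):
    """P(i -> j) proportional to exp(-cost_ij / T), normalised over destinations j."""
    w = np.exp(-cost / temperature)
    return w / w.sum(axis=1, keepdims=True)

P = destination_probs(cost)
print(np.round(P, 3))            # each row sums to 1: a multilateral choice distribution
```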

  1. Characterizing the International Migration Barriers with a Probabilistic Multilateral Migration Model.

    PubMed

    Li, Xiaomeng; Xu, Hongzhong; Chen, Jiawei; Chen, Qinghua; Zhang, Jiang; Di, Zengru

    2016-01-01

    Human migration is responsible for forming modern civilization and has had an important influence on the development of various countries. There are many issues worth researching, and "the reason to move" is the most basic one. The concept of migration cost in the classical self-selection theory, which was introduced by Roy and Borjas, is useful. However, migration cost cannot address global migration because of the limitations of deterministic and bilateral choice. Following the idea of migration cost, this paper developed a new probabilistic multilateral migration model by introducing the Boltzmann factor from statistical physics. After characterizing the underlying mechanism or driving force of human mobility, we reveal some interesting facts that have provided a deeper understanding of international migration, such as the negative correlation between migration costs for emigrants and immigrants and a global classification with clear regional and economic characteristics, based on clustering of migration cost vectors. In addition, we deconstruct the migration barriers using regression analysis and find that the influencing factors are complicated but can be partly (12.5%) described by several macro indexes, such as the GDP growth of the destination country, the GNI per capita and the HDI of both the source and destination countries. PMID:27597319

  2. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    SciTech Connect

    Cetiner, Mustafa Sacit; Flanagan, George F.; Poore III, Willis P.; Muhlheim, Michael David

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  3. Multidimensional analysis and probabilistic model of volcanic and seismic activities

    NASA Astrophysics Data System (ADS)

    Fedorov, V.

    2009-04-01

    .I. Gushchenko, 1979) and seismological (database of USGS/NEIC Significant Worldwide Earthquakes, 2150 B.C.- 1994 A.D.) information which displays dynamics of endogenic relief-forming processes over a period of 1900 to 1994. In the course of the analysis, a substitution of the calendar variable by a corresponding astronomical one has been performed and the epoch superposition method was applied. In essence, the method consists in that the datasets of information on volcanic eruptions (over a period of 1900 to 1977) and seismic events (1900-1994) are differentiated with respect to the value of astronomical parameters which correspond to the calendar dates of the known eruptions and earthquakes, regardless of the calendar year. The obtained spectra of volcanic eruptions and violent earthquake distribution in the fields of the Earth orbital movement parameters were used as a basis for calculation of frequency spectra and diurnal probability of volcanic and seismic activity. The objective of the proposed investigations is a probabilistic model development of the volcanic and seismic events, as well as GIS designing for monitoring and forecast of volcanic and seismic activities. In accordance with the stated objective, three probability parameters have been found in the course of preliminary studies; they form the basis for GIS-monitoring and forecast development. 1. A multidimensional analysis of volcanic eruptions and earthquakes (of magnitude 7) has been performed in terms of the Earth orbital movement. Probability characteristics of volcanism and seismicity have been defined for the Earth as a whole. Time intervals have been identified with a diurnal probability twice as great as the mean value. Diurnal probability of volcanic and seismic events has been calculated up to 2020. 2. A regularity in the duration of dormant (repose) periods has been established. A relationship has been found between the distribution of the repose period probability density and duration of the period. 3

  4. Probabilistic finite element analysis of a craniofacial finite element model.

    PubMed

    Berthaume, Michael A; Dechow, Paul C; Iriarte-Diaz, Jose; Ross, Callum F; Strait, David S; Wang, Qian; Grosse, Ian R

    2012-05-01

    We employed a probabilistic finite element analysis (FEA) method to determine how variability in material property values affects stress and strain values in a finite element model of a Macaca fascicularis cranium. The material behavior of cortical bone varied in three ways: isotropic homogeneous, isotropic non-homogeneous, and orthotropic non-homogeneous. The material behavior of the trabecular bone and teeth was always treated as isotropic and homogeneous. All material property values for the cranium were randomized with a Gaussian distribution with either coefficients of variation (CVs) of 0.2 or with CVs calculated from empirical data. Latin hypercube sampling was used to determine the values of the material properties used in the finite element models. In total, four hundred and twenty-six separate deterministic FE simulations were executed. We tested four hypotheses in this study: (1) uncertainty in material property values will have an insignificant effect on high stresses and a significant effect on high strains for homogeneous isotropic models; (2) the effect of variability in material property values on the stress state will increase as non-homogeneity and anisotropy increase; (3) variation in the in vivo shear strain values reported by Strait et al. (2005) and Ross et al. (2011) is not only due to variations in muscle forces and cranial morphology, but also due to variation in material property values; (4) the assumption of a uniform coefficient of variation for the material property values will result in the same trend in how moderate-to-high stresses and moderate-to-high strains vary with respect to the degree of non-homogeneity and anisotropy as the trend found when the coefficients of variation for material property values are calculated from empirical data. Our results supported the first three hypotheses and falsified the fourth. When material properties were varied with a constant CV, as non-homogeneity and anisotropy increased the level of variability in
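
    The sampling step of such a probabilistic FEA can be sketched with SciPy's Latin hypercube sampler: draw uniform samples and map them through Gaussian marginals with CV = 0.2 for each material property. The nominal property values below are placeholders, and the finite element solves themselves are of course not included.

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin hypercube draws of material properties with Gaussian marginals (CV = 0.2);
# each row of `samples` would parameterise one deterministic FE run.
nominal = {"E_cortical_GPa": 17.0, "E_trabecular_GPa": 0.64, "E_teeth_GPa": 80.0}
cv = 0.2
n_runs = 426

sampler = qmc.LatinHypercube(d=len(nominal), seed=7)
u = sampler.random(n=n_runs)                     # uniform samples in [0, 1)^d

samples = np.column_stack([
    norm.ppf(u[:, k], loc=mu, scale=cv * mu)     # Gaussian marginal per property
    for k, mu in enumerate(nominal.values())
])

print(dict(zip(nominal, samples.mean(axis=0).round(2))))   # sample means are near nominal
```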

  5. Reasoning in Reference Games: Individual- vs. Population-Level Probabilistic Modeling.

    PubMed

    Franke, Michael; Degen, Judith

    2016-01-01

    Recent advances in probabilistic pragmatics have achieved considerable success in modeling speakers' and listeners' pragmatic reasoning as probabilistic inference. However, these models are usually applied to population-level data, and so implicitly suggest a homogeneous population without individual differences. Here we investigate potential individual differences in Theory-of-Mind related depth of pragmatic reasoning in so-called reference games that require drawing ad hoc Quantity implicatures of varying complexity. We show by Bayesian model comparison that a model that assumes a heterogenous population is a better predictor of our data, especially for comprehension. We discuss the implications for the treatment of individual differences in probabilistic models of language use. PMID:27149675

  6. Reasoning in Reference Games: Individual- vs. Population-Level Probabilistic Modeling

    PubMed Central

    Franke, Michael; Degen, Judith

    2016-01-01

    Recent advances in probabilistic pragmatics have achieved considerable success in modeling speakers’ and listeners’ pragmatic reasoning as probabilistic inference. However, these models are usually applied to population-level data, and so implicitly suggest a homogeneous population without individual differences. Here we investigate potential individual differences in Theory-of-Mind related depth of pragmatic reasoning in so-called reference games that require drawing ad hoc Quantity implicatures of varying complexity. We show by Bayesian model comparison that a model that assumes a heterogenous population is a better predictor of our data, especially for comprehension. We discuss the implications for the treatment of individual differences in probabilistic models of language use. PMID:27149675

  7. PEER REVIEW FOR THE CONSUMER VEHICLE CHOICE MODEL

    EPA Science Inventory

    The U.S. Environmental Protection Agency’s (EPA) Office of Transportation and Air Quality (OTAQ) has recently sponsored the development of a Consumer Vehicle Choice Model (CVCM) by the Oak Ridge National Laboratory (ORNL). The specification by OTAQ to ORNL for consumer choice mod...

  8. Probabilistic model for pressure vessel reliability incorporating fracture mechanics and nondestructive examination

    SciTech Connect

    Tow, D.M.; Reuter, W.G.

    1998-03-01

    A probabilistic model has been developed for predicting the reliability of structures based on fracture mechanics and the results of nondestructive examination (NDE). The distinctive feature of this model is the way in which inspection results and the probability of detection (POD) curve are used to calculate a probability density function (PDF) for the number of flaws and the distribution of those flaws among the various size ranges. In combination with a probabilistic fracture mechanics model, this density function is used to estimate the probability of failure (POF) of a structure in which flaws have been detected by NDE. The model is useful for parametric studies of inspection techniques and material characteristics.
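
    One simple way to combine a POD curve with a prior on flaw counts, in the spirit of the model described above, is Poisson thinning: if flaws in a size bin occur with Poisson mean lambda and each is detected with probability POD(a), the undetected flaws in that bin remain Poisson with mean lambda * (1 - POD(a)), independently of how many were found. The sketch below uses invented bin means, an invented logistic POD curve, and a crude criticality rule; it illustrates the bookkeeping only, not the report's model.

```python
import numpy as np
from scipy import stats

sizes_mm = np.array([1.0, 2.0, 4.0, 8.0])        # representative flaw sizes per bin
prior_mean = np.array([5.0, 2.0, 0.8, 0.2])      # expected flaws per vessel, per bin

def pod(a_mm, a50=2.0, slope=2.5):
    """Probability of detection as a function of flaw size (logistic in log-size)."""
    return 1.0 / (1.0 + np.exp(-slope * np.log(a_mm / a50)))

# Posterior mean of undetected flaws per bin via Poisson thinning.
missed_mean = prior_mean * (1.0 - pod(sizes_mm))

# PDF of the number of undetected flaws in each size bin (Poisson).
for a, lam in zip(sizes_mm, missed_mean):
    print(f"{a:>4.1f} mm: P(0 missed)={stats.poisson.pmf(0, lam):.2f}, E[missed]={lam:.2f}")

# Crude failure-probability estimate: assume any undetected flaw >= 4 mm is critical.
p_fail = 1.0 - np.exp(-missed_mean[sizes_mm >= 4.0].sum())
print("P(at least one critical undetected flaw):", round(p_fail, 3))
```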

  9. The probabilistic seismic loss model as a tool for portfolio management: the case of Maghreb.

    NASA Astrophysics Data System (ADS)

    Pousse, Guillaume; Lorenzo, Francisco; Stejskal, Vladimir

    2010-05-01

    Although the property insurance market in Maghreb countries does not systematically purchase earthquake cover, Impact Forecasting is developing a new loss model for the calculation of probabilistic seismic risk. A probabilistic methodology using Monte Carlo simulation was applied to generate the hazard component of the model. Then, a set of damage functions is used to convert the modelled ground motion severity into monetary losses. We aim to highlight risk assessment challenges, especially in countries where reliable data are difficult to obtain. The loss model estimates the risk and allows discussion of further risk transfer strategies.
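
    The Monte Carlo chain from hazard to portfolio loss can be illustrated with a toy simulation: sample event counts per year, sample a ground-motion severity at each insured location, convert it to a damage ratio with a vulnerability curve, and read return-period losses off the simulated annual-loss distribution. Every parameter below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

n_years = 50_000
annual_rate = 0.2                                            # average events per year
sums_insured = rng.lognormal(mean=13, sigma=1, size=500)     # 500 insured locations

def damage_ratio(pga_g):
    """Toy vulnerability curve: mean damage ratio as a function of PGA (in g)."""
    return np.clip((pga_g / 0.6) ** 2, 0, 1)

annual_loss = np.zeros(n_years)
for y in range(n_years):
    for _ in range(rng.poisson(annual_rate)):
        pga = rng.lognormal(mean=np.log(0.1), sigma=0.8, size=sums_insured.size)
        annual_loss[y] += (damage_ratio(pga) * sums_insured).sum()

# Loss exceedance curve: loss exceeded with 1/T annual probability.
for T in (50, 100, 250):
    print(f"{T:>4}-year loss: {np.quantile(annual_loss, 1 - 1/T):,.0f}")
```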

  10. Probabilistic Fatigue Damage Prognosis Using a Surrogate Model Trained Via 3D Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo

    2015-01-01

    Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.

  11. Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review

    PubMed Central

    McClelland, James L.

    2013-01-01

    This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
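
    The claim that logistic/softmax units compute exact Bayesian posteriors when their biases and weights equal log-probabilities is easy to verify numerically, as in the short check below (the prior and likelihood values are arbitrary).

```python
import numpy as np

prior = np.array([0.7, 0.2, 0.1])              # P(h) for three hypotheses (e.g., words)
likelihood = np.array([0.05, 0.4, 0.3])        # P(d | h) for the observed input

# Bayes' rule directly.
posterior = prior * likelihood
posterior /= posterior.sum()

# "Neural" computation: net input = bias + evidence = log P(h) + log P(d | h),
# passed through a softmax.
net_input = np.log(prior) + np.log(likelihood)
softmax = np.exp(net_input) / np.exp(net_input).sum()

print(np.allclose(posterior, softmax))         # True
print(posterior.round(3), softmax.round(3))
```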

  12. The predictive accuracy of intertemporal-choice models.

    PubMed

    Arfer, Kodi B; Luhmann, Christian C

    2015-05-01

    How do people choose between a smaller reward available sooner and a larger reward available later? Past research has evaluated models of intertemporal choice by measuring goodness of fit or identifying which decision-making anomalies they can accommodate. An alternative criterion for model quality, which is partly antithetical to these standard criteria, is predictive accuracy. We used cross-validation to examine how well 10 models of intertemporal choice could predict behaviour in a 100-trial binary-decision task. Many models achieved the apparent ceiling of 85% accuracy, even with smaller training sets. When noise was added to the training set, however, a simple logistic-regression model we call the difference model performed particularly well. In many situations, between-model differences in predictive accuracy may be small, contrary to long-standing controversy over the modelling question in research on intertemporal choice, but the simplicity and robustness of the difference model recommend it for future use. PMID:25773127
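
    A difference-style logistic regression of the kind described can be sketched on synthetic choices; the variable names, data-generating coefficients and cross-validation settings below are illustrative assumptions, not the authors' specification:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Synthetic smaller-sooner (SS) vs larger-later (LL) choices.
      n = 100
      amount_diff = rng.uniform(1, 50, n)        # LL amount minus SS amount
      delay_diff = rng.uniform(1, 180, n)        # LL delay minus SS delay (days)
      # Invented data-generating process, for illustration only.
      p_ll = 1.0 / (1.0 + np.exp(-(0.15 * amount_diff - 0.03 * delay_diff)))
      chose_ll = rng.random(n) < p_ll

      # The "difference model": logistic regression on the two difference terms,
      # scored here by cross-validated predictive accuracy.
      X = np.column_stack([amount_diff, delay_diff])
      accuracy = cross_val_score(LogisticRegression(), X, chose_ll, cv=5).mean()
      print(round(accuracy, 3))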

  13. Sexual selection under parental choice: a revision to the model.

    PubMed

    Apostolou, Menelaos

    2014-06-01

    Across human cultures, parents exercise considerable influence over their children's mate choices. The model of parental choice provides a good account of these patterns, but its prediction that male parents exercise more control than female ones is not well founded in evolutionary theory. To address this shortcoming, the present article proposes a revision to the model. In particular, paternity uncertainty, residual reproductive value, reproductive variance, asymmetry in the control of resources, physical strength, and access to weaponry make control over mating more profitable for male parents than female ones; in turn, this produces an asymmetrical incentive for controlling mate choice. Several implications of this formulation are also explored. PMID:24474549

  14. Conditional Reasoning in Context: A Dual-Source Model of Probabilistic Inference

    ERIC Educational Resources Information Center

    Klauer, Karl Christoph; Beller, Sieghard; Hutter, Mandy

    2010-01-01

    A dual-source model of probabilistic conditional inference is proposed. According to the model, inferences are based on 2 sources of evidence: logical form and prior knowledge. Logical form is a decontextualized source of evidence, whereas prior knowledge is activated by the contents of the conditional rule. In Experiments 1 to 3, manipulations of…

  15. A PROBABILISTIC POPULATION EXPOSURE MODEL FOR PM10 AND PM 2.5

    EPA Science Inventory

    A first-generation probabilistic population exposure model for Particulate Matter (PM), specifically for predicting PM10 and PM2.5 exposures of an urban population, has been developed. This model is intended to be used to predict exposure (magnitude, frequency, and duration) ...

  16. Model initialisation, data assimilation and probabilistic flood forecasting for distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Cole, S. J.; Robson, A. J.; Bell, V. A.; Moore, R. J.

    2009-04-01

    The hydrological forecasting component of the Natural Environment Research Council's FREE (Flood Risk from Extreme Events) project "Exploitation of new data sources, data assimilation and ensemble techniques for storm and flood forecasting" addresses the initialisation, data assimilation and uncertainty of hydrological flood models utilising advances in rainfall estimation and forecasting. Progress will be reported on the development and assessment of simple model-initialisation and state-correction methods for a distributed grid-based hydrological model, the G2G Model. The potential of the G2G Model for area-wide flood forecasting is demonstrated through a nationwide application across England and Wales. Probabilistic flood forecasting in spatial form is illustrated through the use of high-resolution NWP rainfalls, and pseudo-ensemble forms of these, as input to the G2G Model. The G2G Model is configured over a large area of South West England and the Boscastle storm of 16 August 2004 is used as a convective case study. Visualisation of probabilistic flood forecasts is achieved through risk maps of flood threshold exceedance that indicate the space-time evolution of flood risk during the event.
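
    A threshold-exceedance risk map from an ensemble reduces, in essence, to the fraction of members exceeding a flow threshold in each grid cell. The sketch below uses synthetic flows and an arbitrary threshold, not G2G output:

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic ensemble of simulated peak flows on a (members, ny, nx) grid.
      members, ny, nx = 24, 50, 60
      flows = rng.lognormal(mean=3.0, sigma=0.6, size=(members, ny, nx))

      threshold = 40.0                            # arbitrary flow threshold (m3/s)
      exceedance_prob = (flows > threshold).mean(axis=0)   # per-cell probability
      high_risk = exceedance_prob > 0.5           # cells flagged on a risk map
      print(float(exceedance_prob.max()), int(high_risk.sum()))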

  17. Models of Affective Decision Making: How Do Feelings Predict Choice?

    PubMed

    Charpentier, Caroline J; De Neve, Jan-Emmanuel; Li, Xinyi; Roiser, Jonathan P; Sharot, Tali

    2016-06-01

    Intuitively, how you feel about potential outcomes will determine your decisions. Indeed, an implicit assumption in one of the most influential theories in psychology, prospect theory, is that feelings govern choice. Surprisingly, however, very little is known about the rules by which feelings are transformed into decisions. Here, we specified a computational model that used feelings to predict choices. We found that this model predicted choice better than existing value-based models, showing a unique contribution of feelings to decisions, over and above value. Similar to the value function in prospect theory, our feeling function showed diminished sensitivity to outcomes as value increased. However, loss aversion in choice was explained by an asymmetry in how feelings about losses and gains were weighted when making a decision, not by an asymmetry in the feelings themselves. The results provide new insights into how feelings are utilized to reach a decision. PMID:27071751
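
    A hedged sketch of the general idea of a feeling-based choice rule, with diminishing sensitivity and a larger decision weight on feelings about losses; the functional form, weight and temperature are illustrative assumptions, not the authors' fitted model:

      import numpy as np

      def feeling(x, s=0.5):
          # Diminishing sensitivity: feelings grow sub-linearly with outcome size.
          return np.sign(x) * np.abs(x) ** s

      def p_accept(gain, loss, p_win, w_loss=1.8, temperature=1.0):
          # Feelings about losses get a larger decision weight than feelings about gains.
          expected_feeling = (p_win * feeling(gain)
                              + (1.0 - p_win) * w_loss * feeling(loss))
          return 1.0 / (1.0 + np.exp(-expected_feeling / temperature))

      print(p_accept(gain=20, loss=-20, p_win=0.5))   # below 0.5: loss weighting deters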

  18. Monthly water balance modeling: Probabilistic, possibilistic and hybrid methods for model combination and ensemble simulation

    NASA Astrophysics Data System (ADS)

    Nasseri, M.; Zahraie, B.; Ajami, N. K.; Solomatine, D. P.

    2014-04-01

    Multi-model (ensemble, or committee) techniques have been shown to be an effective way to improve hydrological prediction performance and provide uncertainty information. This paper presents two novel multi-model ensemble techniques, one probabilistic, the Modified Bootstrap Ensemble Model (MBEM), and one possibilistic, the FUzzy C-means Ensemble based on data Pattern (FUCEP). The paper also explores utilization of the Ordinary Kriging (OK) method as a multi-model combination scheme for hydrological simulation/prediction. These techniques are compared against Bayesian Model Averaging (BMA) and Weighted Average (WA) methods to demonstrate their effectiveness. All of the techniques are applied to three monthly water balance models used to generate streamflow simulations for two mountainous basins in south-western Iran. For both basins, the results demonstrate that MBEM and FUCEP generate more skillful and reliable probabilistic predictions, outperforming all the other techniques. We also found that OK did not demonstrate any improved skill as a simple combination method over the WA scheme for either of the basins.
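
    The simplest of the combination schemes mentioned, a weighted average with weights inversely proportional to calibration-period error, can be sketched as follows (synthetic data; this is not the MBEM or FUCEP algorithm):

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic monthly streamflow observations and three model simulations.
      obs = rng.gamma(shape=2.0, scale=30.0, size=120)
      sims = np.stack([obs + rng.normal(0.0, s, obs.size) for s in (8.0, 12.0, 20.0)])

      # Weighted average with weights inversely proportional to calibration MSE.
      mse = ((sims - obs) ** 2).mean(axis=1)
      weights = (1.0 / mse) / (1.0 / mse).sum()
      combined = weights @ sims

      print(np.sqrt(mse))                                   # individual RMSEs
      print(np.sqrt(((combined - obs) ** 2).mean()))        # combined RMSE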

  19. A Garbage Can Model of Organizational Choice

    ERIC Educational Resources Information Center

    Cohen, Michael D.; And Others

    1972-01-01

    A model of decision making in an organized anarchy, i.e., a very loosely structured organization. Possible application of this computer simulation model illustrated by comparison with university decision making. (RA)

  20. Semiparametric Thurstonian Models for Recurrent Choices: A Bayesian Analysis

    ERIC Educational Resources Information Center

    Ansari, Asim; Iyengar, Raghuram

    2006-01-01

    We develop semiparametric Bayesian Thurstonian models for analyzing repeated choice decisions involving multinomial, multivariate binary or multivariate ordinal data. Our modeling framework has multiple components that together yield considerable flexibility in modeling preference utilities, cross-sectional heterogeneity and parameter-driven…

  1. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308

  2. Developing a Model of Occupational Choice.

    ERIC Educational Resources Information Center

    Egner, Joan Roos; And Others

    Review of the literature in counseling, sociology, psychology, and organizational behavior failed to yield a model satisfactory for a comprehensive research framework investigating why people choose different occupations. Rational and irrational occupational decision making models were unsatisfactory in capturing the many dimensions of the…

  3. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  4. Psychological Plausibility of the Theory of Probabilistic Mental Models and the Fast and Frugal Heuristics

    ERIC Educational Resources Information Center

    Dougherty, Michael R.; Franco-Watkins, Ana M.; Thomas, Rick

    2008-01-01

    The theory of probabilistic mental models (PMM; G. Gigerenzer, U. Hoffrage, & H. Kleinbolting, 1991) has had a major influence on the field of judgment and decision making, with the most recent important modifications to PMM theory being the identification of several fast and frugal heuristics (G. Gigerenzer & D. G. Goldstein, 1996). These…

  5. From cyclone tracks to the costs of European winter storms: A probabilistic loss assessment model

    NASA Astrophysics Data System (ADS)

    Renggli, Dominik; Corti, Thierry; Reese, Stefan; Wueest, Marc; Viktor, Elisabeth; Zimmerli, Peter

    2014-05-01

    The quantitative assessment of the potential losses of European winter storms is essential for the economic viability of a global reinsurance company. For this purpose, reinsurance companies generally use probabilistic loss assessment models. This work presents an innovative approach to develop physically meaningful probabilistic events for Swiss Re's new European winter storm loss model. The meteorological hazard component of the new model is based on cyclone and windstorm tracks identified in the 20th Century Reanalysis data. The knowledge of the evolution of winter storms both in time and space allows the physically meaningful perturbation of properties of historical events (e.g. track, intensity). The perturbation includes a random element but also takes the local climatology and the evolution of the historical event into account. The low-resolution wind footprints taken from 20th Century Reanalysis are processed by a statistical-dynamical downscaling to generate high-resolution footprints of the historical and probabilistic winter storm events. Downscaling transfer functions are generated using ENSEMBLES regional climate model data. The result is a set of reliable probabilistic events representing thousands of years. The event set is then combined with country- and risk-specific vulnerability functions and detailed market- or client-specific exposure information to compute (re-)insurance risk premiums.

  6. THE MAXIMUM LIKELIHOOD APPROACH TO PROBABILISTIC MODELING OF AIR QUALITY DATA

    EPA Science Inventory

    Software using maximum likelihood estimation to fit six probabilistic models is discussed. The software is designed as a tool for the air pollution researcher to determine what assumptions are valid in the statistical analysis of air pollution data for the purpose of standard set...

  7. Street Choice Logit Model for Visitors in Shopping Districts

    PubMed Central

    Kawada, Ko; Yamada, Takashi; Kishimoto, Tatsuya

    2014-01-01

    In this study, we propose two models for predicting people’s activity. The first model is the pedestrian distribution prediction (or postdiction) model by multiple regression analysis using space syntax indices of urban fabric and people distribution data obtained from a field survey. The second model is a street choice model for visitors using multinomial logit model. We performed a questionnaire survey on the field to investigate the strolling routes of 46 visitors and obtained a total of 1211 street choices in their routes. We proposed a utility function, sum of weighted space syntax indices, and other indices, and estimated the parameters for weights on the basis of maximum likelihood. These models consider both street networks, distance from destination, direction of the street choice and other spatial compositions (numbers of pedestrians, cars, shops, and elevation). The first model explains the characteristics of the street where many people tend to walk or stay. The second model explains the mechanism underlying the street choice of visitors and clarifies the differences in the weights of street choice parameters among the various attributes, such as gender, existence of destinations, number of people, etc. For all the attributes considered, the influences of DISTANCE and DIRECTION are strong. On the other hand, the influences of Int.V, SHOPS, CARS, ELEVATION, and WIDTH are different for each attribute. People with defined destinations tend to choose streets that “have more shops, and are wider and lower”. In contrast, people with undefined destinations tend to choose streets of high Int.V. The choice of males is affected by Int.V, SHOPS, WIDTH (positive) and CARS (negative). Females prefer streets that have many shops, and couples tend to choose downhill streets. The behavior of individual persons is affected by all variables. The behavior of people visiting in groups is affected by SHOP and WIDTH (positive). PMID:25379274
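
    The street choice component is a multinomial logit: each candidate street receives a utility equal to a weighted sum of its attributes, and choice probabilities follow from the softmax of those utilities. A minimal sketch with invented attribute values and weights (not the parameters estimated from the survey):

      import numpy as np

      # Illustrative attribute weights (not the estimated survey parameters).
      weights = {"int_v": 0.4, "shops": 0.3, "width": 0.2,
                 "cars": -0.2, "distance": -0.8, "direction": 0.9}

      def street_choice_probabilities(streets):
          # Multinomial logit: softmax over weighted sums of street attributes.
          utilities = np.array([sum(weights[k] * s[k] for k in weights)
                                for s in streets])
          expu = np.exp(utilities - utilities.max())
          return expu / expu.sum()

      candidates = [
          {"int_v": 1.2, "shops": 5, "width": 8, "cars": 3, "distance": 0.4, "direction": 1},
          {"int_v": 0.8, "shops": 2, "width": 6, "cars": 1, "distance": 0.6, "direction": 0},
          {"int_v": 1.5, "shops": 0, "width": 4, "cars": 0, "distance": 0.9, "direction": 0},
      ]
      print(street_choice_probabilities(candidates))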

  8. Probabilistic earthquake location and 3-D velocity models in routine earthquake location

    NASA Astrophysics Data System (ADS)

    Lomax, A.; Husen, S.

    2003-12-01

    Earthquake monitoring agencies, such as local networks or CTBTO, are faced with the dilemma of providing routine earthquake locations in near real-time with high precision and meaningful uncertainty information. Traditionally, routine earthquake locations are obtained from linearized inversion using layered seismic velocity models. This approach is fast and simple. However, uncertainties derived from a linear approximation to a set of non-linear equations can be imprecise, unreliable, or even misleading. In addition, 1-D velocity models are a poor approximation to real Earth structure in tectonically complex regions. In this paper, we discuss the routine location of earthquakes in near real-time with high precision using non-linear, probabilistic location methods and 3-D velocity models. The combination of non-linear, global search algorithms with probabilistic earthquake location provides a fast and reliable tool for earthquake location that can be used with any kind of velocity model. The probabilistic solution to the earthquake location includes a complete description of location uncertainties, which may be irregular and multimodal. We present applications of this approach to determine seismicity in Switzerland and in Yellowstone National Park, WY. Comparing our earthquake locations to earthquake locations obtained using linearized inversion and 1-D velocity models clearly demonstrates the advantages of probabilistic earthquake location and 3-D velocity models. For example, the more complete and reliable uncertainty information of non-linear, probabilistic earthquake location greatly facilitates the identification of poorly constrained hypocenters. Such events are often not identified in linearized earthquake location, since the location uncertainties are determined with a simplified, localized and approximate Gaussian statistic.
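
    The essence of non-linear, probabilistic location is a global search over hypocentre candidates that converts travel-time misfit into a posterior density. A toy 2-D sketch with a uniform velocity model (real implementations use 3-D velocity models, depth and a formal treatment of origin time; all numbers are synthetic):

      import numpy as np

      rng = np.random.default_rng(5)

      stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [45.0, 35.0]])  # km
      v = 6.0                                            # assumed P-wave speed, km/s
      true_epicentre = np.array([22.0, 18.0])
      obs = np.linalg.norm(stations - true_epicentre, axis=1) / v
      obs += rng.normal(0.0, 0.05, obs.size)             # picking errors (s)

      xs, ys = np.meshgrid(np.linspace(0, 50, 201), np.linspace(0, 50, 201))
      grid = np.column_stack([xs.ravel(), ys.ravel()])
      pred = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2) / v
      # Demeaning predicted and observed times removes the unknown origin time.
      resid = (pred - pred.mean(axis=1, keepdims=True)) - (obs - obs.mean())
      log_post = -0.5 * (resid ** 2).sum(axis=1) / 0.05 ** 2   # Gaussian pick errors
      post = np.exp(log_post - log_post.max())
      post /= post.sum()                                 # posterior density on the grid
      print(grid[post.argmax()])                         # maximum a posteriori epicentre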

  9. Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.

    PubMed

    Marino, Dale J; Starr, Thomas B

    2007-12-01

    A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. Results show that there was relatively little difference, i.e., <10% in central tendency and upper percentile URFs, regardless of the case

  10. A Ballistic Model of Choice Response Time

    ERIC Educational Resources Information Center

    Brown, Scott; Heathcote, Andrew

    2005-01-01

    Almost all models of response time (RT) use a stochastic accumulation process. To account for the benchmark RT phenomena, researchers have found it necessary to include between-trial variability in the starting point and/or the rate of accumulation, both in linear (R. Ratcliff & J. N. Rouder, 1998) and nonlinear (M. Usher & J. L. McClelland, 2001)…

  11. The Influence of Role Models on Women's Career Choices

    ERIC Educational Resources Information Center

    Quimby, Julie L.; DeSantis, Angela M.

    2006-01-01

    This study of 368 female undergraduates examined self-efficacy and role model influence as predictors of career choice across J. L. Holland's (1997) 6 RIASEC (Realistic, Investigative, Artistic, Social, Enterprising, Conventional) types. Findings showed that levels of self-efficacy and role model influence differed across Holland types. Multiple…

  12. Scholastic Effort: An Empirical Test of Student Choice Models.

    ERIC Educational Resources Information Center

    Prince, Raymond; And Others

    1981-01-01

    This article presents a report on student effort in economics courses based on time and the efficiency of its use. Information is presented on explanatory variables in each of the several learning models tested, theoretical basis for recent empirical student learning, definitions of student input, and the student-choice model proposed by Richard…

  13. Loss Aversion and Inhibition in Dynamical Models of Multialternative Choice

    ERIC Educational Resources Information Center

    Usher, Marius; McClelland, James L.

    2004-01-01

    The roles of loss aversion and inhibition among alternatives are examined in models of the similarity, compromise, and attraction effects that arise in choices among 3 alternatives differing on 2 attributes. R. M. Roe, J. R. Busemeyer, and J. T. Townsend (2001) have proposed a linear model in which effects previously attributed to loss aversion…

  14. Politics, Organizations, and Choice: Applications of an Equilibrium Model

    ERIC Educational Resources Information Center

    Roos, Leslie L., Jr.

    1972-01-01

    An economic model of consumer choice is used to link the separate theories that have dealt with comparative politics, job satisfaction, and organizational mobility. The model is used to structure data taken from studies of Turkish and French elites on environmental change, organizational mobility, and satisfaction. (Author/DN)

  15. A Conceptual Model of Leisure-Time Choice Behavior.

    ERIC Educational Resources Information Center

    Bergier, Michel J.

    1981-01-01

    Methods of studying the gap between predisposition and actual behavior of consumers of spectator sports is discussed. A model is drawn from the areas of behavioral sciences, consumer behavior, and leisure research. The model is constructed around the premise that choice is primarily a function of personal, product, and environmental factors. (JN)

  16. Interpretable Probabilistic Latent Variable Models for Automatic Annotation of Clinical Text

    PubMed Central

    Kotov, Alexander; Hasan, Mehedi; Carcone, April; Dong, Ming; Naar-King, Sylvie; BroganHartlieb, Kathryn

    2015-01-01

    We propose Latent Class Allocation (LCA) and Discriminative Labeled Latent Dirichlet Allocation (DL-LDA), two novel interpretable probabilistic latent variable models for automatic annotation of clinical text. Both models separate the terms that are highly characteristic of textual fragments annotated with a given set of labels from other non-discriminative terms, but rely on generative processes with different structure of latent variables. LCA directly learns class-specific multinomials, while DL-LDA breaks them down into topics (clusters of semantically related words). Extensive experimental evaluation indicates that the proposed models outperform Naïve Bayes, a standard probabilistic classifier, and Labeled LDA, a state-of-the-art topic model for labeled corpora, on the task of automatic annotation of transcripts of motivational interviews, while the output of the proposed models can be easily interpreted by clinical practitioners. PMID:26958214

  17. Lack of confidence in approximate Bayesian computation model choice.

    PubMed

    Robert, Christian P; Cornuet, Jean-Marie; Marin, Jean-Michel; Pillai, Natesh S

    2011-09-13

    Approximate Bayesian computation (ABC) has become an essential tool for the analysis of complex stochastic models. Grelaud et al. [(2009) Bayesian Anal 3:427-442] advocated the use of ABC for model choice in the specific case of Gibbs random fields, relying on an intermodel sufficiency property to show that the approximation was legitimate. We implemented ABC model choice in a wide range of phylogenetic models in the Do It Yourself-ABC (DIY-ABC) software [Cornuet et al. (2008) Bioinformatics 24:2713-2719]. We now present arguments as to why the theoretical justification for ABC model choice is missing: the algorithm involves an unknown loss of information induced by the use of insufficient summary statistics. The approximation error of the posterior probabilities of the models under comparison may thus be unrelated to the computational effort spent in running an ABC algorithm. We then conclude that additional empirical verifications of the performance of the ABC procedure, such as those available in DIY-ABC, are necessary to conduct model choice. PMID:21876135

  18. A non-parametric probabilistic model for soil-structure interaction

    NASA Astrophysics Data System (ADS)

    Laudarin, F.; Desceliers, C.; Bonnet, G.; Argoul, P.

    2013-07-01

    The paper investigates the effect of soil-structure interaction on the dynamic response of structures. A non-parametric probabilistic formulation for the modelling of an uncertain soil impedance is used to account for the usual lack of information on soil properties. Such a probabilistic model introduces the physical coupling stemming from the soil heterogeneity around the foundation. Considering this effect, even a symmetrical building displays a torsional motion when submitted to earthquake loading. The study focuses on a multi-story building modeled by using equivalent Timoshenko beam models which have different mass distributions. The probability density functions of the maximal internal forces and moments in a given building are estimated by Monte Carlo simulations. Some results on the stochastic modal analysis of the structure are also given.

  19. Choice as a Global Language in Local Practice: A Mixed Model of School Choice in Taiwan

    ERIC Educational Resources Information Center

    Mao, Chin-Ju

    2015-01-01

    This paper uses school choice policy as an example to demonstrate how local actors adopt, mediate, translate, and reformulate "choice" as neo-liberal rhetoric informing education reform. Complex processes exist between global policy about school choice and the local practice of school choice. Based on the theoretical sensibility of…

  20. A note on probabilistic models over strings: the linear algebra approach.

    PubMed

    Bouchard-Côté, Alexandre

    2013-12-01

    Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proving method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low rank matrix approximation methods, to string-valued inference problems. PMID:24135792

  1. Probabilistic models for assessment of extreme temperatures and relative humidity in Lithuania

    NASA Astrophysics Data System (ADS)

    Alzbutas, Robertas; Šeputytė, Ilona

    2015-04-01

    Extreme temperatures are a fairly common natural phenomenon in Lithuania. They have mainly negative effects both on the environment and on humans. It is therefore important to perform probabilistic and statistical analyses of possible extreme temperature values and their time-dependent changes. This is especially important in areas where technical facilities sensitive to extreme temperatures are to be constructed. In order to estimate the frequencies and consequences of possible extreme temperatures, a probabilistic analysis of the event occurrence and its uncertainty has been performed: statistical data have been collected and analyzed. The probabilistic analysis of extreme temperatures in the Lithuanian territory is based on historical data taken from the Lithuanian Hydrometeorology Service, the Dūkštas Meteorological Station, the Lithuanian Energy Institute and the Ignalina NPP Environmental Protection Department of Environmental Monitoring Service. The main objective of the performed work was the probabilistic assessment of the occurrence and impact of extreme temperature and relative humidity in Lithuania as a whole and specifically in the Dūkštas region, where the Ignalina Nuclear Power Plant is closed for decommissioning. A further purpose of this work was to analyze the changes in extreme temperatures. The probabilistic analysis of extreme temperature increases in the Lithuanian territory was based on more than 50 years of historical data. The probabilistic assessment focused on the application and comparison of the Gumbel, Weibull and Generalized Extreme Value (GEV) distributions, enabling selection of the distribution that best fits the extreme temperature data. In order to assess the likelihood of extreme temperatures, different probabilistic models were applied to evaluate the probability of exceedance of different extreme temperatures. According to the statistics and the relationship between return period and probabilities of temperatures the return period for 30
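
    The kind of extreme-value fitting and return-level calculation described can be sketched with scipy on synthetic annual maxima (the data and the 100-year return period are illustrative assumptions):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      # Synthetic series of annual maximum temperatures (degrees C).
      annual_max = rng.gumbel(loc=31.0, scale=2.0, size=55)

      # Fit candidate extreme-value distributions and compare log-likelihoods.
      gev_params = stats.genextreme.fit(annual_max)
      gum_params = stats.gumbel_r.fit(annual_max)
      ll_gev = stats.genextreme.logpdf(annual_max, *gev_params).sum()
      ll_gum = stats.gumbel_r.logpdf(annual_max, *gum_params).sum()

      # 100-year return level: the value exceeded with probability 1/100 per year.
      return_level = stats.genextreme.ppf(1.0 - 1.0 / 100.0, *gev_params)
      print(round(ll_gev, 1), round(ll_gum, 1), round(return_level, 1))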

  2. Probabilistic material degradation model for aerospace materials subjected to high temperature, mechanical and thermal fatigue, and creep

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1992-01-01

    A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.

  3. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
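
    One common member of the (inverted) S-shaped family is the Tversky-Kahneman weighting function. The sketch below shows its shape and one simple way such a weighting could enter urn-ball-style belief revision; the parameter values and the form of the revision are illustrative assumptions, not the paper's fitted hierarchical model:

      import numpy as np

      def weight_tk(p, gamma):
          # Tversky-Kahneman (1992) inverted S-shaped probability weighting.
          return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

      probs = np.linspace(0.1, 0.9, 5)
      for gamma in (0.4, 0.7, 1.0):          # smaller gamma = stronger distortion
          print(gamma, np.round(weight_tk(probs, gamma), 3))

      # Distorted belief revision with weighted prior and likelihoods (illustrative).
      prior, like_a, like_b, g = 0.7, 0.6, 0.3, 0.6
      num = weight_tk(prior, g) * weight_tk(like_a, g)
      den = num + weight_tk(1.0 - prior, g) * weight_tk(like_b, g)
      print(round(num / den, 3))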

  4. PubMed related articles: a probabilistic topic-based model for content similarity

    PubMed Central

    Lin, Jimmy; Wilbur, W John

    2007-01-01

    Background We present a probabilistic topic-based model for content similarity called pmra that underlies the related article search feature in PubMed. Whether or not a document is about a particular topic is computed from term frequencies, modeled as Poisson distributions. Unlike previous probabilistic retrieval models, we do not attempt to estimate relevance, but rather our focus is "relatedness", the probability that a user would want to examine a particular document given known interest in another. We also describe a novel technique for estimating parameters that does not require human relevance judgments; instead, the process is based on the existence of MeSH® in MEDLINE®. Results The pmra retrieval model was compared against bm25, a competitive probabilistic model that shares theoretical similarities. Experiments using the test collection from the TREC 2005 genomics track show a small but statistically significant improvement of pmra over bm25 in terms of precision. Conclusion Our experiments suggest that the pmra model provides an effective ranking algorithm for related article search. PMID:17971238

  5. Probabilistic modeling of the evolution of gene synteny within reconciled phylogenies

    PubMed Central

    2015-01-01

    Background Most models of genome evolution concern either genetic sequences, gene content or gene order. They sometimes integrate two of the three levels, but rarely the three of them. Probabilistic models of gene order evolution usually have to assume constant gene content or adopt a presence/absence coding of gene neighborhoods which is blind to complex events modifying gene content. Results We propose a probabilistic evolutionary model for gene neighborhoods, allowing genes to be inserted, duplicated or lost. It uses reconciled phylogenies, which integrate sequence and gene content evolution. We are then able to optimize parameters such as phylogeny branch lengths, or probabilistic laws depicting the diversity of susceptibility of syntenic regions to rearrangements. We reconstruct a structure for ancestral genomes by optimizing a likelihood, keeping track of all evolutionary events at the level of gene content and gene synteny. Ancestral syntenies are associated with a probability of presence. We implemented the model with the restriction that at most one gene duplication separates two gene speciations in reconciled gene trees. We reconstruct ancestral syntenies on a set of 12 drosophila genomes, and compare the evolutionary rates along the branches and along the sites. We compare with a parsimony method and find a significant number of results not supported by the posterior probability. The model is implemented in the Bio++ library. It thus benefits from and enriches the classical models and methods for molecular evolution. PMID:26452018

  6. Probabilistic modelling of human exposure to intense sweeteners in Italian teenagers: validation and sensitivity analysis of a probabilistic model including indicators of market share and brand loyalty.

    PubMed

    Arcella, D; Soggiu, M E; Leclercq, C

    2003-10-01

    For the assessment of exposure to food-borne chemicals, the most commonly used methods in the European Union follow a deterministic approach based on conservative assumptions. Over the past few years, to obtain a more realistic view of exposure to food chemicals, risk managers have become increasingly interested in the probabilistic approach. Within the EU-funded 'Monte Carlo' project, a stochastic model of exposure to chemical substances from the diet and a computer software program were developed. The aim of this paper was to use the software to validate the model with respect to the intake of saccharin from table-top sweeteners and cyclamate from soft drinks by Italian teenagers, and to evaluate the impact of the inclusion/exclusion of indicators of market share and brand loyalty through a sensitivity analysis. Data on food consumption and the concentration of sweeteners were collected. A food frequency questionnaire aimed at identifying females who were high consumers of sugar-free soft drinks and/or of table-top sweeteners was filled in by 3982 teenagers living in the District of Rome. Moreover, 362 subjects participated in a detailed food survey by recording, at brand level, all foods and beverages ingested over 12 days. Producers were asked to provide the concentrations of intense sweeteners in their sugar-free products. Results showed that consumer behaviour with respect to brands has an impact on exposure assessments. Only probabilistic models that took into account indicators of market share and brand loyalty met the validation criteria. PMID:14555359
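
    The probabilistic exposure calculation can be sketched as a Monte Carlo sum of consumption times concentration, with brand loyalty represented by each subject keeping a single brand; all brands, shares, concentrations and distributions below are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(6)

      brand_conc = np.array([80.0, 120.0, 250.0])        # sweetener, mg/kg product
      market_share = np.array([0.5, 0.3, 0.2])
      n_subjects, n_days = 1000, 12

      brand = rng.choice(3, size=n_subjects, p=market_share)   # brand loyalty
      consumption = rng.lognormal(-1.0, 0.8, size=(n_subjects, n_days))  # kg/day
      intake = consumption * brand_conc[brand][:, None]         # mg/day
      mean_daily = intake.mean(axis=1)
      print(np.percentile(mean_daily, [50, 95, 97.5]))          # high-consumer tail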

  7. Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models

    NASA Astrophysics Data System (ADS)

    Thon, Ingo

    One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximative methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution which is normally prohibitively slow.

  8. Allocation Variable-Based Probabilistic Algorithm to Deal with Label Switching Problem in Bayesian Mixture Models

    PubMed Central

    Pan, Jia-Chiun; Liu, Chih-Min; Hwu, Hai-Gwo; Huang, Guan-Hua

    2015-01-01

    The label switching problem occurs as a result of the nonidentifiability of the posterior distribution over various permutations of component labels when using a Bayesian approach to estimate parameters in mixture models. For cases where the number of components is fixed and known, we propose a relabelling algorithm, an allocation variable-based probabilistic relabelling approach (denoted AVP), to deal with the label switching problem. We establish a model for the posterior distribution of allocation variables with the label switching phenomenon. The AVP algorithm stochastically relabels the posterior samples according to the posterior probabilities of the established model. Some existing deterministic and other probabilistic algorithms are compared with the AVP algorithm in simulation studies, and the success of the proposed approach is demonstrated in simulation studies and a real dataset. PMID:26458185

  9. Modelling the Reasons for Training Choices: Technical Paper. Support Document

    ERIC Educational Resources Information Center

    Smith, Andrew; Oczkowski, Eddie; Hill, Mark

    2009-01-01

    This report provides the technical details on the modelling aspects of identifying significant drivers for the reasons for using certain types of training and for the choice of training types. The employed data is from the 2005 Survey of Employer Use and Views of the VET system (SEUV). The data has previously been analysed in NCVER (2006). This…

  10. Motivating Boys to Read: Inquiry, Modeling, and Choice Matter

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2012-01-01

    A great deal of attention is being paid to the lack of reading and academic success for adolescent males. In this article, we discuss three structures in a school where boys read (and perform) as well as girls. When instruction is guided by inquiry, when teachers model their thinking while reading, and when book choices are honored, all students…

  11. Predicting Medical Specialty Choice: A Model Based on Students' Records.

    ERIC Educational Resources Information Center

    Fadem, Barbara H.; And Others

    1984-01-01

    A discriminant analysis of objective and subjective measures from the records of students who graduated from the University of Medicine and Dentistry of New Jersey-New Jersey Medical School over a six-year period was used to generate a model for the prediction of medical specialty choice. (Author/MLW)

  12. Psychophysics of time perception and intertemporal choice models

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2008-03-01

    Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting, general hyperbolic discounting (exponential discounting with logarithmic time perception of the Weber-Fechner law, i.e., a q-exponential discount model based on Tsallis's statistics), simple hyperbolic discounting, and Stevens' power law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine the fit of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for biophysical processing underlying temporal discounting and time perception are discussed.
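
    The discount functions being compared, and an AICc computed under a simple Gaussian-residual assumption, can be sketched as follows; the parameter values and indifference points are illustrative, not the study's data:

      import numpy as np

      # Candidate discount functions D(delay); k and q are illustrative parameters.
      def exponential(d, k):
          return np.exp(-k * d)

      def hyperbolic(d, k):
          return 1.0 / (1.0 + k * d)

      def q_exponential(d, k, q):
          # Generalized hyperbola from Tsallis statistics; q -> 1 recovers exponential.
          return (1.0 + (1.0 - q) * k * d) ** (-1.0 / (1.0 - q))

      def aicc(residuals, n_params):
          # AICc under a Gaussian-residual assumption (a common simplification).
          n = residuals.size
          aic = n * np.log((residuals ** 2).sum() / n) + 2 * n_params
          return aic + 2 * n_params * (n_params + 1) / (n - n_params - 1)

      delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0, 730.0])
      indiff = np.array([0.95, 0.85, 0.65, 0.50, 0.40, 0.30, 0.22])  # hypothetical

      print(aicc(indiff - exponential(delays, 0.004), 1))
      print(aicc(indiff - hyperbolic(delays, 0.01), 1))
      print(aicc(indiff - q_exponential(delays, 0.01, 0.5), 2))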

  13. Self-Control and General Models of Choice

    ERIC Educational Resources Information Center

    Navarick, Douglas J.; Fantino, Edmund

    1976-01-01

    Given an opportunity to choose between an immediate, small reward and a delayed, large reward, pigeons may commit themselves to the large reward, but if the choice is encountered they will almost always select the immediate, small reward. This study tested a model, developed by H. Rachlin and his co-workers, concerning some general theories of…

  14. Modeling Multiple Response Processes in Judgment and Choice

    ERIC Educational Resources Information Center

    Bockenholt, Ulf

    2012-01-01

    In this article, I show how item response models can be used to capture multiple response processes in psychological applications. Intuitive and analytical responses, agree-disagree answers, response refusals, socially desirable responding, differential item functioning, and choices among multiple options are considered. In each of these cases, I…

  15. A Simplified Model of Choice Behavior under Uncertainty

    PubMed Central

    Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu

    2016-01-01

    The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study therefore modified the Ahn et al. (2008) PU model into a simplified model and used the IGT performance of 145 subjects as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the influence of the parameters α, λ, and A has a hierarchical power structure in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay loss-shift rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715
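
    A hedged sketch of the prospect-utility ingredients discussed (power utility with sensitivity α and loss aversion λ, plus a delta-rule updating parameter A); the functional forms follow common IGT modelling conventions and the numbers are illustrative, not the study's fitted values:

      import numpy as np

      def prospect_utility(x, alpha, lam):
          # Diminishing sensitivity (alpha) and loss aversion (lam).
          x = np.asarray(x, dtype=float)
          return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)

      def update_expectancy(expectancy, utility, a):
          # Delta-rule update; a is the updating (recency) parameter.
          return expectancy + a * (utility - expectancy)

      # As alpha approaches zero, utility collapses toward the sign of the outcome,
      # consistent with a gain-stay/loss-shift strategy.
      for alpha in (1.0, 0.5, 0.05):
          print(alpha, prospect_utility([100, -50, -1150], alpha, lam=2.0))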

  16. A Simplified Model of Choice Behavior under Uncertainty.

    PubMed

    Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu

    2016-01-01

    The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study therefore modified the Ahn et al. (2008) PU model into a simplified model and used the IGT performance of 145 subjects as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the influence of the parameters α, λ, and A has a hierarchical power structure in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay loss-shift rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715

  17. Interneuronal mechanism for Tinbergen's hierarchical model of behavioral choice.

    PubMed

    Pirger, Zsolt; Crossley, Michael; László, Zita; Naskar, Souvik; Kemenes, György; O'Shea, Michael; Benjamin, Paul R; Kemenes, Ildikó

    2014-09-01

    Recent studies of behavioral choice support the notion that the decision to carry out one behavior rather than another depends on the reconfiguration of shared interneuronal networks [1]. We investigated another decision-making strategy, derived from the classical ethological literature [2, 3], which proposes that behavioral choice depends on competition between autonomous networks. According to this model, behavioral choice depends on inhibitory interactions between incompatible hierarchically organized behaviors. We provide evidence for this by investigating the interneuronal mechanisms mediating behavioral choice between two autonomous circuits that underlie whole-body withdrawal [4, 5] and feeding [6] in the pond snail Lymnaea. Whole-body withdrawal is a defensive reflex that is initiated by tactile contact with predators. As predicted by the hierarchical model, tactile stimuli that evoke whole-body withdrawal responses also inhibit ongoing feeding in the presence of feeding stimuli. By recording neurons from the feeding and withdrawal networks, we found no direct synaptic connections between the interneuronal and motoneuronal elements that generate the two behaviors. Instead, we discovered that behavioral choice depends on the interaction between two unique types of interneurons with asymmetrical synaptic connectivity that allows withdrawal to override feeding. One type of interneuron, the Pleuro-Buccal (PlB), is an extrinsic modulatory neuron of the feeding network that completely inhibits feeding when excited by touch-induced monosynaptic input from the second type of interneuron, Pedal-Dorsal12 (PeD12). PeD12 plays a critical role in behavioral choice by providing a synaptic pathway joining the two behavioral networks that underlies the competitive dominance of whole-body withdrawal over feeding. PMID:25155505

  18. Receptor-mediated cell attachment and detachment kinetics. I. Probabilistic model and analysis.

    PubMed Central

    Cozens-Roberts, C.; Lauffenburger, D. A.; Quinn, J. A.

    1990-01-01

    The kinetics of receptor-mediated cell adhesion to a ligand-coated surface play a key role in many physiological and biotechnology-related processes. We present a probabilistic model of receptor-ligand bond formation between a cell and surface to describe the probability of adhesion in a fluid shear field. Our model extends the deterministic model of Hammer and Lauffenburger (Hammer, D.A., and D.A. Lauffenburger. 1987. Biophys. J. 52:475-487) to a probabilistic framework, in which we calculate the probability that a certain number of bonds between a cell and surface exists at any given time. The probabilistic framework is used to account for deviations from ideal, deterministic behavior, inherent in chemical reactions involving relatively small numbers of reacting molecules. Two situations are investigated: first, cell attachment in the absence of fluid stress; and, second, cell detachment in the presence of fluid stress. In the attachment case, we examine the expected variance in bond formation as a function of attachment time; this also provides an initial condition for the detachment case. Focusing then on detachment, we predict transient behavior as a function of key system parameters, such as the distractive fluid force, the receptor-ligand bond affinity and rate constants, and the receptor and ligand densities. We compare the predictions of the probabilistic model with those of a deterministic model, and show how a deterministic approach can yield some inaccurate results; e.g., it cannot account for temporally continuous cell attachment or detachment, it can underestimate the time needed for cell attachment, it can overestimate the time required for cell detachment for a given level of force, and it can overestimate the force necessary for cell detachment. PMID:2174271
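
    The probabilistic treatment of bond number can be illustrated with a stochastic (Gillespie-style) simulation of bond formation and breakage, which yields a distribution of bond counts rather than a single deterministic trajectory; the rate constants and receptor number below are arbitrary, not the paper's parameters:

      import numpy as np

      rng = np.random.default_rng(4)

      def simulate_bonds(n_receptors, kf, kr, t_end):
          # Gillespie simulation of the number of receptor-ligand bonds over time;
          # kf is the per-receptor formation rate, kr the per-bond breakage rate.
          t, bonds = 0.0, 0
          while t < t_end:
              rate_form, rate_break = kf * (n_receptors - bonds), kr * bonds
              total = rate_form + rate_break
              if total == 0.0:
                  break
              t += rng.exponential(1.0 / total)
              bonds += 1 if rng.random() < rate_form / total else -1
          return bonds

      # The spread across runs is the probabilistic effect a deterministic
      # rate-equation model cannot capture.
      final_bonds = [simulate_bonds(50, kf=0.2, kr=0.05, t_end=10.0) for _ in range(200)]
      print(np.mean(final_bonds), np.var(final_bonds))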

  19. HIV-Specific Probabilistic Models of Protein Evolution

    PubMed Central

    Nickle, David C.; Heath, Laura; Jensen, Mark A.; Gilbert, Peter B.; Mullins, James I.; Kosakovsky Pond, Sergei L.

    2007-01-01

    Comparative sequence analyses, including such fundamental bioinformatics techniques as similarity searching, sequence alignment and phylogenetic inference, have become a mainstay for researchers studying type 1 Human Immunodeficiency Virus (HIV-1) genome structure and evolution. Implicit in comparative analyses is an underlying model of evolution, and the chosen model can significantly affect the results. In general, evolutionary models describe the probabilities of replacing one amino acid character with another over a period of time. Most widely used evolutionary models for protein sequences have been derived from curated alignments of hundreds of proteins, usually based on mammalian genomes. It is unclear to what extent these empirical models are generalizable to a very different organism, such as HIV-1–the most extensively sequenced organism in existence. We developed a maximum likelihood model fitting procedure to a collection of HIV-1 alignments sampled from different viral genes, and inferred two empirical substitution models, suitable for describing between-and within-host evolution. Our procedure pools the information from multiple sequence alignments, and provided software implementation can be run efficiently in parallel on a computer cluster. We describe how the inferred substitution models can be used to generate scoring matrices suitable for alignment and similarity searches. Our models had a consistently superior fit relative to the best existing models and to parameter-rich data-driven models when benchmarked on independent HIV-1 alignments, demonstrating evolutionary biases in amino-acid substitution that are unique to HIV, and that are not captured by the existing models. The scoring matrices derived from the models showed a marked difference from common amino-acid scoring matrices. The use of an appropriate evolutionary model recovered a known viral transmission history, whereas a poorly chosen model introduced phylogenetic error. We argue that

  20. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    SciTech Connect

    Peace, Gerald L.; Goering, Timothy James; Miller, Mark Laverne; Ho, Clifford Kuofei

    2007-01-01

    A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.

  1. Uniform Accuracy of the Maximum Likelihood Estimates for Probabilistic Models of Biological Sequences

    PubMed Central

    Ekisheva, Svetlana

    2010-01-01

    Probabilistic models for biological sequences (DNA and proteins) have many useful applications in bioinformatics. Normally, the values of the parameters of these models have to be estimated from empirical data. However, the properties of even the most common estimates, the maximum likelihood (ML) estimates, have not been completely explored. Here we assess the uniform accuracy of the ML estimates for models of several types: the independence model, the Markov chain and the hidden Markov model (HMM). In particular, we derive rates of decay of the maximum estimation error by employing measure concentration as well as the Gaussian approximation, and compare these rates. PMID:21318122
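
    For the Markov chain case, the ML estimator whose accuracy is being analyzed reduces to normalized transition counts. The short sketch below illustrates that estimator on a made-up DNA sequence; the sequence and state alphabet are placeholders.

        import numpy as np

        def ml_transition_matrix(seq, states="ACGT"):
            """ML estimate of a first-order Markov chain:
            P_hat[a, b] = (# transitions a->b) / (# transitions out of a)."""
            idx = {s: i for i, s in enumerate(states)}
            counts = np.zeros((len(states), len(states)))
            for a, b in zip(seq, seq[1:]):
                counts[idx[a], idx[b]] += 1
            row_sums = counts.sum(axis=1, keepdims=True)
            return counts / np.where(row_sums == 0, 1, row_sums)

        seq = "ACGTACGGTCAACGTTTGACA"   # hypothetical sequence
        print(ml_transition_matrix(seq).round(2))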

  2. Log-normal distribution based Ensemble Model Output Statistics models for probabilistic wind-speed forecasting

    NASA Astrophysics Data System (ADS)

    Baran, Sándor; Lerch, Sebastian

    2015-07-01

    Ensembles of forecasts are obtained from multiple runs of numerical weather forecasting models with different initial conditions and are typically employed to account for forecast uncertainties. However, forecast ensembles often exhibit biases and dispersion errors; they are usually under-dispersive and uncalibrated, and therefore require statistical post-processing. We present an Ensemble Model Output Statistics (EMOS) method for calibration of wind speed forecasts based on the log-normal (LN) distribution, and we also show a regime-switching extension of the model which combines the previously studied truncated normal (TN) distribution with the LN. Both presented models are applied to wind speed forecasts of the eight-member University of Washington mesoscale ensemble, of the fifty-member ECMWF ensemble and of the eleven-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and their predictive performances are compared to those of the TN and generalized extreme value (GEV) distribution based EMOS methods and to the TN-GEV mixture model. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison to the raw ensemble and to climatological forecasts. Further, the TN-LN mixture model outperforms the traditional TN method and its predictive performance is able to keep up with the models utilizing the GEV distribution without assigning mass to negative values.
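
    A minimal sketch of the idea, not the exact parameterization of the paper: a log-normal predictive distribution whose log-scale location and spread are simple functions of the ensemble mean and ensemble spread, with coefficients estimated by maximum likelihood on training cases. The training ensemble, observations and link functions below are synthetic, illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import lognorm

        rng = np.random.default_rng(0)
        n, m = 200, 8                                        # training cases, ensemble members
        ens = rng.gamma(shape=4.0, scale=2.0, size=(n, m))   # synthetic ensemble forecasts
        obs = ens.mean(axis=1) * rng.lognormal(0.0, 0.25, n) # synthetic verifying wind speeds

        ens_mean, ens_sd = ens.mean(axis=1), ens.std(axis=1)

        def neg_log_lik(theta):
            a, b, c, d = theta
            mu = a + b * np.log(ens_mean)              # log-scale location from ensemble mean
            sigma = np.exp(c + d * np.log1p(ens_sd))   # log-scale spread, kept positive
            return -np.sum(lognorm.logpdf(obs, s=sigma, scale=np.exp(mu)))

        fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0, -1.0, 0.5]), method="Nelder-Mead")
        a, b, c, d = fit.x

        # Calibrated 90% prediction interval for the first case:
        mu0 = a + b * np.log(ens_mean[0])
        sg0 = np.exp(c + d * np.log1p(ens_sd[0]))
        print(lognorm.ppf([0.05, 0.95], s=sg0, scale=np.exp(mu0)))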

  3. Evaluation of Behavioral Demand Models of Consumer Choice in Health Care.

    ERIC Educational Resources Information Center

    Siddharthan, Kris

    1991-01-01

    Consumer choice of health provider plan and preference for a personal physician were studied for 1,438 elderly adults using a joint logit (JL) model and a nested logit model. Choice criteria used by senior citizens are examined, along with reasons why the nested logit model explains choice behavior better than the JL model. (SLD)
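
    For readers unfamiliar with the two model families being compared, the sketch below contrasts joint (multinomial) logit and nested logit choice probabilities for a hypothetical set of health-plan utilities; the utilities, nest structure and dissimilarity parameter are illustrative assumptions, not values from the study.

        import numpy as np

        V = {"HMO_A": 1.2, "HMO_B": 1.0, "FFS": 0.4}            # hypothetical systematic utilities
        nests = {"managed_care": ["HMO_A", "HMO_B"], "fee_for_service": ["FFS"]}
        lam = 0.6                                               # nested logit dissimilarity parameter

        # Joint (multinomial) logit: P(i) proportional to exp(V_i).
        expV = {i: np.exp(v) for i, v in V.items()}
        p_joint = {i: e / sum(expV.values()) for i, e in expV.items()}

        # Nested logit: P(i) = P(nest containing i) * P(i | nest),
        # with the nest's attractiveness summarized by its inclusive value.
        iv = {n: np.log(sum(np.exp(V[i] / lam) for i in alts)) for n, alts in nests.items()}
        p_nest = {n: np.exp(lam * iv[n]) / sum(np.exp(lam * v) for v in iv.values()) for n in nests}
        p_nested = {i: p_nest[n] * np.exp(V[i] / lam) / np.exp(iv[n])
                    for n, alts in nests.items() for i in alts}

        print("joint logit :", {i: round(p, 3) for i, p in p_joint.items()})
        print("nested logit:", {i: round(p, 3) for i, p in p_nested.items()})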

  4. Learning a Generative Probabilistic Grammar of Experience: A Process-Level Model of Language Acquisition

    ERIC Educational Resources Information Center

    Kolodny, Oren; Lotem, Arnon; Edelman, Shimon

    2015-01-01

    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…

  5. A FEASIBILITY STUDY ON USING PHYSICS-BASED MODELER OUTPUTS TO TRAIN PROBABILISTIC NEURAL NETWORKS FOR UXO CLASSIFICATION

    EPA Science Inventory

    A probabilistic neural network (PNN) has been applied to the detection and classification of unexploded ordnance (UXO) measured using magnetometry data collected with the Multi-sensor Towed Array Detection System (MTADS). Physical parameters obtained from a physics-based modeler...
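
    A probabilistic neural network in Specht's sense is essentially a Gaussian kernel density classifier. The sketch below is a minimal, generic illustration of that idea; the two-dimensional feature vectors standing in for physics-based modeler outputs, the class labels and the smoothing parameter are all invented for illustration.

        import numpy as np

        def pnn_classify(x, train_X, train_y, sigma=0.5):
            """Probabilistic neural network: one Gaussian kernel per training pattern;
            class score = average kernel response over that class's patterns."""
            scores = {}
            for label in np.unique(train_y):
                X_c = train_X[train_y == label]
                d2 = np.sum((X_c - x) ** 2, axis=1)
                scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
            total = sum(scores.values())
            return {label: s / total for label, s in scores.items()}   # posterior-like scores

        rng = np.random.default_rng(1)
        X_uxo = rng.normal([1.0, 2.0], 0.3, size=(20, 2))      # hypothetical "UXO" feature vectors
        X_clutter = rng.normal([0.0, 0.5], 0.3, size=(20, 2))  # hypothetical clutter features
        train_X = np.vstack([X_uxo, X_clutter])
        train_y = np.array(["UXO"] * 20 + ["clutter"] * 20)

        print(pnn_classify(np.array([0.8, 1.7]), train_X, train_y))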

  6. TEMPI: probabilistic modeling time-evolving differential PPI networks with multiPle information

    PubMed Central

    Kim, Yongsoo; Jang, Jin-Hyeok; Choi, Seungjin; Hwang, Daehee

    2014-01-01

    Motivation: Time-evolving differential protein–protein interaction (PPI) networks are essential to understand serial activation of differentially regulated (up- or downregulated) cellular processes (DRPs) and their interplays over time. Despite developments in the network inference, current methods are still limited in identifying temporal transition of structures of PPI networks, DRPs associated with the structural transition and the interplays among the DRPs over time. Results: Here, we present a probabilistic model for estimating Time-Evolving differential PPI networks with MultiPle Information (TEMPI). This model describes probabilistic relationships among network structures, time-course gene expression data and Gene Ontology biological processes (GOBPs). By maximizing the likelihood of the probabilistic model, TEMPI estimates jointly the time-evolving differential PPI networks (TDNs) describing temporal transition of PPI network structures together with serial activation of DRPs associated with transiting networks. This joint estimation enables us to interpret the TDNs in terms of temporal transition of the DRPs. To demonstrate the utility of TEMPI, we applied it to two time-course datasets. TEMPI identified the TDNs that correctly delineated temporal transition of DRPs and time-dependent associations between the DRPs. These TDNs provide hypotheses for mechanisms underlying serial activation of key DRPs and their temporal associations. Availability and implementation: Source code and sample data files are available at http://sbm.postech.ac.kr/tempi/sources.zip. Contact: seungjin@postech.ac.kr or dhwang@dgist.ac.kr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25161233

  7. Understanding Predisposition in College Choice: Toward an Integrated Model of College Choice and Theory of Reasoned Action

    ERIC Educational Resources Information Center

    Pitre, Paul E.; Johnson, Todd E.; Pitre, Charisse Cowan

    2006-01-01

    This article seeks to improve traditional models of college choice that draw from recruitment and enrollment management paradigms. In adopting a consumer approach to college choice, this article seeks to build upon consumer-related research, which centers on behavior and reasoning. More specifically, this article seeks to move inquiry beyond the…

  8. Modeling the impact of flexible textile composites through multiscale and probabilistic methods

    NASA Astrophysics Data System (ADS)

    Nilakantan, Gaurav

    Flexible textile composites or fabrics composed of materials such as Kevlar are used in impact and penetration resistant structures such as protective clothing for law enforcement and military personnel. The penetration response of these fabrics is probabilistic in nature and experimentally characterized through parameters such as the V0 and the V50 velocity. In this research a probabilistic computational framework is developed through which the entire V0-V100 velocity curve or probabilistic velocity response (PVR) curve can be numerically determined through a series of finite element (FE) impact simulations. Sources of variability that affect the PVR curve are isolated for investigation; in this study, the source chosen is the statistical nature of yarn tensile strengths. Experimental tensile testing is conducted on spooled and fabric-extracted Kevlar yarns. The statistically characterized strengths are then mapped onto the yarns of the fabric FE model as part of the probabilistic computational framework. The effects of projectile characteristics such as size and shape on the fabric PVR curve are studied. A multiscale modeling technique entitled the Hybrid Element Analysis (HEA) is developed to reduce the computational requirements of a fabric model based on a yarn level architecture discretized with only solid elements. This technique combines into a single FE model both a local region of solid and shell element based yarn level architecture, and a global region of shell element based membrane level architecture, with impedance matched interfaces. The multiscale model is then incorporated into the probabilistic computational framework. A yarn model composed of a filament level architecture is developed to investigate the feasibility of solid element based homogenized yarn models as well as the effect of filament spreading and inter-filament friction on the impact response. Results from preliminary experimental fabric impact testing are also presented. This
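
    One simple, hedged way to picture a PVR curve of this kind: treat each impact simulation as a Bernoulli trial (penetrated or not) at a given velocity and fit a smooth probability-of-penetration curve, from which V01/V50/V99-type velocities can be read off. The logistic form, velocities and outcomes below are synthetic assumptions, not results from this research.

        import numpy as np
        from scipy.optimize import curve_fit

        def penetration_prob(v, v50, k):
            """Logistic probability-of-penetration curve in impact velocity v."""
            return 1.0 / (1.0 + np.exp(-k * (v - v50)))

        rng = np.random.default_rng(2)
        velocities = np.repeat(np.arange(150.0, 351.0, 25.0), 10)   # m/s, synthetic test matrix
        true_p = penetration_prob(velocities, v50=250.0, k=0.05)
        penetrated = rng.random(velocities.size) < true_p           # synthetic impact outcomes

        (v50, k), _ = curve_fit(penetration_prob, velocities, penetrated.astype(float),
                                p0=[250.0, 0.05])
        for q in (0.01, 0.50, 0.99):                                # V01, V50, V99
            print(f"V{int(q*100):02d} = {v50 + np.log(q / (1 - q)) / k:.1f} m/s")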

  9. Probabilistically Constraining Age-Depth-Models of Glaciogenic Sediments

    NASA Astrophysics Data System (ADS)

    Werner, J.; van der Bilt, W.; Tingley, M.

    2015-12-01

    Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting. All of these proxies, such as measurements of tree rings, ice cores, and varved lake sediments, do carry some inherent dating uncertainty that is not always fully accounted for. Considerable advances could be achieved if time uncertainties were recognized and correctly modeled, including for proxies commonly treated as free of age-model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty - in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space-time covariance structure of the climate to re-weight the possible age models. Werner and Tingley (2015) demonstrated how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. In their method, probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments (Werner and Tingley 2015) show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. We show how this novel method can be applied to high-resolution, sub-annually sampled lacustrine sediment records to constrain their respective age-depth models. The results help to quantify the signal content and extract the regionally representative signal. The single time series can then be used as the basis for a reconstruction of glacial activity. van der Bilt et al. in prep. Werner, J.P. and Tingley, M.P. Clim. Past (2015)

  10. Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis.

    SciTech Connect

    Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong; Ginzburg, Lev; Berleant, Daniel J.; Ferson, Scott; Hajagos, Janos; Nelsen, Roger B.

    2004-10-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.

  11. Probabilistic approach to the Bak-Sneppen model

    NASA Astrophysics Data System (ADS)

    Caldarelli, G.; Felici, M.; Gabrielli, A.; Pietronero, L.

    2002-04-01

    We study here the Bak-Sneppen model, a prototype model for the study of self-organized criticality. In this model several species interact and undergo extinction with a power-law distribution of activity bursts. Species are defined through their "fitness", whose distribution in the system is uniform above a certain threshold. Run time statistics is introduced for the analysis of the dynamics in order to explain the peculiar properties of the model. This approach, based on conditional probability theory, takes into account the correlations due to memory effects. In this way, we may compute analytically the value of the fitness threshold with the desired precision. This represents a substantial improvement with respect to the traditional mean-field approach.
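
    For reference, the model itself is simple to simulate: each species carries a fitness in [0, 1], and at every step the least-fit species and its two neighbours receive new random fitnesses. The run below is a standard textbook-style simulation, not the paper's analytical run-time-statistics machinery, and it empirically approximates the self-organized fitness threshold (about 0.67 for the one-dimensional model); system size and step count are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(3)
        N, steps = 200, 400_000
        fitness = rng.random(N)                 # one fitness per species on a ring

        gap = 0.0                               # running maximum of the replaced minimal fitness
        for _ in range(steps):
            i = int(np.argmin(fitness))         # least-fit species goes extinct...
            gap = max(gap, fitness[i])
            for j in (i - 1, i, (i + 1) % N):   # ...taking its two neighbours with it
                fitness[j] = rng.random()

        # The "gap" converges from below to the self-organized fitness threshold.
        print("gap estimate of threshold:", round(gap, 3))
        print("5th percentile of surviving fitnesses:", round(float(np.quantile(fitness, 0.05)), 3))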

  12. Probabilistic Modeling of Loran-C for nonprecision approaches

    NASA Technical Reports Server (NTRS)

    Einhorn, John K.

    1987-01-01

    The overall idea of the research was to predict the errors to be encountered during an approach, using available data from the U.S. Coast Guard and standard normal distribution probability analysis, for a number of airports in the northeastern CONUS. The research consists of two parts: an analytical model that predicts the probability of an approach falling within a given standard, and a series of flight tests designed to test the validity of the model.

  13. Structural damage measure index based on non-probabilistic reliability model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaojun; Xia, Yong; Zhou, Xiaoqing; Yang, Chen

    2014-02-01

    Uncertainties in the structural model and measurement data affect structural condition assessment in practice. Because probabilistic information about these uncertainties is lacking, a non-probabilistic interval analysis framework is developed to quantify the intervals of the structural element stiffness parameters. According to the interval intersection of the element stiffness parameters in the undamaged and damaged states, the possibility of damage existence is defined based on reliability theory. A damage measure index is then proposed as the product of the nominal stiffness reduction and the defined possibility of damage existence. This new index simultaneously reflects the damage severity and the possibility of damage at each structural component. Numerical and experimental examples are presented to illustrate the validity and applicability of the method. The results show that the proposed method can improve the accuracy of damage diagnosis compared with the deterministic damage identification method.
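
    One simple way to operationalize a "possibility of damage existence" from two stiffness intervals is shown below; this is an illustrative interpretation rather than the paper's exact definition, and the intervals, the uniform sampling assumption and the sample size are all hypothetical.

        import numpy as np

        def damage_possibility(undamaged, damaged, n=200_000, seed=0):
            """Fraction of the joint interval box in which the damaged stiffness falls below
            the undamaged stiffness: a simple interval-based possibility-of-damage measure."""
            rng = np.random.default_rng(seed)
            a_u = rng.uniform(*undamaged, n)
            a_d = rng.uniform(*damaged, n)
            return float(np.mean(a_d < a_u))

        undamaged = (0.95, 1.05)     # hypothetical interval for an element stiffness ratio
        damaged = (0.80, 1.00)       # interval identified after damage

        possibility = damage_possibility(undamaged, damaged)
        nominal_reduction = (np.mean(undamaged) - np.mean(damaged)) / np.mean(undamaged)
        print("possibility of damage existence:", round(possibility, 3))
        print("damage measure index           :", round(nominal_reduction * possibility, 3))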

  14. Probabilistic formalism and hierarchy of models for polydispersed turbulent two-phase flows

    NASA Astrophysics Data System (ADS)

    Peirano, Eric; Minier, Jean-Pierre

    2002-04-01

    This paper deals with a probabilistic approach to polydispersed turbulent two-phase flows following the suggestions of Pozorski and Minier [Phys. Rev. E 59, 855 (1999)]. A general probabilistic formalism is presented in the form of a two-point Lagrangian PDF (probability density function). A new feature of the present approach is that both phases, the fluid as well as the particles, are included in the PDF description. It is demonstrated how the formalism can be used to show that there exists a hierarchy between the classical approaches such as the Eulerian and Lagrangian methods. It is also shown that the Eulerian and Lagrangian models can be obtained in a systematic way from the PDF formalism. Connections with previous papers are discussed.

  15. Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.

    PubMed

    Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong

    2015-11-01

    Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach. PMID:26470061

  16. Probabilistic earthquake early warning in complex earth models using prior sampling

    NASA Astrophysics Data System (ADS)

    Valentine, Andrew; Käufl, Paul; Trampert, Jeannot

    2016-04-01

    In an earthquake early warning (EEW) context, we must draw inferences from small, noisy seismic datasets within an extremely limited time-frame. Ideally, a probabilistic framework would be used, to recognise that available observations may be compatible with a range of outcomes, and analysis would be conducted in a theoretically-complete physical framework. However, implementing these requirements has been challenging, as they tend to increase computational demands beyond what is feasible on EEW timescales. We present a new approach, based on 'prior sampling', which implements probabilistic inversion as a two stage process, and can be used for EEW monitoring within a given region. First, a large set of synthetic data is computed for randomly-distributed seismic sources within the region. A learning algorithm is used to infer details of the probability distribution linking observations and model parameters (including location, magnitude, and focal mechanism). This procedure is computationally expensive, but can be conducted entirely before monitoring commences. In the second stage, as observations are obtained, the algorithm can be evaluated within milliseconds to output a probabilistic representation of the corresponding source model. We demonstrate that this gives robust results, and can be implemented using state-of-the-art 3D wave propagation simulations, and complex crustal structures.

  17. Urban stormwater management planning with analytical probabilistic models

    SciTech Connect

    Adams, B.J.

    2000-07-01

    Understanding how to properly manage urban stormwater is a critical concern to civil and environmental engineers the world over. Mismanagement of stormwater and urban runoff results in flooding, erosion, and water quality problems. In an effort to develop better management techniques, engineers have come to rely on computer simulation and advanced mathematical modeling techniques to help plan and predict water system performance. This important book outlines a new method that uses probability tools to model how stormwater behaves and interacts in a combined- or single-system municipal water system. Complete with sample problems and case studies illustrating how concepts really work, the book presents a cost-effective, easy-to-master approach to analytical modeling of stormwater management systems.
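
    The flavour of such analytical probabilistic models can be conveyed with one generic closed-form result (a textbook-style derivation under stated assumptions, not necessarily the book's notation): if per-event rainfall volumes are exponentially distributed and the first Sd millimetres of each event are retained by initial-abstraction storage, per-event runoff statistics follow directly. All numerical values below are hypothetical.

        import math

        mean_event_volume = 12.0           # mm, hypothetical mean rainfall per event
        zeta = 1.0 / mean_event_volume     # exponential rate parameter
        Sd = 5.0                           # mm retained per event before runoff starts (hypothetical)
        events_per_year = 60               # hypothetical average number of rainfall events per year

        # With exponential volumes, P(runoff in an event) = exp(-zeta * Sd),
        # and the expected runoff per event is exp(-zeta * Sd) / zeta.
        p_runoff = math.exp(-zeta * Sd)
        expected_runoff_per_event = p_runoff / zeta

        print(f"per-event probability of runoff : {p_runoff:.3f}")
        print(f"expected runoff per event       : {expected_runoff_per_event:.2f} mm")
        print(f"expected annual runoff          : {events_per_year * expected_runoff_per_event:.1f} mm")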

  18. A Coupled Probabilistic Wake Vortex and Aircraft Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Gloudemans, Thijs; Van Lochem, Sander; Ras, Eelco; Malissa, Joel; Ahmad, Nashat N.; Lewis, Timothy A.

    2016-01-01

    Wake vortex spacing standards, along with weather and runway occupancy time, restrict terminal area throughput and impose major constraints on the overall capacity and efficiency of the National Airspace System (NAS). For more than two decades, the National Aeronautics and Space Administration (NASA) has been conducting research on characterizing wake vortex behavior in order to develop fast-time wake transport and decay prediction models. It is expected that the models can be used in the systems-level design of advanced air traffic management (ATM) concepts that safely increase the capacity of the NAS. It is also envisioned that at a later stage of maturity, these models could potentially be used operationally, in ground-based spacing and scheduling systems as well as on the flight deck.

  19. Testing for ontological errors in probabilistic forecasting models of natural systems

    PubMed Central

    Marzocchi, Warner; Jordan, Thomas H.

    2014-01-01

    Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265

  1. Probabilistic uncertainty analysis of epidemiological modeling to guide public health intervention policy.

    PubMed

    Gilbert, Jennifer A; Meyers, Lauren Ancel; Galvani, Alison P; Townsend, Jeffrey P

    2014-03-01

    Mathematical modeling of disease transmission has provided quantitative predictions for health policy, facilitating the evaluation of epidemiological outcomes and the cost-effectiveness of interventions. However, typical sensitivity analyses of deterministic dynamic infectious disease models focus on model architecture and the relative importance of parameters but neglect parameter uncertainty when reporting model predictions. Consequently, model results that identify point estimates of intervention levels necessary to terminate transmission yield limited insight into the probability of success. We apply probabilistic uncertainty analysis to a dynamic model of influenza transmission and assess global uncertainty in outcome. We illustrate that when parameter uncertainty is not incorporated into outcome estimates, levels of vaccination and treatment predicted to prevent an influenza epidemic will only have an approximately 50% chance of terminating transmission and that sensitivity analysis alone is not sufficient to obtain this information. We demonstrate that accounting for parameter uncertainty yields probabilities of epidemiological outcomes based on the degree to which data support the range of model predictions. Unlike typical sensitivity analyses of dynamic models that only address variation in parameters, the probabilistic uncertainty analysis described here enables modelers to convey the robustness of their predictions to policy makers, extending the power of epidemiological modeling to improve public health. PMID:24593920
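
    A schematic of this kind of analysis (not the authors' influenza model): propagate parameter uncertainty through a simple reproduction-number calculation and report the probability, rather than a point estimate, that a given vaccination level prevents an epidemic. The parameter distributions, coverage levels and threshold criterion below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000

        # Illustrative parameter uncertainty (not fitted to data):
        R0 = rng.triangular(1.2, 1.5, 2.0, size=n)    # basic reproduction number
        efficacy = rng.beta(20, 5, size=n)            # vaccine efficacy, mean about 0.8
        coverage = 0.40                               # proposed vaccination coverage

        # The epidemic is averted when the effective reproduction number drops below 1.
        R_eff = R0 * (1.0 - coverage * efficacy)
        print("P(epidemic averted at 40% coverage):", round(float(np.mean(R_eff < 1.0)), 3))

        # Sweep coverage to show how the success probability, not just a point estimate, varies.
        for cov in (0.3, 0.5, 0.7):
            p = np.mean(R0 * (1.0 - cov * efficacy) < 1.0)
            print(f"coverage {cov:.0%}: P(averted) = {p:.3f}")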

  2. A simple probabilistic model of submicroscopic diatom morphogenesis

    PubMed Central

    Willis, L.; Cox, E. J.; Duke, T.

    2013-01-01

    Unicellular algae called diatoms morph biomineral compounds into tough exoskeletons via complex intracellular processes about which there is much to be learned. These exoskeletons feature a rich variety of structures from submicroscale to milliscale, many that have not been reproduced in vitro. In order to help understand this complex miniature morphogenesis, here we introduce and analyse a simple model of biomineral kinetics, focusing on the exoskeleton's submicroscopic patterned planar structures called pore occlusions. The model reproduces most features of these pore occlusions by retuning just one parameter, thereby indicating what physio-biochemical mechanisms could sufficiently explain morphogenesis at the submicroscopic scale: it is sufficient to identify a mechanism of lateral negative feedback on the biomineral reaction kinetics. The model is nonlinear and stochastic; it is an extended version of the threshold voter model. Its mean-field equation provides a simple and, as far as the authors are aware, new way of mapping out the spatial patterns produced by lateral inhibition and variants thereof. PMID:23554345

  3. Web-tool to Support Medical Experts in Probabilistic Modelling Using Large Bayesian Networks With an Example of Rhinosinusitis.

    PubMed

    Cypko, Mario A; Hirsch, David; Koch, Lucas; Stoehr, Matthaeus; Strauss, Gero; Denecke, Kerstin

    2015-01-01

    For many complex diseases, finding the best patient-specific treatment decision is difficult for physicians due to limited mental capacity. Clinical decision support systems based on Bayesian networks (BN) can provide a probabilistic graphical model integrating all necessary aspects relevant for decision making. Such models are often manually created by clinical experts. The modeling process consists of graphical modeling, conducted by collecting information entities, and probabilistic modeling, achieved through defining the relations of information entities to their direct causes. Such expert-based probabilistic modelling with BNs is very time intensive and requires knowledge about the underlying modeling method. We introduce in this paper an intuitive web-based system for helping medical experts generate decision models based on BNs. Using the tool, no special knowledge about the underlying model or BN is necessary. We tested the tool with an example of modeling treatment decisions for Rhinosinusitis and studied its usability. PMID:26262051

  4. Rock penetration : finite element sensitivity and probabilistic modeling analyses.

    SciTech Connect

    Fossum, Arlo Frederick

    2004-08-01

    This report summarizes numerical analyses conducted to assess the relative importance on penetration depth calculations of rock constitutive model physics features representing the presence of microscale flaws such as porosity and networks of microcracks and rock mass structural features. Three-dimensional, nonlinear, transient dynamic finite element penetration simulations are made with a realistic geomaterial constitutive model to determine which features have the most influence on penetration depth calculations. A baseline penetration calculation is made with a representative set of material parameters evaluated from measurements made from laboratory experiments conducted on a familiar sedimentary rock. Then, a sequence of perturbations of various material parameters allows an assessment to be made of the main penetration effects. A cumulative probability distribution function is calculated with the use of an advanced reliability method that makes use of this sensitivity database, probability density functions, and coefficients of variation of the key controlling parameters for penetration depth predictions. Thus the variability of the calculated penetration depth is known as a function of the variability of the input parameters. This simulation modeling capability should impact significantly the tools that are needed to design enhanced penetrator systems, support weapons effects studies, and directly address proposed HDBT defeat scenarios.

  5. Probabilistic graphical models to deal with age estimation of living persons.

    PubMed

    Sironi, Emanuele; Gallidabino, Matteo; Weyermann, Céline; Taroni, Franco

    2016-03-01

    Due to the rise of criminal, civil and administrative judicial situations involving people lacking valid identity documents, age estimation of living persons has become an important operational procedure for numerous forensic and medicolegal services worldwide. The chronological age of a given person is generally estimated from the observed degree of maturity of some selected physical attributes by means of statistical methods. However, their application in the forensic framework suffers from some conceptual and practical drawbacks, as recently claimed in the specialised literature. The aim of this paper is therefore to offer an alternative solution for overcoming these limits, by reiterating the utility of a probabilistic Bayesian approach for age estimation. This approach allows one to deal in a transparent way with the uncertainty surrounding the age estimation process and to produce all the relevant information in the form of posterior probability distribution about the chronological age of the person under investigation. Furthermore, this probability distribution can also be used for evaluating in a coherent way the possibility that the examined individual is younger or older than a given legal age threshold having a particular legal interest. The main novelty introduced by this work is the development of a probabilistic graphical model, i.e. a Bayesian network, for dealing with the problem at hand. The use of this kind of probabilistic tool can significantly facilitate the application of the proposed methodology: examples are presented based on data related to the ossification status of the medial clavicular epiphysis. The reliability and the advantages of this probabilistic tool are presented and discussed. PMID:25794687
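
    The posterior-probability reasoning described here can be sketched in a few lines: combine a prior over chronological age with a likelihood of observing a given maturity indicator at each age, then read off the probability that the person exceeds a legal threshold. All numbers and the logistic stage-given-age likelihood below are invented for illustration and do not come from the paper's clavicular data.

        import numpy as np

        ages = np.arange(12, 31)                               # candidate chronological ages (years)
        prior = np.ones_like(ages, dtype=float) / ages.size    # flat prior over the age range

        # Hypothetical P(stage = "fused clavicular epiphysis" | age): rises smoothly with age.
        p_fused_given_age = 1.0 / (1.0 + np.exp(-(ages - 21.0)))

        # Bayes' rule for an individual observed to have a fused epiphysis.
        posterior = prior * p_fused_given_age
        posterior /= posterior.sum()

        p_adult = posterior[ages >= 18].sum()
        print("posterior P(age >= 18 | fused epiphysis):", round(float(p_adult), 3))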

  6. Probabilistic modelling of European consumer exposure to cosmetic products.

    PubMed

    McNamara, C; Rohan, D; Golden, D; Gibney, M; Hall, B; Tozer, S; Safford, B; Coroama, M; Leneveu-Duchemin, M C; Steiling, W

    2007-11-01

    In this study, we describe the statistical analysis of the usage profile of the European population to seven cosmetic products. The aim of the study was to construct a reliable model of exposure of the European population from use of the selected products: body lotion, shampoo, deodorant spray, deodorant non-spray, facial moisturiser, lipstick and toothpaste. The first step in this process was to gather reliable data on consumer usage patterns of the products. These data were sourced from a combination of market information databases and a controlled product use study by the trade association Colipa. The market information study contained a large number of subjects, in total 44,100 households and 18,057 habitual users (males and females) of the studied products, in five European countries. The data sets were then combined to generate a realistic distribution of frequency of use of each product, combined with distribution of the amount of product used at each occasion using the CREMe software. A Monte Carlo method was used to combine the data sets. This resulted in a new model of European exposure to cosmetic products being constructed. PMID:17804138
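
    A minimal Monte Carlo sketch of the combination step described above, using invented distributions rather than the survey data: sample a per-person frequency of use and an amount per application, multiply them, and summarize the resulting exposure distribution.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000                                   # simulated consumers

        # Hypothetical usage distributions for one product (e.g. body lotion):
        uses_per_day = rng.poisson(lam=1.2, size=n)                           # applications per day
        grams_per_use = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n)    # grams per application

        daily_exposure = uses_per_day * grams_per_use   # grams/day per simulated consumer

        for q in (0.50, 0.90, 0.95, 0.99):
            print(f"P{int(q*100)} daily exposure: {np.quantile(daily_exposure, q):.1f} g/day")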

  7. Probabilistic models of genetic variation in structured populations applied to global human studies

    PubMed Central

    Hao, Wei; Song, Minsun; Storey, John D.

    2016-01-01

    Motivation: Modern population genetics studies typically involve genome-wide genotyping of individuals from a diverse network of ancestries. An important problem is how to formulate and estimate probabilistic models of observed genotypes that account for complex population structure. The most prominent work on this problem has focused on estimating a model of admixture proportions of ancestral populations for each individual. Here, we instead focus on modeling variation of the genotypes without requiring a higher-level admixture interpretation. Results: We formulate two general probabilistic models, and we propose computationally efficient algorithms to estimate them. First, we show how principal component analysis can be utilized to estimate a general model that includes the well-known Pritchard–Stephens–Donnelly admixture model as a special case. Noting some drawbacks of this approach, we introduce a new ‘logistic factor analysis’ framework that seeks to directly model the logit transformation of probabilities underlying observed genotypes in terms of latent variables that capture population structure. We demonstrate these advances on data from the Human Genome Diversity Panel and 1000 Genomes Project, where we are able to identify SNPs that are highly differentiated with respect to structure while making minimal modeling assumptions. Availability and Implementation: A Bioconductor R package called lfa is available at http://www.bioconductor.org/packages/release/bioc/html/lfa.html. Contact: jstorey@princeton.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26545820

  8. Probabilistic Multi-Factor Interaction Model for Complex Material Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2008-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points, the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.

  10. Probabilistic Multi-Factor Interaction Model for Complex Material Behavior

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2010-01-01

    Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected by the physics of the problem and the environment that the model is to represent. For example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation accurately represent the physics of the behavior in its entirety. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points - the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation. A unique solution to this problem is a multi-factor equation of product form. Each factor is of the following form (1 - xi/xf)^ei, where xi is the initial value, usually at ambient conditions, xf the final value, and ei the exponent that makes the represented curve unimodal while meeting the initial and final values. The exponents are either evaluated by test data or by technical judgment. A minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected in simulated space environmental conditions. Seven factors were required
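
    Reading the factor form literally, the product-form equation can be sketched as below; the factor names, bounds and exponents are placeholders chosen only to show how a single equation combines several contributing effects, and are not the seven factors used in the investigation.

        import numpy as np

        def mfim(x, x_final, exponents, baseline=1.0):
            """Multi-factor interaction product form: baseline * prod_i (1 - x_i/x_f,i)**e_i."""
            x, x_final, exponents = map(np.asarray, (x, x_final, exponents))
            return baseline * np.prod((1.0 - x / x_final) ** exponents)

        # Hypothetical factors influencing divot weight: temperature, moisture, pressure drop.
        x_current = [140.0, 0.02, 30.0]      # current values of each factor (placeholder units)
        x_final   = [300.0, 0.10, 80.0]      # final (limiting) values of each factor
        exponents = [0.5, 1.0, 2.0]          # shape of each factor's monotonic path

        print("relative divot-weight factor:", round(float(mfim(x_current, x_final, exponents)), 3))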