Heterogeneous Factor Analysis Models: A Bayesian Approach.
ERIC Educational Resources Information Center
Ansari, Asim; Jedidi, Kamel; Dube, Laurette
2002-01-01
Developed Markov Chain Monte Carlo procedures to perform Bayesian inference, model checking, and model comparison in heterogeneous factor analysis. Tested the approach with synthetic data and data from a consumption emotion study involving 54 consumers. Results show that traditional psychometric methods cannot fully capture the heterogeneity in…
A Bayesian Shrinkage Approach for AMMI Models.
da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio
2015-01-01
Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applied in genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow parsimonious modeling of GE interactions by retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model using shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete-blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining credible intervals in the biplot. Bayesian shrinkage AMMI models, on the other hand, have difficulty with credible intervals for model parameters but produce stronger shrinkage of the principal components, converging to GE matrices with more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained in the first two components; the selected models were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by leave-one-out cross-validation. The model chosen by the posterior distribution of the singular values was also similar to that produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot as well as the choice of AMMI model based on direct posterior…
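The multiplicative terms that the shrinkage priors act on come from the singular value decomposition of the double-centered genotype-by-environment table. A minimal sketch of that classical AMMI decomposition, using simulated stand-in data of the same dimensions as the maize trial (55 genotypes, 9 environments) rather than the actual trial data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical cell means: 55 genotypes x 9 environments (as in the study)
Y = rng.normal(size=(55, 9))

# Double-centering removes the additive main effects, leaving the GE interaction
ge = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + Y.mean()

# AMMI retains the leading multiplicative terms of the SVD of the interaction
U, s, Vt = np.linalg.svd(ge, full_matrices=False)
k = 2  # number of multiplicative terms (the quantity the shrinkage priors select)
ge_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Proportion of interaction sum of squares captured by the first k terms
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
```

The Bayesian shrinkage models described in the abstract place priors on the singular values `s` instead of truncating them by a fixed rule.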
Merging Digital Surface Models Implementing Bayesian Approaches
NASA Astrophysics Data System (ADS)
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data were sourced from very-high-resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and additional measurements would be difficult or costly to obtain; the lack of data can then be addressed by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements were collected in the field and used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods were employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, including characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
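The per-cell Bayesian merging described above can be sketched as a conjugate Gaussian update: each DSM contributes a precision-weighted vote, and the prior (here standing in for the smoothness-derived estimate) contributes another. The grids, noise levels, and prior values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Two hypothetical DSM height grids (metres) with per-sensor noise s.d.
dsm_a, sig_a = np.array([[10.2, 10.4], [10.1, 10.3]]), 0.5
dsm_b, sig_b = np.array([[10.6, 10.5], [10.2, 10.6]]), 0.3

# Prior belief about the surface (e.g. from a smoothness assumption on roofs)
prior_mu, prior_sig = 10.4, 1.0

# Conjugate Gaussian update: precision-weighted combination per cell
w0, wa, wb = 1 / prior_sig**2, 1 / sig_a**2, 1 / sig_b**2
merged = (w0 * prior_mu + wa * dsm_a + wb * dsm_b) / (w0 + wa + wb)
post_sig = np.sqrt(1 / (w0 + wa + wb))
```

Note that the posterior standard deviation is smaller than that of either input DSM, which is the formal sense in which merging improves quality.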
Stochastic model updating utilizing Bayesian approach and Gaussian process model
NASA Astrophysics Data System (ADS)
Wan, Hua-Ping; Ren, Wei-Xin
2016-03-01
Stochastic model updating (SMU) has been increasingly applied to quantify structural parameter uncertainty from response variability. SMU for parameter uncertainty quantification is an inverse uncertainty quantification (IUQ) problem, which is a nontrivial task. Inverse problems solved by optimization usually raise issues of gradient computation, ill-conditioning, and non-uniqueness. Moreover, the uncertainty present in the response makes the inverse problem more complicated. In this study, a Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of the Bayesian approach for IUQ is that it solves the problem in a straightforward manner, thereby avoiding the issues above. However, when applied to engineering structures modeled with a high-resolution finite element model (FEM), the Bayesian approach is still computationally expensive, since the commonly used Markov chain Monte Carlo (MCMC) methods for Bayesian inference require a large number of model runs to guarantee convergence. Herein we reduce the computational cost in two ways. On the one hand, a fast-running Gaussian process model (GPM) is used to approximate the time-consuming high-resolution FEM. On the other hand, an advanced MCMC method, the delayed rejection adaptive Metropolis (DRAM) algorithm, which combines a local adaptive strategy with a global adaptive strategy, is employed for Bayesian inference. In addition, we propose using variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from the calibration set, which yields a reduced-order model and further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge illustrate the proposed framework and verify its feasibility.
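A toy illustration of the surrogate idea: a small Gaussian-process emulator is trained on a few runs of an "expensive" forward model, then used inside a random-walk Metropolis sampler. The forward model, design, and priors are illustrative assumptions, and plain Metropolis stands in for DRAM (which adds delayed rejection and adaptation):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Expensive" forward model (stand-in for a high-resolution FEM)
def forward(theta):
    return np.sin(3.0 * theta) + 0.5 * theta

# --- Train a minimal Gaussian-process surrogate on 15 model runs ---
def kernel(a, b, ls=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

X = np.linspace(-2, 2, 15)                  # design points
y = forward(X)
K = kernel(X, X) + 1e-8 * np.eye(len(X))    # jitter for numerical stability
alpha = np.linalg.solve(K, y)

def surrogate(theta):
    return kernel(np.atleast_1d(theta), X) @ alpha

# --- Random-walk Metropolis using the cheap surrogate in the likelihood ---
obs, sigma = forward(0.7) + 0.05, 0.1       # one noisy observation
def log_post(theta):                        # Gaussian likelihood, N(0,1) prior
    return -0.5 * ((obs - surrogate(theta)[0]) / sigma) ** 2 - 0.5 * theta ** 2

chain, cur = [], 0.0
for _ in range(5000):
    prop = cur + 0.3 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop
    chain.append(cur)
post = np.array(chain[1000:])               # discard burn-in
```

Every likelihood evaluation calls the surrogate, so the expensive model is run only 15 times regardless of chain length.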
A Bayesian Approach for Analyzing Longitudinal Structural Equation Models
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lu, Zhao-Hua; Hser, Yih-Ing; Lee, Sik-Yum
2011-01-01
This article considers a Bayesian approach for analyzing a longitudinal 2-level nonlinear structural equation model with covariates, and mixed continuous and ordered categorical variables. The first-level model is formulated for measures taken at each time point nested within individuals for investigating their characteristics that are dynamically…
Bayesian non-parametrics and the probabilistic approach to modelling
Ghahramani, Zoubin
2013-01-01
Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609
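As a concrete example of one of the surveyed tools, the weights of a draw from a Dirichlet process can be generated by the (truncated) stick-breaking construction, paired with atoms from the base measure:

```python
import numpy as np

rng = np.random.default_rng(42)

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking construction of Dirichlet-process weights:
    w_k = beta_k * prod_{j<k} (1 - beta_j), with beta_k ~ Beta(1, alpha)."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

weights = stick_breaking(alpha=2.0, n_atoms=1000, rng=rng)
# A draw G = sum_k weights[k] * delta(atoms[k]), atoms from a N(0, 1) base
atoms = rng.standard_normal(1000)
```

With 1000 atoms the truncation error (the unassigned stick mass) is negligible for moderate concentration parameters; smaller `alpha` concentrates mass on fewer atoms.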
A Bayesian approach for the multiplicative binomial regression model
NASA Astrophysics Data System (ADS)
Paraíba, Carolina C. M.; Diniz, Carlos A. R.; Pires, Rubiane M.
2012-10-01
In the present paper, we focus our attention on Altham's multiplicative binomial model from a Bayesian perspective, modeling both the probability of success and the dispersion parameters. We present results based on a simulated data set to assess the quality of the Bayesian estimates and Bayesian diagnostics for model assessment.
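For reference, one common parameterization of Altham's multiplicative binomial tilts the binomial likelihood by a dispersion factor θ^(x(n−x)); the paper's exact parameterization of the dispersion parameter may differ. A sketch of its pmf:

```python
import numpy as np
from math import comb

def multiplicative_binomial_pmf(n, p, theta):
    """One common parameterization of Altham's multiplicative binomial:
    P(X = x) proportional to C(n, x) p^x (1-p)^(n-x) theta^(x(n-x))."""
    x = np.arange(n + 1)
    w = np.array([comb(n, k) for k in x], dtype=float)
    unnorm = w * p ** x * (1 - p) ** (n - x) * theta ** (x * (n - x))
    return unnorm / unnorm.sum()

pmf = multiplicative_binomial_pmf(n=10, p=0.4, theta=1.0)
# With theta = 1 the model reduces to the ordinary binomial
```

Values of θ below 1 push mass toward the extremes (overdispersion), while θ above 1 concentrates it (underdispersion), which is what makes the model useful beyond the plain binomial.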
Diagnosing Hybrid Systems: a Bayesian Model Selection Approach
NASA Technical Reports Server (NTRS)
McIlraith, Sheila A.
2005-01-01
In this paper we examine the problem of monitoring and diagnosing noisy complex dynamical systems that are modeled as hybrid systems: models of continuous behavior, interleaved by discrete transitions. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial, or full failure of component devices. Building on our previous work in this area (MBCG99; MBCG00), our specific focus in this paper is on the mathematical formulation of the hybrid monitoring and diagnosis task as a Bayesian model tracking algorithm. The nonlinear dynamics of many hybrid systems present challenges to probabilistic tracking. Further, probabilistic tracking of a system for the purposes of diagnosis is problematic because the models of the system corresponding to failure modes are numerous and generally very unlikely. To focus tracking on these unlikely models and to reduce the number of potential models under consideration, we exploit logic-based techniques for qualitative model-based diagnosis to conjecture a limited initial set of consistent candidate models. In this paper we discuss alternative tracking techniques that are relevant to different classes of hybrid systems, focusing specifically on a method for tracking multiple models of nonlinear behavior simultaneously using factored sampling and conditional density propagation. To illustrate and motivate the approach described in this paper, we examine the problem of monitoring and diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
Bayesian approach for flexible modeling of semicompeting risks data.
Han, Baoguang; Yu, Menggang; Dignam, James J; Rathouz, Paul J
2014-12-20
Semicompeting risks data arise when two types of events, non-terminal and terminal, are observed. When the terminal event occurs first, it censors the non-terminal event, but not vice versa. To account for possible dependent censoring of the non-terminal event by the terminal event and to improve prediction of the terminal event using the non-terminal event information, it is crucial to model their association properly. Motivated by a breast cancer clinical trial data analysis, we extend the well-known illness-death models to allow flexible random effects to capture heterogeneous association structures in the data. Our extension also represents a generalization of the popular shared frailty models that usually assume that the non-terminal event does not affect the hazards of the terminal event beyond a frailty term. We propose a unified Bayesian modeling approach that can utilize existing software packages for both model fitting and individual-specific event prediction. The approach is demonstrated via both simulation studies and a breast cancer data set analysis. PMID:25274445
Bayesian approach to neural-network modeling with input uncertainty.
Wright, W A
1999-01-01
It is generally assumed when using Bayesian inference methods for neural networks that the input data contain no noise or corruption. For real-world (errors-in-variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural-network framework that allows for input noise, provided that some model of the noise process exists. In the limit where the noise process is small and symmetric, it is shown, using the Laplace approximation, that this method gives an additional term in the usual Bayesian error bar that depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable and sampling it jointly with the network's weights using a Markov chain Monte Carlo method, it is demonstrated that it is possible to infer the regression over the noiseless input.
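The small-noise result can be illustrated with a delta-method sketch: for a fitted regression f, the predictive variance picks up an extra term equal to the input-noise variance times the squared local slope. The function and noise levels below are illustrative assumptions, not the paper's network:

```python
import numpy as np

# Trained network stand-in: any smooth regression function f(x)
def f(x):
    return np.tanh(2.0 * x)

def error_bar_with_input_noise(x, sigma_model, sigma_x, h=1e-5):
    """Delta-method sketch for small, symmetric input noise:
    var ~ sigma_model^2 + f'(x)^2 * sigma_x^2."""
    grad = (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative
    return np.sqrt(sigma_model ** 2 + grad ** 2 * sigma_x ** 2)

bar = error_bar_with_input_noise(x=0.0, sigma_model=0.1, sigma_x=0.2)
```

Where the regression is steep, input noise inflates the error bar substantially; where it is flat, the usual model-only error bar is recovered.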
Nonlinear regression modeling of nutrient loads in streams: A Bayesian approach
Qian, S.S.; Reckhow, K.H.; Zhai, J.; McMahon, G.
2005-01-01
A Bayesian nonlinear regression modeling method is introduced and compared with the least squares method for modeling nutrient loads in stream networks. The objective of the study is to better model spatial correlation in river basin hydrology and land use for improving the model as a forecasting tool. The Bayesian modeling approach is introduced in three steps, each with a more complicated model and data error structure. The approach is illustrated using a data set from three large river basins in eastern North Carolina. Results indicate that the Bayesian model better accounts for model and data uncertainties than does the conventional least squares approach. Applications of the Bayesian models for ambient water quality standards compliance and TMDL assessment are discussed. Copyright 2005 by the American Geophysical Union.
Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach
NASA Astrophysics Data System (ADS)
Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.
2010-12-01
Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice sheet such as the Antarctic Ice Sheet, or in the paleo ice sheets that covered extensive parts of the Eurasian and Amerasian Arctic, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalski Ice Tongue/East Antarctica), or feed large ice shelves (as is the case for, e.g., the Siple Coast and the Ross Ice Shelf/West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, eventually contributing to sea level rise. Investigations of ice stream retreat mechanisms are, however, incomplete if based on terrestrial records only: the dynamics of ice shelves (and, eventually, the impact of the ocean on them) must also be accounted for. Since floating ice shelves leave hardly any traces behind when they melt, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, thus calling for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective, accounting for the fact that ice shelves currently exist mainly along the coastline of Antarctica (and are virtually non-existent in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic Ocean basin during former glacial periods. Modeled Arctic Ocean ice shelf configurations are compared with geological spatial…
A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; Meijer, Rob R.
A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
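A posterior predictive check of the kind described can be sketched in a few lines: draw parameters from the posterior, replicate data from each draw, and position the observed discrepancy within the replicated distribution. The example below uses a simple beta-binomial toy rather than an IRT person-fit statistic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Observed data reduced to a simple count for the sketch
n_items, observed_score = 40, 31

# Posterior draws for the success probability (Beta posterior, uniform prior)
draws = rng.beta(1 + observed_score, 1 + n_items - observed_score, size=4000)

# Posterior predictive check: replicate the data from each posterior draw
# and position the observed discrepancy (here the raw score) among them
replicated = rng.binomial(n_items, draws)
ppp = np.mean(replicated >= observed_score)
```

A posterior predictive p-value (`ppp`) near 0 or 1 would flag the observed value as surprising under the model; values near 0.5 indicate good fit.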
A Bayesian approach to biokinetic models of internally deposited radionuclides
NASA Astrophysics Data System (ADS)
Amer, Mamun F.
Bayesian methods were developed and applied to estimate parameters of biokinetic models of internally deposited radionuclides for the first time. Marginal posterior densities for the parameters, given the available data, were obtained and graphed. These densities contain all the information available about the parameters and fully describe their uncertainties. Two different numerical integration methods were employed to approximate the multi-dimensional integrals needed to obtain these densities and to verify our results. One numerical method was based on Gaussian quadrature. The other method was a lattice rule developed by Conroy; the lattice rule method is applied here for the first time in conjunction with Bayesian analysis. Computer codes were developed in Mathematica's own programming language to perform the integrals. Several biokinetic models were studied. The first model was a single power function, a·t^(-b), that was used to describe 226Ra whole-body retention data over long periods of time in many patients. The posterior odds criterion for model identification was applied to select, from among some competing models, the best model to represent 226Ra retention in man. The highest model posterior was attained by the single power function. Posterior densities for the model parameters were obtained for each patient. Also, predictive densities for retention, given the available retention values and some selected times, were obtained. These predictive densities characterize the uncertainties in the unobservable retention values, taking into consideration the uncertainties of other parameters in the model. The second model was a single exponential function, α·e^(-βt), that was used to represent one patient's whole-body retention as well as total excretion of 137Cs. Missing observations (censored data) in the two responses were replaced by unknown parameters and were handled in the same way as other model parameters. By applying the Bayesian…
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
A General and Flexible Approach to Estimating the Social Relations Model Using Bayesian Methods
ERIC Educational Resources Information Center
Ludtke, Oliver; Robitzsch, Alexander; Kenny, David A.; Trautwein, Ulrich
2013-01-01
The social relations model (SRM) is a conceptual, methodological, and analytical approach that is widely used to examine dyadic behaviors and interpersonal perception within groups. This article introduces a general and flexible approach to estimating the parameters of the SRM that is based on Bayesian methods using Markov chain Monte Carlo…
Medical Inpatient Journey Modeling and Clustering: A Bayesian Hidden Markov Model Based Approach
Huang, Zhengxing; Dong, Wei; Wang, Fei; Duan, Huilong
2015-01-01
Modeling and clustering medical inpatient journeys is useful to healthcare organizations for a number of reasons, including reorganizing inpatient journeys in a form that is more convenient for understanding and browsing. In this study, we present a probabilistic model-based approach to modeling and clustering medical inpatient journeys. Specifically, we exploit a Bayesian Hidden Markov Model-based approach to transform medical inpatient journeys into a probabilistic space, which can be seen as a richer representation of the inpatient journeys to be clustered. Then, using hierarchical clustering on the matrix of similarities, inpatient journeys can be clustered into different categories with respect to their clinical and temporal characteristics. We evaluated the proposed approach on a real clinical data set pertaining to the unstable angina treatment process. The experimental results reveal that our method can identify and model latent treatment topics underlying personalized inpatient journeys and yields impressive clustering quality. PMID:26958200
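The probabilistic-space representation rests on evaluating P(journey | HMM), which the forward algorithm computes; journey likelihoods under a set of fitted HMMs can then feed a similarity matrix for hierarchical clustering. A minimal sketch with hypothetical transition and emission matrices (not the paper's fitted model):

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Forward algorithm (with scaling) for log P(obs | HMM)."""
    alpha = pi * B[:, obs[0]]
    log_p = 0.0
    for o in obs[1:]:
        c = alpha.sum()          # scaling constant to avoid underflow
        log_p += np.log(c)
        alpha = (alpha / c) @ A * B[:, o]
    return log_p + np.log(alpha.sum())

# Hypothetical 2-state model over 3 coded clinical activities
pi = np.array([0.8, 0.2])                           # initial distribution
A = np.array([[0.9, 0.1], [0.2, 0.8]])              # state transitions
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])    # emission probabilities
journey = [0, 0, 1, 2, 2]                           # one coded inpatient journey
ll = hmm_loglik(journey, pi, A, B)
```

The scaling trick keeps the recursion numerically stable for long journeys, which matters once journeys span many coded events.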
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan; Cai, Jing-Heng
2010-01-01
Analysis of ordered binary and unordered binary data has received considerable attention in social and psychological research. This article introduces a Bayesian approach, which has several nice features in practical applications, for analyzing nonlinear structural equation models with dichotomous data. We demonstrate how to use the software…
A Robust Bayesian Approach for Structural Equation Models with Missing Data
ERIC Educational Resources Information Center
Lee, Sik-Yum; Xia, Ye-Mao
2008-01-01
In this paper, normal/independent distributions, including but not limited to the multivariate t distribution, the multivariate contaminated distribution, and the multivariate slash distribution, are used to develop a robust Bayesian approach for analyzing structural equation models with complete or missing data. In the context of a nonlinear…
Equifinality of formal (DREAM) and informal (GLUE) bayesian approaches in hydrologic modeling?
Vrugt, Jasper A; Robinson, Bruce A; Ter Braak, Cajo J F; Gupta, Hoshin V
2008-01-01
In recent years, a strong debate has emerged in the hydrologic literature regarding what constitutes an appropriate framework for uncertainty estimation. In particular, there is strong disagreement about whether an uncertainty framework should have its roots in a proper statistical (Bayesian) context, or whether such a framework should be based on a different philosophy and implement informal measures and weaker inference to summarize parameter and predictive distributions. In this paper, we compare a formal Bayesian approach using Markov chain Monte Carlo (MCMC) with generalized likelihood uncertainty estimation (GLUE) for assessing uncertainty in conceptual watershed modeling. Our formal Bayesian approach is implemented using the recently developed differential evolution adaptive Metropolis (DREAM) MCMC scheme with a likelihood function that explicitly considers model structural, input, and parameter uncertainty. Our results demonstrate that DREAM and GLUE can generate very similar estimates of total streamflow uncertainty. This suggests that formal and informal Bayesian approaches have more common ground than the hydrologic literature and ongoing debate might suggest. The main advantage of formal approaches is, however, that they attempt to disentangle the effect of forcing, parameter, and model structural error on total predictive uncertainty. This is key to improving hydrologic theory and to better understanding and predicting the flow of water through catchments.
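The informal (GLUE) side of the comparison is easy to sketch: Monte Carlo sample the parameters, score each run with an informal likelihood such as Nash-Sutcliffe efficiency, keep the runs above a behavioural threshold, and form likelihood-weighted summaries. The one-parameter "watershed model" and the threshold below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "watershed model": streamflow as a linear function of one parameter
def simulate(k, rain):
    return k * rain

rain = rng.uniform(0, 10, size=50)
q_obs = simulate(0.6, rain) + rng.normal(0, 0.3, size=50)  # synthetic truth

# GLUE: sample the parameter, score with an informal likelihood (NSE),
# and keep the "behavioural" runs above a subjective threshold
k_samples = rng.uniform(0.0, 2.0, size=5000)

def nse(k):
    q = simulate(k, rain)
    return 1 - np.sum((q - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

scores = np.array([nse(k) for k in k_samples])
behavioural = scores > 0.7                     # informal threshold choice
weights = scores[behavioural] / scores[behavioural].sum()
k_glue = np.sum(weights * k_samples[behavioural])
```

The subjectivity of the threshold and weighting is precisely what the formal side of the debate objects to; a DREAM-style sampler would replace `nse` with a proper likelihood.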
Sequential Bayesian Detection: A Model-Based Approach
Sullivan, E J; Candy, J V
2007-08-13
Sequential detection theory has a long history, evolving from Wald's work in the late 1940s and Middleton's classic exposition in the 1960s, coupled with the concurrent enabling technology of digital computer systems and the development of sequential processors. This development, when coupled to modern sequential model-based processors, offers a reasonable way to attack physics-based problems. In this chapter, the fundamentals of sequential detection are reviewed from the Neyman-Pearson theoretical perspective and formulated for both linear and nonlinear (approximate) Gauss-Markov, state-space representations. We review the development of modern sequential detectors and incorporate the sequential model-based processors as an integral part of their solution. Motivated by a wealth of physics-based detection problems, we show how both linear and nonlinear processors can seamlessly be embedded into the sequential detection framework to provide a powerful approach to solving non-stationary detection problems.
A Bayesian approach to the semi-analytic model of galaxy formation
NASA Astrophysics Data System (ADS)
Lu, Yu
It is believed that a wide range of physical processes conspire to shape the observed galaxy population, but their detailed interactions remain uncertain. The semi-analytic model (SAM) of galaxy formation uses multi-dimensional parameterizations of the physical processes of galaxy formation and provides a tool to constrain these underlying physical interactions. Because of the high dimensionality and large uncertainties in the model, the parametric problem of galaxy formation can be profitably tackled with a Bayesian-inference-based approach, which allows one to constrain theory with data in a statistically rigorous way. In this thesis, I present a newly developed method to build a SAM upon the framework of Bayesian inference. I show that, aided by advanced Markov chain Monte Carlo algorithms, the method has the power to efficiently combine information from diverse data sources, rigorously establish confidence bounds on model parameters, and provide powerful probability-based methods for hypothesis testing. Using various data sets (stellar mass function, conditional stellar mass function, K-band luminosity function, and cold gas mass functions) of galaxies in the local Universe, I carry out a series of Bayesian model inferences. The results show that the SAM contains large degeneracies among its parameters, indicating that some of the conclusions drawn previously with the conventional approach may not be truly valid and need to be revisited with the Bayesian approach. Second, some of the degeneracy of the model can be broken by adopting multiple data sets that constrain different aspects of the galaxy population. Third, the inferences reveal that the model has difficulty simultaneously explaining some important observational results, suggesting that some key physics governing the evolution of star formation and feedback may still be missing from the model. These analyses show clearly that the Bayesian-inference-based SAM can be used to perform systematic and statistically…
Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.
Patri, Jean-François; Diard, Julien; Perrier, Pascal
2015-12-01
The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way. PMID:26497359
An Application of Bayesian Approach in Modeling Risk of Death in an Intensive Care Unit
Wong, Rowena Syn Yin; Ismail, Noor Azina
2016-01-01
Background and Objectives There are not many studies that attempt to model intensive care unit (ICU) risk of death in developing countries, especially in South East Asia. The aim of this study was to propose and describe the application of a Bayesian approach to modeling in-ICU deaths in a Malaysian ICU. Methods This was a prospective study in a mixed medical-surgical ICU in a multidisciplinary tertiary referral hospital in Malaysia. Data collection included variables that were defined in the Acute Physiology and Chronic Health Evaluation IV (APACHE IV) model. A Bayesian Markov chain Monte Carlo (MCMC) simulation approach was applied in the development of four multivariate logistic regression predictive models for the ICU, where the main outcome measure was in-ICU mortality risk. The performance of the models was assessed through overall model fit, discrimination and calibration measures. Results from the Bayesian models were also compared against results obtained using the frequentist maximum likelihood method. Results The study involved 1,286 consecutive ICU admissions between January 1, 2009 and June 30, 2010, of which 1,111 met the inclusion criteria. Patients who were admitted to the ICU were generally younger, predominantly male, with a low co-morbidity load and mostly under mechanical ventilation. The overall in-ICU mortality rate was 18.5% and the overall mean Acute Physiology Score (APS) was 68.5. All four models exhibited good discrimination, with area under the receiver operating characteristic curve (AUC) values of approximately 0.8. Calibration was acceptable (Hosmer-Lemeshow p-values > 0.05) for all models, except for model M3. Model M1 was identified as the model with the best overall performance in this study. Conclusion Four prediction models were proposed, where the best model was chosen based on its overall performance in this study. This study has also demonstrated the promising potential of the Bayesian MCMC approach as an alternative in the analysis and modeling of
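A minimal sketch of the core estimation step described above: Bayesian logistic regression fitted by random-walk Metropolis, one of the simplest MCMC schemes. The data, predictors and prior below are invented stand-ins, not the APACHE IV variables from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 "admissions" with two standardized predictors
# (e.g. an acute-physiology score and age); names and values are illustrative.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
true_beta = np.array([-1.5, 0.8, 0.4])
y = rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))

def log_post(beta):
    """Log posterior: Bernoulli likelihood plus a weak N(0, 10^2) prior."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * np.sum(beta**2) / 100.0
    return loglik + logprior

# Random-walk Metropolis sampler
beta = np.zeros(3)
lp = log_post(beta)
samples = []
for i in range(20000):
    prop = beta + 0.15 * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    if i >= 5000:               # discard burn-in draws
        samples.append(beta.copy())

post = np.array(samples)
print("posterior means:", post.mean(axis=0))
```

The posterior means should land near the generating coefficients; in practice one would also run several chains and check convergence before reporting estimates.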
IONONEST—A Bayesian approach to modeling the lower ionosphere
NASA Astrophysics Data System (ADS)
Martin, Poppy L.; Scaife, Anna M. M.; McKay, Derek; McCrea, Ian
2016-08-01
Obtaining high-resolution electron density height profiles for the D region of the ionosphere as a well-sampled function of time is difficult for most methods of ionospheric measurement. Here we present a new method of using multifrequency riometry data for producing D region height profiles via inverse methods. To obtain these profiles, we use the nested sampling technique, implemented through our code, IONONEST. We demonstrate this approach using new data from the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA) instrument and consider two electron density models. We compare the recovered height profiles from the KAIRA data with those from incoherent scatter radar using data from the European Incoherent Scatter Facility (EISCAT) instrument and find that there is good agreement between the two techniques, allowing for instrumental differences.
Modelling household finances: A Bayesian approach to a multivariate two-part model
Brown, Sarah; Ghosh, Pulak; Su, Li; Taylor, Karl
2016-01-01
We contribute to the empirical literature on household finances by introducing a Bayesian multivariate two-part model, which has been developed to further our understanding of household finances. Our flexible approach allows for the potential interdependence between the holding of assets and liabilities at the household level and also encompasses a two-part process to allow for differences in the influences on asset or liability holding and on the respective amounts held. Furthermore, the framework is dynamic in order to allow for persistence in household finances over time. Our findings endorse the joint modelling approach and provide evidence supporting the importance of dynamics. In addition, we find that certain independent variables exert different influences on the binary and continuous parts of the model thereby highlighting the flexibility of our framework and revealing a detailed picture of the nature of household finances. PMID:27212801
Bayesian approaches to spatial inference: Modelling and computational challenges and solutions
NASA Astrophysics Data System (ADS)
Moores, Matthew; Mengersen, Kerrie
2014-12-01
We discuss a range of Bayesian modelling approaches for spatial data and investigate some of the associated computational challenges. This paper commences with a brief review of Bayesian mixture models and Markov random fields, with enabling computational algorithms including Markov chain Monte Carlo (MCMC) and integrated nested Laplace approximation (INLA). Following this, we focus on the Potts model as a canonical approach, and discuss the challenge of estimating the inverse temperature parameter that controls the degree of spatial smoothing. We compare three approaches to addressing the doubly intractable nature of the likelihood, namely pseudo-likelihood, path sampling and the exchange algorithm. These techniques are applied to satellite data used to analyse water quality in the Great Barrier Reef.
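Of the three approaches to the doubly intractable likelihood compared above, pseudo-likelihood is the simplest to sketch. The toy below simulates a two-label Potts field by checkerboard Gibbs sampling at a known inverse temperature and then recovers it by maximizing the pseudo-likelihood over a grid; lattice size and temperature are arbitrary illustrations, not the satellite-data setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

L, beta_true = 32, 0.6                    # lattice size and true inverse temperature
z = rng.integers(0, 2, size=(L, L))       # random initial q=2 Potts field

def neighbor_counts(z, k):
    """Number of 4-neighbours of each site carrying label k (free boundaries)."""
    same = (z == k).astype(float)
    c = np.zeros_like(same)
    c[1:, :] += same[:-1, :]; c[:-1, :] += same[1:, :]
    c[:, 1:] += same[:, :-1]; c[:, :-1] += same[:, 1:]
    return c

# Checkerboard Gibbs sampling: sites of one colour are conditionally
# independent given the other colour, so each half-sweep is a valid update.
mask = (np.add.outer(np.arange(L), np.arange(L)) % 2) == 0
for _ in range(300):
    for m in (mask, ~mask):
        p1 = 1.0 / (1.0 + np.exp(beta_true * (neighbor_counts(z, 0) - neighbor_counts(z, 1))))
        draw = (rng.random((L, L)) < p1).astype(int)
        z[m] = draw[m]

def neg_pseudo_loglik(beta):
    """Negative log pseudo-likelihood: product of full conditionals over sites."""
    c0, c1 = neighbor_counts(z, 0), neighbor_counts(z, 1)
    c_same = np.where(z == 1, c1, c0)
    return -(beta * c_same - np.log(np.exp(beta * c0) + np.exp(beta * c1))).sum()

betas = np.linspace(0.0, 1.5, 151)
beta_hat = betas[np.argmin([neg_pseudo_loglik(b) for b in betas])]
print("true beta:", beta_true, "pseudo-likelihood estimate:", round(beta_hat, 2))
```

Pseudo-likelihood avoids the intractable normalizing constant entirely, which is why it is fast; the price, as the paper discusses, is some loss of efficiency relative to path sampling or the exchange algorithm.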
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.
2013-04-01
In recent years, a strong debate has emerged in the hydrologic literature about how to properly treat non-traditional error residual distributions and quantify parameter and predictive uncertainty. In particular, there is strong disagreement about whether such an uncertainty framework should have its roots within a proper statistical (Bayesian) context using Markov chain Monte Carlo (MCMC) simulation techniques, or whether such a framework should be based on a quite different philosophy and implement informal likelihood functions and simplistic search methods to summarize parameter and predictive distributions. In this paper we introduce an alternative framework, called Approximate Bayesian Computation (ABC), that bridges the differing viewpoints of formal and informal Bayesian approaches. This methodology has recently emerged in the fields of biology and population genetics and relaxes the need for an explicit likelihood function in favor of one or multiple summary statistics that measure the distance of each model simulation to the data. This paper is a follow-up to the recent publication of Nott et al. (2012) and further studies the theoretical and numerical equivalence of formal (DREAM) and informal (GLUE) Bayesian approaches using data from different watersheds in the United States. We demonstrate that the limits of acceptability approach of GLUE is a special variant of ABC in which each discharge observation of the calibration data set is used as a summary diagnostic.
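The ABC rejection idea — accept a parameter draw when a summary statistic of the simulation falls within a tolerance of the observed one — can be sketched in a few lines. This toy uses a Gaussian location model, not the watershed models of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observed" data from a toy model with unknown location theta (true value 5)
obs = rng.normal(5.0, 1.0, size=50)

def simulate(theta):
    return rng.normal(theta, 1.0, size=50)

# ABC rejection: draw theta from the prior, simulate, and accept the draw
# when the chosen summary statistic (here the sample mean) is within eps of
# the observed summary. Smaller eps tightens the approximation at higher cost.
eps = 0.1
accepted = []
while len(accepted) < 500:
    theta = rng.uniform(0.0, 10.0)        # flat prior over a plausible range
    if abs(simulate(theta).mean() - obs.mean()) < eps:
        accepted.append(theta)

post = np.array(accepted)
print("ABC posterior mean:", round(post.mean(), 2))
```

The GLUE limits-of-acceptability variant discussed in the abstract corresponds to replacing the single summary with one tolerance check per observation.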
A Bayesian approach for inducing sparsity in generalized linear models with multi-category response
2015-01-01
Background The dimension and complexity of high-throughput gene expression data create many challenges for downstream analysis. Several approaches exist to reduce the number of variables with respect to small sample sizes. In this study, we utilized the Generalized Double Pareto (GDP) prior to induce sparsity in a Bayesian Generalized Linear Model (GLM) setting. The approach was evaluated using a publicly available microarray dataset containing 99 samples corresponding to four different prostate cancer subtypes. Results A hierarchical Sparse Bayesian GLM using GDP prior (SBGG) was developed to take into account the progressive nature of the response variable. We obtained an average overall classification accuracy between 82.5% and 94%, which was higher than Support Vector Machine, Random Forest or a Sparse Bayesian GLM using double exponential priors. Additionally, SBGG outperforms the other 3 methods in correctly identifying pre-metastatic stages of cancer progression, which can prove extremely valuable for therapeutic and diagnostic purposes. Importantly, using Geneset Cohesion Analysis Tool, we found that the top 100 genes produced by SBGG had an average functional cohesion p-value of 2.0E-4 compared to 0.007 to 0.131 produced by the other methods. Conclusions Using GDP in a Bayesian GLM model applied to cancer progression data results in better subclass prediction. In particular, the method identifies pre-metastatic stages of prostate cancer with substantially better accuracy and produces more functionally relevant gene sets. PMID:26423345
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.
2014-12-01
Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model, which accounts for model structure error, into the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of the parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface-groundwater interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates, due to parameter compensation, as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error, especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
An empirical Bayesian approach for model-based inference of cellular signaling networks
2009-01-01
Background A common challenge in systems biology is to infer mechanistic descriptions of biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements. PMID:19900289
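The Gelman-Rubin potential scale reduction factor used above as a convergence criterion compares between-chain and within-chain variance. A minimal sketch for a scalar parameter, using synthetic chains and the basic (non-split) variant of the diagnostic:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for one scalar parameter.

    `chains` is an (m, n) array: m independent MCMC chains of length n."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_plus = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(2)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))            # chains sampling one target
stuck = mixed + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain stuck elsewhere

print(gelman_rubin(mixed))   # close to 1: consistent with convergence
print(gelman_rubin(stuck))   # well above 1: chains disagree, not converged
```

Values close to 1 indicate the chains are exploring the same distribution; a common rule of thumb declares convergence only when R-hat falls below roughly 1.1.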
Model of Conceptual Change for INQPRO: A Bayesian Network Approach
ERIC Educational Resources Information Center
Ting, Choo-Yee; Sam, Yok-Cheng; Wong, Chee-Onn
2013-01-01
Constructing a computational model of conceptual change for a computer-based scientific inquiry learning environment is difficult due to two challenges: (i) externalizing the variables of conceptual change and its related variables is difficult; and (ii) defining the causal dependencies among the variables is not trivial. Such difficulty…
Gomez-Ramirez, Jaime; Sanz, Ricardo
2013-09-01
One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist.
Lifting a veil on diversity: a Bayesian approach to fitting relative-abundance models.
Golicher, Duncan J; O'Hara, Robert B; Ruíz-Montoya, Lorena; Cayuela, Luis
2006-02-01
Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference regarding species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form. The first is a sampling distribution while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation. PMID:16705973
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model combines several distributions to model a statistical distribution, while the Bayesian method is a statistical approach used to fit such mixture models. Bayesian methods are widely used because their asymptotic properties provide remarkable results. In addition, Bayesian methods also show a consistency characteristic, which means the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia. The results show that there is a negative relationship between rubber price and stock market price for all selected countries.
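The BIC-based choice of the number of mixture components can be sketched with a small EM fit. The data here are synthetic, not the rubber and stock market prices analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-regime series: a mixture of two normal components
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

def loglik(x, w, mu, sigma):
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.log(dens.sum(axis=1)).sum()

def fit_mixture(x, k, n_iter=200):
    """EM for a k-component 1-D Gaussian mixture; returns the maximized log-likelihood."""
    n = len(x)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out initial means
    sigma = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        nk = r.sum(axis=0)
        w, mu = nk / n, (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return loglik(x, w, mu, sigma)

def bic(x, k):
    # 3k - 1 free parameters: k means, k standard deviations, k - 1 weights
    return (3 * k - 1) * np.log(len(x)) - 2 * fit_mixture(x, k)

scores = {k: bic(x, k) for k in (1, 2, 3)}
print(scores)
```

The component count with the lowest BIC is selected; here the two-component model should win, since the extra parameters of a third component buy almost no additional likelihood.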
A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.
Fronczyk, Kassandra; Kottas, Athanasios
2014-03-01
We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. PMID:24354490
A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models
Matthias C. M. Troffaes; Gero Walter; Dana Kelly
2014-05-01
In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
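The lower and upper posterior expectations of the imprecise Dirichlet model have a simple closed form, which the sketch below evaluates. The prior bounds and failure counts are made-up illustrations, not values from the paper:

```python
# Conjugate-style update of the imprecise Dirichlet model: prior knowledge is
# a lower/upper prior expectation per alpha-factor (t_lo, t_hi) plus a
# learning parameter s controlling how quickly data dominate the prior:
#   E_lo[alpha_i | data] = (s * t_lo_i + n_i) / (s + N), similarly for E_hi.
def posterior_bounds(t_lo, t_hi, counts, s):
    N = sum(counts)
    lo = [(s * tl + n) / (s + N) for tl, n in zip(t_lo, counts)]
    hi = [(s * th + n) / (s + N) for th, n in zip(t_hi, counts)]
    return lo, hi

t_lo = [0.5, 0.1, 0.0]        # cautious prior bounds for alpha_1..alpha_3 (illustrative)
t_hi = [0.9, 0.4, 0.2]
counts = [30, 8, 2]           # observed failure events by multiplicity (illustrative)
for s in (1, 10):             # the elicitation range discussed in the abstract
    lo, hi = posterior_bounds(t_lo, t_hi, counts, s)
    print("s =", s, [round(v, 3) for v in lo], [round(v, 3) for v in hi])
```

A larger learning parameter s keeps the posterior intervals wider for the same data, which is exactly the cautious-learning behaviour the elicitation of s trades off.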
A Bayesian Modelling Approach with Balancing Informative Prior for Analysing Imbalanced Data.
Klein, Kerenaftali; Hennig, Stefanie; Paul, Sanjoy Ketan
2016-01-01
When a dataset is imbalanced, the prediction of the scarcely-sampled subpopulation can be over-influenced by the population contributing to the majority of the data. The aim of this study was to develop a Bayesian modelling approach with balancing informative prior so that the influence of imbalance to the overall prediction could be minimised. The new approach was developed in order to weigh the data in favour of the smaller subset(s). The method was assessed in terms of bias and precision in predicting model parameter estimates of simulated datasets. Moreover, the method was evaluated in predicting optimal dose levels of tobramycin for various age groups in a motivating example. The bias estimates using the balancing informative prior approach were smaller than those generated using the conventional approach which was without the consideration for the imbalance in the datasets. The precision estimates were also superior. The method was further evaluated in a motivating example of optimal dosage prediction of tobramycin. The resulting predictions also agreed well with what had been reported in the literature. The proposed Bayesian balancing informative prior approach has shown a real potential to adequately weigh the data in favour of smaller subset(s) of data to generate robust prediction models. PMID:27070549
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. With perfect information on the prior distribution, the Bayesian estimation methods are better than those of maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
NASA Astrophysics Data System (ADS)
Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel
2016-04-01
Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the most commonly used for modeling soil water retention curves (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allows for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for the estimate. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period at pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of volumetric water content and tension time series. Then, the posterior distributions were used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when
Inference on the Univariate Frailty Model: A Bayesian Reference Analysis Approach
NASA Astrophysics Data System (ADS)
Tomazella, Vera Lucia D.; Martins, Camila Bertini; Bernardo, Jose Miguel
2008-11-01
In this work we present an approach involving objective Bayesian reference analysis for the frailty model with univariate survival time and sources of heterogeneity that are not captured by covariates. The derivation of the unconditional hazard and survival functions leads to the Lomax distribution, also known as the Pareto distribution of the second kind. This distribution has an important position in life testing for adjusting data on business failures. Reference analysis, introduced by Bernardo (1979), produces a new solution to this problem. The results are illustrated with survival data analyzed in the literature and with simulated data.
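That the gamma-frailty construction marginalizes to a Lomax survival function can be checked numerically via the Laplace transform identity E[exp(-sW)] = (1 + bs)^(-a) for W ~ Gamma(shape a, scale b); all parameter values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(5)

a, b, lam, t = 2.0, 0.5, 1.0, 1.3        # frailty shape/scale, baseline rate, time
w = rng.gamma(a, b, size=1_000_000)      # individual frailties W
mc = np.exp(-w * lam * t).mean()         # Monte Carlo marginal survival E[exp(-W*lam*t)]
closed = (1.0 + b * lam * t) ** (-a)     # Lomax (Pareto type II) survival function
print(round(mc, 4), round(closed, 4))    # the two values should agree closely
```

Here the conditional survival given the frailty is exp(-W*lam*t), so averaging over the gamma-distributed frailty reproduces the Lomax form stated in the abstract.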
Crash risk analysis for Shanghai urban expressways: A Bayesian semi-parametric modeling approach.
Yu, Rongjie; Wang, Xuesong; Yang, Kui; Abdel-Aty, Mohamed
2016-10-01
Urban expressway systems have been developed rapidly in recent years in China; they have become a key part of city roadway networks, carrying large traffic volumes and providing high traveling speeds. Along with the increase in traffic volume, traffic safety has become a major issue for Chinese urban expressways due to frequent crash occurrence and the non-recurrent congestion caused by crashes. For the purpose of unveiling crash occurrence mechanisms and further developing Active Traffic Management (ATM) control strategies to improve traffic safety, this study developed disaggregate crash risk analysis models with loop detector traffic data and historical crash data. Bayesian random effects logistic regression models were utilized as they can account for the unobserved heterogeneity among crashes. However, previous crash risk analysis studies formulated random effects distributions in a parametric approach, assigning them to follow normal distributions. Given the limited information known about random effects distributions, such a subjective parametric setting may be incorrect. In order to construct more flexible and robust random effects to capture the unobserved heterogeneity, a Bayesian semi-parametric inference technique was introduced to crash risk analysis in this study. Models with both inference techniques were developed for total crashes; the semi-parametric models proved to provide substantially better model goodness-of-fit, while the two models shared consistent coefficient estimations. Bayesian semi-parametric random effects logistic regression models were then developed for weekday peak hour crashes, weekday non-peak hour crashes, and weekend non-peak hour crashes to investigate different crash occurrence scenarios. Significant factors that affect crash risk have been revealed and crash mechanisms have been discussed. PMID:26847949
ERIC Educational Resources Information Center
West, Patti; Rutstein, Daisy Wise; Mislevy, Robert J.; Liu, Junhui; Choi, Younyoung; Levy, Roy; Crawford, Aaron; DiCerbo, Kristen E.; Chappel, Kristina; Behrens, John T.
2010-01-01
A major issue in the study of learning progressions (LPs) is linking student performance on assessment tasks to the progressions. This report describes the challenges faced in making this linkage using Bayesian networks to model LPs in the field of computer networking. The ideas are illustrated with exemplar Bayesian networks built on Cisco…
Jiménez, José; García, Emilio J; Llaneza, Luis; Palacios, Vicente; González, Luis Mariano; García-Domínguez, Francisco; Múñoz-Igualada, Jaime; López-Bao, José Vicente
2016-08-01
In many cases, the first step in large-carnivore management is to obtain objective, reliable, and cost-effective estimates of population parameters through procedures that are reproducible over time. However, monitoring predators over large areas is difficult, and the data have a high level of uncertainty. We devised a practical multimethod and multistate modeling approach based on Bayesian hierarchical-site-occupancy models that combined multiple survey methods to estimate different population states for use in monitoring large predators at a regional scale. We used wolves (Canis lupus) as our model species and generated reliable estimates of the number of sites with wolf reproduction (presence of pups). We used 2 wolf data sets from Spain (Western Galicia in 2013 and Asturias in 2004) to test the approach. Based on howling surveys, the naïve estimation (i.e., estimate based only on observations) of the number of sites with reproduction was 9 and 25 sites in Western Galicia and Asturias, respectively. Our model showed 33.4 (SD 9.6) and 34.4 (3.9) sites with wolf reproduction, respectively. The estimated proportion of occupied sites with wolf reproduction was 0.67 (SD 0.19) and 0.76 (0.11), respectively. This approach can be used to design more cost-effective monitoring programs (i.e., to define the sampling effort needed per site). Our approach should inspire well-coordinated surveys across multiple administrative borders and populations and lead to improved decision making for management of large carnivores on a landscape level. The use of this Bayesian framework provides a simple way to visualize the degree of uncertainty around population-parameter estimates and thus provides managers and stakeholders an intuitive approach to interpreting monitoring results. Our approach can be widely applied to large spatial scales in wildlife monitoring where detection probabilities differ between population states and where several methods are being used to estimate different population
Franck, Christopher T; Koffarnus, Mikhail N; House, Leanna L; Bickel, Warren K
2015-01-01
The study of delay discounting, or valuation of future rewards as a function of delay, has contributed to understanding the behavioral economics of addiction. Accurate characterization of discounting can be furthered by statistical model selection given that many functions have been proposed to measure future valuation of rewards. The present study provides a convenient Bayesian model selection algorithm that selects the most probable discounting model among a set of candidate models chosen by the researcher. The approach assigns the most probable model for each individual subject. Importantly, effective delay 50 (ED50) functions as a suitable unifying measure that is computable for and comparable between a number of popular functions, including both one- and two-parameter models. The combined model selection/ED50 approach is illustrated using empirical discounting data collected from a sample of 111 undergraduate students with models proposed by Laibson (1997); Mazur (1987); Myerson & Green (1995); Rachlin (2006); and Samuelson (1937). Computer simulation suggests that the proposed Bayesian model selection approach outperforms the single model approach when data truly arise from multiple models. When a single model underlies all participant data, the simulation suggests that the proposed approach fares no worse than the single model approach.
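For one-parameter discounting models such as those named above, ED50 (the delay at which a reward loses half its value) has a closed form, which is what makes it a convenient unifying measure across models. A minimal sketch, using the standard symbols A (amount), D (delay), and k (discount rate):

```python
import math

def ed50_mazur(k):
    # Mazur (1987) hyperbolic model: V = A / (1 + k*D).
    # V = A/2 exactly when D = 1/k.
    return 1.0 / k

def ed50_exponential(k):
    # Samuelson (1937) exponential model: V = A * exp(-k*D).
    # V = A/2 exactly when D = ln(2)/k.
    return math.log(2.0) / k
```

Two-parameter models (e.g. Rachlin's or Myerson & Green's) generally require solving V(D) = A/2 numerically, but the resulting ED50 values remain directly comparable across models.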
Jointly modeling time-to-event and longitudinal data: A Bayesian approach.
Huang, Yangxin; Hu, X Joan; Dagne, Getachew A
2014-03-01
This article explores Bayesian joint models of event times and longitudinal measures with an attempt to overcome departures from normality of the longitudinal response, measurement errors, and shortages of confidence in specifying a parametric time-to-event model. We allow the longitudinal response to have a skew distribution in the presence of measurement errors, and assume the time-to-event variable to have a nonparametric prior distribution. Posterior distributions of the parameters are attained simultaneously for inference based on Bayesian approach. An example from a recent AIDS clinical trial illustrates the methodology by jointly modeling the viral dynamics and the time to decrease in CD4/CD8 ratio in the presence of CD4 counts with measurement errors and to compare potential models with various scenarios and different distribution specifications. The analysis outcome indicates that the time-varying CD4 covariate is closely related to the first-phase viral decay rate, but the time to CD4/CD8 decrease is not highly associated with either the two viral decay rates or the CD4 changing rate over time. These findings may provide some quantitative guidance to better understand the relationship of the virological and immunological responses to antiretroviral treatments. PMID:24611039
Xu, Chengcheng; Wang, Wei; Liu, Pan; Li, Zhibin
2015-12-01
This study aimed to develop a real-time crash risk model with limited data in China by using a Bayesian meta-analysis and Bayesian inference approach. A systematic review was first conducted using three different Bayesian meta-analyses: a fixed effect meta-analysis, a random effect meta-analysis, and a meta-regression. The meta-analyses provided a numerical summary of the effects of traffic variables on crash risk by quantitatively synthesizing results from previous studies. The random effect meta-analysis and the meta-regression produced more conservative estimates of the effects of traffic variables than the fixed effect meta-analysis. The meta-analysis results were then used as informative priors for developing crash risk models with the limited data. The choice among the three meta-analyses was found to significantly affect model fit and prediction accuracy: the model based on the meta-regression prior increased prediction accuracy by about 15% compared with the model developed directly from the limited data. Finally, Bayesian predictive density analysis was used to identify outliers in the limited data, further improving prediction accuracy by 5.0%.
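The abstract does not spell out how the meta-analytic priors enter the local model; a common mechanism for normally distributed effects is a precision-weighted normal-normal conjugate update, sketched below with hypothetical numbers (a pooled effect from the meta-analysis combined with an estimate from the limited local data):

```python
def posterior_normal(prior_mean, prior_var, est, est_var):
    # Normal-normal conjugate update: precisions (1/variance) add,
    # and the posterior mean is the precision-weighted average of the
    # meta-analytic prior and the local estimate.
    precision = 1.0 / prior_var + 1.0 / est_var
    mean = (prior_mean / prior_var + est / est_var) / precision
    return mean, 1.0 / precision

# Hypothetical: meta-analysis says effect ~ N(0.5, 0.04);
# the limited local data alone estimate 1.0 with variance 0.04.
post_mean, post_var = posterior_normal(0.5, 0.04, 1.0, 0.04)
```

With equal variances the posterior mean lands halfway between the two sources, and the posterior variance is halved, which is the sense in which an informative prior compensates for limited data.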
Wendling, Thierry; Dumitras, Swati; Ogungbenro, Kayode; Aarons, Leon
2015-12-01
Mavoglurant (MVG) is an antagonist at the metabotropic glutamate receptor-5 currently under clinical development at Novartis Pharma AG for the treatment of central nervous system diseases. The aim of this study was to develop and optimise a population whole-body physiologically-based pharmacokinetic (WBPBPK) model for MVG, to predict the impact of drug-drug interaction (DDI) and age on its pharmacokinetics. In a first step, the model was fitted to intravenous (IV) data from a clinical study in adults using a Bayesian approach. In a second step, the optimised model was used together with a mechanistic absorption model for exploratory Monte Carlo simulations. The ability of the model to predict MVG pharmacokinetics when orally co-administered with ketoconazole in adults or administered alone in 3-11 year-old children was evaluated using data from three other clinical studies. The population model provided a good description of both the median trend and variability in MVG plasma pharmacokinetics following IV administration in adults. The Bayesian approach offered a continuous flow of information from pre-clinical to clinical studies. Prediction of the DDI with ketoconazole was consistent with the results of a non-compartmental analysis of the clinical data (threefold increase in systemic exposure). Scaling of the WBPBPK model allowed reasonable extrapolation of MVG pharmacokinetics from adults to children. The model can be used to predict plasma and brain (target site) concentration-time profiles following oral administration of various immediate-release formulations of MVG alone or when co-administered with other drugs, in adults as well as in children. PMID:26231433
NASA Astrophysics Data System (ADS)
Stephenson, John; Gallagher, Kerry; Holmes, Chris
2006-10-01
We present a new approach for modelling annealing of fission tracks in apatite, aiming to address various problems with existing models. We cast the model in a fully Bayesian context, which allows us explicitly to deal with data and parameter uncertainties and correlations, and also to deal with the predictive uncertainties. We focus on a well-known annealing algorithm [Laslett, G.M., Green, P.F., Duddy, I.R., Gleadow, A.J.W., 1987. Thermal annealing of fission tracks in apatite. 2. A quantitative-analysis. Chem. Geol., 65 (1), 1-13], and build a hierarchical Bayesian model to incorporate both laboratory and geological timescale data as direct constraints. Relative to the original model calibration, we find a better (in terms of likelihood) model conditioned just on the reported laboratory data. We then include the uncertainty on the temperatures recorded during the laboratory annealing experiments. We again find a better model, but the predictive uncertainty when extrapolated to geological timescales is increased due to the uncertainty on the laboratory temperatures. Finally, we explicitly include a data set [Vrolijk, P., Donelick, R.A., Quenq, J., Cloos, M., 1992. Testing models of fission track annealing in apatite in a simple thermal setting: site 800, leg 129. In: Larson, R., Lancelet, Y. (Eds.), Proceedings of the Ocean Drilling Program, Scientific Results, vol. 129, pp. 169-176] which provides low-temperature geological timescale constraints for the model calibration. When combined with the laboratory data, we find a model which satisfies both the low-temperature and high-temperature geological timescale benchmarks, although the fit to the original laboratory data is degraded. However, when extrapolated to geological timescales, this combined model significantly reduces the well-known rapid recent cooling artifact found in many published thermal models for geological samples.
A Bayesian approach for Li-Ion battery capacity fade modeling and cycles to failure prognostics
NASA Astrophysics Data System (ADS)
Guo, Jian; Li, Zhaojun; Pecht, Michael
2015-05-01
Battery capacity fade occurs when battery capacity, measured in Ampere-hours, degrades over the number of charge/discharge cycles. This is a comprehensive result of various factors, including irreversible electrochemical reactions that form a solid electrolyte interphase (SEI) in the negative electrode and oxidative reactions of the positive electrode. The degradation mechanism is further complicated by operational and environmental factors such as discharge rate, usage and storage temperature, as well as cell-level and battery pack-level variations carried over from the manufacturing processes. This research investigates a novel Bayesian method to model battery capacity fade over repetitive cycles by considering both within-battery and between-battery variations. Physics-based covariates are integrated with functional forms for modeling the capacity fade. A systematic approach based on covariate identification, model selection, and a strategy for prognostics data selection is presented. The proposed Bayesian method is capable of quantifying the uncertainties in predicting battery capacity/power fade and end-of-life cycles to failure distribution under various operating conditions.
A Bayesian approach for temporally scaling climate for modeling ecological systems.
Post van der Burg, Max; Anteau, Michael J; McCauley, Lisa A; Wiltermuth, Mark T
2016-05-01
With climate change becoming more of a concern, many ecologists are including climate variables in their system and statistical models. The Standardized Precipitation Evapotranspiration Index (SPEI) is a drought index that has potential advantages in modeling ecological response variables, including a flexible computation of the index over different timescales. However, little development has been made in terms of the choice of timescale for SPEI. We developed a Bayesian modeling approach for estimating the timescale for SPEI and demonstrated its use in modeling wetland hydrologic dynamics in two different eras (i.e., historical [pre-1970] and contemporary [post-2003]). Our goal was to determine whether differences in climate between the two eras could explain changes in the amount of water in wetlands. Our results showed that wetland water surface areas tended to be larger in wetter conditions, but also changed less in response to climate fluctuations in the contemporary era. We also found that the average timescale parameter was greater in the historical period, compared with the contemporary period. We were not able to determine whether this shift in timescale was due to a change in the timing of wet-dry periods or whether it was due to changes in the way wetlands responded to climate. Our results suggest that perhaps some interaction between climate and hydrologic response may be at work, and further analysis is needed to determine which has a stronger influence. Despite this, we suggest that our modeling approach enabled us to estimate the relevant timescale for SPEI and make inferences from those estimates. Likewise, our approach provides a mechanism for using prior information with future data to assess whether these patterns may continue over time. We suggest that ecologists consider using temporally scalable climate indices in conjunction with Bayesian analysis for assessing the role of climate in ecological systems. PMID:27217947
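The timescale parameter in SPEI aggregates a climatic water balance (precipitation minus potential evapotranspiration) over the preceding k periods before the series is standardized. A minimal sketch of that aggregation step, with the standardization step omitted; the series values are illustrative:

```python
def water_balance_series(precip, pet, k):
    # D_t at timescale k: the sum of (P - PET) over the k periods
    # ending at t. SPEI would then standardize these sums against a
    # fitted distribution, which is omitted in this sketch.
    d = [p - e for p, e in zip(precip, pet)]
    return [sum(d[t - k + 1:t + 1]) for t in range(k - 1, len(d))]

# Four months of toy data, aggregated at a 2-month timescale.
series = water_balance_series([3, 2, 4, 1], [1, 1, 1, 1], 2)
```

Estimating k within a Bayesian model, as the study does, amounts to treating the aggregation window itself as an unknown parameter with a prior, rather than fixing it in advance.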
Using Bayesian statistical methods to quantify uncertainty and variability in human physiologically-based pharmacokinetic (PBPK) model predictions for use in risk assessments requires prior distributions (priors), which characterize what is known or believed about parameters’ val...
NASA Astrophysics Data System (ADS)
Herschtal, A.; Foroudi, F.; Greer, P. B.; Eade, T. N.; Hindson, B. R.; Kron, T.
2012-05-01
Early approaches to characterizing errors in target displacement during a fractionated course of radiotherapy assumed that the underlying fraction-to-fraction variability in target displacement, known as the ‘treatment error’ or ‘random error’, could be regarded as constant across patients. More recent approaches have modelled target displacement allowing for differences in random error between patients. However, until recently it has not been feasible to compare the goodness of fit of alternate models of random error rigorously. This is because the large volumes of real patient data necessary to distinguish between alternative models have only very recently become available. This work uses real-world displacement data collected from 365 patients undergoing radical radiotherapy for prostate cancer to compare five candidate models for target displacement. The simplest model assumes constant random errors across patients, while other models allow for random errors that vary according to one of several candidate distributions. Bayesian statistics and Markov Chain Monte Carlo simulation of the model parameters are used to compare model goodness of fit. We conclude that modelling the random error as inverse gamma distributed provides a clearly superior fit over all alternatives considered. This finding can facilitate more accurate margin recipes and correction strategies.
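The preferred model above treats each patient's random-error variance as inverse-gamma distributed. Drawing a patient-specific standard deviation under that assumption can be sketched via the standard gamma/inverse-gamma relationship; the shape and scale values below are arbitrary placeholders, not the fitted values from the study:

```python
import random

random.seed(2)

def draw_patient_sd(a, b):
    # sigma^2 ~ Inverse-Gamma(a, b)  <=>  1/sigma^2 ~ Gamma(shape=a, rate=b).
    # random.gammavariate takes (shape, scale), so scale = 1/rate.
    precision = random.gammavariate(a, 1.0 / b)
    return (1.0 / precision) ** 0.5

# One random-error SD per patient; population heterogeneity comes from
# the spread of the inverse-gamma, not from a single shared sigma.
patient_sds = [draw_patient_sd(3.0, 2.0) for _ in range(5)]
```

The constant-random-error model the paper rejects corresponds to collapsing this distribution to a point mass, i.e. every patient receiving the same sigma.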
Inference of reactive transport model parameters using a Bayesian multivariate approach
NASA Astrophysics Data System (ADS)
Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick
2014-08-01
Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
Bayesian model-based approach for developing a river water quality index
NASA Astrophysics Data System (ADS)
Ali, Zalina Mohd; Ibrahim, Noor Akma; Mengersen, Kerrie; Shitan, Mahendran; Juahir, Hafizan
2014-09-01
Six main pollutants have been previously identified by expert opinion to determine river condition in Malaysia. The pollutants were Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), Suspended Solid (SS), potential of Hydrogen (pH) and Ammonia (AN). The selected variables together with the respective weights have been applied to calculate the water quality index of all rivers in Malaysia. However, the relative weights established in DOE-WQI formula are subjective in nature and not unanimously agreed upon, as indicated by different weight being proposed for the same variables by various panels of experts. Focusing on the Langat River, a Bayesian model-based approach was introduced for the first time in this study to obtain new objective relative weights. The new weights used in WQI calculation are shown to be capable of capturing similar distributions in water quality compared with the existing DOE-WQI.
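A DOE-WQI-style index is a weighted sum of pollutant sub-indices. The sketch below shows only the aggregation arithmetic; the actual DOE weights and the sub-index transforms for DO, BOD, COD, SS, pH, and AN are fixed by the expert panels discussed above and are not reproduced here:

```python
def wqi(subindices, weights):
    # Weighted-sum water quality index on a 0-100 sub-index scale.
    # Weights must form a convex combination (sum to 1), which is the
    # quantity the Bayesian approach above re-estimates objectively.
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(subindices, weights))

# Illustrative: two sub-indices, equal weights.
score = wqi([80.0, 60.0], [0.5, 0.5])
```

The study's contribution is replacing the subjectively chosen `weights` vector with one inferred from the Langat River data under a Bayesian model.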
Ganjali, Mojtaba; Baghfalaki, Taban; Berridge, Damon
2015-01-01
In this paper, the problem of identifying differentially expressed genes under different conditions using gene expression microarray data, in the presence of outliers, is discussed. For this purpose, the robust modeling of gene expression data using some powerful distributions known as normal/independent distributions is considered. These distributions include the Student's t and normal distributions which have been used previously, but also include extensions such as the slash, the contaminated normal and the Laplace distributions. The purpose of this paper is to identify differentially expressed genes by considering these distributional assumptions instead of the normal distribution. A Bayesian approach using the Markov Chain Monte Carlo method is adopted for parameter estimation. Two publicly available gene expression data sets are analyzed using the proposed approach. The use of the robust models for detecting differentially expressed genes is investigated. This investigation shows that the choice of model for differentiating gene expression data is very important. This is due to the small number of replicates for each gene and the existence of outlying data. Comparison of the performance of these models is made using different statistical criteria and the ROC curve. The method is illustrated using some simulation studies. We demonstrate the flexibility of these robust models in identifying differentially expressed genes. PMID:25910040
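The normal/independent family used above represents heavy-tailed errors as scale mixtures of normals: a normal draw whose precision is itself random. For example, a Student's t variate arises from mixing over a gamma-distributed precision, which is what makes these models robust to outlying expression values:

```python
import math
import random

random.seed(3)

def student_t_draw(nu):
    # Student-t(nu) as a normal/independent mixture:
    # lambda ~ Gamma(shape = nu/2, rate = nu/2), x | lambda ~ N(0, 1/lambda).
    # random.gammavariate takes (shape, scale), so scale = 2/nu.
    lam = random.gammavariate(nu / 2.0, 2.0 / nu)
    return random.gauss(0.0, 1.0 / math.sqrt(lam))

draws = [student_t_draw(4.0) for _ in range(1000)]
```

The slash, contaminated-normal, and Laplace members of the family differ only in the mixing distribution placed on the precision, so an MCMC sampler can swap between them with minimal changes.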
Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach
Mohammadi, Tayeb; Sedehi, Morteza
2016-01-01
Recognizing the factors affecting the number of blood donation and blood deferral has a major impact on blood transfusion. There is a positive correlation between the variables “number of blood donation” and “number of blood deferral”: as the number of return for donation increases, so does the number of blood deferral. On the other hand, due to the fact that many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to apply the correlation and to explain the frequency of the excessive zero, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donation and number of blood deferral. The data was analyzed using the Bayesian approach applying noninformative priors at the presence and absence of covariates. Estimating the parameters of the model, that is, correlation, zero-inflation parameter, and regression coefficients, was done through MCMC simulation. Eventually double-Poisson model, bivariate Poisson model, and bivariate zero-inflated Poisson model were fitted on the data and were compared using the deviance information criteria (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493
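The bivariate zero-inflated Poisson structure can be sketched generatively: an inflation component produces the extra (0, 0) pairs from donors who never return, and a shared Poisson term induces the positive correlation between the two counts. The rates and inflation probability below are illustrative, not the fitted values:

```python
import math
import random

random.seed(4)

def poisson(lam):
    # Knuth's multiplication method; fine for small rates.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def bzip_draw(l1, l2, l0, pi):
    # With probability pi the donor never returns: both counts are 0.
    if random.random() < pi:
        return 0, 0
    # Trivariate reduction: X = X1 + X0, Y = X2 + X0, so the shared
    # Poisson(l0) component creates the positive correlation.
    x0 = poisson(l0)
    return poisson(l1) + x0, poisson(l2) + x0

donations, deferrals = bzip_draw(1.0, 0.5, 0.3, 0.2)
```

In the Bayesian fit described above, MCMC places priors on (l1, l2, l0, pi) and on regression coefficients feeding the rates, and DIC compares this model against the simpler double-Poisson and bivariate Poisson alternatives.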
Craig, B A; Fryback, D G; Klein, R; Klein, B E
1999-06-15
To assess the costs and benefits of screening and treatment strategies, it is important to know what would have happened had there been no intervention. In today's ethical climate, however, it is almost impossible to observe this directly and therefore must be inferred from observations with intervention. In this paper, we illustrate a Bayesian approach to this situation when the observations are at separated and unequally spaced time points and the time of intervention is interval censored. We develop a discrete-time Markov model which combines a non-homogeneous Markov chain, used to model the natural progression, with mechanisms that describe the possibility of both treatment intervention and death. We apply this approach to a subpopulation of the Wisconsin Epidemiologic Study of Diabetic Retinopathy, a population-based cohort study to investigate prevalence, incidence, and progression of diabetic retinopathy. In addition, posterior predictive distributions are discussed as a prognostic tool to assist researchers in evaluating costs and benefits of treatment protocols. While we focus this approach on diabetic retinopathy cohort data, we believe this methodology can have wide application. PMID:10399201
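The discrete-time progression component reduces to sampling transitions from row-stochastic matrices, which may vary with time in the non-homogeneous case. A minimal single-step sketch with a hypothetical two-state matrix:

```python
import random

random.seed(5)

def step(state, P):
    # One transition of a discrete-time Markov chain. P[state] is a row
    # of transition probabilities summing to 1; in a non-homogeneous
    # chain a different P would be passed at each time point.
    u, acc = random.random(), 0.0
    for nxt, pr in enumerate(P[state]):
        acc += pr
        if u < acc:
            return nxt
    return len(P[state]) - 1  # guard against floating-point round-off
```

Layering an interval-censored intervention time on top of this, as the paper does, means the observed chain may switch transition matrices at an unknown point between visits, which is exactly what the Bayesian machinery integrates over.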
Nodal predictive error model and Bayesian approach for thermal diffusivity and heat source mapping
NASA Astrophysics Data System (ADS)
Massard, H.; Fudym, Olivier; Orlande, H. R. B.; Batsale, J. C.
2010-07-01
This article aims at solving a two-dimensional inverse heat conduction problem in order to retrieve both the thermal diffusivity and heat source field in a thin plate. A spatial random heat pulse is applied to the plate and the thermal response is analysed. The inverse approach is based on the minimisation of a nodal predictive error model, which yields a linear estimation problem. As a result of this approach, the sensitivity matrix is directly filled with experimental data, and thus is partially noisy. Bayesian estimators, such as the Maximum A Posteriori and a Markov Chain Monte Carlo approach (Metropolis-Hastings), are implemented and compared with the Ordinary Least Squares solution. Simulated temperature measurements are used in the inverse analysis. The nodal strategy relies on the availability of temperature measurements with fine spatial resolution and high frequency, typical of nowadays infrared cameras. The effects of both the measurement errors and of the model errors on the inverse problem solution are also analysed.
Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach.
Saa, Pedro A; Nielsen, Lars K
2016-01-01
Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285
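Approximate Bayesian Computation, the fitting strategy named in this abstract, replaces likelihood evaluation with simulate-and-compare. A minimal rejection-ABC sketch on a first-order decay model (the model, prior, and tolerance are illustrative assumptions, not the paper's methionine-cycle system):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a kinetic model: first-order decay x(t) = x0*exp(-k*t),
# where the rate constant k plays the role of an unknown kinetic parameter.
t = np.linspace(0.0, 5.0, 20)
k_true = 0.8
data = 10.0 * np.exp(-k_true * t) + 0.1 * rng.normal(size=t.size)

def simulate(k):
    return 10.0 * np.exp(-k * t)

# ABC rejection: draw from the prior, simulate, and keep the draw only if
# the simulated trajectory lands within a tolerance of the data -- no
# likelihood evaluation is ever required.
accepted = []
for _ in range(20000):
    k = rng.uniform(0.0, 2.0)                           # prior (assumed)
    dist = np.sqrt(np.mean((simulate(k) - data) ** 2))  # summary distance
    if dist < 0.15:                                     # tolerance epsilon
        accepted.append(k)
k_post = float(np.mean(accepted))
```

The accepted draws approximate the posterior over the rate constant; shrinking the tolerance tightens the approximation at the cost of more rejected simulations.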
ERIC Educational Resources Information Center
Tchumtchoua, Sylvie; Dey, Dipak K.
2012-01-01
This paper proposes a semiparametric Bayesian framework for the analysis of associations among multivariate longitudinal categorical variables in high-dimensional data settings. This type of data is frequent, especially in the social and behavioral sciences. A semiparametric hierarchical factor analysis model is developed in which the…
Hu, Yi; Ward, Michael P; Xia, Congcong; Li, Rui; Sun, Liqian; Lynn, Henry; Gao, Fenghua; Wang, Qizhi; Zhang, Shiqing; Xiong, Chenglong; Zhang, Zhijie; Jiang, Qingwu
2016-01-01
Schistosomiasis remains a major public health problem and causes substantial economic impact in east China, particularly along the Yangtze River Basin. Disease forecasting and surveillance can assist in the development and implementation of more effective intervention measures to control disease. In this study, we applied a Bayesian hierarchical spatio-temporal model to describe trends in schistosomiasis risk in Anhui Province, China, using annual parasitological and environmental data for the period 1997-2010. A computationally efficient approach-Integrated Nested Laplace Approximation-was used for model inference. A zero-inflated, negative binomial model best described the spatio-temporal dynamics of schistosomiasis risk. It predicted that the disease risk would generally be low and stable except for some specific, local areas during the period 2011-2014. High-risk counties were identified in the forecasting maps: three in which the risk remained high, and two in which risk would become high. The results indicated that schistosomiasis risk has been reduced to consistently low levels throughout much of this region of China; however, some counties were identified in which progress in schistosomiasis control was less than satisfactory. Whilst maintaining overall control, specific interventions in the future should focus on these refractive counties as part of a strategy to eliminate schistosomiasis from this region. PMID:27053447
Strauss, Jillian; Miranda-Moreno, Luis F; Morency, Patrick
2013-10-01
This study proposes a two-equation Bayesian modelling approach to simultaneously study cyclist injury occurrence and bicycle activity at signalized intersections as joint outcomes. This approach deals with the potential presence of endogeneity and unobserved heterogeneities and is used to identify factors associated with both cyclist injuries and volumes. Its application to identify high-risk corridors is also illustrated. Montreal, Quebec, Canada is the application environment, using an extensive inventory of a large sample of signalized intersections containing disaggregate motor-vehicle traffic volumes and bicycle flows, geometric design, traffic control and built environment characteristics in the vicinity of the intersections. Cyclist injury data for the period of 2003-2008 is used in this study. Also, manual bicycle counts were standardized using temporal and weather adjustment factors to obtain average annual daily volumes. Results confirm and quantify the effects of both bicycle and motor-vehicle flows on cyclist injury occurrence. Accordingly, more cyclists at an intersection translate into more cyclist injuries but lower injury rates due to the non-linear association between bicycle volume and injury occurrence. Furthermore, the results emphasize the importance of turning motor-vehicle movements. The presence of bus stops and total crosswalk length increase cyclist injury occurrence whereas the presence of a raised median has the opposite effect. Bicycle activity through intersections was found to increase as employment, number of metro stations, land use mix, area of commercial land use type, length of bicycle facilities and the presence of schools within 50-800 m of the intersection increase. Intersections with three approaches are expected to have fewer cyclists than those with four. Using Bayesian analysis, expected injury frequency and injury rates were estimated for each intersection and used to rank corridors. Corridors with high bicycle volumes
2008-01-01
Background Marine allopatric speciation is an enigma because pelagic larval dispersal can potentially connect disjunct populations, thereby preventing reproductive and morphological divergence. Here we present a new hierarchical approximate Bayesian computation model (HABC) that tests two hypotheses of marine allopatric speciation: 1.) "soft vicariance", where speciation involves fragmentation of a large widespread ancestral species range that was previously connected by long distance gene flow; and 2.) peripatric colonization, where speciations in peripheral archipelagos emerge from sweepstakes colonizations from central source regions. The HABC approach analyzes all the phylogeographic datasets at once in order to make across taxon-pair inferences about biogeographic processes while explicitly allowing for uncertainty in the demographic differences within each taxon-pair. Our method uses comparative phylogeographic data that consists of single locus mtDNA sequences from multiple co-distributed taxa containing pairs of central and peripheral populations. We use the method on two comparative phylogeographic data sets consisting of cowrie gastropod endemics co-distributed in the Hawaiian (11 taxon-pairs) and Marquesan archipelagos (7 taxon-pairs). Results Given the Marquesan data, we find strong evidence of simultaneous colonization across all seven cowrie gastropod endemics co-distributed in the Marquesas. In contrast, the lower sample sizes in the Hawaiian data lead to greater uncertainty associated with the Hawaiian estimates. Although the hyper-parameter estimates point to soft vicariance in a subset of the 11 Hawaiian taxon-pairs, the hyper-prior and hyper-posterior are too similar to make a definitive conclusion. Both results are not inconsistent with what is known about the geologic history of the archipelagos. Simulations verify that our method can successfully distinguish these two histories across a wide range of conditions given sufficient sampling
Thomsen, Nanna I; Binning, Philip J; McKnight, Ursula S; Tuxen, Nina; Bjerg, Poul L; Troldborg, Mads
2016-05-01
A key component in risk assessment of contaminated sites is in the formulation of a conceptual site model (CSM). A CSM is a simplified representation of reality and forms the basis for the mathematical modeling of contaminant fate and transport at the site. The CSM should therefore identify the most important site-specific features and processes that may affect the contaminant transport behavior at the site. However, the development of a CSM will always be associated with uncertainties due to limited data and lack of understanding of the site conditions. CSM uncertainty is often found to be a major source of model error and it should therefore be accounted for when evaluating uncertainties in risk assessments. We present a Bayesian belief network (BBN) approach for constructing CSMs and assessing their uncertainty at contaminated sites. BBNs are graphical probabilistic models that are effective for integrating quantitative and qualitative information, and thus can strengthen decisions when empirical data are lacking. The proposed BBN approach facilitates a systematic construction of multiple CSMs, and then determines the belief in each CSM using a variety of data types and/or expert opinion at different knowledge levels. The developed BBNs combine data from desktop studies and initial site investigations with expert opinion to assess which of the CSMs are more likely to reflect the actual site conditions. The method is demonstrated on a Danish field site, contaminated with chlorinated ethenes. Four different CSMs are developed by combining two contaminant source zone interpretations (presence or absence of a separate phase contamination) and two geological interpretations (fractured or unfractured clay till). The beliefs in each of the CSMs are assessed sequentially based on data from three investigation stages (a screening investigation, a more detailed investigation, and an expert consultation) to demonstrate that the belief can be updated as more information
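The sequential belief updating described in this abstract is, at its core, Bayes' rule applied repeatedly to a set of competing CSM hypotheses. A minimal sketch with invented likelihoods (none of the numbers come from the Danish field site study):

```python
# Four candidate conceptual site models (CSMs) with a uniform prior belief.
# Each investigation stage supplies the probability of the observed
# evidence under each CSM; all numbers are illustrative assumptions.
def update(belief, likelihoods):
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]  # renormalise (Bayes' rule)

belief = [0.25, 0.25, 0.25, 0.25]
stages = [
    [0.9, 0.6, 0.3, 0.2],  # screening investigation
    [0.8, 0.5, 0.2, 0.1],  # detailed investigation
    [0.9, 0.4, 0.3, 0.1],  # expert consultation
]
for likelihoods in stages:
    belief = update(belief, likelihoods)
```

After the three stages the belief concentrates on the CSM that best explains the accumulated evidence, mirroring how the paper's BBN updates belief as more information becomes available.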
A Nonparametric Bayesian Approach to Seismic Hazard Modeling Using the ETAS Framework
NASA Astrophysics Data System (ADS)
Ross, G.
2015-12-01
The epidemic-type aftershock sequence (ETAS) model is one of the most popular tools for modeling seismicity and quantifying risk in earthquake-prone regions. Under the ETAS model, the occurrence times of earthquakes are treated as a self-exciting Poisson process where each earthquake briefly increases the probability of subsequent earthquakes occurring soon afterwards, which captures the fact that large mainshocks tend to produce long sequences of aftershocks. A triggering kernel controls the amount by which the probability increases based on the magnitude of each earthquake, and the rate at which it then decays over time. This triggering kernel is usually chosen heuristically, to match the parametric form of the modified Omori law for aftershock decay. However, recent work has questioned whether this is an appropriate choice. Since the choice of kernel has a large impact on the predictions made by the ETAS model, avoiding misspecification is crucially important. We present a novel nonparametric version of ETAS which avoids making parametric assumptions, and instead learns the correct specification from the data itself. Our approach is based on the Dirichlet process, a modern class of Bayesian prior distributions that allows for efficient inference over an infinite dimensional space of functions. We show how our nonparametric ETAS model can be fit to data, and present results demonstrating that the fit is greatly improved compared to the standard parametric specification. Additionally, we explain how our model can be used to perform probabilistic declustering of earthquake catalogs, to classify earthquakes as being either aftershocks or mainshocks, and to learn the causal relations between pairs of earthquakes.
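The parametric triggering kernel that this abstract's nonparametric model replaces is typically the modified-Omori form. A sketch of the standard parametric ETAS conditional intensity (parameter values below are illustrative, not fitted to any catalog):

```python
import math

# Standard parametric ETAS conditional intensity:
#   lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p
# i.e. a background rate plus modified-Omori decay terms, one per past event.
mu, K, alpha, c, p, m0 = 0.1, 0.05, 1.0, 0.01, 1.1, 3.0  # illustrative values

def intensity(t, history):
    """history: (time, magnitude) pairs of past earthquakes."""
    rate = mu  # background seismicity
    for ti, mi in history:
        if ti < t:
            # productivity grows with magnitude, decays with elapsed time
            rate += K * math.exp(alpha * (mi - m0)) / (t - ti + c) ** p
    return rate

events = [(0.0, 5.0), (1.0, 4.0)]  # (time, magnitude)
```

The Dirichlet-process approach described in the abstract replaces this fixed functional form with a triggering kernel learned from the data itself.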
A Bayesian Nonparametric Approach to Test Equating
ERIC Educational Resources Information Center
Karabatsos, George; Walker, Stephen G.
2009-01-01
A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…
Smania, G; Baiardi, P; Ceci, A; Cella, M; Magni, P
2016-08-01
This study presents a pharmacokinetic-pharmacodynamic based clinical trial simulation framework for evaluating the performance of a fixed-sample Bayesian design (BD) and two alternative Bayesian sequential designs (BSDs) (i.e., a non-hierarchical (NON-H) and a semi-hierarchical (SEMI-H) one). Prior information was elicited from adult trials and weighted based on the expected similarity of response to treatment between the pediatric and adult populations. Study designs were evaluated in terms of: type I and II errors, sample size per arm (SS), trial duration (TD), and estimate precision. No substantial differences were observed between NON-H and SEMI-H. BSDs require, on average, smaller SS and TD compared to the BD, which, on the other hand, guarantees higher estimate precision. When large differences between children and adults are expected, BSDs can return very large SS. Bayesian approaches appear to outperform their frequentist counterparts in the design of pediatric trials even when little weight is given to prior information from adults. PMID:27530374
Gracia, Enrique; López-Quílez, Antonio; Marco, Miriam; Lladosa, Silvia; Lila, Marisol
2014-01-01
This paper uses spatial data of cases of intimate partner violence against women (IPVAW) to examine neighborhood-level influences on small-area variations in IPVAW risk in a police district of the city of Valencia (Spain). To analyze area variations in IPVAW risk and its association with neighborhood-level explanatory variables we use a Bayesian spatial random-effects modeling approach, as well as disease mapping methods to represent risk probabilities in each area. Analyses show that IPVAW cases are more likely in areas of high immigrant concentration, high public disorder and crime, and high physical disorder. Results also show a spatial component indicating remaining variability attributable to spatially structured random effects. Bayesian spatial modeling offers a new perspective to identify IPVAW high and low risk areas, and provides a new avenue for the design of better-informed prevention and intervention strategies. PMID:24413701
Bayesian kinematic earthquake source models
NASA Astrophysics Data System (ADS)
Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.
2009-12-01
Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
Bayesian Data-Model Fit Assessment for Structural Equation Modeling
ERIC Educational Resources Information Center
Levy, Roy
2011-01-01
Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…
Analysis of housing price by means of STAR models with neighbourhood effects: a Bayesian approach
NASA Astrophysics Data System (ADS)
Beamonte, Asuncion; Gargallo, Pilar; Salvador, Manuel
2010-06-01
In this paper, we extend the Bayesian methodology introduced by Beamonte et al. (Stat Modelling 8:285-311, 2008) for the estimation and comparison of spatio-temporal autoregressive models (STAR) with neighbourhood effects, providing a more general treatment that uses larger and denser nets for the number of spatial and temporal influential neighbours and continuous distributions for their smoothing weights. This new treatment also reduces the computational time and the RAM requirements of the estimation algorithm in Beamonte et al. (Stat Modelling 8:285-311, 2008). The procedure is illustrated by an application to the Zaragoza (Spain) real estate market, improving the goodness of fit and the out-of-sample behaviour of the model thanks to a more flexible estimation of the neighbourhood parameters.
Kercel, S.W.
1999-11-07
For several reasons, Bayesian parameter estimation is superior to other methods for inductively learning a model for an anticipatory system. Since it exploits prior knowledge, the analysis begins from a more advantageous starting point than other methods. Also, since "nuisance parameters" can be removed from the Bayesian analysis, the description of the model need not be as complete as is necessary for such methods as matched filtering. In the limit of perfectly random noise and a perfect description of the model, the signal-to-noise ratio improves as the square root of the number of samples in the data. Even with the imperfections of real-world data, Bayesian methods approach this ideal limit of performance more closely than other methods. These capabilities provide a strategy for addressing a major unsolved problem in pump operation: the identification of precursors of cavitation. Cavitation causes immediate degradation of pump performance and ultimate destruction of the pump. However, the most efficient point to operate a pump is just below the threshold of cavitation. It might be hoped that a straightforward method to minimize pump cavitation damage would be to simply adjust the operating point until the inception of cavitation is detected and then to slightly readjust the operating point to let the cavitation vanish. However, due to the continuously evolving state of the fluid moving through the pump, the threshold of cavitation tends to wander. What is needed is to anticipate cavitation, and this requires the detection and identification of precursor features that occur just before cavitation starts.
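The square-root-of-N signal-to-noise claim in this abstract is easy to check empirically by averaging independent noisy samples of a fixed signal. A sketch (the Gaussian-noise setup is an assumption standing in for the pump-monitoring data, not taken from the report):

```python
import numpy as np

rng = np.random.default_rng(2)

# Empirical check of the sqrt(N) law: averaging N noisy samples of a
# constant signal improves the signal-to-noise ratio by about sqrt(N).
signal = 1.0

def snr_after_averaging(n, trials=2000):
    noise = rng.normal(scale=1.0, size=(trials, n))
    estimates = signal + noise.mean(axis=1)  # one averaged estimate per trial
    return signal / estimates.std()

# Quadrupling the sample count should roughly double the SNR.
ratio = snr_after_averaging(400) / snr_after_averaging(100)
```

This is the ideal limit the abstract refers to; real-world data with structured (non-white) noise falls short of it, which is where the Bayesian treatment of prior knowledge and nuisance parameters earns its keep.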
Gas turbine engine prognostics using Bayesian hierarchical models: A variational approach
NASA Astrophysics Data System (ADS)
Zaidan, Martha A.; Mills, Andrew R.; Harrison, Robert F.; Fleming, Peter J.
2016-03-01
Prognostics is an emerging requirement of modern health monitoring that aims to increase the fidelity of failure-time predictions by the appropriate use of sensory and reliability information. In the aerospace industry it is a key technology to reduce life-cycle costs, improve reliability and asset availability for a diverse fleet of gas turbine engines. In this work, a Bayesian hierarchical model is selected to utilise fleet data from multiple assets to perform probabilistic estimation of remaining useful life (RUL) for civil aerospace gas turbine engines. The hierarchical formulation allows Bayesian updates of an individual predictive model to be made, based upon data received asynchronously from a fleet of assets with different in-service lives and for the entry of new assets into the fleet. In this paper, variational inference is applied to the hierarchical formulation to overcome the computational and convergence concerns that are raised by the numerical sampling techniques needed for inference in the original formulation. The algorithm is tested on synthetic data, where the quality of approximation is shown to be satisfactory with respect to prediction performance, computational speed, and ease of use. A case study of in-service gas turbine engine data demonstrates the value of integrating fleet data for accurately predicting degradation trajectories of assets.
Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.
2013-11-15
Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models without accounting for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking negative-log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
A Bayesian approach to a logistic regression model with incomplete information.
Choi, Taeryon; Schervish, Mark J; Schmitt, Ketra A; Small, Mitchell J
2008-06-01
We consider a set of independent Bernoulli trials with possibly different success probabilities that depend on covariate values. However, the available data consist only of aggregate numbers of successes among subsets of the trials along with all of the covariate values. We still wish to estimate the parameters of a modeled relationship between the covariates and the success probabilities, e.g., a logistic regression model. In this article, estimation of the parameters is made from a Bayesian perspective by using a Markov chain Monte Carlo algorithm based only on the available data. The proposed methodology is applied to both simulation studies and real data from a dose-response study of a toxic chemical, perchlorate.
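A sketch of the Markov chain Monte Carlo idea in this abstract, simplified so that each subset of trials shares a single covariate value and the aggregate success count per subset is binomial (the article treats the harder case where covariates vary within a subset; all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Aggregate-data setup: we observe only the total successes per subset,
# not the individual Bernoulli outcomes.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # one covariate per subset
n = np.full(5, 200)                        # trials per subset
b_true = np.array([0.3, 1.2])              # intercept, slope
p_true = 1.0 / (1.0 + np.exp(-(b_true[0] + b_true[1] * x)))
y = rng.binomial(n, p_true)                # only aggregates are observed

def log_post(b):
    prob = 1.0 / (1.0 + np.exp(-(b[0] + b[1] * x)))
    ll = np.sum(y * np.log(prob) + (n - y) * np.log1p(-prob))
    return ll - 0.5 * (b @ b) / 100.0      # broad Gaussian prior (assumed)

# Random-walk Metropolis-Hastings over (intercept, slope).
b, lp = np.zeros(2), log_post(np.zeros(2))
draws = []
for _ in range(8000):
    prop = b + 0.1 * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        b, lp = prop, lp_prop
    draws.append(b)
b_hat = np.mean(draws[2000:], axis=0)      # posterior mean after burn-in
```

When covariates differ within a subset, the binomial likelihood above no longer applies and the aggregate count follows a Poisson-binomial distribution, which is the complication the article's MCMC algorithm is built to handle.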
NASA Astrophysics Data System (ADS)
Cha, YoonKyung; Soon Park, Seok; Won Lee, Hye; Stow, Craig A.
2016-01-01
Modeling to accurately predict river phytoplankton distribution and abundance is important in water quality and resource management. Nevertheless, the complex nature of eutrophication processes in highly connected river systems makes the task challenging. To model dynamics of river phytoplankton, represented by chlorophyll a (Chl a) concentration, we propose a Bayesian hierarchical model that explicitly accommodates seasonality and upstream-downstream spatial gradient in the structure. The utility of our model is demonstrated with an application to the Nakdong River (South Korea), which is a eutrophic, intensively regulated river, but functions as an irreplaceable water source for more than 13 million people. Chl a is modeled with two manageable factors, river flow, and total phosphorus (TP) concentration. Our model results highlight the importance of taking seasonal and spatial context into account when describing flow regimes and phosphorus delivery in rivers. A contrasting positive Chl a-flow relationship across stations versus negative Chl a-flow slopes that arose when Chl a was modeled on a station-month basis is an illustration of Simpson's paradox, which necessitates modeling Chl a-flow relationships decomposed into seasonal and spatial components. Similar Chl a-TP slopes among stations and months suggest that, with the flow effect removed, positive TP effects on Chl a are uniform regardless of the season and station in the river. Our model prediction successfully captured the shift in the spatial and monthly patterns of Chl a.
A Bayesian network modeling approach to forecasting the 21st century worldwide status of polar bears
NASA Astrophysics Data System (ADS)
Amstrup, Steven C.; Marcot, Bruce G.; Douglas, David C.
To inform the U.S. Fish and Wildlife Service decision on whether or not to list polar bears as threatened under the Endangered Species Act (ESA), we projected the status of the world's polar bears (Ursus maritimus) for decades centered on future years 2025, 2050, 2075, and 2095. We defined four ecoregions based on current and projected sea ice conditions: seasonal ice, Canadian Archipelago, polar basin divergent, and polar basin convergent ecoregions. We incorporated general circulation model projections of future sea ice into a Bayesian network (BN) model structured around the factors considered in ESA decisions. This first-generation BN model combined empirical data, interpretations of data, and professional judgments of one polar bear expert into a probabilistic framework that identifies causal links between environmental stressors and polar bear responses. We provide guidance regarding steps necessary to refine the model, including adding inputs from other experts. The BN model projected extirpation of polar bears from the seasonal ice and polar basin divergent ecoregions, where ≈2/3 of the world's polar bears currently occur, by mid-century. Projections were less dire in other ecoregions. Decline in ice habitat was the overriding factor driving the model outcomes. Although this is a first-generation model, the dependence of polar bears on sea ice is universally accepted, and the observed sea ice decline is faster than models suggest. Therefore, incorporating judgments of multiple experts in a final model is not expected to fundamentally alter the outlook for polar bears described here.
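The causal stressor-response links in such a BN reduce, for a single link, to Bayes' rule applied over discrete states. A toy two-node sketch (all node names and probabilities here are invented for illustration, not values from the polar bear model):

```python
def posterior(prior, likelihood, observed):
    """P(cause | effect) by enumeration in a two-node network cause -> effect."""
    joint = {c: prior[c] * likelihood[c][observed] for c in prior}
    z = sum(joint.values())
    return {c: v / z for c, v in joint.items()}

# Hypothetical link: sea-ice loss -> bear body condition.
prior = {"ice_loss": 0.7, "no_ice_loss": 0.3}
likelihood = {
    "ice_loss": {"poor_condition": 0.8, "good_condition": 0.2},
    "no_ice_loss": {"poor_condition": 0.3, "good_condition": 0.7},
}
post = posterior(prior, likelihood, "poor_condition")
```

A full BN chains many such conditional probability tables; observing a response node updates beliefs about every upstream stressor in the same way.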
NASA Astrophysics Data System (ADS)
Humphrey, Greer B.; Gibbs, Matthew S.; Dandy, Graeme C.; Maier, Holger R.
2016-09-01
Monthly streamflow forecasts are needed to support water resources decision making in the South East of South Australia, where baseflow represents a significant proportion of the total streamflow and soil moisture and groundwater are important predictors of runoff. To address this requirement, the utility of a hybrid monthly streamflow forecasting approach is explored, whereby simulated soil moisture from the GR4J conceptual rainfall-runoff model is used to represent initial catchment conditions in a Bayesian artificial neural network (ANN) statistical forecasting model. To assess the performance of this hybrid forecasting method, a comparison is undertaken of the relative performances of the Bayesian ANN, the GR4J conceptual model and the hybrid streamflow forecasting approach for producing 1-month ahead streamflow forecasts at three key locations in the South East of South Australia. Particular attention is paid to the quantification of uncertainty in each of the forecast models and the potential for reducing forecast uncertainty by using the hybrid approach is considered. Case study results suggest that the hybrid models developed in this study are able to take advantage of the complementary strengths of both the ANN models and the GR4J conceptual models. This was particularly the case when forecasting high flows, where the hybrid models were shown to outperform the two individual modelling approaches in terms of the accuracy of the median forecasts, as well as reliability and resolution of the forecast distributions. In addition, the forecast distributions generated by the hybrid models were up to 8 times more precise than those based on climatology, thus providing a significant improvement on the information currently available to decision makers.
The approach of Bayesian model indicates media awareness of medical errors
NASA Astrophysics Data System (ADS)
Ravichandran, K.; Arulchelvan, S.
2016-06-01
This research study brings out the factors behind the increase in medical malpractice in the Indian subcontinent in the present-day environment and the impact of television media awareness of it. Increased media reporting of medical malpractice and errors leads hospitals to take corrective action and improve the quality of the medical services they provide. The Cultivation Theory model can be used to measure the influence of media in creating awareness of medical errors. Patients' perceptions of various errors rendered by the medical industry in different parts of India were taken up for this study. A Bayesian method was used for data analysis; it gives absolute values that indicate satisfaction of the recommended values. The study also examines the impact of family doctors maintaining families' medical records online on reducing medical malpractice, which underscores the importance of service quality in the medical industry through ICT.
Koukounari, Artemis; Estambale, Benson B A; Njagi, J Kiambo; Cundill, Bonnie; Ajanga, Anthony; Crudder, Christopher; Otido, Julius; Jukes, Matthew C H; Clarke, Siân E; Brooker, Simon
2008-12-01
Anaemia is multi-factorial in origin and disentangling its aetiology remains problematic, with surprisingly few studies investigating the relative contribution of different parasitic infections to anaemia amongst schoolchildren. We report cross-sectional data on haemoglobin, malaria parasitaemia, helminth infection and undernutrition among 1523 schoolchildren enrolled in classes 5 and 6 (aged 10-21 years) in 30 primary schools in western Kenya. Bayesian hierarchical modelling was used to investigate putative relationships. Children infected with Plasmodium falciparum or with a heavy Schistosoma mansoni infection, stunted children and girls were found to have lower haemoglobin concentrations. Children heavily infected with S. mansoni were also more likely to be anaemic compared with uninfected children. This study further highlights the importance of malaria and intestinal schistosomiasis as contributors to reduced haemoglobin levels among schoolchildren and helps guide the implementation of integrated school health programmes in areas of differing parasite transmission.
Multiple organ definition in CT using a Bayesian approach for 3D model fitting
NASA Astrophysics Data System (ADS)
Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.
1995-08-01
Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models (specifically, the liver and kidney) that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.
Bayesian Networks for Social Modeling
Whitney, Paul D.; White, Amanda M.; Walsh, Stephen J.; Dalton, Angela C.; Brothers, Alan J.
2011-03-28
This paper describes a body of work developed over the past five years. The work addresses the use of Bayesian network (BN) models for representing and predicting social/organizational behaviors. The topics covered include model construction, validation, and use. These topics span the bulk of the lifetime of such a model, beginning with construction, moving to validation and other aspects of model 'critiquing', and finally demonstrating how the modeling approach might be used to inform policy analysis. To conclude, we discuss limitations of using BNs for this activity and suggest remedies to address those limitations. The primary benefits of using a well-developed computational, mathematical, and statistical modeling structure, such as BNs, are: 1) there are significant computational, theoretical, and capability bases on which to build; and 2) the ability to empirically critique the model, and potentially to evaluate competing models of a social/behavioral phenomenon.
A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods
ERIC Educational Resources Information Center
Shiffrin, Richard M.; Lee, Michael D.; Kim, Woojae; Wagenmakers, Eric-Jan
2008-01-01
This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues…
Reducing model structural uncertainty in predictions for ungauged basins via Bayesian approach.
NASA Astrophysics Data System (ADS)
Prieto, Cristina; Le Vine, Nataliya; Vitolo, Claudia; García, Eduardo; Medina, Raúl
2016-04-01
A catchment is a complex system where a multitude of interrelated energy, water and vegetation processes occur at different temporal and spatial scales. A rainfall-runoff model is a simplified representation of the system, and serves as a hypothesis about the inner workings of the catchment. In predictions for ungauged basins, a common practice is to use a pre-selected, assumed-to-be-perfect model structure to represent all catchments under analysis. However, it is unlikely that the same model structure is appropriate for diverse catchments due to the 'uniqueness of the place'. At the same time, there is no obvious justification to select a single model structure as a suitable description of the system. The contribution of this research is a move forward in the 'one size fits all' problem for predicting flows in ungauged basins. We present a statistical methodology for regionalisation that considers the information given by different hydrological model structures. First, the information to be regionalised is compactly represented via Principal Component Analysis. Second, the most significant principal components are regionalised using a non-linear regionalisation method based on Random Forests. Third, a regionalisation error structure is derived from the gauged catchments, to be used in the Bayesian conditioning of the rainfall-runoff structures and their parameters. The methodological developments are demonstrated for predicting flows in ungauged basins of Northern Spain, and the results show that the methodology improves flow prediction.
Bayesian Uncertainty Analyses Via Deterministic Model
NASA Astrophysics Data System (ADS)
Krzysztofowicz, R.
2001-05-01
Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
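Under normal-linear assumptions the Bayesian Processor of Output has a closed form; the following sketch assumes a Gaussian prior on the predictand W and a linear calibration X = a + b·W + ε of the model output, which is a simplification of this illustration rather than a restriction of the general framework:

```python
def bpo_posterior(prior_mean, prior_var, a, b, noise_var, x):
    """Posterior of predictand W given deterministic-model output x,
    assuming prior W ~ N(prior_mean, prior_var) and X = a + b*W + eps,
    eps ~ N(0, noise_var). Standard conjugate normal-normal update."""
    precision = 1.0 / prior_var + b * b / noise_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + b * (x - a) / noise_var)
    return post_mean, post_var
```

With an unbiased model (a = 0, b = 1) whose error variance equals the prior variance, the posterior variance is half the prior variance: the model output and the prior each contribute equal weight.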
NASA Astrophysics Data System (ADS)
Hopcroft, Peter O.; Gallagher, Kerry; Pain, Christopher C.
2009-08-01
Collections of suitably chosen borehole profiles can be used to infer large-scale trends in ground-surface temperature (GST) histories for the past few hundred years. These reconstructions are based on a large database of carefully selected borehole temperature measurements from around the globe. Since non-climatic thermal influences are difficult to identify, representative temperature histories are derived by averaging individual reconstructions to minimize the influence of these perturbing factors. This may lead to three potentially important drawbacks: the net signal of non-climatic factors may not be zero, meaning that the average does not reflect the best estimate of past climate; the averaging over large areas restricts the useful amount of more local climate change information available; and the inversion methods used to reconstruct the past temperatures at each site must be mathematically identical and are therefore not necessarily best suited to all data sets. In this work, we avoid these issues by using a Bayesian partition model (BPM), which is computed using a trans-dimensional form of a Markov chain Monte Carlo algorithm. This then allows the number and spatial distribution of different GST histories to be inferred from a given set of borehole data by partitioning the geographical area into discrete partitions. Profiles that are heavily influenced by non-climatic factors will be partitioned separately. Conversely, profiles with climatic information, which is consistent with neighbouring profiles, will then be inferred to lie in the same partition. The geographical extent of these partitions then leads to information on the regional extent of the climatic signal. In this study, three case studies are described using synthetic and real data. The first demonstrates that the Bayesian partition model method is able to correctly partition a suite of synthetic profiles according to the inferred GST history. In the second, more realistic case, a series of
Hu, Yi; Ward, Michael P.; Xia, Congcong; Li, Rui; Sun, Liqian; Lynn, Henry; Gao, Fenghua; Wang, Qizhi; Zhang, Shiqing; Xiong, Chenglong; Zhang, Zhijie; Jiang, Qingwu
2016-01-01
Schistosomiasis remains a major public health problem and causes substantial economic impact in east China, particularly along the Yangtze River Basin. Disease forecasting and surveillance can assist in the development and implementation of more effective intervention measures to control disease. In this study, we applied a Bayesian hierarchical spatio-temporal model to describe trends in schistosomiasis risk in Anhui Province, China, using annual parasitological and environmental data for the period 1997–2010. A computationally efficient approach, Integrated Nested Laplace Approximation, was used for model inference. A zero-inflated, negative binomial model best described the spatio-temporal dynamics of schistosomiasis risk. It predicted that the disease risk would generally be low and stable except for some specific, local areas during the period 2011–2014. High-risk counties were identified in the forecasting maps: three in which the risk remained high, and two in which risk would become high. The results indicated that schistosomiasis risk has been reduced to consistently low levels throughout much of this region of China; however, some counties were identified in which progress in schistosomiasis control was less than satisfactory. Whilst maintaining overall control, specific interventions in the future should focus on these refractive counties as part of a strategy to eliminate schistosomiasis from this region. PMID:27053447
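The zero-inflated negative binomial likelihood used above can be written down directly; a sketch in a mean-dispersion parameterisation (the symbols pi, mu and k are this sketch's naming, not the paper's):

```python
import math

def zinb_pmf(y, pi, mu, k):
    """Zero-inflated negative binomial: a structural zero with probability pi,
    otherwise NB with mean mu and dispersion k (variance mu + mu**2 / k).
    Computed in log-space to avoid gamma-function overflow for large y."""
    log_nb = (math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
              + k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)))
    nb = math.exp(log_nb)
    return pi + (1.0 - pi) * nb if y == 0 else (1.0 - pi) * nb
```

The extra mass at zero is what lets the model describe counties that report no cases at all more often than a plain negative binomial would predict.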
Awate, Suyash P; Radhakrishnan, Thyagarajan
2015-01-01
In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an ad hoc sequence of algorithms with many free parameters that are tuned visually. This leads to loss of reproducibility of the results. This paper proposes a new framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art.
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Hughes, J. D.; Chen, J.; Dutta, D.; Vaze, J.
2014-12-01
Achieving predictive success is a major challenge in hydrological modelling. Predictive metrics indicate whether models and parameters are appropriate for impact assessment, design, planning and management, forecasting and underpinning policy. It is often found that very different parameter sets and model structures are equally acceptable system representations (commonly described as equifinality). Furthermore, parameters that produce the best goodness of fit during a calibration period may often yield poor results outside of that period. A calibration method is presented that uses a recursive Bayesian filter to estimate the probability of consistent performance of parameter sets in different sub-periods. The result is a probability distribution for each specified performance interval. This generic method utilises more information within time-series data than what is typically used for calibrations, and could be adopted for different types of time-series modelling applications. Where conventional calibration methods implicitly identify the best performing parameterisations on average, the new method looks at the consistency of performance during sub-periods. The proposed calibration method, therefore, can be used to avoid heavy weighting toward rare periods of good agreement. The method is trialled in a conceptual river system model called the Australian Water Resources Assessments River (AWRA-R) model in the Murray-Darling Basin, Australia. The new method is tested via cross-validation and results are compared to a traditional split-sample calibration/validation to evaluate the new technique's ability to predict daily streamflow. The results showed that the new calibration method could produce parameterisations that performed better in validation periods than optimum calibration parameter sets. The method shows ability to improve on predictive performance and provide more realistic flux terms compared to traditional split-sample calibration methods.
Bayesian approach to avoiding track seduction
NASA Astrophysics Data System (ADS)
Salmond, David J.; Everett, Nicholas O.
2002-08-01
The problem of maintaining track on a primary target in the presence of spurious objects is addressed. Recursive and batch filtering approaches are developed. For the recursive approach, a Bayesian track splitting filter is derived which spawns candidate tracks if there is a possibility of measurement misassociation. The filter evaluates the probability of each candidate track being associated with the primary target. The batch filter is a Markov-chain Monte Carlo (MCMC) algorithm which fits the observed data sequence to models of target dynamics and measurement-track association. Simulation results are presented.
A Bayesian approach to modeling 2D gravity data using polygon states
NASA Astrophysics Data System (ADS)
Titus, W. J.; Titus, S.; Davis, J. R.
2015-12-01
We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but that it has no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match properties of the object.
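Once posterior polygon samples are in hand, the occupancy probability is straightforward to compute as the fraction of draws containing a query point. A sketch using even-odd ray casting (the polygon samples here are invented stand-ins for actual MCMC draws):

```python
def point_in_polygon(pt, poly):
    """Even-odd ray casting: does pt lie inside the polygon (list of (x, y))?"""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):  # this edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def occupancy(pt, polygon_samples):
    """Fraction of posterior polygon draws that contain pt."""
    return sum(point_in_polygon(pt, p) for p in polygon_samples) / len(polygon_samples)
```

Evaluating `occupancy` on a grid of points yields the occupancy-probability map the abstract describes.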
Cha, YoonKyung; Kim, Young Mo; Choi, Jae-Woo; Sthiannopkao, Suthipong; Cho, Kyung Hwa
2016-01-01
In the Mekong River basin, groundwater from tube-wells is a major drinking water source. However, arsenic (As) contamination in groundwater resources has become a critical issue in the watershed. In this study, As species such as total As (AsTOT), As(III), and As(V) were monitored across the watershed to investigate their characteristics and inter-relationships with water quality parameters, including pH and redox potential (Eh). The data illustrated a dramatic change in the relationship between AsTOT and Eh over a specific Eh range, suggesting the importance of Eh in predicting AsTOT. Thus, a Bayesian change-point model was developed to predict AsTOT concentrations based on Eh and pH, and to determine changes in the AsTOT-Eh relationship. The model captured the Eh change-point (∼ -100 ± 15 mV), which was compatible with the data. Importantly, the inclusion of this change-point in the model resulted in improved model fit and prediction accuracy; AsTOT concentrations were strongly negatively related to Eh values higher than the change-point. The process underlying this relationship was subsequently posited to be the reductive dissolution of mineral oxides and As release. Overall, AsTOT showed a weak positive relationship with Eh at the lower range, similar to relationships commonly observed in the Mekong River basin delta. These results are expected to serve as a guide for establishing public health strategies in the Mekong River Basin.
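The paper estimates the change-point within a Bayesian model; as a simpler non-Bayesian stand-in that captures the same idea, one can grid-search candidate change-points and fit a separate least-squares line on each side. All names and the piecewise test data below are illustrative:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def sse(xs, ys, a, b):
    """Sum of squared residuals of the fitted line."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def change_point_fit(xs, ys, candidates):
    """Pick the change-point minimising total SSE of two linear segments."""
    best = None
    for cp in candidates:
        lo = [(x, y) for x, y in zip(xs, ys) if x <= cp]
        hi = [(x, y) for x, y in zip(xs, ys) if x > cp]
        if len(lo) < 3 or len(hi) < 3:
            continue  # need enough points on each side to fit a line
        a1, b1 = fit_line([p[0] for p in lo], [p[1] for p in lo])
        a2, b2 = fit_line([p[0] for p in hi], [p[1] for p in hi])
        total = (sse([p[0] for p in lo], [p[1] for p in lo], a1, b1)
                 + sse([p[0] for p in hi], [p[1] for p in hi], a2, b2))
        if best is None or total < best[0]:
            best = (total, cp, b1, b2)
    return best  # (sse, change_point, slope_below, slope_above)
```

A Bayesian version would place priors on the change-point and slopes and sample them jointly, which is what yields the ± 15 mV credible band reported above.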
A hierarchical Bayesian modeling approach to searching and stopping in multi-attribute judgment.
van Ravenzwaaij, Don; Moore, Chris P; Lee, Michael D; Newell, Ben R
2014-01-01
In most decision-making situations, there is a plethora of information potentially available to people. Deciding what information to gather and what to ignore is no small feat. How do decision makers determine in what sequence to collect information and when to stop? In two experiments, we administered a version of the German cities task developed by Gigerenzer and Goldstein (1996), in which participants had to decide which of two cities had the larger population. Decision makers were not provided with the names of the cities, but they were able to collect different kinds of cues for both response alternatives (e.g., "Does this city have a university?") before making a decision. Our experiments differed in whether participants were free to determine the number of cues they examined. We demonstrate that a novel model, using hierarchical latent mixtures and Bayesian inference (Lee & Newell, ) provides a more complete description of the data from both experiments than simple conventional strategies, such as the take-the-best or the Weighted Additive heuristics. PMID:24646326
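One of the conventional strategies the hierarchical mixture model is compared against, take-the-best, is simple to state in code. A sketch (the cue names in the comment are invented examples):

```python
def take_the_best(validities, cues_a, cues_b):
    """Examine cues in descending validity and decide on the first cue that
    discriminates between the two options; return None to indicate a guess."""
    order = sorted(range(len(validities)), key=lambda i: -validities[i])
    for i in order:
        if cues_a[i] != cues_b[i]:
            return "a" if cues_a[i] else "b"
    return None

# Hypothetical city cues, e.g. [has_university, is_capital, has_airport],
# with validities giving how often each cue picks the larger city.
```

The heuristic stops searching at the first discriminating cue, which is exactly the stopping behavior the hierarchical Bayesian model evaluates against participants' actual search sequences.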
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
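The N-mixture marginal likelihood for one site's replicated counts can be written by summing over the latent abundance. A sketch, assuming Poisson abundance and binomial detection; the truncation bound `n_max` is this sketch's choice:

```python
import math

def nmix_site_loglik(counts, lam, p, n_max=80):
    """log P(counts | lam, p): latent abundance N ~ Poisson(lam), and each
    replicate count ~ Binomial(N, p); the latent N is summed out up to a
    truncation bound n_max. Assumes lam > 0 and 0 < p < 1."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        # Poisson log-pmf of the latent abundance
        log_term = n * math.log(lam) - lam - math.lgamma(n + 1)
        for y in counts:
            # Binomial log-pmf of each replicated count given N = n
            log_term += (math.lgamma(n + 1) - math.lgamma(y + 1)
                         - math.lgamma(n - y + 1)
                         + y * math.log(p) + (n - y) * math.log(1 - p))
        total += math.exp(log_term)
    return math.log(total)
```

The closure assumption enters through the single shared N across replicates; pseudo-replication, as studied above, breaks exactly that sharing.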
Hierarchical Bayesian models of cognitive development.
Glassen, Thomas; Nitsch, Verena
2016-06-01
This article provides an introductory overview of the state of research on Hierarchical Bayesian Modeling in cognitive development. First, a brief historical summary and a definition of hierarchies in Bayesian modeling are given. Subsequently, some model structures are described based on four examples in the literature. These are models for the development of the shape bias, for learning ontological kinds and causal schemata as well as for the categorization of objects. The Bayesian modeling approach is then compared with the connectionist and nativist modeling paradigms and considered in view of Marr's (1982) three description levels of information-processing mechanisms. In this context, psychologically plausible algorithms and ideas of their neural implementation are presented. In addition to criticism and limitations of the approach, research needs are identified. PMID:27222110
A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins
Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predic...
The Bayesian Revolution Approaches Psychological Development
ERIC Educational Resources Information Center
Shultz, Thomas R.
2007-01-01
This commentary reviews five articles that apply Bayesian ideas to psychological development, some with psychology experiments, some with computational modeling, and some with both experiments and modeling. The reviewed work extends the current Bayesian revolution into tasks often studied in children, such as causal learning and word learning, and…
An Integrated Bayesian Model for DIF Analysis
ERIC Educational Resources Information Center
Soares, Tufi M.; Goncalves, Flavio B.; Gamerman, Dani
2009-01-01
In this article, an integrated Bayesian model for differential item functioning (DIF) analysis is proposed. The model is integrated in the sense of modeling the responses along with the DIF analysis. This approach allows DIF detection and explanation in a simultaneous setup. Previous empirical studies and/or subjective beliefs about the item…
NASA Astrophysics Data System (ADS)
Rubin, Yoram; Chen, Xingyuan; Murakami, Haruko; Hahn, Melanie
2010-10-01
This paper addresses the inverse problem in spatially variable fields such as hydraulic conductivity in groundwater aquifers or rainfall intensity in hydrology. Common to all these problems is the existence of a complex pattern of spatial variability of the target variables and observations, the multiple sources of data available for characterizing the fields, the complex relations between the observed and target variables and the multiple scales and frequencies of the observations. The method of anchored distributions (MAD) that we propose here is a general Bayesian method of inverse modeling of spatial random fields that addresses this complexity. The central elements of MAD are a modular classification of all relevant data and a new concept called "anchors." Data types are classified by the way they relate to the target variable, as either local or nonlocal and as either direct or indirect. Anchors are devices for localization of data: they are used to convert nonlocal, indirect data into local distributions of the target variables. The target of the inversion is the derivation of the joint distribution of the anchors and structural parameters, conditional on all measurements, regardless of scale or frequency of measurement. The structural parameters describe large-scale trends of the target variable fields, whereas the anchors capture local inhomogeneities. Following inversion, the joint distribution of anchors and structural parameters is used for generating random fields of the target variable(s) that are conditioned on the nonlocal, indirect data through their anchor representation. We demonstrate MAD through a detailed case study that assimilates point measurements of the conductivity with head measurements from natural gradient flow. The resulting statistical distributions of the parameters are non-Gaussian. Similarly, the moments of the estimates of the hydraulic head are non-Gaussian. We provide an extended discussion of MAD vis à vis other inversion
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.
NASA Astrophysics Data System (ADS)
Kocabas, Verda; Dragicevic, Suzana
2013-10-01
Land-use change models grounded in complexity theory such as agent-based models (ABMs) are increasingly being used to examine evolving urban systems. The objective of this study is to develop a spatial model that simulates land-use change under the influence of human land-use choice behavior. This is achieved by integrating the key physical and social drivers of land-use change using Bayesian networks (BNs) coupled with agent-based modeling. The BNAS model, an integrated Bayesian network-based agent system, presented in this study uses geographic information systems, ABMs, BNs, and influence diagram principles to model population change on an irregular spatial structure. The model is parameterized with historical data and then used to simulate 20 years of future population and land-use change for the City of Surrey, British Columbia, Canada. The simulation results identify feasible new urban areas for development around the main transportation corridors. The obtained new development areas and the projected population trajectories, together with the "what-if" scenario capabilities, can provide urban planners with insights for better and more informed land-use policy and decision-making processes.
A Bayesian approach to earthquake source studies
NASA Astrophysics Data System (ADS)
Minson, Sarah
Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also
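CATMIP itself is not reproduced here, but the transitional (tempered) sampling idea it builds on can be sketched for a toy 2-D posterior: raise the likelihood exponent β gradually from 0 to 1, resampling and jittering a particle population at each stage. The target distributions, the effective-sample-size rule for choosing the tempering increment, and the proposal scale below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_prior(x):                  # broad N(0, 3^2) prior, per dimension
    return -0.5 * np.sum(x ** 2, axis=-1) / 9.0

def log_like(x):                   # narrow Gaussian likelihood centered at (2, 2)
    return -0.5 * np.sum((x - 2.0) ** 2, axis=-1) / 0.2 ** 2

n = 2000
particles = rng.normal(0, 3, (n, 2))   # start from prior draws
beta = 0.0
while beta < 1.0:
    ll = log_like(particles)
    # Choose the tempering increment so the incremental weights keep an
    # effective sample size above n/2 (CATMIP adapts this step automatically).
    dbeta = 1.0 - beta
    while True:
        w = np.exp(dbeta * (ll - ll.max()))
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) > n / 2 or dbeta < 1e-3:
            break
        dbeta *= 0.8
    beta += dbeta
    # Resample, then a few Metropolis moves targeting prior * likelihood^beta.
    particles = particles[rng.choice(n, n, p=w)]
    for _ in range(5):
        prop = particles + rng.normal(0, 0.2, particles.shape)
        logr = (beta * (log_like(prop) - log_like(particles))
                + log_prior(prop) - log_prior(particles))
        accept = np.log(rng.random(n)) < logr
        particles[accept] = prop[accept]

print(np.round(particles.mean(axis=0), 1))
```

Because each stage's particles are updated independently, the Metropolis moves parallelize naturally, which is the property CATMIP exploits for expensive forward models.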
Rainfall-Runoff Forecast and Model Parameter Estimation: a Dynamic Bayesian Networks Approach
NASA Astrophysics Data System (ADS)
Canon Barriga, J. E.; Morillo Leon, F. C.
2013-12-01
The suggested climate-driven non-stationarities and intrinsic uncertainties of hydrological processes such as precipitation (P) and runoff (R) represent a fruitful context in which to develop new methods that may be able to detect parametric variations in time series and incorporate them into forecasts. In this research, we developed a method to forecast runoff from precipitation time series based on Dynamic Bayesian Networks (DBNs). The purpose of the research was to determine an appropriate structure for the DBN and the optimal lengths of hydrological time series required to establish statistical parameters (i.e., the first two moments) of P and optimal fits of forecasted R at daily and weekly intervals. A DBN can be briefly interpreted as a set of nodes (representing conditional probabilistic variables) connected by arrows that establish a causal, time-oriented relationship among them. A DBN is defined by two components: a static network (structure) and a transition probability matrix between consecutive stages. Similarly to neural networks, DBNs must be trained in order to learn about the underlying process and make useful predictions. To determine the ability of the DBN to forecast R from P, we initially generated long synthetic P series and ran a deterministic model (HEC-HMS) to generate R. The DBNs were then trained with different lengths of these synthetic series to forecast R (using smoothing and filtering methods). Two structures were considered: 1) a DBN with P(t), P(t-1) and R(t-1), and 2) a DBN with P(t), P(t-1), R(t-1) and ΔR=[R(t-1)-R(t-2)]. Both smoothing and filtering methods were appropriate for making predictions on a daily and weekly basis (with filtering performing better). Setting the complexity (the number of states of the random variables) in a DBN proves to be a critical issue, since an increase in the number of states, which implies larger training sets, does not always mean an improvement in the prediction. We found that acceptable results could be obtained from DBN
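A minimal illustration of the two-slice structure described above (structure 1, with nodes P(t), P(t-1) and R(t-1) feeding R(t)) is a conditional probability table learned by counting transitions in discretized series. The 3-state discretization and the toy rain-runoff relation are assumptions for the sketch, not the authors' HEC-HMS setup.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical discretized series: precipitation and runoff in 3 states each.
n = 2000
P = rng.integers(0, 3, n)
R = np.clip(P + rng.integers(-1, 2, n), 0, 2)   # runoff loosely follows rain

# Learn the CPT  Pr(R_t | P_t, P_{t-1}, R_{t-1})  by counting transitions.
cpt = np.ones((3, 3, 3, 3))                     # Laplace smoothing
for t in range(1, n):
    cpt[P[t], P[t - 1], R[t - 1], R[t]] += 1
cpt /= cpt.sum(axis=-1, keepdims=True)

def forecast(p_t, p_prev, r_prev):
    """One-step-ahead distribution over the three runoff states."""
    return cpt[p_t, p_prev, r_prev]

print(forecast(2, 2, 2))   # forecast under heavy rain and high prior runoff
```

Raising the number of states enlarges the CPT exponentially, which is the complexity/training-set trade-off the abstract flags.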
Moving beyond qualitative evaluations of Bayesian models of cognition.
Hemmer, Pernille; Tauber, Sean; Steyvers, Mark
2015-06-01
Bayesian models of cognition provide a powerful way to understand the behavior and goals of individuals from a computational point of view. Much of the focus in the Bayesian cognitive modeling approach has been on qualitative model evaluations, where predictions from the models are compared to data that is often averaged over individuals. In many cognitive tasks, however, there are pervasive individual differences. We introduce an approach to directly infer individual differences related to subjective mental representations within the framework of Bayesian models of cognition. In this approach, Bayesian data analysis methods are used to estimate cognitive parameters and motivate the inference process within a Bayesian cognitive model. We illustrate this integrative Bayesian approach on a model of memory. We apply the model to behavioral data from a memory experiment involving the recall of heights of people. A cross-validation analysis shows that the Bayesian memory model with inferred subjective priors predicts withheld data better than a Bayesian model where the priors are based on environmental statistics. In addition, the model with inferred priors at the individual subject level led to the best overall generalization performance, suggesting that individual differences are important to consider in Bayesian models of cognition.
Bartolucci, Al; Bae, Sejong; Singh, Karan; Griffith, H. Randall
2009-01-01
The mini mental state examination (MMSE) is a common tool for measuring cognitive decline in Alzheimer's disease (AD) subjects. Subjects are usually observed for a specified period of time or until death to determine the trajectory of the decline, which for the most part appears to be linear. However, the decline may not be modeled by a single linear model over a specified period of time. There may be a point, called a change point, where the rate or gradient of the decline changes depending on the length of time of observation. A Bayesian approach is used to model the trajectory and determine an appropriate posterior estimate of the change point as well as the predicted model of decline before and after the change point. Estimates of the appropriate parameters as well as their posterior credible regions or regions of interest are established. Coherent prior-to-posterior analysis using mainly non-informative priors for the parameters of interest is provided. This approach is applied to an existing AD database. PMID:20161460
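The change-point idea can be sketched with a grid posterior over candidate break times for a two-segment linear decline. The synthetic trajectory, noise level, and flat priors below are assumptions for illustration, not the authors' model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic MMSE-like decline: the slope steepens at t = 10 (hypothetical).
t = np.arange(20).astype(float)
y = np.where(t < 10, 28 - 0.2 * t, 26 - 1.0 * (t - 10)) + rng.normal(0, 0.2, t.size)

def seg_loglike(cp):
    """Gaussian log-likelihood of a two-segment least-squares fit broken at cp."""
    ll = 0.0
    for seg in (slice(0, cp), slice(cp, t.size)):
        A = np.vstack([np.ones(t[seg].size), t[seg]]).T
        resid = y[seg] - A @ np.linalg.lstsq(A, y[seg], rcond=None)[0]
        ll += -0.5 * np.sum(resid ** 2) / 0.2 ** 2
    return ll

cps = np.arange(3, 18)                 # candidate change points
logp = np.array([seg_loglike(c) for c in cps])
post = np.exp(logp - logp.max())
post /= post.sum()                     # posterior under a uniform prior
print(int(cps[np.argmax(post)]))
```

The normalized `post` vector plays the role of the posterior over the change point; credible regions for the break time follow by accumulating its mass.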
A Bayesian Approach to Learning Scoring Systems.
Ertekin, Şeyda; Rudin, Cynthia
2015-12-01
We present a Bayesian method for building scoring systems, which are linear models with coefficients that have very few significant digits. Usually the construction of scoring systems involves manual effort: humans invent the full scoring system without using data, or they choose how logistic regression coefficients should be scaled and rounded to produce a scoring system. These kinds of heuristics lead to suboptimal solutions. Our approach is different in that humans need only specify the prior over what the coefficients should look like, and the scoring system is learned from data. For this approach, we provide a Metropolis-Hastings sampler that tends to pull the coefficient values toward their "natural scale." Empirically, the proposed method achieves a high degree of interpretability of the models while maintaining competitive generalization performance. PMID:27441407
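A caricature of the idea: a Metropolis-Hastings sampler over logistic-regression coefficients with a prior that concentrates mass near whole-number values, so accepted draws drift toward an integer scoring rule. The data, prior strength, and proposal scale are illustrative assumptions, not the authors' sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binary outcomes generated by an integer-weight rule.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.0])
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)

def log_post(w):
    """Logistic log-likelihood plus a prior concentrated near integer values."""
    z = X @ w
    ll = np.sum(y * z - np.logaddexp(0, z))
    prior = -5.0 * np.sum((w - np.round(w)) ** 2)
    return ll + prior

w = np.zeros(3)
lp = log_post(w)
for _ in range(5000):                  # random-walk Metropolis-Hastings
    prop = w + rng.normal(0, 0.25, 3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        w, lp = prop, lp_prop

print(np.round(w))                     # accepted draws settle near an integer rule
```

Rounding the posterior draws then yields small-integer coefficients directly, rather than rounding a maximum-likelihood fit after the fact.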
Probabilistic Tomography: A Pragmatic Bayesian Approach
NASA Astrophysics Data System (ADS)
Trampert, J.
2014-12-01
'The future lies in uncertainty' (Spiegelhalter, Science, 345, 264, 2014); nothing could be more true for the Earth Sciences. We are able to produce ever more sophisticated models, but they can only inform us about the Earth in a meaningful way if we can assign uncertainties to them. Bayesian inference is a natural choice for this task, as it handles uncertainty in a natural way by explicitly modeling assumptions. Another desirable property is that Bayes' theorem contains Occam's razor implicitly. I will present our efforts over the last 10 years to infer Earth properties using an approach we called probabilistic tomography. The word pragmatic has several meanings in this context. In more classical Bayesian inference problems, we usually prescribe subjective or informative priors. I will illustrate this by showing examples which employ the neighborhood algorithm (Sambridge, 1999) or a Metropolis rule (Mosegaard and Tarantola, 1995). Recently we started to use neural networks to parametrize the posterior. In our implementation, we do not sample the posterior directly, but make predictions on some properties of the posterior. The interpretation of the uncertainty is therefore slightly different, but the method informs us on the information gain with respect to the prior. I will show examples of source and structural inversions using so-called mixture density networks.
Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong
2011-06-01
Multiple object tracking (MOT) is a very challenging task, yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video, which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes at both the observation modeling level and the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method. PMID:21189238
NASA Astrophysics Data System (ADS)
Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang
2014-05-01
Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. Over the last decade, dengue has been an emerging infectious disease in Taiwan, especially in the southern area, which has high annual incidence. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects have mostly been understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall with lags of up to 15 weeks, for dengue case variation under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show the early warning system is useful for providing potential outbreak spatio-temporal predictions of the dengue fever distribution. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.
A predictive Bayesian approach to risk analysis in health care
Aven, Terje; Eidesen, Karianne
2007-01-01
Background: The Bayesian approach is now widely recognised as a proper framework for analysing risk in health care. However, the traditional text-book Bayesian approach is in many cases difficult to implement, as it is based on abstract concepts and modelling. Methods: The essential points of risk analyses conducted according to the predictive Bayesian approach are the identification of observable quantities, and prediction and uncertainty assessments of these quantities, using all the relevant information. The risk analysis summarizes the knowledge and lack of knowledge concerning critical operations and other activities, and in this way gives a basis for making rational decisions. Results: It is shown that Bayesian risk analysis can be significantly simplified and made more accessible compared to the traditional text-book Bayesian approach by focusing on predictions of observable quantities and performing uncertainty assessments of these quantities using subjective probabilities. Conclusion: The predictive Bayesian approach provides a framework for ensuring the quality of risk analysis. The approach acknowledges that risk cannot be adequately described and evaluated simply by reference to summarising probabilities. Risk is defined by the combination of possible consequences and associated uncertainties. PMID:17714597
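In the predictive spirit described here, risk statements attach directly to observable counts rather than to abstract parameters. A minimal conjugate sketch (the event history and the Gamma(1, 1) prior are invented for illustration, not from the paper):

```python
from math import lgamma, exp

# Hypothetical record: 12 adverse events observed over 4 years of operation.
events, years = 12, 4
a0, b0 = 1.0, 1.0                  # subjective Gamma prior on the event rate
a, b = a0 + events, b0 + years     # conjugate Gamma posterior after the data

def predictive(k, horizon=1.0):
    """Negative-binomial predictive probability of k events over the horizon."""
    p = b / (b + horizon)
    return exp(lgamma(a + k) - lgamma(a) - lgamma(k + 1)) * p ** a * (1 - p) ** k

# Probability of at least 6 events next year: a direct, observable risk statement.
p_ge6 = 1 - sum(predictive(k) for k in range(6))
print(round(p_ge6, 3))
```

The output is an uncertainty assessment of an observable quantity (next year's event count), which is exactly the kind of summary the predictive approach favors over parameter-level probabilities.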
Hopes and Cautions in Implementing Bayesian Structural Equation Modeling
ERIC Educational Resources Information Center
MacCallum, Robert C.; Edwards, Michael C.; Cai, Li
2012-01-01
Muthen and Asparouhov (2012) have proposed and demonstrated an approach to model specification and estimation in structural equation modeling (SEM) using Bayesian methods. Their contribution builds on previous work in this area by (a) focusing on the translation of conventional SEM models into a Bayesian framework wherein parameters fixed at zero…
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
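One common shortcut for approximating posterior model probabilities in Bayesian model averaging uses BIC weights; averaging any downstream quantity with these weights propagates model uncertainty. The BIC values below are invented for illustration and are not tied to the propensity score models in the study.

```python
import numpy as np

# Hypothetical BIC values for three candidate models.
bic = np.array([412.3, 408.9, 415.0])

# Approximate posterior model probabilities via the BIC weight formula:
# w_k proportional to exp(-0.5 * (BIC_k - BIC_min)).
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()
print(np.round(w, 3))
```

A model-averaged estimate is then `np.sum(w * estimates)` for per-model estimates, rather than the estimate from any single selected model.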
Massoudieh, Arash; Visser, Ate; Sharifi, Soroosh; Broers, Hans Peter
2013-10-15
Because of the mixing of groundwaters with different ages in aquifers, groundwater age is more appropriately represented by a distribution than by a scalar number. To infer a groundwater age distribution from environmental tracers, a mathematical form is often assumed for the shape of the distribution, and the parameters of the mathematical distribution are estimated using deterministic or stochastic inverse methods. We found that prescribing the mathematical form limits the exploration of the age distribution to the shapes that can be described by the selected distribution. In this paper, the use of freeform histograms as groundwater age distributions is evaluated. A Bayesian Markov Chain Monte Carlo approach is used to estimate the fraction of groundwater in each histogram bin. This method was able to capture the shape of a hypothetical gamma distribution from the concentrations of four age tracers. The number of bins that can be considered in this approach is limited by the number of tracers available. The histogram method was also tested on tracer data sets from Holten (The Netherlands; 3H, 3He, 85Kr, 39Ar) and the La Selva Biological Station (Costa Rica; SF6, CFCs, 3H, 4He and 14C), and compared to a number of mathematical forms. According to standard Bayesian measures of model goodness, the best mathematical distribution performs better than the histogram distributions in terms of the ability to capture the observed tracer data relative to their complexity. Among the histogram distributions, the four-bin histogram performs better in most of the cases. The Monte Carlo simulations showed strong correlations in the posterior estimates of bin contributions, indicating that these bins cannot be well constrained using the available age tracers. The fact that mathematical forms overall perform better than the freeform histogram does not undermine the benefit of the freeform approach, especially for the cases where a larger amount of observed data is
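The freeform estimation step can be sketched as Metropolis sampling of bin fractions on the simplex, with tracer concentrations modeled as linear mixtures over the bins. The 4x4 tracer-response matrix, noise level, and softmax parametrization (which induces its own implicit prior on the simplex) are all assumptions for the sketch, not the paper's tracer models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 4 age bins, 4 tracers; G[i, j] = response of tracer i
# to water in age bin j (invented numbers).
G = np.array([[1.0, 0.6, 0.2, 0.0],
              [0.9, 0.7, 0.4, 0.1],
              [0.2, 0.5, 0.8, 1.0],
              [0.0, 0.3, 0.7, 1.0]])
f_true = np.array([0.4, 0.3, 0.2, 0.1])
c_obs = G @ f_true + rng.normal(0, 0.01, 4)

def log_post(theta):
    f = np.exp(theta) / np.exp(theta).sum()   # softmax keeps f on the simplex
    return -0.5 * np.sum((c_obs - G @ f) ** 2) / 0.01 ** 2

theta = np.zeros(4)
lp = log_post(theta)
samples = []
for i in range(30000):                        # random-walk Metropolis
    prop = theta + rng.normal(0, 0.1, 4)
    lp_p = log_post(prop)
    if np.log(rng.random()) < lp_p - lp:
        theta, lp = prop, lp_p
    if i >= 10000:
        samples.append(np.exp(theta) / np.exp(theta).sum())

f_hat = np.mean(samples, axis=0)
print(np.round(f_hat, 2))                     # posterior mean bin fractions
```

Inspecting the correlation matrix of the retained samples would reveal the kind of strong inter-bin posterior correlations the abstract reports when tracers cannot separate the bins.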
Geoacoustic reflectivity inversion: A Bayesian approach
NASA Astrophysics Data System (ADS)
Dettmer, Jan
Propagation and reverberation of acoustic fields in shallow water depend strongly on the spatial variability of seabed geoacoustic parameters, and lack of knowledge of seabed variability is often a limiting factor in acoustic modelling applications. However, direct sampling (e.g., coring) of vertical and lateral variability is expensive and laborious, and matched-field and other long-range inversion methods fail to provide sufficient resolution. This thesis develops a new joint time/frequency domain inversion for high-resolution single-bounce reflection data. The inversion approach has the potential to resolve fine-scale sediment profiles over small seafloor footprints (~100 m). The approach utilises sequential Bayesian inversion of time- and frequency-domain reflectivity data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection coefficient inversion. Rigorous uncertainty estimation is of key importance to yield high quality inversion results. Quantitative geoacoustic uncertainties are provided by a nonlinear Gibbs sampling approach together with full data error covariance estimation (including non-stationary effects). The small footprint of the measurement technique combined with the rigorous inversion of both time and frequency domain data provides a powerful new tool to examine seabed structure on finer scales than heretofore possible. The Bayesian inversion is applied to two data sets collected on the Malta Plateau and the Strait of Sicily during the SCARAB98 experiment. The first application aims to recover multi-layered seabed structure and the second application recovers density and sound velocity gradient structure in the uppermost sediment layer. An interesting new method of deriving reflectivity data from ambient noise measurements is briefly considered in simulation to examine the resolving power and limits of the approach.
A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION
We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...
Posterior Predictive Bayesian Phylogenetic Model Selection
Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-01-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
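The CPO computation described above needs only a posterior sample: CPO_i is the harmonic mean of the per-draw likelihoods of observation i, and LPML is the sum of the log CPOs. A minimal sketch on a toy normal model (the data and conjugate posterior are invented, not the phylogenetic setting):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(5.0, 1.0, 30)          # observed data (hypothetical)

# Posterior of the mean under a flat prior and known sd = 1: N(ybar, 1/n).
mu = rng.normal(y.mean(), 1 / np.sqrt(y.size), 4000)

def norm_pdf(x, m):
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2 * np.pi)

# CPO_i = harmonic mean over posterior draws of p(y_i | mu_s);
# LPML = sum of log CPO_i, an overall predictive fit measure.
cpo = np.array([1 / np.mean(1 / norm_pdf(yi, mu)) for yi in y])
lpml = np.sum(np.log(cpo))
print(round(lpml, 2))
```

In the phylogenetic setting the per-site likelihoods replace `norm_pdf`, and the site-specific `cpo` values support the exploratory, per-site diagnostics the abstract describes.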
Bayesian stable isotope mixing models
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
Lefew, William R; McConnell, Emma R; Crooks, James L; Shafer, Timothy J
2013-05-01
The need to assess large numbers of chemicals for their potential toxicities has resulted in increased emphasis on medium- and high-throughput in vitro screening approaches. For such approaches to be useful, efficient and reliable data analysis and hit detection methods are also required. Assessment of chemical effects on neuronal network activity using microelectrode arrays (MEAs) has been proposed as a screening tool for neurotoxicity. The current study examined a Bayesian data analysis approach for assessing the effects of a 30-chemical training set on the activity of primary cortical neurons grown in multi-well MEA plates. Each well of the MEA plate contained 64 microelectrodes, and the data set contains the number of electrical spikes registered by each electrode over the course of each experiment. A Bayesian data analysis approach was developed and then applied to several different parsings of the data set to produce probability determinations for hit selection and ranking. This methodology results in an approach that is approximately 74% sensitive in detecting chemicals in the training set known to alter neuronal function (23 expected positives) while being 100% specific in detecting chemicals expected to have no effect (7 expected negatives). Additionally, this manuscript demonstrates that the Bayesian approach may be combined with a previously published weighted mean firing rate approach in order to produce a more robust hit detection method. In particular, when combined with the weighted mean firing rate approach, the joint analysis produces a sensitivity of approximately 96% and a specificity of 100%. These results demonstrate the utility of a novel approach to the analysis of MEA data and support the use of neuronal networks grown on MEAs as a neurotoxicity screening approach.
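The "probability determination for hit selection" idea can be caricatured with a conjugate Gamma-Poisson model on spike counts: the hit score is the posterior probability that the dosed firing rate is below baseline. The counts, the Gamma(1, 1) prior, and the rate comparison are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical spike counts from one well: 64 electrodes, baseline vs. dosed.
base = rng.poisson(20.0, 64)
dosed = rng.poisson(12.0, 64)                 # the chemical suppresses firing

# Gamma(1, 1) prior on each mean firing rate; conjugate Gamma posteriors
# (shape = prior shape + total count, rate = prior rate + n electrodes).
post_base = rng.gamma(1 + base.sum(), 1 / (1 + 64), 10000)
post_dosed = rng.gamma(1 + dosed.sum(), 1 / (1 + 64), 10000)

# Posterior probability that dosing lowered the rate: the "hit" score.
p_hit = np.mean(post_dosed < post_base)
print(p_hit)
```

Thresholding `p_hit` (e.g., calling a hit above 0.95) gives the ranked, probability-based selection the abstract describes, and the same score can be combined with a weighted mean firing rate criterion.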
NASA Astrophysics Data System (ADS)
Cucchi, Karina; Flipo, Nicolas; Rivière, Agnès; Rubin, Yoram
2016-04-01
Hydrothermal properties of the stream-aquifer interface are key information for modeling water and heat transfers in hydrological basins. Our study introduces an algorithm to estimate hydrological and thermal parameters of the hyporheic zone (HZ), as well as their associated uncertainties. Properties of the HZ are inferred from a combination of head differential time series and vertically distributed temperature time series measured continually in an HZ vertical profile. The head differential and two temperature time series are used as boundary conditions for the vertical profile; the other temperature time series are used as conditioning measurements. Following the Bayesian framework, model parameters are treated as random variables, and we seek to characterize their probability density function (PDF) conditional on the temperature time series. Our algorithm follows the Method of Anchored Distributions (MAD) implemented in the MAD# software. In order to cut down the number of simulations needed, we develop a hybrid discrete-continuous inversion approach. We first identify the most sensitive parameters in a sensitivity analysis; these parameters are characterized with continuous PDFs. Less sensitive parameters are represented with discrete PDFs using a finite number of discrete outcomes. We use a non-parametric likelihood function and time series dimension reduction techniques in order to calculate posterior PDFs of the HZ parameters. We demonstrate the approach on a synthetic study using an analytical solution and then apply it to field measurements gathered in the Avenelles basin, France. We present one application of this approach: the uncertainty-quantified time series of localized stream-aquifer exchanges.
Particle identification in ALICE: a Bayesian approach
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Almaraz, J. R. M.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Balasubramanian, S.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Benacek, P.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. 
T.; Cabala, J.; Caffarri, D.; Cai, X.; Caines, H.; Calero Diaz, L.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Dénes, E.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. 
G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Germain, M.; Gheata, A.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Grachov, O. A.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Horak, D.; Hosokawa, R.; Hristov, P.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Incani, E.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Mohisin Khan, M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. 
S.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kostarakis, P.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Koyithatta Meethaleveedu, G.; Králik, I.; Kravčáková, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Ladron de Guevara, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Lehas, F.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; León Vargas, H.; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Miake, Y.; Mieskolainen, M. 
M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montaño Zetina, L.; Montes, E.; Moreira De Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, D.; Pagano, P.; Paić, G.; Pal, S. K.; Pan, J.; Pandey, A. K.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Pereira Da Costa, H.; Peresunko, D.; Pérez Lara, C. E.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. 
A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Read, K. F.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rocco, E.; Rodríguez Cahuantzi, M.; Rodriguez Manso, A.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Šándor, L.; Sandoval, A.; Sano, M.; Sarkar, D.; Sarkar, N.; Sarma, P.; Scapparone, E.; Scarlassara, F.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shahzad, M. I.; Shangaraev, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Souza, R. D. 
de; Sozzi, F.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thakur, D.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Valencia Palomo, L.; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yang, H.; Yang, P.; Yano, S.; Yasin, Z.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. 
C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.
2016-05-01
We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian PID approach for charged pions, kaons and protons in the central barrel of ALICE is studied. PID is performed via measurements of specific energy loss (dE/dx) and time of flight. PID efficiencies and misidentification probabilities are extracted and compared with Monte Carlo simulations using high-purity samples of identified particles in the decay channels K0S → π-π+, φ → K-K+, and Λ → pπ- in p-Pb collisions at √(s_NN) = 5.02 TeV. In order to thoroughly assess the validity of the Bayesian approach, this methodology was used to obtain corrected pT spectra of pions, kaons, protons, and D0 mesons in pp collisions at √s = 7 TeV. In all cases, the results using Bayesian PID were found to be consistent with previous measurements performed by ALICE using a standard PID approach. For the measurement of D0 → K-π+, it was found that a Bayesian PID approach gave a higher signal-to-background ratio and a similar or larger statistical significance when compared with standard PID selections, despite a reduced identification efficiency. Finally, we present an exploratory study of the measurement of Λc+ → pK-π+ in pp collisions at √s = 7 TeV, using the Bayesian approach for the identification of its decay products.
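The core of the combination is Bayes' theorem applied over species hypotheses for each track. The sketch below uses invented Gaussian detector responses and priors; real ALICE PID uses calibrated detector response functions, not these toy numbers.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_pid(signals, response, priors):
    """Posterior species probabilities for one track, combining independent
    detector likelihoods (here toy Gaussians) with species priors."""
    post = {}
    for species, resp in response.items():
        like = 1.0
        for det, x in signals.items():
            mu, sigma = resp[det]
            like *= gauss(x, mu, sigma)        # independence across detectors
        post[species] = priors[species] * like
    z = sum(post.values())                     # normalize over hypotheses
    return {s: p / z for s, p in post.items()}

# Hypothetical expected responses (mean, sigma) per detector and species:
response = {
    "pion":   {"dEdx": (50.0, 5.0), "tof": (10.0, 0.3)},
    "kaon":   {"dEdx": (58.0, 5.0), "tof": (10.8, 0.3)},
    "proton": {"dEdx": (70.0, 5.0), "tof": (11.9, 0.3)},
}
priors = {"pion": 0.80, "kaon": 0.12, "proton": 0.08}
post = bayes_pid({"dEdx": 56.0, "tof": 10.7}, response, priors)
```

Even with a strong pion prior, the combined dE/dx and time-of-flight evidence in this example favors the kaon hypothesis, which is the point of combining detectors.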
Macroscopic hotspots identification: A Bayesian spatio-temporal interaction approach.
Dong, Ni; Huang, Helai; Lee, Jaeyoung; Gao, Mingyun; Abdel-Aty, Mohamed
2016-07-01
This study proposes a Bayesian spatio-temporal interaction approach for hotspot identification by applying the full Bayesian (FB) technique in the context of macroscopic safety analysis. Compared with the emerging Bayesian spatial and temporal approach, the Bayesian spatio-temporal interaction model contributes to a detailed understanding of differential trends through analyzing and mapping probabilities of area-specific crash trends as differing from the mean trend and highlights specific locations where crash occurrence is deteriorating or improving over time. With traffic analysis zones (TAZs) crash data collected in Florida, an empirical analysis was conducted to evaluate the following three approaches for hotspot identification: FB ranking using a Poisson-lognormal (PLN) model, FB ranking using a Bayesian spatial and temporal (B-ST) model and FB ranking using a Bayesian spatio-temporal interaction (B-ST-I) model. The results show that (a) the models accounting for space-time effects perform better in safety ranking than does the PLN model, and (b) the FB approach using the B-ST-I model significantly outperforms the B-ST approach in correctly identifying hotspots by explicitly accounting for the space-time variation in addition to the stable spatial/temporal patterns of crash occurrence. In practice, the B-ST-I approach plays key roles in addressing two issues: (a) how the identified hotspots have evolved over time and (b) the identification of areas that, whilst not yet hotspots, show a tendency to become hotspots. Finally, it can provide guidance to policy decision makers to efficiently improve zonal-level safety.
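The FB ranking idea can be sketched generically: zones are ranked by a posterior summary computed from MCMC draws of their expected crash frequency. The draws below are synthetic stand-ins for the output of a PLN, B-ST, or B-ST-I model.

```python
import random

def exceedance_prob(draws, threshold):
    """Posterior probability that a zone's expected crash frequency
    exceeds a reference level (e.g. the regional mean)."""
    return sum(d > threshold for d in draws) / len(draws)

# Synthetic posterior draws standing in for model output:
random.seed(0)
draws = {
    "zone_A": [random.gauss(12.0, 2.0) for _ in range(2000)],
    "zone_B": [random.gauss(6.0, 2.0) for _ in range(2000)],
    "zone_C": [random.gauss(8.0, 2.0) for _ in range(2000)],
}
regional_mean = 8.0
probs = {z: exceedance_prob(d, regional_mean) for z, d in draws.items()}
ranking = sorted(probs, key=probs.get, reverse=True)   # hotspot ranking
```

In the spatio-temporal interaction setting, the same machinery applied to draws of area-specific trend deviations flags zones that are deteriorating rather than merely high-risk.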
BAYESIAN METHODS FOR REGIONAL-SCALE EUTROPHICATION MODELS. (R830887)
We demonstrate a Bayesian classification and regression tree (CART) approach to link multiple environmental stressors to biological responses and quantify uncertainty in model predictions. Such an approach can: (1) report prediction uncertainty, (2) be consistent with the amou...
NASA Astrophysics Data System (ADS)
Law, Jane; Quick, Matthew
2013-01-01
This paper adopts a Bayesian spatial modeling approach to investigate the distribution of young offender residences in York Region, Southern Ontario, Canada, at the census dissemination area level. Few geographic studies have analyzed offender (as opposed to offense) data at a large map scale (i.e., using a relatively small areal unit of analysis) to minimize aggregation effects. Providing context is social disorganization theory, which hypothesizes that areas with economic deprivation, high population turnover, and high ethnic heterogeneity exhibit social disorganization and are expected to contain more young offenders. Non-spatial and spatial Poisson models indicate that spatial methods are superior to non-spatial models with respect to model fit, and that the index of ethnic heterogeneity, residential mobility (one-year moving rate), and percentage of residents receiving government transfer payments are, respectively, the most significant explanatory variables related to young offender location. These findings provide overwhelming support for social disorganization theory as it applies to offender location in York Region, Ontario. Decomposing the estimated risk map to target areas where the prevalence of young offenders can or cannot be explained by social disorganization is helpful for dealing with juvenile offenders in the region. Results prompt discussion of geographically targeted police services and young offender placement pertaining to risk of recidivism. We discuss possible reasons for differences and similarities between previous findings (from studies that analyzed offense data and/or were conducted at a smaller map scale) and our findings, limitations of our study, and practical outcomes of this research from a law enforcement perspective.
Fire risk in San Diego County, California: A weighted Bayesian model approach
Kolden, Crystal A.; Weigel, Timothy J.
2007-01-01
Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.
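The weights-of-evidence calculation for a single binary evidence layer follows the standard log-ratio form; the counts below are hypothetical.

```python
import math

def weights_of_evidence(n_event_with, n_event, n_noevent_with, n_noevent):
    """Standard positive/negative weights for a binary evidence layer
    (e.g. 'cell within 100 m of a road') against event cells (ignitions)."""
    p_f_e = n_event_with / n_event         # P(factor | ignition)
    p_f_ne = n_noevent_with / n_noevent    # P(factor | no ignition)
    w_plus = math.log(p_f_e / p_f_ne)      # weight where the factor is present
    w_minus = math.log((1.0 - p_f_e) / (1.0 - p_f_ne))   # where absent
    return w_plus, w_minus

# Hypothetical counts: 80 of 100 ignition cells lie near roads, versus
# 3000 of 9900 non-ignition cells.
wp, wm = weights_of_evidence(80, 100, 3000, 9900)
```

A positive W+ together with a negative W- indicates that proximity to roads raises ignition odds, consistent with the road effect reported above; summing weights over independent layers gives the posterior log-odds map.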
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2008-01-01
Structural equation models are widely appreciated in behavioral, social, and psychological research to model relations between latent constructs and manifest variables, and to control for measurement errors. Most applications of structural equation models are based on fully observed data that are independently distributed. However, hierarchical…
A Semi-Parametric Bayesian Mixture Modeling Approach for the Analysis of Judge Mediated Data
ERIC Educational Resources Information Center
Muckle, Timothy Joseph
2010-01-01
Existing methods for the analysis of ordinal-level data arising from judge ratings, such as the Multi-Facet Rasch model (MFRM, or the so-called Facets model) have been widely used in assessment in order to render fair examinee ability estimates in situations where the judges vary in their behavior or severity. However, this model makes certain…
Conceição, Katiane S; Andrade, Marinho G; Louzada, Francisco
2013-09-01
In this paper, a Bayesian method for inference is developed for the zero-modified Poisson (ZMP) regression model. This model is very flexible for analyzing count data without requiring any information about inflation or deflation of zeros in the sample. A general class of prior densities based on an information matrix is considered for the model parameters. A sensitivity study to detect influential cases that can change the results is performed based on the Kullback-Leibler divergence. Simulation studies are presented in order to illustrate the performance of the developed methodology. Two real datasets on leptospirosis notification in Bahia State (Brazil) are analyzed using the proposed methodology for the ZMP model.
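The zero-modified structure can be sketched as a pmf in which the zero probability is a free parameter (allowing either inflation or deflation) while positive counts rescale a zero-truncated Poisson; the exact parametrization used in the paper may differ.

```python
import math

def zmp_pmf(y, lam, p0):
    """Zero-modified Poisson pmf: P(Y=0) = p0 is set freely, and the
    remaining mass 1 - p0 is spread over y >= 1 in zero-truncated
    Poisson proportions. In a regression, lam = exp(x @ beta) via a
    log link."""
    if y == 0:
        return p0
    pois_pos = 1.0 - math.exp(-lam)        # P(Y > 0) under plain Poisson
    pois_y = math.exp(-lam) * lam ** y / math.factorial(y)
    return (1.0 - p0) * pois_y / pois_pos

# When p0 exceeds exp(-lam) the data are zero-inflated relative to a
# Poisson; below it, zero-deflated. No prior knowledge of which holds
# is required, which is the model's appeal.
p_zero = zmp_pmf(0, 2.5, 0.4)
```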
Bayesian Student Modeling and the Problem of Parameter Specification.
ERIC Educational Resources Information Center
Millan, Eva; Agosta, John Mark; Perez de la Cruz, Jose Luis
2001-01-01
Discusses intelligent tutoring systems and the application of Bayesian networks to student modeling. Considers reasons for not using Bayesian networks, including the computational complexity of the algorithms and the difficulty of knowledge acquisition, and proposes an approach to simplify knowledge acquisition that applies causal independence to…
A Tutorial Introduction to Bayesian Models of Cognitive Development
ERIC Educational Resources Information Center
Perfors, Amy; Tenenbaum, Joshua B.; Griffiths, Thomas L.; Xu, Fei
2011-01-01
We present an introduction to Bayesian inference as it is used in probabilistic models of cognitive development. Our goal is to provide an intuitive and accessible guide to the "what", the "how", and the "why" of the Bayesian approach: what sorts of problems and data the framework is most relevant for, and how and why it may be useful for…
A Bayesian Approach for Image Segmentation with Shape Priors
Chang, Hang; Yang, Qing; Parvin, Bahram
2008-06-20
Color and texture have been widely used in image segmentation; however, their performance is often hindered by scene ambiguities, overlapping objects, or missing parts. In this paper, we propose an interactive image segmentation approach with shape prior models within a Bayesian framework. Interactive features, through mouse strokes, reduce ambiguities, and the incorporation of shape priors enhances the quality of the segmentation where color and/or texture are not solely adequate. The novelties of our approach are in (i) formulating the segmentation problem in a well-defined Bayesian framework with multiple shape priors, (ii) efficiently estimating parameters of the Bayesian model, and (iii) multi-object segmentation through user-specified priors. We demonstrate the effectiveness of our method on a set of natural and synthetic images.
Hierarchical Bayesian model updating for structural identification
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Moaveni, Babak; Lombaert, Geert; Papadimitriou, Costas
2015-12-01
A new probabilistic finite element (FE) model updating technique based on Hierarchical Bayesian modeling is proposed for identification of civil structural systems under changing ambient/environmental conditions. The performance of the proposed technique is investigated for (1) uncertainty quantification of model updating parameters, and (2) probabilistic damage identification of the structural systems. Accurate estimation of the uncertainty in modeling parameters such as mass or stiffness is a challenging task. Several Bayesian model updating frameworks have been proposed in the literature that can successfully provide the "parameter estimation uncertainty" of model parameters with the assumption that there is no underlying inherent variability in the updating parameters. However, this assumption may not be valid for civil structures where structural mass and stiffness have inherent variability due to different sources of uncertainty such as changing ambient temperature, temperature gradient, wind speed, and traffic loads. Hierarchical Bayesian model updating is capable of predicting the overall uncertainty/variability of updating parameters by assuming time-variability of the underlying linear system. A general solution based on a Gibbs sampler is proposed to estimate the joint probability distributions of the updating parameters. The performance of the proposed Hierarchical approach is evaluated numerically for uncertainty quantification and damage identification of a 3-story shear building model. Effects of modeling errors and incomplete modal data are considered in the numerical study.
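The hierarchical idea can be illustrated on a toy two-level normal model: identified stiffness values y_t from different test sessions vary around session parameters theta_t, which themselves vary around a population mean mu with inherent variance tau². This is a minimal sketch of the Gibbs structure, not the FE-updating implementation; the Inv-Gamma(1, 1) prior on tau² is an arbitrary choice.

```python
import math
import random

def gibbs_hierarchical(y, sigma, n_iter=3000, burn=500):
    """Gibbs sampler for y_t ~ N(theta_t, sigma^2), theta_t ~ N(mu, tau2).
    The posterior of tau2 captures inherent session-to-session variability
    on top of per-session estimation uncertainty."""
    random.seed(1)
    T = len(y)
    mu, tau2 = sum(y) / T, 1.0
    theta = list(y)
    mu_draws, tau2_draws = [], []
    for it in range(n_iter):
        for t in range(T):                     # theta_t | mu, tau2, y_t
            prec = 1.0 / sigma ** 2 + 1.0 / tau2
            m = (y[t] / sigma ** 2 + mu / tau2) / prec
            theta[t] = random.gauss(m, math.sqrt(1.0 / prec))
        mu = random.gauss(sum(theta) / T, math.sqrt(tau2 / T))  # flat prior
        a = 1.0 + T / 2.0                      # Inv-Gamma(1, 1) prior on tau2
        b = 1.0 + 0.5 * sum((th - mu) ** 2 for th in theta)
        tau2 = b / random.gammavariate(a, 1.0) # draw from Inv-Gamma(a, b)
        if it >= burn:
            mu_draws.append(mu)
            tau2_draws.append(tau2)
    return mu_draws, tau2_draws

# Synthetic stiffness estimates from 20 sessions (true population mean 1.0):
random.seed(0)
y = [random.gauss(random.gauss(1.0, 0.1), 0.05) for _ in range(20)]
mu_draws, tau2_draws = gibbs_hierarchical(y, sigma=0.05)
mu_hat = sum(mu_draws) / len(mu_draws)
```

The spread of `tau2_draws` is the "overall variability" that a non-hierarchical updating scheme, which assumes a single fixed stiffness, cannot report.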
Advances in Bayesian Modeling in Educational Research
ERIC Educational Resources Information Center
Levy, Roy
2016-01-01
In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…
Stern, Adi; Doron-Faigenboim, Adi; Erez, Elana; Martz, Eric; Bacharach, Eran; Pupko, Tal
2007-07-01
Biologically significant sites in a protein may be identified by contrasting the rates of synonymous (K(s)) and non-synonymous (K(a)) substitutions. This enables the inference of site-specific positive Darwinian selection and purifying selection. We present here Selecton version 2.2 (http://selecton.bioinfo.tau.ac.il), a web server which automatically calculates the ratio between K(a) and K(s) (omega) at each site of the protein. This ratio is graphically displayed on each site using a color-coding scheme, indicating either positive selection, purifying selection or lack of selection. Selecton implements an assembly of different evolutionary models, which allow for statistical testing of the hypothesis that a protein has undergone positive selection. Specifically, the recently developed mechanistic-empirical model is introduced, which takes into account the physicochemical properties of amino acids. Advanced options were introduced to allow maximal fine tuning of the server to the user's specific needs, including calculation of statistical support of the omega values, an advanced graphic display of the protein's 3-dimensional structure, use of different genetic codes and inputting of a pre-built phylogenetic tree. Selecton version 2.2 is an effective, user-friendly and freely available web server which implements up-to-date methods for computing site-specific selection forces, and the visualization of these forces on the protein's sequence and structure.
A Bayesian Approach to Interactive Retrieval
ERIC Educational Resources Information Center
Tague, Jean M.
1973-01-01
A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…
A Bayesian Approach to Sensor Characterization
NASA Technical Reports Server (NTRS)
Timucin, Dogan A.
2003-01-01
The physical model of a generic electro-optic sensor is derived and incorporated into a Bayesian framework for the estimation of key instrument parameters from calibration data. The sensor characterization thus achieved enables optimal subsequent removal of instrument effects from field data, leading to the highest possible accuracy in the retrieved physical quantities.
Hierarchical Bayesian Approach to Locating Seismic Events
Johannesson, G; Myers, S C; Hanley, W G
2005-11-09
We propose a hierarchical Bayesian model for conducting inference on the location of multiple seismic events (earthquakes) given data on the arrival of various seismic phases at sensor locations. The model explicitly accounts for the uncertainty associated with the theoretical seismic-wave travel-time model used, along with the uncertainty of the arrival data. Posterior inference is carried out using Markov chain Monte Carlo (MCMC).
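For a single event with a known constant velocity, the MCMC step can be sketched with a random-walk Metropolis sampler. This omits the travel-time-model error terms and the multi-event structure of the paper, and all station positions and numbers below are synthetic.

```python
import math
import random

def locate_event(stations, arrivals, v=6.0, sigma=0.1, n_iter=20000):
    """Random-walk Metropolis for epicentre (x, y) and origin time t0,
    assuming travel time = distance / v and Gaussian arrival noise."""
    random.seed(2)

    def loglike(x, y, t0):
        ll = 0.0
        for (sx, sy), t in zip(stations, arrivals):
            pred = t0 + math.hypot(x - sx, y - sy) / v
            ll += -0.5 * ((t - pred) / sigma) ** 2
        return ll

    x, y, t0 = 0.0, 0.0, 0.0
    ll = loglike(x, y, t0)
    samples = []
    for _ in range(n_iter):
        xp = x + random.gauss(0.0, 1.0)        # propose a nearby location
        yp = y + random.gauss(0.0, 1.0)
        tp = t0 + random.gauss(0.0, 0.1)
        llp = loglike(xp, yp, tp)
        if math.log(random.random()) < llp - ll:   # Metropolis accept/reject
            x, y, t0, ll = xp, yp, tp, llp
        samples.append((x, y, t0))
    return samples[n_iter // 2:]               # discard burn-in

# Four synthetic stations; true event at (10, 5) with origin time 0:
stations = [(0.0, 50.0), (50.0, 0.0), (-40.0, -30.0), (30.0, 40.0)]
arrivals = [math.hypot(10.0 - sx, 5.0 - sy) / 6.0 for sx, sy in stations]
post = locate_event(stations, arrivals)
x_hat = sum(s[0] for s in post) / len(post)
y_hat = sum(s[1] for s in post) / len(post)
```

The hierarchical extension of the paper would treat the velocity model error as an additional random quantity shared across events rather than fixing v.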
A guide to Bayesian model selection for ecologists
Hooten, Mevin B.; Hobbs, N.T.
2015-01-01
The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.
Ma, Junsheng; Chan, Wenyaw; Tsai, Chu-Lin; Xiong, Momiao; Tilley, Barbara C
2015-11-30
Continuous time Markov chain (CTMC) models are often used to study the progression of chronic diseases in medical research but rarely applied to studies of the process of behavioral change. In studies of interventions to modify behaviors, a widely used psychosocial model is based on the transtheoretical model that often has more than three states (representing stages of change) and conceptually permits all possible instantaneous transitions. Very little attention is given to the study of the relationships between a CTMC model and associated covariates under the framework of transtheoretical model. We developed a Bayesian approach to evaluate the covariate effects on a CTMC model through a log-linear regression link. A simulation study of this approach showed that model parameters were accurately and precisely estimated. We analyzed an existing data set on stages of change in dietary intake from the Next Step Trial using the proposed method and the generalized multinomial logit model. We found that the generalized multinomial logit model was not suitable for these data because it ignores the unbalanced data structure and temporal correlation between successive measurements. Our analysis not only confirms that the nutrition intervention was effective but also provides information on how the intervention affected the transitions among the stages of change. We found that, compared with the control group, subjects in the intervention group, on average, spent substantively less time in the precontemplation stage and were more/less likely to move from an unhealthy/healthy state to a healthy/unhealthy state. PMID:26123093
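The log-linear covariate link on a CTMC generator can be sketched directly. The three stages and coefficient values below are hypothetical, and the matrix exponential is a plain truncated Taylor series, adequate only for small Q·t.

```python
import math

def generator(x, beta):
    """Generator with log-linear link q_ij(x) = exp(b0_ij + b1_ij * x)
    for i != j; rows sum to zero. x is a covariate (e.g. 1 = intervention,
    0 = control)."""
    n = len(beta)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                b0, b1 = beta[i][j]
                Q[i][j] = math.exp(b0 + b1 * x)
        Q[i][i] = -sum(Q[i])
    return Q

def transition_matrix(Q, t, n_terms=60):
    """P(t) = expm(Q t) via truncated Taylor series (fine for small Q t)."""
    n = len(Q)
    A = [[Q[i][j] * t for j in range(n)] for i in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, n_terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# Hypothetical 3-stage model (precontemplation, contemplation, action):
beta = [[None,         (-1.0, 0.8),  (-2.0, 0.5)],
        [(-1.5, -0.5), None,         (-0.7, 0.6)],
        [(-2.5, 0.0),  (-1.2, -0.4), None]]
P_ctrl = transition_matrix(generator(0.0, beta), t=1.0)  # control
P_trt = transition_matrix(generator(1.0, beta), t=1.0)   # intervention
```

With these made-up coefficients the intervention raises the rates out of precontemplation, so the probability of remaining in that stage after one time unit drops, mirroring the kind of effect the analysis above reports.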
A Bayesian approach to person perception.
Clifford, C W G; Mareschal, I; Otsuka, Y; Watson, T L
2015-11-01
Here we propose a Bayesian approach to person perception, outlining the theoretical position and a methodological framework for testing the predictions experimentally. We use the term person perception to refer not only to the perception of others' personal attributes such as age and sex but also to the perception of social signals such as direction of gaze and emotional expression. The Bayesian approach provides a formal description of the way in which our perception combines current sensory evidence with prior expectations about the structure of the environment. Such expectations can lead to unconscious biases in our perception that are particularly evident when sensory evidence is uncertain. We illustrate the ideas with reference to our recent studies on gaze perception which show that people have a bias to perceive the gaze of others as directed towards themselves. We also describe a potential application to the study of the perception of a person's sex, in which a bias towards perceiving males is typically observed.
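The prior-likelihood combination described here can be illustrated with the standard conjugate Gaussian case; the direct-gaze prior mean and the numerical variances below are invented for illustration:

```python
def gaussian_posterior(mu_prior, var_prior, mu_lik, var_lik):
    """Conjugate Gaussian combination of a prior over, e.g., gaze direction
    with noisy sensory evidence. When the evidence is uncertain (large
    var_lik), the percept is pulled toward the prior mean, such as a prior
    that gaze is directed at the observer (mu_prior = 0)."""
    w = var_lik / (var_prior + var_lik)        # weight on the prior mean
    mu = w * mu_prior + (1.0 - w) * mu_lik
    var = var_prior * var_lik / (var_prior + var_lik)
    return mu, var
```

With equally reliable prior and evidence the percept splits the difference; as sensory noise grows, the bias toward the prior (here, self-directed gaze) dominates, matching the unconscious biases the abstract describes.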
Posterior predictive Bayesian phylogenetic model selection.
Lewis, Paul O; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-05-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand-Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. PMID:24193892
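The CPO/LPML computation mentioned above follows a standard recipe: the conditional predictive ordinate for site i is the harmonic mean of that site's likelihood over posterior draws, and LPML is the sum of the log CPOs. A minimal, numerically stable sketch:

```python
import numpy as np

def lpml(loglik):
    """loglik: (S, N) array of per-site log-likelihoods for S posterior draws.
    CPO_i is the harmonic mean of the per-site likelihoods across draws;
    LPML = sum_i log CPO_i (larger is better)."""
    S = loglik.shape[0]
    # log CPO_i = log S - logsumexp_s(-loglik[s, i]), computed stably
    m = (-loglik).max(axis=0)
    log_cpo = np.log(S) - (m + np.log(np.exp(-loglik - m).sum(axis=0)))
    return log_cpo.sum(), log_cpo
```

As the abstract notes, this needs only a posterior sample of per-site log-likelihoods, with no extra simulation.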
A Bayesian approach to reliability and confidence
NASA Technical Reports Server (NTRS)
Barnes, Ron
1989-01-01
The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
Modeling Diagnostic Assessments with Bayesian Networks
ERIC Educational Resources Information Center
Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego
2007-01-01
This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…
Bayesian Recurrent Neural Network for Language Modeling.
Chien, Jen-Tzung; Ku, Yuan-Chu
2016-02-01
A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it for continuous speech recognition. We aim to penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
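The regularized cross-entropy objective described above reduces, for a zero-mean isotropic Gaussian prior, to cross-entropy plus a weighted squared-norm penalty. The sketch below uses a plain softmax classifier rather than an RNN, purely to show the shape of the MAP objective:

```python
import numpy as np

def map_loss(W, X, y, alpha):
    """Regularized cross-entropy: negative log-likelihood of a softmax
    classifier plus a Gaussian-prior penalty (alpha = 1 / prior variance).
    W: (d, K) weights, X: (n, d) inputs, y: (n,) integer class labels."""
    Z = X @ W
    Z = Z - Z.max(axis=1, keepdims=True)        # numerical stability
    logp = Z - np.log(np.exp(Z).sum(axis=1, keepdims=True))
    ce = -logp[np.arange(len(y)), y].mean()     # cross-entropy term
    return ce + 0.5 * alpha * (W ** 2).sum()    # MAP objective
```

In the paper's full method the hyperparameter (here `alpha`) is itself estimated by maximizing the marginal likelihood rather than fixed by hand.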
Jones, Matt; Love, Bradley C
2011-08-01
The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology - namely, Behaviorism and evolutionary psychology - that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls
Radioactive Contraband Detection: A Bayesian Approach
Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Sale, K; Chambers, D; Axelrod, M; Meyer, A
2009-03-16
Radionuclide emissions from nuclear contraband challenge both detection and measurement technologies to capture and record each event. The development of a sequential Bayesian processor, incorporating both the physics of gamma-ray emissions and the measurement of photon energies, offers a physics-based approach to attack this challenging problem. It is shown that a 'physics-based' structure not only can be used to develop an effective detection technique but also motivates the implementation of this approach using particle filters to enhance and extract the required information.
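The event-by-event Bayesian update at the heart of such a processor can be sketched with toy likelihoods; the constant likelihood ratio below stands in for the physics-based emission and measurement models the paper actually uses:

```python
def sequential_detector(energies, lik_source, lik_bg, prior=0.5, threshold=0.95):
    """Update P(source present | data) after each photon event and declare a
    detection as soon as the posterior crosses the threshold, rather than
    waiting for a fixed counting interval. lik_source / lik_bg are the
    per-event likelihoods under the two hypotheses (toy stand-ins here)."""
    post = prior
    odds = prior / (1.0 - prior)
    for n, e in enumerate(energies, start=1):
        odds *= lik_source(e) / lik_bg(e)      # Bayes factor for this event
        post = odds / (1.0 + odds)
        if post >= threshold:
            return n, post                     # events needed, final posterior
    return None, post                          # no detection declared
```

With strongly discriminating events, the posterior crosses the threshold after only a few photons, which is the "detection as soon as statistically justified" behavior described in the abstract.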
Bayesian population modeling of drug dosing adherence.
Fellows, Kelly; Stoneking, Colin J; Ramanathan, Murali
2015-10-01
Adherence is a frequent contributing factor to variations in drug concentrations and efficacy. The purpose of this work was to develop an integrated population model to describe variation in adherence, dose-timing deviations, overdosing and persistence to dosing regimens. The hybrid Markov chain-von Mises method for modeling adherence in individual subjects was extended to the population setting using a Bayesian approach. Four integrated population models for overall adherence, the two-state Markov chain transition parameters, dose-timing deviations, overdosing and persistence were formulated and critically compared. The Markov chain-Monte Carlo algorithm was used for identifying distribution parameters and for simulations. The model was challenged with medication event monitoring system data for 207 hypertension patients. The four Bayesian models demonstrated good mixing and convergence characteristics. The distributions of adherence, dose-timing deviations, overdosing and persistence were markedly non-normal and diverse. The models varied in complexity and the method used to incorporate inter-dependence with the preceding dose in the two-state Markov chain. The model that incorporated a cooperativity term for inter-dependence and a hyperbolic parameterization of the transition matrix probabilities was identified as the preferred model over the alternatives. The simulated probability densities from the model satisfactorily fit the observed probability distributions of adherence, dose-timing deviations, overdosing and persistence parameters in the sample patients. The model also adequately described the median and observed quartiles for these parameters. The Bayesian model for adherence provides a parsimonious, yet integrated, description of adherence in populations. It may find potential applications in clinical trial simulations and pharmacokinetic-pharmacodynamic modeling. PMID:26319548
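The two-state Markov chain with inter-dependence on the preceding dose can be illustrated by a small simulator; the transition probabilities below are hypothetical, and the paper's full model additionally layers von Mises dose-timing components and Bayesian population distributions on top of this chain:

```python
import numpy as np

def simulate_adherence(p_stay_take, p_stay_skip, n_days, rng):
    """Two-state Markov chain over daily dosing: state 1 = dose taken,
    state 0 = dose skipped; today's probability of taking the dose
    depends on whether the preceding dose was taken."""
    taken = np.empty(n_days, dtype=int)
    taken[0] = 1
    for t in range(1, n_days):
        p_take = p_stay_take if taken[t - 1] == 1 else 1.0 - p_stay_skip
        taken[t] = rng.random() < p_take
    return taken

rng = np.random.default_rng(0)
days = simulate_adherence(0.9, 0.6, 365, rng)   # made-up transition parameters
adherence = days.mean()                          # overall fraction of doses taken
```

In the population setting, each patient would get their own transition parameters drawn from hierarchical (Bayesian) distributions, which is what allows simulation of whole trial populations.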
A Bayesian sequential processor approach to spectroscopic portal system decisions
Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D
2007-07-31
The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper, the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed, along with the first results of our research.
NASA Astrophysics Data System (ADS)
Wang, Hui; Wellmann, Florian
2016-04-01
It is generally accepted that 3D geological models inferred from observed data will contain a certain amount of uncertainty. Uncertainty quantification and stochastic sampling methods are essential for gaining insight into the geological variability of subsurface structures. In the community of deterministic or traditional modelling techniques, classical geo-statistical methods using boreholes (hard data sets) are still the most widely accepted, although they suffer from certain drawbacks. Modern geophysical measurements provide us with regional data sets in 2D or 3D spaces, either directly from sensors or indirectly from inverse problem solving using observed signals (soft data sets). We propose a stochastic modelling framework to extract subsurface heterogeneity from multiple and complementary types of data. In the presented work, subsurface heterogeneity is considered as the "hidden link" among multiple spatial data sets as well as inversion results. Hidden Markov random field models are employed to perform 3D segmentation, which is the representation of the "hidden link". Finite Gaussian mixture models are adopted to characterize the statistical parameters of the multiple data sets. The uncertainties are quantified via a Gibbs sampling process under the Bayesian inferential framework. The proposed modelling framework is validated using two numerical examples. The model behavior and convergence are also well examined. It is shown that the presented stochastic modelling framework is a promising tool for 3D data fusion in the communities of geological modelling and geophysics.
Pan, Shin-Liang; Wu, Hui-Min; Yen, Amy Ming-Fang; Chen, Tony Hsiu-Hsi
2007-12-20
Few attempts have been made to model the dynamics of stroke-related disability. It is possible though, using panel data and multi-state Markov regression models that incorporate measured covariates and latent variables (random effects). This study aimed to model a series of functional transitions (following a first stroke) using a three-state Markov model with or without considering random effects. Several proportional hazards parameterizations were considered. A Bayesian approach that utilizes the Markov Chain Monte Carlo (MCMC) and Gibbs sampling functionality of WinBUGS (a Windows-based Bayesian software package) was developed to generate the marginal posterior distributions of the various transition parameters (e.g. the transition rates and transition probabilities). Model building and comparison were guided by reference to the deviance information criterion (DIC). Of the four proportional hazards models considered, exponential regression was preferred because it led to the smallest deviances. Adding random effects further improved the model fit. Of the covariates considered, only age, infarct size, and baseline functional status were significant. By using our final model we were able to make individual predictions about functional recovery in stroke patients. PMID:17676712
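The DIC used above for model comparison has a simple closed form from the posterior deviance samples (D = -2 log likelihood); a minimal sketch, with the deviance values below invented for illustration:

```python
import numpy as np

def dic(deviance_samples, deviance_at_mean):
    """Deviance information criterion: DIC = Dbar + pD, where Dbar is the
    posterior mean deviance, pD = Dbar - D(theta_bar) is the effective
    number of parameters, and smaller DIC indicates a preferred model."""
    dbar = np.mean(deviance_samples)
    pd = dbar - deviance_at_mean
    return dbar + pd, pd

# Hypothetical posterior deviance draws and deviance at the posterior mean
score, pd = dic([10.0, 12.0, 14.0], 10.0)
```

WinBUGS reports these quantities directly; the sketch just makes the arithmetic explicit.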
A Bayesian approach for combining thermal and hydraulic data
NASA Astrophysics Data System (ADS)
Woodbury, Allan D.
Incorporating temperatures into a modeling effort can take many forms, and both temperatures and hydrologic data can be combined qualitatively and quantitatively. In the latter category, the least formal would be calibration, followed by parameter estimation, and finally full inversion. This paper discusses information-based (specifically Bayesian) approaches to incorporating hydraulic parameters and potentials such as temperature and hydraulic head together in a formal procedure. The paper reviews the generalized inverse problem for groundwater and heat; discusses Bayesian solutions to inverse problems; and covers empirical and hierarchical Bayes, upscaling, cokriging, and Bayesian interpolation. Along these lines, a list of suggested references is provided, with suitable mention of benchmark papers, monographs, and textbooks on the subject. The technique described in this paper revolves around shallow, low-temperature groundwater flow systems, which entails steady 2-D fluid and heat flow. The methodology utilizes a perturbation technique to linearize and then couple the governing equations. For the perturbation approach to work, fluid properties must be decoupled from the temperature field. Once this is done, and through the finite element method, a block-linear system of data, kernel, and model parameters is developed. Two end-member examples and one set of joint inverse examples are presented. The two end-members are pure heat conduction (an application of Bayesian inversion to paleoclimate reconstruction) and a pure-groundwater problem, an example application to the Edwards Aquifer in Texas. Lastly, generic examples of combinations of transmissivity, hydraulic head, and temperatures are presented.
Experience With Bayesian Image Based Surface Modeling
NASA Technical Reports Server (NTRS)
Stutz, John C.
2005-01-01
Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.
Bayesian inference for OPC modeling
NASA Astrophysics Data System (ADS)
Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.
2016-03-01
The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDI), revealing champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not and outline continued experiments to vet the method.
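The affine-invariant ensemble sampler referenced above is built on the Goodman-Weare "stretch move". The sketch below is a minimal serial variant (production samplers such as emcee update complementary half-ensembles in parallel), shown on a toy standard-normal target rather than a lithographic model:

```python
import numpy as np

def stretch_move(walkers, log_prob, rng, a=2.0):
    """One sweep of the Goodman-Weare stretch move: each walker proposes a
    point on the line through a randomly chosen partner walker, with the
    stretch factor z drawn from g(z) ~ 1/sqrt(z) on [1/a, a]."""
    n, d = walkers.shape
    out = walkers.copy()
    lp = np.array([log_prob(w) for w in out])
    for k in range(n):
        j = rng.integers(n - 1)
        j = j if j < k else j + 1                        # partner != k
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a    # z in [1/a, a]
        y = out[j] + z * (out[k] - out[j])
        log_accept = (d - 1) * np.log(z) + log_prob(y) - lp[k]
        if np.log(rng.random()) < log_accept:
            out[k], lp[k] = y, log_prob(y)
    return out

rng = np.random.default_rng(1)
walkers = rng.normal(size=(30, 1))
log_prob = lambda x: -0.5 * float(x @ x)                 # standard normal target
for _ in range(300):
    walkers = stretch_move(walkers, log_prob, rng)
```

The move's affine invariance is what makes the sampler robust to strongly correlated parameters, a common situation in lithographic model fitting.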
A Bayesian approach to simultaneously quantify assignments and linguistic uncertainty
Chavez, Gregory M; Booker, Jane M; Ross, Timothy J
2010-10-07
Subject matter expert assessments can include both assignment and linguistic uncertainty. This paper examines assessments containing linguistic uncertainty associated with a qualitative description of a specific state of interest and the assignment uncertainty associated with assigning a qualitative value to that state. A Bayesian approach is examined to simultaneously quantify both assignment and linguistic uncertainty in the posterior probability. The approach is applied to a simplified damage assessment model involving both assignment and linguistic uncertainty. The utility of the approach and the conditions under which the approach is feasible are examined and identified.
Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies
ERIC Educational Resources Information Center
Chen, Jianshen; Kaplan, David
2015-01-01
Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…
Stock, Eileen M.; Kimbrel, Nathan A.; Meyer, Eric C.; Copeland, Laurel A.; Monte, Ralph; Zeber, John E.; Gulliver, Suzy Bird; Morissette, Sandra B.
2016-01-01
Many Veterans from the conflicts in Iraq and Afghanistan return home with physical and psychological impairments that impact their ability to enjoy normal life activities and diminish their quality of life (QoL). The present research aimed to identify predictors of QoL over an 8-month period using Bayesian model averaging (BMA), which is a statistical technique useful for maximizing power with smaller sample sizes. A sample of 117 Iraq and Afghanistan Veterans receiving care in a southwestern healthcare system was recruited, and BMA examined the impact of key demographics (e.g., age, gender), diagnoses (e.g., depression), and treatment modalities (e.g., individual therapy, medication) on QoL over time. Multiple imputation based on Gibbs sampling was employed for incomplete data (6.4% missingness). Average follow-up QoL scores were significantly lower than at baseline (73.2 initial vs 69.5 4-month and 68.3 8-month). Employment was associated with increased QoL during each follow-up, while posttraumatic stress disorder and black race were inversely related. Additionally, predictive models indicated that depression, income, treatment for a medical condition, and group psychotherapy were strong negative predictors of 4-month QoL but not 8-month QoL. PMID:24942672
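The abstract does not spell out how the BMA posterior model probabilities were computed; a common large-sample approximation, sketched here purely for illustration, derives them from each candidate model's BIC:

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    w_m is proportional to exp(-Delta_BIC_m / 2), where Delta_BIC_m is the
    difference from the best (smallest) BIC. This is a standard
    approximation, not necessarily the exact scheme used in the paper."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))
    return w / w.sum()
```

Predictor importance is then summarized by summing these weights over all models that include a given predictor, which is how BMA retains power with small samples.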
Bayesian Finite Mixtures for Nonlinear Modeling of Educational Data.
ERIC Educational Resources Information Center
Tirri, Henry; And Others
A Bayesian approach for finding latent classes in data is discussed. The approach uses finite mixture models to describe the underlying structure in the data and demonstrates that the possibility of using full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated…
Bayesian Estimation of the Logistic Positive Exponent IRT Model
ERIC Educational Resources Information Center
Bolfarine, Heleno; Bazan, Jorge Luis
2010-01-01
A Bayesian inference approach using Markov Chain Monte Carlo (MCMC) is developed for the logistic positive exponent (LPE) model proposed by Samejima and for a new skewed Logistic Item Response Theory (IRT) model, named Reflection LPE model. Both models lead to asymmetric item characteristic curves (ICC) and can be appropriate because a symmetric…
NASA Astrophysics Data System (ADS)
Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C. J.
2016-11-01
Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has
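The iterative ensemble Kalman method mentioned above repeatedly applies an analysis step of the following generic form. This sketch shows only a single perturbed-observation update with a scalar toy problem, not the paper's Reynolds-stress parameterization or iteration scheme:

```python
import numpy as np

def enkf_update(X, Y, obs, R, rng):
    """One ensemble Kalman analysis step. X: (d, Ne) parameter ensemble,
    Y: (m, Ne) predicted observations, obs: (m,) data, R: (m, m) observation
    covariance. Kalman gain K = Cxy (Cyy + R)^-1, applied with perturbed
    observations so the analysis ensemble keeps the right spread."""
    Ne = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    Cxy = Xa @ Ya.T / (Ne - 1)
    Cyy = Ya @ Ya.T / (Ne - 1)
    K = Cxy @ np.linalg.inv(Cyy + R)
    D = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), R, Ne).T
    return X + K @ (D - Y)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(1, 500))     # prior ensemble (toy scalar case)
Y = X.copy()                                 # identity observation operator
Xp = enkf_update(X, Y, np.array([2.0]), np.eye(1) * 0.1, rng)
```

In the RANS setting, X would hold the Reynolds-stress perturbation parameters and Y the model-predicted velocities at the sparse observation locations.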
A Bayesian approach for calibrating probability judgments
NASA Astrophysics Data System (ADS)
Firmino, Paulo Renato A.; Santana, Nielson A.
2012-10-01
Eliciting experts' opinions has been one of the main alternatives for addressing paucity of data. In the vanguard of this area is the development of calibration models (CMs). CMs are models dedicated to overcome miscalibration, i.e. judgment biases reflecting deficient strategies of reasoning adopted by the expert when inferring about an unknown. One of the main challenges of CMs is to determine how and when to intervene against miscalibration, in order to enhance the tradeoff between costs (time spent with calibration processes) and accuracy of the resulting models. The current paper dedicates special attention to this issue by presenting a dynamic Bayesian framework for monitoring, diagnosing, and handling miscalibration patterns. The framework is based on Beta-, Uniform, or Triangular-Bernoulli models and classes of judgmental calibration theories. Issues regarding the usefulness of the proposed framework are discussed and illustrated via simulation studies.
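The Beta-Bernoulli building block named in this abstract has a simple conjugate update, sketched here for monitoring an expert's hit rate; the prior and outcome values are illustrative only:

```python
def beta_update(alpha, beta, outcomes):
    """Conjugate Beta-Bernoulli update of an expert's hit rate: each outcome
    is 1 if the event the expert asserted actually occurred, 0 otherwise.
    The posterior is Beta(alpha + hits, beta + misses)."""
    hits = sum(outcomes)
    return alpha + hits, beta + len(outcomes) - hits

a, b = beta_update(1, 1, [1, 1, 0, 1])    # uniform prior, 3 hits in 4 trials
mean = a / (a + b)                         # posterior mean hit rate
```

Tracking how this posterior drifts away from the expert's stated confidence levels is one way such a framework can diagnose miscalibration and decide when intervention is worth its cost.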
Lindström, Tom; Grear, Daniel A; Buhnerkempe, Michael; Webb, Colleen T; Miller, Ryan S; Portacci, Katie; Wennergren, Uno
2013-01-01
Networks are rarely completely observed, and prediction of unobserved edges is an important problem, especially in disease spread modeling where networks are used to represent the pattern of contacts. We focus on a partially observed cattle movement network in the U.S. and present a method for scaling up to a full network based on Bayesian inference, with the aim of informing epidemic disease spread models in the United States. The observed network is a 10% state-stratified sample of Interstate Certificates of Veterinary Inspection that are required for interstate movement, describing approximately 20,000 movements from 47 of the contiguous states, with origins and destinations aggregated at the county level. We address how to scale up the 10% sample and predict unobserved intrastate movements based on observed movement distances. Edge prediction based on a distance kernel is not straightforward because the probability of movement does not always decline monotonically with distance due to underlying industry infrastructure. Hence, we propose a spatially explicit model where the probability of movement depends on distance, number of premises per county and historical imports of animals. Our model performs well in recapturing overall metrics of the observed network at the node level (U.S. counties), including degree centrality and betweenness, and performs better than randomized networks. Kernel generated movement networks also recapture observed global network metrics, including network size, transitivity, reciprocity, and assortativity better than randomized networks. In addition, predicted movements are similar to observed when aggregated at the state level (a broader geographic level relevant for policy) and are concentrated around states where key infrastructures, such as feedlots, are common. We conclude that the method generally performs well in predicting both coarse geographical patterns and network structure and is a promising method to generate full
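The three ingredients the model combines (distance, premises per county, historical imports) can be sketched with a simple normalized kernel; the exponential functional form and parameter values below are hypothetical stand-ins, since the paper's actual kernel is non-monotone in distance:

```python
import numpy as np

def movement_probs(dists, premises, imports, theta):
    """Hypothetical spatial kernel for unobserved shipments: movement rates
    decay with distance but scale with destination premises counts and
    historical imports; normalizing gives a probability distribution over
    candidate destination counties."""
    k_dist, b_prem, b_imp = theta
    rates = np.exp(-np.asarray(dists, float) / k_dist)
    rates *= np.asarray(premises, float) ** b_prem
    rates *= (1.0 + np.asarray(imports, float)) ** b_imp
    return rates / rates.sum()
```

In a Bayesian treatment, `theta` would be given priors and inferred from the observed 10% sample, after which the kernel fills in the unobserved intrastate edges.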
Latent features in similarity judgments: a nonparametric bayesian approach.
Navarro, Daniel J; Griffiths, Thomas L
2008-11-01
One of the central problems in cognitive science is determining the mental representations that underlie human inferences. Solutions to this problem often rely on the analysis of subjective similarity judgments, on the assumption that recognizing likenesses between people, objects, and events is crucial to everyday inference. One such solution is provided by the additive clustering model, which is widely used to infer the features of a set of stimuli from their similarities, on the assumption that similarity is a weighted linear function of common features. Existing approaches for implementing additive clustering often lack a complete framework for statistical inference, particularly with respect to choosing the number of features. To address these problems, this article develops a fully Bayesian formulation of the additive clustering model, using methods from nonparametric Bayesian statistics to allow the number of features to vary. We use this to explore several approaches to parameter estimation, showing that the nonparametric Bayesian approach provides a straightforward way to obtain estimates of both the number of features and their importance. PMID:18533818
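The additive clustering model's core assumption, similarity as a weighted linear function of common features, is easy to make concrete; the feature matrix and weights below are invented for illustration:

```python
import numpy as np

def predicted_similarity(F, w, c=0.0):
    """Additive clustering prediction: s_ij = sum_k w_k * f_ik * f_jk + c,
    where F is an (n, K) binary feature-ownership matrix and w holds the
    nonnegative feature weights. Self-similarities are not modeled."""
    S = (F * w) @ F.T + c
    np.fill_diagonal(S, 0.0)
    return S

F = np.array([[1, 0],
              [1, 1],
              [0, 1]])                      # which stimulus owns which feature
S = predicted_similarity(F, np.array([2.0, 3.0]))
```

The nonparametric Bayesian contribution described above is to let K itself vary, placing a prior (such as the Indian buffet process) over feature matrices like `F` instead of fixing their width in advance.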
NASA Astrophysics Data System (ADS)
Ogle, K.; Cable, J. M.; Huxman, T. E.
2006-12-01
The respiratory loss of carbon from terrestrial ecosystems is a major carbon flux affecting local, regional, and global carbon cycling. Such losses (e.g., soil CO2 efflux), however, are often overly simplified in biogeochemical models compared to processes such as photosynthesis. This discrepancy is partly due to the difficulty associated with partitioning soil respiration (or CO2 efflux) into its various components (e.g., autotrophic vs. heterotrophic). Different components operate at dissimilar temporal and spatial scales, thus estimation of their relative activity based on bulk soil efflux measurements is challenging. Hence, development of a robust, biophysically-inspired method for partitioning the different components is paramount to teasing apart the mechanisms underlying carbon source-sink dynamics within and across diverse landscapes. Towards this goal, we developed a semi-mechanistic Bayesian deconvolution modeling approach for partitioning soil respiration into its component sources. While the sources can be broadly categorized as autotrophic or heterotrophic, the fundamental sources of biogenic CO2 efflux arise from specific interactions between plants, micro-organisms, and the soil environment. Potential sources have been identified based on their different turnover rates and functional roles, including (1) activity of roots, (2) rhizomicrobial (e.g., mycorrhiza) respiration, (3) microbial decomposition of plant tissues, (4) microbial activity primed by root exudation, and (5) microbial decomposition of soil organic matter. The relative contribution of each source to soil CO2 efflux can vary within the soil matrix, depending on spatial and temporal variability in soil properties, resource and substrate availability, and microclimate. Our Bayesian deconvolution framework allows for simultaneous analysis of multiple data sources related to soil respiration dynamics, and the data are analyzed within the context of process-based models. The data include
Parameter estimation of general regression neural network using Bayesian approach
NASA Astrophysics Data System (ADS)
Choir, Achmad Syahrul; Prasetyo, Rindang Bangun; Ulama, Brodjol Sutijo Suprih; Iriawan, Nur; Fitriasari, Kartika; Dokhi, Mohammad
2016-02-01
General Regression Neural Network (GRNN) has been applied to a large number of forecasting/prediction problems. Generally, there are two types of GRNN: GRNN based on kernel density, and Mixture-Based GRNN (MBGRNN), which is based on an adaptive mixture model. The main problem in GRNN modeling lies in how its parameters are estimated. In this paper, we propose a Bayesian approach and its computation using Markov Chain Monte Carlo (MCMC) algorithms for estimating the MBGRNN parameters. The method is applied in a simulation study, in which its performance is measured using MAPE, MAE, and RMSE. The application of the Bayesian method to estimate MBGRNN parameters using MCMC is straightforward, but it needs many iterations to achieve convergence.
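MCMC estimation of the kind the paper applies to MBGRNN can be illustrated with a random-walk Metropolis sampler for a much simpler likelihood, the mean of a unit-variance normal under a flat prior (data and settings below are invented):

```python
import math
import random

def metropolis(data, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis sketch for the mean of a unit-variance normal
    under a flat prior; MBGRNN parameters would be sampled the same way,
    just with a different likelihood."""
    rng = random.Random(seed)
    mu = 0.0
    samples = []

    def log_lik(m):
        return -0.5 * sum((x - m) ** 2 for x in data)

    for _ in range(n_iter):
        prop = mu + rng.gauss(0, step)          # propose a move
        if math.log(rng.random()) < log_lik(prop) - log_lik(mu):
            mu = prop                           # accept
        samples.append(mu)
    return samples

data = [1.8, 2.1, 2.3, 1.9, 2.2]
draws = metropolis(data)[1000:]                 # drop burn-in
post_mean = sum(draws) / len(draws)
```

The closing remark of the abstract shows up even here: the chain needs burn-in and many iterations before the posterior mean stabilizes.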
Bayesian Methods for High Dimensional Linear Models
Mallick, Himel; Yi, Nengjun
2013-01-01
In this article, we present a selective overview of some recent developments in Bayesian model and variable selection methods for high dimensional linear models. While most of the reviews in literature are based on conventional methods, we focus on recently developed methods, which have proven to be successful in dealing with high dimensional variable selection. First, we give a brief overview of the traditional model selection methods (viz. Mallow’s Cp, AIC, BIC, DIC), followed by a discussion on some recently developed methods (viz. EBIC, regularization), which have occupied the minds of many statisticians. Then, we review high dimensional Bayesian methods with a particular emphasis on Bayesian regularization methods, which have been used extensively in recent years. We conclude by briefly addressing the asymptotic behaviors of Bayesian variable selection methods for high dimensional linear models under different regularity conditions. PMID:24511433
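The traditional criteria surveyed above trade goodness of fit against model size; AIC and BIC differ only in the penalty term. A minimal sketch with toy numbers:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: -2 log-likelihood + 2 parameters."""
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    """Bayesian (Schwarz) information criterion; penalty grows with n."""
    return -2 * loglik + k * math.log(n)

# With n = 1000 the BIC penalty per parameter (log 1000 ~ 6.9) exceeds the
# AIC penalty (2), so BIC tends to prefer smaller models.
a = aic(-500.0, k=10)
b = bic(-500.0, k=10, n=1000)
```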
2014-01-01
Background Transmission models can aid understanding of disease dynamics and are useful in testing the efficiency of control measures. The aim of this study was to formulate an appropriate stochastic Susceptible-Infectious-Resistant/Carrier (SIR) model for Salmonella Typhimurium in pigs and thus estimate the transmission parameters between states. Results The transmission parameters were estimated using data from a longitudinal study of three Danish farrow-to-finish pig herds known to be infected. A Bayesian model framework was proposed, which comprised Binomial components for the transition from susceptible to infectious and from infectious to carrier, and a Poisson component for carrier to infectious. Cohort random effects were incorporated into these models to allow for unobserved cohort-specific variables as well as unobserved sources of transmission, thus enabling a more realistic estimation of the transmission parameters. In the case of the transition from susceptible to infectious, the cohort random effects were also time varying. The number of infectious pigs not detected by the parallel testing was treated as unknown, and the probability of non-detection was estimated using information about the sensitivity and specificity of the bacteriological and serological tests. The estimate of the transmission rate from susceptible to infectious was 0.33 [0.06, 1.52], from infectious to carrier was 0.18 [0.14, 0.23], and from carrier to infectious was 0.01 [0.0001, 0.04]. The estimate for the basic reproduction ratio (R0) was 1.91 [0.78, 5.24]. The probability of non-detection was estimated to be 0.18 [0.12, 0.25]. Conclusions The proposed framework for stochastic SIR models was successfully implemented to estimate transmission rate parameters for Salmonella Typhimurium in swine field data. R0 was 1.91, implying that there was dissemination of the infection within pigs of the same cohort. There was significant temporal-cohort variability, especially at the
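As a quick illustrative check (not the authors' computation, which estimated R0 jointly within the Bayesian model): for a simple SIR process the basic reproduction ratio is approximately the ratio of the infection rate to the removal rate, which with the reported point estimates lands close to the reported 1.91:

```python
# Point estimates from the abstract; the small gap to the reported R0 of
# 1.91 reflects that R0 was estimated jointly, not as a simple ratio.
beta = 0.33    # transmission rate, susceptible -> infectious
gamma = 0.18   # removal rate, infectious -> carrier
r0 = beta / gamma
```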
Bayesian Modeling of a Human MMORPG Player
NASA Astrophysics Data System (ADS)
Synnaeve, Gabriel; Bessière, Pierre
2011-03-01
This paper describes an application of Bayesian programming to the control of an autonomous avatar in a multiplayer role-playing game (the example is based on World of Warcraft). We model a particular task, which consists of choosing what to do and which target to select in a situation where allies and foes are present. We explain the model in Bayesian programming and show how we could learn the conditional probabilities from data gathered during human-played sessions.
A Bayesian ensemble approach for epidemiological projections.
Lindström, Tom; Tildesley, Michael; Webb, Colleen
2015-04-01
Mathematical models are powerful tools for epidemiology and can be used to compare control actions. However, different models and model parameterizations may provide different predictions of outcomes. In other fields of research, ensemble modeling has been used to combine multiple projections. We explore the possibility of applying such methods to epidemiology by adapting Bayesian techniques developed for climate forecasting. We exemplify the implementation with single-model ensembles based on different parameterizations of the Warwick model, run for the 2001 United Kingdom foot and mouth disease outbreak, and compare the efficacy of different control actions. This allows us to investigate the effect that discrepancy among projections based on different modeling assumptions has on the ensemble prediction. A sensitivity analysis showed that the choice of prior can have a pronounced effect on the posterior estimates of quantities of interest, in particular for ensembles with large discrepancy among projections. However, by using a hierarchical extension of the method we show that prior sensitivity can be circumvented. We further extend the method to include a priori beliefs about different modeling assumptions and demonstrate that the effect of this can have different consequences depending on the discrepancy among projections. We propose that the method is a promising analytical tool for ensemble modeling of disease outbreaks.
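A Bayesian ensemble in the spirit described above down-weights projections with larger uncertainty. A minimal precision-weighted sketch with invented numbers (the actual method is hierarchical and estimates the weights from data):

```python
# Combine projections from differently parameterized models, down-weighting
# the more uncertain ones.  Numbers are invented for illustration, not
# foot-and-mouth outbreak projections.
projections = [120.0, 150.0, 135.0]   # e.g., predicted outbreak sizes
variances = [100.0, 400.0, 225.0]     # each model's projection uncertainty

weights = [1.0 / v for v in variances]          # precision weights
ensemble = sum(w * p for w, p in zip(weights, projections)) / sum(weights)
```

The ensemble estimate sits below the unweighted mean because the smallest projection carries the smallest variance.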
A hierarchical variational Bayesian approximation approach in acoustic imaging
NASA Astrophysics Data System (ADS)
Chu, Ning; Mohammad-Djafari, Ali; Gac, Nicolas; Picheral, José
2015-01-01
Acoustic imaging is a powerful technique for acoustic source localization and power reconstruction from limited noisy measurements at microphone sensors, but it inevitably confronts a very ill-posed inverse problem, which causes unexpected solution uncertainty. Recently, Bayesian inference methods using sparse priors have been effectively investigated. In this paper, we propose a hierarchical variational Bayesian approximation for robust acoustic imaging. We explore Student-t priors with heavy tails to enforce source sparsity and to model non-Gaussian noise, respectively. Compared to conventional methods, the proposed approach achieves higher spatial resolution and a wider dynamic range of source powers for real data from an automobile wind tunnel.
Coggins, Lewis G; Bacheler, Nathan M; Gwinn, Daniel C
2014-01-01
Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but also supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than for traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics, lending credence to previous characterizations of red snapper as a reef habitat generalist. This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors
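The reported doubling of detection is what simultaneous independent gears provide: if each device detects independently, the probability that at least one detects is 1 - (1 - p_trap)(1 - p_cam). The probabilities below are illustrative, not the study's estimates:

```python
# Two simultaneous gears: trap detects with probability p_trap, camera with
# p_cam.  Assuming independence, at least one detects with
# 1 - (1 - p_trap) * (1 - p_cam).  Values are invented so that the
# combination roughly doubles trap-only detection, as the abstract reports.
p_trap, p_cam = 0.30, 0.45
p_combined = 1 - (1 - p_trap) * (1 - p_cam)
```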
Technical note: Bayesian calibration of dynamic ruminant nutrition models.
Reed, K F; Arhonditsis, G B; France, J; Kebreab, E
2016-08-01
Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling.
Bayesian failure probability model sensitivity study. Final report
Not Available
1986-05-30
The Office of the Manager, National Communications System (OMNCS) has developed a system-level approach for estimating the effects of High-Altitude Electromagnetic Pulse (HEMP) on the connectivity of telecommunications networks. This approach incorporates a Bayesian statistical model which estimates the HEMP-induced failure probabilities of telecommunications switches and transmission facilities. The purpose of this analysis is to address the sensitivity of the Bayesian model. This is done by systematically varying two model input parameters--the number of observations, and the equipment failure rates. Throughout the study, a non-informative prior distribution is used. The sensitivity of the Bayesian model to the noninformative prior distribution is investigated from a theoretical mathematical perspective.
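The sensitivity question in the report, how posterior failure probabilities respond to the number of observations under a non-informative prior, can be sketched with a conjugate Beta-Binomial update (a uniform Beta(1, 1) prior stands in for the non-informative prior; the counts are invented):

```python
def posterior_mean_failure(failures, trials, a=1.0, b=1.0):
    """Posterior mean of a failure probability under a Beta(a, b) prior
    after observing `failures` in `trials` Bernoulli observations."""
    return (a + failures) / (a + b + trials)

# Same observed failure fraction (0.2), different numbers of observations:
# the prior's pull toward 0.5 fades as observations accumulate.
few_obs = posterior_mean_failure(failures=2, trials=10)
many_obs = posterior_mean_failure(failures=20, trials=100)
```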
Bayesian Analysis of Order-Statistics Models for Ranking Data.
ERIC Educational Resources Information Center
Yu, Philip L. H.
2000-01-01
Studied the order-statistics models, extending the usual normal order-statistics model into one in which the underlying random variables followed a multivariate normal distribution. Used a Bayesian approach and the Gibbs sampling technique. Applied the proposed method to analyze presidential election data from the American Psychological…
Two-Stage Bayesian Model Averaging in Endogenous Variable Models.
Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E
2014-01-01
Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
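The 2SLS estimator that 2SBMA extends can be sketched in a few lines: regress the endogenous regressor on the instrument, then regress the outcome on the fitted values. The simulated data below are illustrative, with a known structural coefficient of 2.0 that OLS would overestimate because of the confounder:

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Textbook 2SLS sketch (the frequentist estimator 2SBMA builds on):
    first stage regresses X on instruments Z, second stage regresses y on
    the fitted X."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 1))                  # instrument
u = rng.normal(size=(n, 1))                  # unobserved confounder
x = z + u + 0.1 * rng.normal(size=(n, 1))    # endogenous regressor
y = 2.0 * x + u + 0.1 * rng.normal(size=(n, 1))
beta_2sls = two_stage_least_squares(y, x, z)[0, 0]
```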
Olmos, Antonio; Bertolini, Edson; Ruiz-García, Ana B; Martínez, Carmen; Peiró, Rosa; Vidal, Eduardo
2016-05-01
Grapevine leafroll-associated virus 3 (GLRaV-3) has a worldwide distribution and is the most economically important virus that causes grapevine leafroll disease. Reliable, sensitive, and specific methods are required for the detection of the pathogen in order to assure the production of healthy plant material and control of the disease. Although different serological and nucleic acid-based methods have been developed for the detection of GLRaV-3, diagnostic parameters have not been established, and there is no gold standard method. Therefore, the main aim of this work was to determine the sensitivity, specificity, and likelihood ratios of three commonly used methods, including one serological test (double-antibody sandwich enzyme-linked immunosorbent assay [DAS-ELISA]) and two nucleic acid-based techniques (spot and conventional real-time reverse transcription-polymerase chain reaction [RT-PCR]). Latent class models using a Bayesian approach have been applied to determine diagnostic test parameters and to facilitate decision-making regarding diagnostic test selection. Statistical analysis has been based on the results of a total of 281 samples, which were collected during the dormant period from three different populations. The best-fit model out of the 49 implemented models revealed that DAS-ELISA was the most specific method (value = 0.99) and provided the highest degree of confidence in positive results. Conversely, conventional real-time RT-PCR was the most sensitive method (value = 0.98) and produced the highest degree of confidence in negative results. Furthermore, the estimation of likelihood ratios showed that in populations with low GLRaV-3 prevalence the most appropriate method could be DAS-ELISA, while conventional real-time RT-PCR could be the most appropriate method in medium or high prevalence populations. Combining both techniques significantly increases detection accuracy. The flexibility and power of Bayesian latent class models open new
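The likelihood ratios discussed above follow directly from sensitivity and specificity: LR+ = sens / (1 - spec) and LR- = (1 - sens) / spec. The sketch pairs the reported DAS-ELISA specificity (0.99) with an assumed sensitivity of 0.90 purely for illustration:

```python
# Diagnostic likelihood ratios.  Specificity 0.99 is the DAS-ELISA estimate
# from the abstract; sensitivity 0.90 is an assumed value for illustration.
sens, spec = 0.90, 0.99
lr_pos = sens / (1 - spec)      # how much a positive result raises the odds
lr_neg = (1 - sens) / spec      # how much a negative result lowers the odds
```

The large LR+ is why a highly specific test gives the most confidence in positive results, matching the abstract's conclusion.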
ERIC Educational Resources Information Center
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
Bayesian analysis of the backreaction models
Kurek, Aleksandra; Bolejko, Krzysztof; Szydlowski, Marek
2010-03-15
We present a Bayesian analysis of four different types of backreaction models, which are based on the Buchert equations. In this approach, one considers a solution to the Einstein equations for a general matter distribution and then an average of various observable quantities is taken. Such an approach became of considerable interest when it was shown that it could lead to agreement with observations without resorting to dark energy. In this paper we compare the {Lambda}CDM model and the backreaction models with type Ia supernovae, baryon acoustic oscillations, and cosmic microwave background data, and find that the former is favored. However, the tested models were based on some particular assumptions about the relation between the average spatial curvature and the backreaction, as well as the relation between the curvature and curvature index. In this paper we modified the latter assumption, leaving the former unchanged. We find that, by varying the relation between the curvature and curvature index, we can obtain a better fit. Therefore, some further work is still needed--in particular, the relation between the backreaction and the curvature should be revisited in order to fully determine the feasibility of the backreaction models to mimic dark energy.
Testing Bayesian models of human coincidence timing.
Miyazaki, Makoto; Nozaki, Daichi; Nakajima, Yasoichi
2005-07-01
A sensorimotor control task often requires an accurate estimation of the timing of the arrival of an external target (e.g., when hitting a pitched ball). Conventional studies of human timing processes have ignored the stochastic features of target timing: e.g., the speed of the pitched ball is not generally constant, but is variable. Interestingly, based on Bayesian theory, it has been recently shown that the human sensorimotor system achieves the optimal estimation by integrating sensory information with prior knowledge of the probabilistic structure of the target variation. In this study, we tested whether Bayesian integration is also implemented while performing a coincidence-timing type of sensorimotor task by manipulating the trial-by-trial variability (i.e., the prior distribution) of the target timing. As a result, within several hundred trials of learning, subjects were able to generate systematic timing behavior according to the width of the prior distribution, as predicted by the optimal Bayesian model. Considering the previous studies showing that the human sensorimotor system uses Bayesian integration in spacing and force-grading tasks, our result indicates that Bayesian integration is fundamental to all aspects of human sensorimotor control. Moreover, it was noteworthy that the subjects could adjust their behavior both when the prior distribution was switched from wide to narrow and vice versa, although the adjustment was slower in the former case. Based on a comparison with observations in a previous study, we discuss the flexibility and adaptability of Bayesian sensorimotor learning.
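Optimal Bayesian integration of a sensory observation with a Gaussian prior over target timings reduces to a precision-weighted average: a narrow prior pulls the estimate toward the prior mean, while a wide prior leaves it near the observation. All timing values (in ms) are invented for illustration:

```python
def bayes_timing_estimate(obs, obs_var, prior_mean, prior_var):
    """Optimal Gaussian fusion: posterior mean is a precision-weighted
    average of the sensory observation and the prior over target timings."""
    w = prior_var / (prior_var + obs_var)   # weight on the observation
    return w * obs + (1 - w) * prior_mean

# Narrow prior (small variance): estimate pulled strongly toward 500 ms.
narrow = bayes_timing_estimate(obs=600, obs_var=100, prior_mean=500, prior_var=25)
# Wide prior: estimate stays close to the 600 ms observation.
wide = bayes_timing_estimate(obs=600, obs_var=100, prior_mean=500, prior_var=2500)
```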
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models take the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over classical methods in that the parameters to be estimated are treated as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical and Bayesian methods showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against both uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
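A Weibull-type height-diameter curve of the kind selected in the study has the form H = 1.3 + a(1 - e^{-bD})^c, where D is diameter at breast height. The parameter values below are placeholders, not the posterior estimates from the paper:

```python
import math

def weibull_height(d, a=25.0, b=0.05, c=1.2):
    """Weibull-type height-diameter curve, H = 1.3 + a * (1 - exp(-b*D))**c,
    with height H in meters and diameter D in cm.  Parameters a, b, c are
    illustrative stand-ins for the fitted posterior estimates."""
    return 1.3 + a * (1 - math.exp(-b * d)) ** c

h20 = weibull_height(20.0)   # predicted height at D = 20 cm
h40 = weibull_height(40.0)   # predicted height at D = 40 cm
```

In the Bayesian fit, a, b, and c get priors and the posterior yields credible bands around such predictions.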
Bayesian modeling of flexible cognitive control
Jiang, Jiefeng; Heller, Katherine; Egner, Tobias
2014-01-01
“Cognitive control” describes endogenous guidance of behavior in situations where routine stimulus-response associations are suboptimal for achieving a desired goal. The computational and neural mechanisms underlying this capacity remain poorly understood. We examine recent advances stemming from the application of a Bayesian learner perspective that provides optimal prediction for control processes. In reviewing the application of Bayesian models to cognitive control, we note that an important limitation in current models is a lack of a plausible mechanism for the flexible adjustment of control over conflict levels changing at varying temporal scales. We then show that flexible cognitive control can be achieved by a Bayesian model with a volatility-driven learning mechanism that modulates dynamically the relative dependence on recent and remote experiences in its prediction of future control demand. We conclude that the emergent Bayesian perspective on computational mechanisms of cognitive control holds considerable promise, especially if future studies can identify neural substrates of the variables encoded by these models, and determine the nature (Bayesian or otherwise) of their neural implementation. PMID:24929218
Bayesian model selection for LISA pathfinder
NASA Astrophysics Data System (ADS)
Karnesis, Nikolaos; Nofrarias, Miquel; Sopuerta, Carlos F.; Gibert, Ferran; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Ferraioli, Luigi; Hewitson, Martin; Hueller, Mauro; Korsakova, Natalia; McNamara, Paul W.; Plagnol, Eric; Vitale, Stefano
2014-03-01
The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the eLISA concept. The data analysis team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment onboard the LPF. These models are used for simulations, but, more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the data analysis team is to identify the physical effects that contribute significantly to the properties of the instrument noise. A way of approaching this problem is to recover the essential parameters of a LTP model fitting the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes factor between two competing models. In our analysis, we use three main different methods to estimate it: the reversible jump Markov chain Monte Carlo method, the Schwarz criterion, and the Laplace approximation. They are applied to simulated LPF experiments in which the most probable LTP model that explains the observations is recovered. The same type of analysis presented in this paper is expected to be followed during flight operations. Moreover, the correlation of the output of the aforementioned methods with the design of the experiment is explored.
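The Schwarz criterion mentioned above approximates the log Bayes factor between two competing models from their maximized log-likelihoods, parameter counts, and sample size: log BF12 ≈ 0.5 (BIC2 - BIC1). Toy numbers for illustration:

```python
import math

def schwarz_log_bayes_factor(loglik1, k1, loglik2, k2, n):
    """Schwarz (BIC) approximation to the log Bayes factor of model 1
    versus model 2: 0.5 * (BIC2 - BIC1)."""
    bic1 = -2 * loglik1 + k1 * math.log(n)
    bic2 = -2 * loglik2 + k2 * math.log(n)
    return 0.5 * (bic2 - bic1)

# Toy numbers: the richer model gains little likelihood, so the complexity
# penalty dominates and the simpler model is favored (log BF > 0).
log_bf = schwarz_log_bayes_factor(loglik1=-100.0, k1=3,
                                  loglik2=-99.0, k2=6, n=1000)
```

This is exactly the "simplest model that efficiently explains the observations" logic: extra LTP-model parameters must buy enough likelihood to beat the penalty.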
Involving Stakeholders in Building Integrated Fisheries Models Using Bayesian Methods
NASA Astrophysics Data System (ADS)
Haapasaari, Päivi; Mäntyniemi, Samu; Kuikka, Sakari
2013-06-01
A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame the management problem of the herring fishery and elucidate what kind of causalities the different views involve. The paper combines these two tasks to assess the suitability of the methodological choices to participatory modeling in terms of both a modeling tool and participation mode. The paper also assesses the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective on knowledge, which is fundamental to Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology provides a flexible tool that can be adapted to different kinds of needs and challenges of participatory modeling. The ability of the approach to deal with small data sets makes it cost-effective in participatory contexts. However, the BMA methodology used in modeling the biological uncertainties is so complex that it needs further development before it can be introduced to wider use in participatory contexts.
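The Bayesian model averaging step described above combines competing model structures by weighting each with its posterior model probability. As a minimal, self-contained sketch of the weighting idea (not the meta-model used in the study), the BIC approximation to the marginal likelihood is a common shortcut; the log-likelihoods, parameter counts, and sample size below are invented:

```python
import numpy as np

def bic_weights(log_likelihoods, n_params, n_obs):
    """Approximate posterior model probabilities from BIC scores.

    BIC = -2*logL + k*log(n); weights are proportional to exp(-BIC/2),
    which approximates the marginal likelihood under a unit-information prior.
    """
    ll = np.asarray(log_likelihoods, dtype=float)
    k = np.asarray(n_params, dtype=float)
    bic = -2.0 * ll + k * np.log(n_obs)
    # subtract the minimum BIC for numerical stability before exponentiating
    rel = np.exp(-0.5 * (bic - bic.min()))
    return rel / rel.sum()

# Two hypothetical models of the same 100 observations: the second fits
# slightly better but uses three extra parameters.
w = bic_weights(log_likelihoods=[-210.0, -207.5], n_params=[2, 5], n_obs=100)
print(w)  # model 1 retains most of the posterior mass despite the worse fit
```

In full BMA, a quantity of interest is then averaged across models using these weights.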
Dynamic Bayesian Network Modeling of Game Based Diagnostic Assessments. CRESST Report 837
ERIC Educational Resources Information Center
Levy, Roy
2014-01-01
Digital games offer an appealing environment for assessing student proficiencies, including skills and misconceptions in a diagnostic setting. This paper proposes a dynamic Bayesian network modeling approach for observations of student performance from an educational video game. A Bayesian approach to model construction, calibration, and use in…
A Dynamic Bayesian Network Approach to Location Prediction in Ubiquitous Computing Environments
NASA Astrophysics Data System (ADS)
Lee, Sunyoung; Lee, Kun Chang; Cho, Heeryon
The ability to predict the future contexts of users significantly improves service quality and user satisfaction in ubiquitous computing environments. Location prediction is particularly useful because ubiquitous computing environments can dynamically adapt their behaviors according to a user's future location. In this paper, we present an inductive approach to recognizing a user's location by establishing a dynamic Bayesian network model. The dynamic Bayesian network model has been evaluated with a set of contextual data collected from undergraduate students. The evaluation result suggests that a dynamic Bayesian network model offers significant predictive power.
Evaluating Individualized Reading Programs: A Bayesian Model.
ERIC Educational Resources Information Center
Maxwell, Martha
Simple Bayesian approaches can be applied to answer specific questions in evaluating an individualized reading program. A small reading and study skills program located in the counseling center of a major research university collected and compiled data on student characteristics such as class, number of sessions attended, grade point average, and…
Newton, Einstein, Jeffreys and Bayesian model selection
NASA Astrophysics Data System (ADS)
Chettri, Samir; Batchelor, David; Campbell, William; Balakrishnan, Karthik
2005-11-01
In [1], Jefferys and Berger apply Bayesian model selection to the problem of choosing between rival theories, in particular between Einstein's theory of general relativity (GR) and Newtonian gravity (NG). [1] presents a debate between Harold Jeffreys and Charles Poor regarding the observed 43''/century anomalous perihelion precession of Mercury. GR made a precise prediction of 42.98''/century, while proponents of NG suggested several physical mechanisms that were eventually refuted, with the exception of a modified inverse square law. Using Bayes Factors (BF) and data available in 1921, [1] shows that GR is preferable to NG by a factor of about 25 to 1. A scale for BF used by Jeffreys suggests that this is positive to strong evidence for GR over modified NG, but it is not very strong or even overwhelming. In this work we calculate the BF for the period 1921 till 1993. By 1960 we see that the BF, due to better data gathering techniques and advances in technology, had reached a factor of greater than 100 to 1, making GR strongly preferable to NG, and by 1990 the BF reached 1000:1. Ironically, while the BF had reached a state of near certainty by 1960, rival theories of gravitation were on the rise - notably the Brans-Dicke (BD) scalar-tensor theory of gravity. The BD theory is postulated in such a way that for small positive values of a scalar parameter ω the BF would favor GR, while the BF would approach unity as ω grows larger, at which point either theory would be preferred, i.e., it is a theory that cannot lose. Does this mean Bayesian model selection needs to be overthrown? This points to the need for cogent prior information guided by physics and physical experiment.
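The Bayes factor contrast described here, a sharp GR point prediction against a diffuse modified-NG alternative, can be sketched numerically. The observation values, error bars, and flat prior range below are invented for illustration and are not the historical 1921 data:

```python
import numpy as np
from scipy.stats import norm

def bayes_factor(obs, sigma, gr_pred=42.98, alt_lo=0.0, alt_hi=100.0, n=100001):
    """Bayes factor: GR's point prediction vs. a flat prior on [alt_lo, alt_hi].

    GR's marginal likelihood is the Gaussian likelihood evaluated at its point
    prediction; the alternative's averages the likelihood over the flat prior
    on the precession. All numerical values here are illustrative.
    """
    m_gr = norm.pdf(obs, loc=gr_pred, scale=sigma)
    grid = np.linspace(alt_lo, alt_hi, n)
    m_alt = norm.pdf(obs, loc=grid, scale=sigma).mean()  # average over flat prior
    return m_gr / m_alt

# A sharper measurement concentrates the evidence in GR's favor.
print(bayes_factor(obs=43.1, sigma=0.5))   # roughly 80:1 in favor of GR
print(bayes_factor(obs=43.1, sigma=5.0))   # noisier measurement: ~8:1
```

As the abstract notes for the real data, shrinking the measurement error is what drives the Bayes factor sharply in GR's favor over time.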
A variational Bayesian approach for inverse problems with skew-t error distributions
NASA Astrophysics Data System (ADS)
Guha, Nilabja; Wu, Xiaoqing; Efendiev, Yalchin; Jin, Bangti; Mallick, Bani K.
2015-11-01
In this work, we develop a novel robust Bayesian approach to inverse problems with data errors following a skew-t distribution. A hierarchical Bayesian model is developed in the inverse problem setup. The Bayesian approach contains a natural mechanism for regularization in the form of a prior distribution, and a LASSO-type prior distribution is used to induce strong sparsity. We propose a variational-type algorithm that minimizes the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The proposed method is illustrated on several two-dimensional linear and nonlinear inverse problems, e.g. the Cauchy problem and the permeability estimation problem.
An approach to quantifying the efficiency of a Bayesian filter
Technology Transfer Automated Retrieval System (TEKTRAN)
Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation applications require that simplifying assumptions be made about the prior and posterior state distributions...
Bayesian modeling of differential gene expression.
Lewin, Alex; Richardson, Sylvia; Marshall, Clare; Glazier, Anne; Aitman, Tim
2006-03-01
We present a Bayesian hierarchical model for detecting differentially expressing genes that includes simultaneous estimation of array effects, and show how to use the output for choosing lists of genes for further investigation. We give empirical evidence that expression-level dependent array effects are needed, and explore different nonlinear functions as part of our model-based approach to normalization. The model includes gene-specific variances but imposes some necessary shrinkage through a hierarchical structure. Model criticism via posterior predictive checks is discussed. Modeling the array effects (normalization) simultaneously with differential expression gives fewer false positive results. To choose a list of genes, we propose to combine various criteria (for instance, fold change and overall expression) into a single indicator variable for each gene. The posterior distribution of these variables is used to pick the list of genes, thereby taking into account uncertainty in parameter estimates. In an application to mouse knockout data, Gene Ontology annotations over- and underrepresented among the genes on the chosen list are consistent with biological expectations.
A Nonparametric Bayesian Approach For Emission Tomography Reconstruction
NASA Astrophysics Data System (ADS)
Barat, Éric; Dautremer, Thomas
2007-11-01
We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution—normalized emission intensity of the spatial Poisson process—is considered as a spatial probability density and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R^k (for reconstruction in k dimensions) and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function such that there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals like the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.
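The Dirichlet Process Mixture prior at the heart of this approach can be sketched via its stick-breaking construction. The fragment below draws one random mixture density from a truncated DPM in one dimension, with a fixed unit component variance standing in for the paper's Normal-Inverse Wishart base; all hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_dp_mixture(alpha=2.0, n_atoms=200, base_mu=0.0, base_sd=3.0):
    """Draw one random mixture from a (truncated) Dirichlet process prior.

    Stick-breaking: w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha);
    atom locations are drawn from a Normal base distribution.
    """
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    mu = rng.normal(base_mu, base_sd, size=n_atoms)
    return w, mu

def density(x, w, mu, sd=1.0):
    """Evaluate the mixture-of-normals density at points x."""
    x = np.atleast_1d(x)[:, None]
    return (w * np.exp(-0.5 * ((x - mu) / sd) ** 2)
            / (sd * np.sqrt(2 * np.pi))).sum(axis=1)

w, mu = draw_dp_mixture()
xs = np.linspace(-8, 8, 5)
print(density(xs, w, mu))  # one random draw of a continuous intensity profile
```

Each draw is a different continuous density, which is why posterior draws from such a model support variance and interval estimates without any pixel grid.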
Application of the Bayesian dynamic survival model in medicine.
He, Jianghua; McGee, Daniel L; Niu, Xufeng
2010-02-10
The Bayesian dynamic survival model (BDSM), a time-varying coefficient survival model from the Bayesian perspective, was proposed in the early 1990s but has not been widely used or discussed. In this paper, we describe the model structure of the BDSM and introduce two estimation approaches for BDSMs: the Markov Chain Monte Carlo (MCMC) approach and the linear Bayesian (LB) method. The MCMC approach estimates model parameters through sampling and is computationally intensive. With the newly developed geoadditive survival models and software BayesX, the BDSM is available for general applications. The LB approach is easier in terms of computation but it requires the prespecification of some unknown smoothing parameters. In a simulation study, we use the LB approach to show the effects of smoothing parameters on the performance of the BDSM and propose an ad hoc method for identifying appropriate values for those parameters. We also demonstrate the performance of the MCMC approach compared with the LB approach and a penalized partial likelihood method available in software R packages. A gastric cancer trial is utilized to illustrate the application of the BDSM. PMID:20014356
Survey of Bayesian Models for Modelling of Stochastic Temporal Processes
Ng, B
2006-10-12
This survey gives an overview of popular generative models used in the modeling of stochastic temporal systems. In particular, this survey is organized into two parts. The first part discusses the discrete-time representations of dynamic Bayesian networks and dynamic relational probabilistic models, while the second part discusses the continuous-time representation of continuous-time Bayesian networks.
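The simplest discrete-time model in this family is a hidden Markov model, a two-slice dynamic Bayesian network, whose state estimate follows from the forward filtering recursion. A minimal sketch with invented transition and emission probabilities:

```python
import numpy as np

# Two-state hidden Markov model: the simplest discrete-time dynamic Bayesian
# network. Transition and emission matrices are made up for illustration.
T = np.array([[0.9, 0.1],    # T[i, j] = P(state_t = j | state_{t-1} = i)
              [0.2, 0.8]])
E = np.array([[0.7, 0.3],    # E[s, o] = P(obs = o | state = s)
              [0.1, 0.9]])

def forward_filter(obs, prior=np.array([0.5, 0.5])):
    """Return P(state_t | obs_1..t) for each t (forward recursion)."""
    belief = prior.copy()
    out = []
    for o in obs:
        belief = T.T @ belief           # predict: propagate through dynamics
        belief = belief * E[:, o]       # update: weight by observation likelihood
        belief = belief / belief.sum()  # normalize
        out.append(belief.copy())
    return np.array(out)

print(forward_filter([0, 0, 1, 1]))  # belief shifts toward state 1 at the end
```

Dynamic relational and continuous-time models generalize this same predict-update structure to richer state spaces.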
Yu, Jihnhee; Hutson, Alan D; Siddiqui, Adnan H; Kedron, Mary A
2016-02-01
In some small clinical trials, toxicity is not a primary endpoint; however, it often has dire effects on patients' quality of life and is even life-threatening. For such clinical trials, rigorous control of the overall incidence of adverse events is desirable, while simultaneously collecting safety information. In this article, we propose group sequential toxicity monitoring strategies to control overall toxicity incidents below a certain level as opposed to performing hypothesis testing, which can be incorporated into an existing study design based on the primary endpoint. We consider two sequential methods: a non-Bayesian approach in which stopping rules are obtained based on the 'future' probability of an excessive toxicity rate; and a Bayesian adaptation modifying the proposed non-Bayesian approach, which can use the information obtained at interim analyses. Through an extensive Monte Carlo study, we show that the Bayesian approach often provides better control of the overall toxicity rate than the non-Bayesian approach. We also investigate adequate toxicity estimation after the studies. We demonstrate the applicability of our proposed methods in controlling the symptomatic intracranial hemorrhage rate for treating acute ischemic stroke patients.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan; Tang, Nian-Sheng
2007-01-01
The analysis of interaction among latent variables has received much attention. This article introduces a Bayesian approach to analyze a general structural equation model that accommodates the general nonlinear terms of latent variables and covariates. This approach produces a Bayesian estimate that has the same statistical optimal properties as a…
Hierarchical Bayesian Models of Subtask Learning
ERIC Educational Resources Information Center
Anglim, Jeromy; Wynton, Sarah K. A.
2015-01-01
The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…
A Bayesian nonparametric meta-analysis model.
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G
2015-03-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall effect size, such models may be adequate, but for prediction they surely are not if the effect-size distribution exhibits non-normal behavior. To address this issue, we propose a Bayesian nonparametric meta-analysis model, which can describe a wider range of effect-size distributions, including unimodal symmetric distributions as well as skewed and multimodal distributions. We demonstrate our model through the analysis of real meta-analytic data arising from behavioral-genetic research. We compare the predictive performance of the Bayesian nonparametric model against various conventional and more modern normal fixed-effects and random-effects models.
Liang, Shidong; Jia, Haifeng; Xu, Changqing; Xu, Te; Melching, Charles
2016-08-01
Facing increasingly serious water pollution, the Chinese government is changing the environmental management strategy from solely pollutant concentration control to a Total Maximum Daily Load (TMDL) program, and water quality models are increasingly being applied to determine the allowable pollutant load in the TMDL. Despite the frequent use of models, few studies have focused on how parameter uncertainty in water quality models affects the allowable pollutant loads in the TMDL program, particularly for complicated and high-dimension water quality models. Uncertainty analysis for such models is limited by time-consuming simulations and by the high dimensionality and nonlinearity of the parameter space. In this study, an allowable pollutant load calculation platform was established using the Environmental Fluid Dynamics Code (EFDC), which is a widely applied hydrodynamic-water quality model. A Bayesian approach, i.e. the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, which is a high-efficiency, multi-chain Markov Chain Monte Carlo (MCMC) method, was applied to assess the effects of parameter uncertainty on the water quality model simulations and its influence on the allowable pollutant load calculation in the TMDL program. Miyun Reservoir, which is the most important surface drinking water source for Beijing, suffers from eutrophication and was selected as a case study. The relations between pollutant loads and water quality indicators are obtained through a graphical method in the simulation platform. Ranges of allowable pollutant loads were obtained according to the results of parameter uncertainty analysis, i.e. Total Organic Carbon (TOC): 581.5-1030.6 t·yr⁻¹; Total Phosphorus (TP): 23.3-31.0 t·yr⁻¹; and Total Nitrogen (TN): 480-1918.0 t·yr⁻¹. The wide ranges of allowable pollutant loads reveal the importance of parameter uncertainty analysis in a TMDL program for allowable pollutant load calculation and margin of safety (MOS) determination. The sources
Quantification Of Margins And Uncertainties: A Bayesian Approach (full Paper)
Wallstrom, Timothy C
2008-01-01
Quantification of Margins and Uncertainties (QMU) is 'a formalism for dealing with the reliability of complex technical systems, and the confidence which can be placed in estimates of that reliability' (Eardley et al., 2005). In this paper, we show how QMU may be interpreted in the framework of Bayesian statistical inference, using a probabilistic network. The Bayesian approach clarifies the probabilistic underpinnings of the formalism, and shows how the formalism can be used for decision-making.
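One common reading of QMU compares a performance margin M against its uncertainty U; in the Bayesian framing sketched in the paper, both become summaries of a posterior distribution and reliability is a posterior probability. A toy Monte Carlo illustration with invented capacity and load distributions (not the paper's formalism):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy QMU-style calculation: a component works if its capacity exceeds the
# imposed load. Both are uncertain, so the margin M and its uncertainty U are
# summarized from (hypothetical) posterior samples, and reliability is the
# posterior probability of a positive margin. All numbers are invented.
capacity = rng.normal(10.0, 1.0, size=100_000)  # posterior samples of capacity
load = rng.normal(6.0, 1.5, size=100_000)       # posterior samples of load

margin = capacity - load
M = margin.mean()                  # central margin estimate
U = margin.std()                   # uncertainty in the margin
reliability = (margin > 0).mean()  # P(capacity > load)

print(f"M/U = {M / U:.2f}, reliability = {reliability:.4f}")
```

A probabilistic network generalizes this by propagating such uncertainties through a graph of dependent subsystem variables.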
NASA Astrophysics Data System (ADS)
Baresel, Björn; Bucher, Hugo; Brosse, Morgane; Bagherpour, Borhan; Schaltegger, Urs
2016-04-01
Chemical abrasion isotope dilution thermal ionization mass spectrometry (CA-ID-TIMS) U-Pb dating of single-zircon crystals is preferably applied to tephra beds intercalated in sedimentary sequences. Assuming that the zircon crystallization age closely approximates that of the volcanic eruption and ash deposition, U-Pb zircon geochronology is the preferred approach for dating mass extinction events (such as the Permian-Triassic boundary mass extinction) in the sedimentary record. As tephra from large volcanic eruptions is often transported over long distances, it additionally provides an invaluable tool for stratigraphic correlation across distant geologic sections. Therefore, the combination of high-precision zircon geochronology with apatite chemistry of the same tephra bed (so-called apatite tephrochronology) provides a robust fingerprint of one particular volcanic eruption. In addition, we provide coherent Bayesian model ages for the Permian-Triassic boundary (PTB) mass extinction and compare them with PTB model ages at Meishan after Burgess et al. (2014). We will present new high-precision U-Pb zircon dates for a series of volcanic ash beds in deep- and shallow-marine Permian-Triassic sections in the Nanpanjiang Basin, South China. In addition, apatite crystals from the same ash beds were analysed, focusing on their halogen (F, Cl) and trace-element (e.g. Fe, Mg, REE) chemistry. We also show that Bayesian age models produce reproducible results from different geologic sections. On the basis of these data, including litho- and biostratigraphic correlations, we can precisely and accurately constrain the Permian-Triassic boundary in an equatorial marine setting, and correlate tephra beds over different sections and facies in the Nanpanjiang Basin independently of litho-, bio- or chemostratigraphic criteria. The results show that data produced in laboratories associated with the global EARTHTIME consortium can provide age information at the 0.05% level of 206
A Bayesian Hierarchical Approach to Regional Frequency Analysis of Extremes
NASA Astrophysics Data System (ADS)
Renard, B.
2010-12-01
Rainfall and runoff frequency analysis is a major issue for the hydrological community. The distribution of hydrological extremes varies in space and possibly in time. Describing and understanding this spatiotemporal variability are primary challenges to improve hazard quantification and risk assessment. This presentation proposes a general approach based on a Bayesian hierarchical model, following previous work by Cooley et al. [2007], Micevski [2007], Aryal et al. [2009] or Lima and Lall [2009; 2010]. Such a hierarchical model is made up of two levels: (1) a data level modeling the distribution of observations, and (2) a process level describing the fluctuation of the distribution parameters in space and possibly in time. At the first level of the model, at-site data (e.g., annual maxima series) are modeled with a chosen distribution (e.g., a GEV distribution). Since data from several sites are considered, the joint distribution of a vector of (spatial) observations needs to be derived. This is challenging because data are in general not spatially independent, especially for nearby sites. An elliptical copula is therefore used to formally account for spatial dependence between at-site data. This choice might be questionable in the context of extreme value distributions. However, it is motivated by its applicability in spatial highly dimensional problems, where the joint pdf of a vector of n observations is required to derive the likelihood function (with n possibly amounting to hundreds of sites). At the second level of the model, parameters of the chosen at-site distribution are then modeled by a Gaussian spatial process, whose mean may depend on covariates (e.g. elevation, distance to sea, weather pattern, time). In particular, this spatial process allows estimating parameters at ungauged sites, and deriving the predictive distribution of rainfall/runoff at every pixel/catchment of the studied domain. An application to extreme rainfall series from the French
Normativity, interpretation, and Bayesian models
Oaksford, Mike
2014-01-01
It has been suggested that evaluative normativity should be expunged from the psychology of reasoning. A broadly Davidsonian response to these arguments is presented. It is suggested that two distinctions, between different types of rationality, are more permeable than this argument requires and that the fundamental objection is to selecting theories that make the most rational sense of the data. It is argued that this is an inevitable consequence of radical interpretation, where understanding others requires assuming they share our own norms of reasoning. This requires evaluative normativity, and it is shown that, when asked to evaluate others’ arguments, participants conform to rational Bayesian norms. It is suggested that logic and probability are not in competition and that the variety of norms is more limited than the arguments against evaluative normativity suppose. Moreover, the universality of belief ascription suggests that many of our norms are universal and hence evaluative. It is concluded that the union of evaluative normativity and descriptive psychology implicit in Davidson and apparent in the psychology of reasoning is a good thing. PMID:24860519
Bayesian Analysis of Nonlinear Structural Equation Models with Nonignorable Missing Data
ERIC Educational Resources Information Center
Lee, Sik-Yum
2006-01-01
A Bayesian approach is developed for analyzing nonlinear structural equation models with nonignorable missing data. The nonignorable missingness mechanism is specified by a logistic regression model. A hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm is used to produce the joint Bayesian estimates of…
[A medical image semantic modeling based on hierarchical Bayesian networks].
Lin, Chunyi; Ma, Lihong; Yin, Junxun; Chen, Jianyu
2009-04-01
A semantic modeling approach for medical image semantic retrieval based on hierarchical Bayesian networks was proposed, tailored to the characteristics of medical images. It used GMMs (Gaussian mixture models) to map low-level image features into object semantics with probabilities, then captured high-level semantics by fusing these object semantics with a Bayesian network, so that it built a multi-layer medical image semantic model, aiming to enable automatic image annotation and semantic retrieval using keywords at different semantic levels. To validate the method, we built a multi-level semantic model from a small set of astrocytoma MRI (magnetic resonance imaging) samples, in order to extract the semantics of astrocytoma malignancy grade. Experimental results show that this is a superior approach.
The need to assess large numbers of chemicals for their potential toxicities has resulted in increased emphasis on medium- and high-throughput in vitro screening approaches. For such approaches to be useful, efficient and reliable data analysis and hit detection methods are also ...
Bayesian approach to the detection problem in gravitational wave astronomy
Littenberg, Tyson B.; Cornish, Neil J.
2009-09-15
The analysis of data from gravitational wave detectors can be divided into three phases: search, characterization, and evaluation. The evaluation of the detection--determining whether a candidate event is astrophysical in origin or some artifact created by instrument noise--is a crucial step in the analysis. The ongoing analyses of data from ground-based detectors employ a frequentist approach to the detection problem. A detection statistic is chosen, for which background levels and detection efficiencies are estimated from Monte Carlo studies. This approach frames the detection problem in terms of an infinite collection of trials, with the actual measurement corresponding to some realization of this hypothetical set. Here we explore an alternative, Bayesian approach to the detection problem, that considers prior information and the actual data in hand. Our particular focus is on the computational techniques used to implement the Bayesian analysis. We find that the parallel tempered Markov chain Monte Carlo (PTMCMC) algorithm is able to address all three phases of the analysis in a coherent framework. The signals are found by locating the posterior modes, the model parameters are characterized by mapping out the joint posterior distribution, and finally, the model evidence is computed by thermodynamic integration. As a demonstration, we consider the detection problem of selecting between models describing the data as instrument noise, or instrument noise plus the signal from a single compact galactic binary. The evidence ratios, or Bayes factors, computed by the PTMCMC algorithm are found to be in close agreement with those computed using a reversible jump Markov chain Monte Carlo algorithm.
A Bayesian optimization approach for wind farm power maximization
NASA Astrophysics Data System (ADS)
Park, Jinkyoo; Law, Kincho H.
2015-03-01
The objective of this study is to develop a model-free optimization algorithm to improve the total wind farm power production in a cooperative game framework. Conventionally, for a given wind condition, an individual wind turbine maximizes its own power production without taking into consideration the conditions of other wind turbines. Under this greedy control strategy, the wake formed by the upstream wind turbine, due to the reduced wind speed and the increased turbulence intensity inside the wake, would affect and lower the power productions of the downstream wind turbines. To increase the overall wind farm power production, researchers have proposed cooperative wind turbine control approaches to coordinate the actions that mitigate the wake interference among the wind turbines and thus increase the total wind farm power production. This study explores the use of a data-driven optimization approach to identify the optimum coordinated control actions in real time using a limited amount of data. Specifically, we propose the Bayesian Ascent (BA) method, which combines the strengths of Bayesian optimization and trust region optimization algorithms. Using Gaussian Process regression, BA requires only a small number of data points to model the complex target system. Furthermore, due to the use of a trust region constraint on the sampling procedure, BA tends to increase the target value and converge toward a near-optimum. Simulation studies using analytical functions show that the BA method can achieve an almost monotone increase in the target value with rapid convergence. BA is also implemented and tested in a laboratory setting to maximize the total power using two scaled wind turbine models.
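The Gaussian-process-based optimization loop can be sketched in a few lines. The fragment below is in the spirit of, but not identical to, the BA method: it uses plain expected improvement on a one-dimensional grid rather than BA's trust-region sampling constraint, and the target function is a made-up stand-in for farm power:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def target(x):  # invented stand-in for total power vs. one control setting
    return -(x - 0.6) ** 2 + 0.1 * np.sin(15 * x)

def gp_posterior(X, y, Xs, length=0.15, sig=1.0, noise=1e-6):
    """GP regression with a squared-exponential kernel (minimal sketch)."""
    k = lambda a, b: sig * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = sig - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

X = rng.uniform(0, 1, 4)           # initial control settings
y = target(X)
grid = np.linspace(0, 1, 201)
for _ in range(12):                # Bayesian-optimization loop
    mu, sd = gp_posterior(X, y, grid)
    z = (mu - y.max()) / sd
    ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]
    X, y = np.append(X, x_next), np.append(y, target(x_next))

print(round(X[np.argmax(y)], 2))   # best setting found; optimum near x = 0.53
```

BA's trust-region constraint would additionally restrict how far each `x_next` may move from the current best, trading exploration for a more monotone ascent.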
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective was to study alternative models for genetic analyses of carcass traits assessed by ultrasonography in Guzerá cattle. Data from 947 measurements (655 animals) of Rib-eye area (REA), rump fat thickness (RFT) and backfat thickness (BFT) were used. Finite polygenic models (FPM), infinitesi...
A Bayesian Networks approach to Operational Risk
NASA Astrophysics Data System (ADS)
Aquaro, V.; Bardoscia, M.; Bellotti, R.; Consiglio, A.; De Carlo, F.; Ferri, G.
2010-04-01
A system for Operational Risk management based on the computational paradigm of Bayesian Networks is presented. The algorithm allows the construction of a Bayesian Network targeted for each bank and takes into account in a simple and realistic way the correlations among different processes of the bank. The internal losses are averaged over a variable time horizon, so that the correlations at different times are removed, while the correlations at the same time are kept: the averaged losses are thus suitable to perform the learning of the network topology and parameters; since the main aim is to understand the role of the correlations among the losses, the assessments of domain experts are not used. The algorithm has been validated on synthetic time series. It should be stressed that the proposed algorithm has been designed for practical implementation in a mid- or small-sized bank, since it has a small impact on the organizational structure of a bank and requires an investment in human resources that is limited to the computational area.
A Bayesian approach to optimizing cryopreservation protocols.
Sambu, Sammy
2015-01-01
Cryopreservation is beset with the challenge of protocol alignment across a wide range of cell types and process variables. Taking a cross-sectional assessment of previously published cryopreservation data (sample means and standard errors) as preliminary metadata, a decision tree learning analysis (DTLA) was performed to develop an understanding of target survival using optimized pruning methods based on different approaches. First, a clear direction for the method-selection decision process emerged, with the key choices being cooling rate and plunge temperature on the one hand, and biomaterial choice, use of composites (sugars and proteins as additional constituents), loading procedure, and cell location in 3D scaffolding on the other. Second, using machine learning and generalized approaches via the Naïve Bayes Classification (NBC) method, these metadata were used to develop posterior probabilities for combinatorial approaches implicitly recorded in the metadata. These latter results showed that newer protocol choices developed using probability elicitation techniques can unearth improved protocols consistent with multiple unidimensionally optimized physical protocols. In conclusion, this article proposes the use of DTLA models, and subsequently NBC, for the improvement of modern cryopreservation techniques through an integrative approach.
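A minimal sketch of the NBC step on metadata of this kind: categorical protocol features are mapped to a posterior probability of a survival outcome. The feature names, example rows, and Laplace smoothing below are illustrative assumptions, not the published metadata.

```python
from collections import Counter

rows = [  # hypothetical (cooling, cryoprotectant, outcome) metadata rows
    ("slow", "DMSO", "high"), ("slow", "trehalose", "high"),
    ("slow", "DMSO", "high"), ("fast", "DMSO", "low"),
    ("fast", "trehalose", "low"), ("fast", "DMSO", "low"),
]

def nb_posterior(query, rows, alpha=1.0):
    """P(outcome | features) with Laplace smoothing, assuming feature independence."""
    outcomes = Counter(r[-1] for r in rows)
    scores = {}
    for y, n_y in outcomes.items():
        p = n_y / len(rows)                       # class prior from counts
        for j, v in enumerate(query):
            n_match = sum(1 for r in rows if r[-1] == y and r[j] == v)
            vocab = len({r[j] for r in rows})     # distinct values of feature j
            p *= (n_match + alpha) / (n_y + alpha * vocab)
        scores[y] = p
    z = sum(scores.values())
    return {y: s / z for y, s in scores.items()}  # normalized posterior

# Posterior for a new protocol combination not explicitly tried above:
post = nb_posterior(("slow", "trehalose"), rows)
```

This is the sense in which the classifier can "unearth" promising combinations: it scores protocol combinations that were only implicitly present in the metadata.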
Cho, Kang Su; Jung, Hae Do; Ham, Won Sik; Chung, Doo Yong; Kang, Yong Jin; Jang, Won Sik; Kwon, Jong Kyou; Choi, Young Deuk; Lee, Joo Yong
2015-01-01
Objectives To investigate whether skin-to-stone distance (SSD), which remains controversial in patients with ureter stones, can be a predictive factor for one-session success following extracorporeal shock wave lithotripsy (ESWL) in patients with upper ureter stones. Patients and Methods We retrospectively reviewed the medical records of 1,519 patients who underwent their first ESWL between January 2005 and December 2013. Among these patients, 492 had upper ureter stones measuring 4–20 mm and were eligible for our analyses. Maximal stone length, mean stone density (HU), and SSD were determined on pretreatment non-contrast computed tomography (NCCT). For subgroup analyses, patients were divided into four groups: group 1, SSD < 25th percentile; group 2, SSD in the 25th to 50th percentile; group 3, SSD in the 50th to 75th percentile; and group 4, SSD ≥ 75th percentile. Results In analyses of group 2 patients versus the others, there were no statistical differences in mean age, stone length, or density. However, the one-session success rate in group 2 was higher than in the other groups (77.9% vs. 67.0%; P = 0.032). The multivariate logistic regression model revealed that shorter stone length, lower stone density, and group 2 SSD were positive predictors of successful outcomes in ESWL. Using the Bayesian model-averaging approach, shorter stone length, lower stone density, and group 2 SSD were also positive predictors of successful outcomes following ESWL. Conclusions Our data indicate that a group 2 SSD of approximately 10 cm is a positive predictor of success following ESWL. PMID:26659086
Bayesian model selection analysis of WMAP3
Parkinson, David; Mukherjee, Pia; Liddle, Andrew R.
2006-06-15
We present a Bayesian model selection analysis of WMAP3 data using our code CosmoNest. We focus on the density perturbation spectral index n_S and the tensor-to-scalar ratio r, which define the plane of slow-roll inflationary models. We find that while the Bayesian evidence supports the conclusion that n_S ≠ 1, the data are not yet powerful enough to do so at a strong or decisive level. If tensors are assumed absent, the current odds are approximately 8 to 1 in favor of n_S ≠ 1 under our assumptions, when WMAP3 data are used together with external data sets. WMAP3 data alone are unable to distinguish between the two models. Further, inclusion of r as a parameter weakens the conclusion against the Harrison-Zel'dovich case (n_S = 1, r = 0), albeit in a prior-dependent way. In appendices we describe the CosmoNest code in detail, noting its ability to supply posterior samples as well as to accurately compute the Bayesian evidence. We make a first public release of CosmoNest, now available at www.cosmonest.org.
A Bayesian nonlinear mixed-effects disease progression model
Kim, Seongho; Jang, Hyejeong; Wu, Dongfeng; Abrams, Judith
2016-01-01
A nonlinear mixed-effects approach is developed for disease progression models that incorporate variation in age in a Bayesian framework. We further generalize the probability model for sensitivity to depend on age at diagnosis, time spent in the preclinical state and sojourn time. The developed models are then applied to the Johns Hopkins Lung Project data and the Health Insurance Plan for Greater New York data using Bayesian Markov chain Monte Carlo and are compared with the estimation method that does not consider random-effects from age. Using the developed models, we obtain not only age-specific individual-level distributions, but also population-level distributions of sensitivity, sojourn time and transition probability. PMID:26798562
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical, and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, with the development of Markov Chain Monte Carlo (MCMC) techniques, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling. Our study aims to develop prior distributions and likelihood functions that minimize model uncertainty and bias within a Bayesian ecohydrological framework. A formal Bayesian approach is implemented in an ecohydrological model that couples a hydrological model (HyMOD) with a dynamic vegetation model (DVM). Simulations based on single-objective likelihoods (streamflow or LAI alone) and multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative, and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between priors and their corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence the posterior distributions of parameters, especially when the available data are limited or parameters are insensitive to them. We demonstrate differences in optimized parameters and uncertainty limits between cases based on multi-objective versus single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to the different data types.
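The prior-versus-posterior comparison described above can be sketched with the closed-form KLD between two univariate Gaussians: a small divergence indicates a parameter the calibration data barely constrain. The means and standard deviations below are invented for illustration.

```python
import math

def kld_gaussian(mu_p, sd_p, mu_q, sd_q):
    """KL(q || p) for two univariate normals (posterior q against prior p)."""
    return (math.log(sd_p / sd_q)
            + (sd_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sd_p ** 2) - 0.5)

# Sensitive parameter: posterior sharply concentrated away from the prior.
kld_sensitive = kld_gaussian(mu_p=0.0, sd_p=1.0, mu_q=0.8, sd_q=0.1)
# Insensitive parameter: posterior nearly identical to the prior.
kld_flat = kld_gaussian(mu_p=0.0, sd_p=1.0, mu_q=0.05, sd_q=0.95)
```

Ranking parameters by this divergence separates those the discharge/LAI data actually inform from those whose posteriors simply echo the prior.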
Bayesian analysis of botanical epidemics using stochastic compartmental models.
Gibson, G J; Kleczkowski, A; Gilligan, C A
2004-08-17
A stochastic model for an epidemic, incorporating susceptible, latent, and infectious states, is developed. The model represents primary and secondary infection rates and a time-varying host susceptibility, with applications to a wide range of epidemiological systems. A Markov chain Monte Carlo algorithm is presented that allows the model to be fitted to experimental observations within a Bayesian framework. The approach allows the uncertainty in unobserved aspects of the process to be represented in the parameter posterior densities. The methods are applied to experimental observations of damping-off of radish (Raphanus sativus) caused by the fungal pathogen Rhizoctonia solani, in the presence and absence of the antagonistic fungus Trichoderma viride, a biological control agent that had previously been shown, using a maximum-likelihood estimate for a simpler model with no allowance for a latent period, to affect the rate of primary infection. Using the Bayesian analysis, we are able to estimate the latent period from population data, even when there is uncertainty in discriminating infectious from latently infected individuals in data collection. We also show that the inference that T. viride can control primary, but not secondary, infection is robust to inclusion of the latent period in the model, although the absolute values of the parameters change. Some refinements and potential difficulties with the Bayesian approach in this context, when prior information on parameters is lacking, are discussed, along with broader applications of the methods to a wide range of epidemiological systems.
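A compartmental structure of this kind (susceptible → latent → infectious, with primary and secondary infection rates) can be simulated with a minimal Gillespie-style algorithm. The rate constants and initial state below are illustrative assumptions, not fitted values from the radish experiments.

```python
import random
random.seed(7)

def simulate(S=50, L=0, I=1, rp=0.02, rs=0.005, g=0.2, t_end=50.0):
    """rp: primary infection rate, rs: secondary rate per infectious unit,
    g: rate of leaving the latent state (1/g is the mean latent period)."""
    t, history = 0.0, [(0.0, S, L, I)]
    while t < t_end:
        rate_inf = S * (rp + rs * I)      # S -> L (primary + secondary)
        rate_lat = g * L                  # L -> I (end of latent period)
        total = rate_inf + rate_lat
        if total == 0:
            break                         # epidemic has run its course
        t += random.expovariate(total)    # time to the next event
        if random.random() < rate_inf / total:
            S, L = S - 1, L + 1
        else:
            L, I = L - 1, I + 1
        history.append((t, S, L, I))
    return history

hist = simulate()
```

Fitting such a model in a Bayesian framework amounts to placing priors on rp, rs, and g and sampling their posterior given observed counts, which is how the latent period can be estimated even when latently infected and infectious individuals are hard to distinguish.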
A bayesian approach to laboratory utilization management
Hauser, Ronald G.; Jackson, Brian R.; Shirts, Brian H.
2015-01-01
Background: Laboratory utilization management describes a process designed to increase healthcare value by altering requests for laboratory services. A typical approach to monitoring and prioritizing interventions involves audits of laboratory orders against specific criteria, defined as rule-based laboratory utilization management. This approach has inherent limitations. First, rules are inflexible: they adapt poorly to the ambiguity of medical decision-making. Second, rules judge the context of a decision instead of the patient outcome, allowing an order to simultaneously save a life and break a rule. Third, rules can threaten physician autonomy when used in a performance evaluation. Methods: We developed an alternative to rule-based laboratory utilization management. The core idea comes from a formula used in epidemiology to estimate disease prevalence. The equation relates four terms: the prevalence of disease, the proportion of positive tests, test sensitivity, and test specificity. When applied to a laboratory utilization audit, the formula estimates the prevalence of disease (the pretest probability [PTP]) in the patients tested. The comparison of PTPs among different providers, provider groups, or patient cohorts produces an objective evaluation of laboratory requests. We demonstrate the model in a review of tests for enterovirus (EV) meningitis. Results: The model identified subpopulations within the cohort with a low prevalence of disease. These low-prevalence groups shared demographic and seasonal factors known to protect against EV meningitis, suggesting that too many orders came from patients at low risk for EV. Conclusion: We introduce a new method for laboratory utilization management programs to audit laboratory services. PMID:25774321
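The four-term formula can be sketched directly: solving P(+) = PTP·sens + (1 − PTP)·(1 − spec) for the pretest probability gives a Rogan-Gladen-style estimator. The provider-group numbers below are hypothetical.

```python
def pretest_probability(positive_rate, sensitivity, specificity):
    """Back out the pretest probability (PTP) of disease in a tested population
    from the observed positivity rate and the test's operating characteristics.
    Derived from: P(+) = PTP*sens + (1 - PTP)*(1 - spec)."""
    ptp = (positive_rate + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(ptp, 0.0), 1.0)   # clip to the valid probability range

# Two provider groups ordering the same test (assumed sens 0.95, spec 0.97):
# group A sees 20% positive results, group B only 4%.
ptp_a = pretest_probability(0.20, 0.95, 0.97)
ptp_b = pretest_probability(0.04, 0.95, 0.97)
# Group B is testing a much lower-prevalence population, the kind of signal
# the audit uses to flag possible over-ordering.
```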
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods such as maximum likelihood estimation (MLE) or least-squares error approaches. A Bayesian approach based on Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and the associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
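A bare-bones Metropolis MCMC calibration of the kind summarized above, on a toy one-parameter dose model: a prior distribution is updated by experimental data into a posterior sample. The linear dose model, prior, and data are illustrative assumptions, far simpler than an actual PBTK model.

```python
import random, math
random.seed(1)

doses = [1.0, 2.0, 3.0, 4.0]
observed = [2.1, 3.9, 6.2, 7.8]          # hypothetical internal-dose measurements

def log_post(k):
    """Log-posterior: Gaussian prior k ~ N(1.5, 1) plus Gaussian likelihood
    (measurement sd 0.3) for the toy model internal_dose = k * dose."""
    lp = -0.5 * ((k - 1.5) / 1.0) ** 2
    for d, o in zip(doses, observed):
        lp += -0.5 * ((o - k * d) / 0.3) ** 2
    return lp

k, chain = 1.5, []
for i in range(5000):
    prop = k + random.gauss(0.0, 0.2)    # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(k):
        k = prop                         # accept
    if i >= 1000:                        # discard burn-in
        chain.append(k)

posterior_mean = sum(chain) / len(chain)
```

The chain is a sample from the posterior, so its spread gives the parameter-uncertainty estimate that frequentist point calibration does not readily provide.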
Accurate Model Selection of Relaxed Molecular Clocks in Bayesian Phylogenetics
Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J.; Suchard, Marc A.; Lemey, Philippe
2013-01-01
Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike’s information criterion through Markov chain Monte Carlo (AICM), in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets. PMID:23090976
AutoClass: A Bayesian Approach to Classification
NASA Technical Reports Server (NTRS)
Stutz, John; Cheeseman, Peter; Hanson, Robin; Taylor, Will; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
We describe a Bayesian approach to the untutored discovery of classes in a set of cases, sometimes called finite mixture separation or clustering. The main difference between clustering and our approach is that we search for the "best" set of class descriptions rather than grouping the cases themselves. We describe our classes in terms of a probability distribution or density function and the locally maximal posterior probability parameter values. We rate our classifications with an approximate joint probability of the data and functional form, marginalizing over the parameters. Approximation is necessitated by the computational complexity of the joint probability; thus, we marginalize w.r.t. local maxima in the parameter space. We discuss the rationale behind our approach to classification. We give the mathematical development for the basic mixture model and describe the approximations needed for computational tractability. We instantiate the basic model with the discrete Dirichlet distribution and multivariate Gaussian density likelihoods. Then we show some results for both constructed and actual data.
A Bayesian approach to tracking patients having changing pharmacokinetic parameters
NASA Technical Reports Server (NTRS)
Bayard, David S.; Jelliffe, Roger W.
2004-01-01
This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
Bayesian residual analysis for beta-binomial regression models
NASA Astrophysics Data System (ADS)
Pires, Rubiane Maria; Diniz, Carlos Alberto Ribeiro
2012-10-01
The beta-binomial regression model is an alternative model for the sum of a sequence of equicorrelated binary variables with common probability of success p. In this work a Bayesian perspective of this model is presented, considering different link functions and different correlation structures. A general Bayesian residual analysis for this model, an issue that is often neglected in Bayesian analysis, is presented in order to check the assumptions of the model, using the residuals based on the predicted values obtained by the conditional predictive ordinate [1], the residuals based on the posterior distribution of the model parameters [2], and the Bayesian deviance residual [3].
Bayesian Kinematic Finite Fault Source Models (Invited)
NASA Astrophysics Data System (ADS)
Minson, S. E.; Simons, M.; Beck, J. L.
2010-12-01
Finite fault earthquake source models are inherently under-determined: there is no unique solution to the inverse problem of determining the rupture history at depth as a function of time and space when our data are only limited observations at the Earth's surface. Traditional inverse techniques rely on model constraints and regularization to generate one model from the possibly broad space of all possible solutions. However, Bayesian methods allow us to determine the ensemble of all possible source models which are consistent with the data and our a priori assumptions about the physics of the earthquake source. Until now, Bayesian techniques have been of limited utility because they are computationally intractable for problems with as many free parameters as kinematic finite fault models. We have developed a methodology called Cascading Adaptive Tempered Metropolis In Parallel (CATMIP) which allows us to sample very high-dimensional problems in a parallel computing framework. The CATMIP algorithm combines elements of simulated annealing and genetic algorithms with the Metropolis algorithm to dynamically optimize the algorithm's efficiency as it runs. We will present synthetic performance tests of finite fault models made with this methodology as well as a kinematic source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. This earthquake was well recorded by multiple ascending and descending interferograms and a network of high-rate GPS stations whose records can be used as near-field seismograms.
Bayesian approach to time-resolved tomography.
Myers, Glenn R; Geleta, Matthew; Kingston, Andrew M; Recur, Benoit; Sheppard, Adrian P
2015-07-27
Conventional X-ray micro-computed tomography (μCT) is unable to meet the need for real-time, high-resolution, time-resolved imaging of multi-phase fluid flow. High signal-to-noise-ratio (SNR) data acquisition is too slow and results in motion artefacts in the images, while fast acquisition is too noisy and results in poor image contrast. We present a Bayesian framework for time-resolved tomography that uses priors to drastically reduce the required amount of experimental data. This enables high-quality time-resolved imaging through a data acquisition protocol that is both rapid and high SNR. Here we show that the framework: (i) encompasses our previous algorithms for imaging two-phase flow as limiting cases; (ii) produces more accurate results from imperfect (i.e. real) data, where it can be compared to our previous work; and (iii) is generalisable to previously intractable systems, such as three-phase flow. PMID:26367664
A Bayesian approach to extracting meaning from system behavior
Dress, W.B.
1998-08-01
The modeling relation and its reformulation to include the semiotic hierarchy is essential for the understanding, control, and successful re-creation of natural systems. This presentation will argue for a careful application of Rosen's modeling relationship to the problems of intelligence and autonomy in natural and artificial systems. To this end, the authors discuss the essential need for a correct theory of induction, learning, and probability, and suggest that modern Bayesian probability theory, developed by Cox, Jaynes, and others, can adequately meet such demands, especially on the operational level of extracting meaning from observations. The methods of Bayesian and maximum-entropy parameter estimation have been applied to measurements of system observables to directly infer the underlying differential equations generating system behavior. This approach bypasses the usual method of parameter estimation based on assuming a functional form for the observable and then estimating the parameters that would lead to the particular observed behavior. The computational savings are great since only location parameters enter into the maximum-entropy calculations; this innovation finesses the need for nonlinear parameters altogether. Such an approach more directly extracts the semantics inherent in a given system by going to the root of system meaning as expressed by abstract form or shape, rather than in syntactic particulars such as signal amplitude and phase. Examples will show how the form of a system can be followed while ignoring unnecessary details. In this sense, the authors are observing the meaning of the words rather than being concerned with their particular expression or language. For the present discussion, empirical models are embodied by the differential equations underlying, producing, or describing the behavior of a process as measured or tracked by a particular variable set--the observables. The a priori models are probability structures that
Wan, Rongrong; Cai, Shanshan; Li, Hengpeng; Yang, Guishan; Li, Zhaofu; Nie, Xiaofei
2014-01-15
Lake eutrophication has become a very serious environmental problem in China. If water pollution is to be controlled and ultimately eliminated, it is essential to understand how human activities affect surface water quality. A recently developed technique using the Bayesian hierarchical linear regression model revealed the effects of land use and land cover (LULC) on stream water quality at a watershed scale. Six LULC categories combined with watershed characteristics, including size, slope, and permeability were the variables that were studied. The pollutants of concern were nutrient concentrations of total nitrogen (TN) and total phosphorus (TP), common pollutants found in eutrophication. The monthly monitoring data at 41 sites in the Xitiaoxi Watershed, China during 2009-2010 were used for model demonstration. The results showed that the relationships between LULC and stream water quality are so complicated that the effects are varied over large areas. The models suggested that urban and agricultural land are important sources of TN and TP concentrations, while rural residential land is one of the major sources of TN. Certain agricultural practices (excessive fertilizer application) result in greater concentrations of nutrients in paddy fields, artificial grasslands, and artificial woodlands. This study suggests that Bayesian hierarchical modeling is a powerful tool for examining the complicated relationships between land use and water quality on different scales, and for developing land use and water management policies. PMID:24342905
Du, Qingyun; Zhang, Mingxiao; Li, Yayan; Luan, Hui; Liang, Shi; Ren, Fu
2016-01-01
Incorporating the information of hypertension, this paper applies Bayesian multi-disease analysis to model the spatial patterns of Ischemic Heart Disease (IHD) risks. Patterns of harmful alcohol intake (HAI) and overweight/obesity are also modelled as they are common risk factors contributing to both IHD and hypertension. The hospitalization data of IHD and hypertension in 2012 were analyzed with three Bayesian multi-disease models at the sub-district level of Shenzhen. Results revealed that the IHD high-risk cluster shifted slightly north-eastward compared with the IHD Standardized Hospitalization Ratio (SHR). Spatial variations of overweight/obesity and HAI were found to contribute most to the IHD patterns. Identified patterns of IHD risk would benefit IHD integrated prevention. Spatial patterns of overweight/obesity and HAI could supplement the current disease surveillance system by providing information about small-area level risk factors, and thus benefit integrated prevention of related chronic diseases. Middle southern Shenzhen, where high risk of IHD, overweight/obesity, and HAI are present, should be prioritized for interventions, including alcohol control, innovative healthy diet toolkit distribution, insurance system revision, and community-based chronic disease intervention. Related health resource planning is also suggested to focus on these areas first. PMID:27104551
Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models
ERIC Educational Resources Information Center
Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum
2011-01-01
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…
NASA Astrophysics Data System (ADS)
Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.
2011-12-01
A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by relying on observed discharge. This learning process consisted of five steps. For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite only using spatially integrated data (discharge), the spatially distributed effect of the improved model structure can be expected to also improve the spatially distributed predictions. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data only. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
A Nonparametric Bayesian Model for Nested Clustering.
Lee, Juhee; Müller, Peter; Zhu, Yitan; Ji, Yuan
2016-01-01
We propose a nonparametric Bayesian model for clustering where clusters of experimental units are determined by a shared pattern of clustering another set of experimental units. The proposed model is motivated by the analysis of protein activation data, where we cluster proteins such that all proteins in one cluster give rise to the same clustering of patients. That is, we define clusters of proteins by the way that patients group with respect to the corresponding protein activations. This is in contrast to (almost) all currently available models that use shared parameters in the sampling model to define clusters. This includes, in particular, model-based clustering, Dirichlet process mixtures, product partition models, and more. We show results for two typical biostatistical inference problems that give rise to clustering. PMID:26519174
Model feedback in Bayesian propensity score estimation.
Zigler, Corwin M; Watts, Krista; Yeh, Robert W; Wang, Yun; Coull, Brent A; Dominici, Francesca
2013-03-01
Methods based on the propensity score comprise one set of valuable tools for comparative effectiveness research and for estimating causal effects more generally. These methods typically consist of two distinct stages: (1) a propensity score stage where a model is fit to predict the propensity to receive treatment (the propensity score), and (2) an outcome stage where responses are compared in treated and untreated units having similar values of the estimated propensity score. Traditional techniques conduct estimation in these two stages separately; estimates from the first stage are treated as fixed and known for use in the second stage. Bayesian methods have natural appeal in these settings because separate likelihoods for the two stages can be combined into a single joint likelihood, with estimation of the two stages carried out simultaneously. One key feature of joint estimation in this context is "feedback" between the outcome stage and the propensity score stage, meaning that quantities in a model for the outcome contribute information to posterior distributions of quantities in the model for the propensity score. We provide a rigorous assessment of Bayesian propensity score estimation to show that model feedback can produce poor estimates of causal effects absent strategies that augment propensity score adjustment with adjustment for individual covariates. We illustrate this phenomenon with a simulation study and with a comparative effectiveness investigation of carotid artery stenting versus carotid endarterectomy among 123,286 Medicare beneficiaries hospitalized for stroke in 2006 and 2007. PMID:23379793
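The traditional separate-stage estimation described above (no feedback between stages) can be sketched as follows. The data-generating values, the logistic propensity model fit by gradient ascent, and the quintile stratification are illustrative assumptions, not the paper's Medicare analysis.

```python
import math
import random

random.seed(1)

# Simulated data: one confounder x; treatment assignment depends on x; the
# outcome carries a true treatment effect of 2.0 plus confounding through x.
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
t = [1 if random.random() < 1 / (1 + math.exp(-1.5 * xi)) else 0 for xi in x]
y = [2.0 * ti + 1.0 * xi + random.gauss(0, 1) for xi, ti in zip(x, t)]

# Stage 1 (fit separately -- no feedback): logistic propensity model
# estimated by gradient ascent on the log-likelihood.
a, b = 0.0, 0.0
for _ in range(300):
    ga = gb = 0.0
    for xi, ti in zip(x, t):
        p = 1 / (1 + math.exp(-(a + b * xi)))
        ga += ti - p
        gb += (ti - p) * xi
    a += 0.001 * ga
    b += 0.001 * gb
ps = [1 / (1 + math.exp(-(a + b * xi))) for xi in x]

# Stage 2: outcome comparison within propensity-score quintiles, with the
# estimated scores treated as fixed and known.
order = sorted(range(n), key=lambda i: ps[i])
effects = []
for s in range(5):
    idx = order[s * n // 5:(s + 1) * n // 5]
    y1 = [y[i] for i in idx if t[i] == 1]
    y0 = [y[i] for i in idx if t[i] == 0]
    if y1 and y0:
        effects.append(sum(y1) / len(y1) - sum(y0) / len(y0))
ate = sum(effects) / len(effects)
print(round(ate, 2))
```

Because the stage-1 estimates are treated as fixed, no outcome information flows back into the propensity model; a joint Bayesian fit would allow such feedback, which is exactly the behavior the paper scrutinizes.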
Zhao, Xiaodong; Pelegri, Assimina A
2016-04-01
Biomechanical imaging techniques based on acoustic radiation force (ARF) have been developed to characterize the viscoelasticity of soft tissue by measuring the motion excited by ARF non-invasively. The unknown stress distribution in the region of excitation limits an accurate inverse characterization of soft tissue viscoelasticity, and single degree-of-freedom simplified models have been applied to solve the inverse problem approximately. In this study, the ARF-induced creep imaging is employed to estimate the time constant of a Voigt viscoelastic tissue model, and an inverse finite element (FE) characterization procedure based on a Bayesian formulation is presented. The Bayesian approach aims to estimate a reasonable quantification of the probability distributions of soft tissue mechanical properties in the presence of measurement noise and model parameter uncertainty. Gaussian process metamodeling is applied to provide a fast statistical approximation based on a small number of computationally expensive FE model runs. Numerical simulation results demonstrate that the Bayesian approach provides an efficient and practical estimation of the probability distributions of time constant in the ARF-induced creep imaging. In a comparison study with the single degree of freedom models, the Bayesian approach with FE models improves the estimation results even in the presence of large uncertainty levels of the model parameters. PMID:26255624
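The role of the Gaussian process metamodel, standing in for expensive FE runs, can be sketched in miniature: fit a GP to a few evaluations of a cheap stand-in forward model and query it elsewhere. The Voigt-type response function, design points, and kernel length-scale are hypothetical choices for illustration.

```python
import math

# Stand-in for the "expensive" FE run: normalized Voigt creep displacement at
# a fixed observation time as a function of the time constant tau (hypothetical).
def forward(tau, t_obs=1.0):
    return 1.0 - math.exp(-t_obs / tau)

def rbf(a, b, ell=0.4):
    """Squared-exponential covariance."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, y):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# A handful of "expensive" model runs at design points for tau.
X = [0.2 + 0.2 * i for i in range(10)]                    # tau in 0.2 .. 2.0
Y = [forward(x) for x in X]

# GP posterior mean (noise-free interpolation with a small jitter).
K = [[rbf(xi, xj) + (1e-6 if i == j else 0.0) for j, xj in enumerate(X)]
     for i, xi in enumerate(X)]
alpha = solve(K, Y)

def gp_mean(tau):
    return sum(a * rbf(tau, xi) for a, xi in zip(alpha, X))

tau_star = 0.73                                           # not in the design
print(round(gp_mean(tau_star), 3), round(forward(tau_star), 3))
```

In the paper's setting, posterior sampling would query the metamodel (mean plus predictive variance) thousands of times instead of re-running the FE solver, which is where the computational saving comes from.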
Bayesian model comparison of solar flare spectra
NASA Astrophysics Data System (ADS)
Ireland, J.; Holman, G.
2012-12-01
The detailed understanding of solar flares requires an understanding of the physics of accelerated electrons, since electrons carry a large fraction of the total energy released in a flare. Hard X-ray energy flux spectral observations of solar flares can be fit with different parameterized models of the interaction of the flare-accelerated electrons with the solar plasma. Each model describes different possible physical effects that may occur in solar flares. Bayesian model comparison provides a technique for assessing which model best describes the data. The advantage of this technique over others is that it can fully account for the different number and type of parameters in each model. We demonstrate this using Ramaty High Energy Solar Spectroscopic Imager (RHESSI) spectral data from the GOES (Geostationary Operational Environmental Satellite) X4.8 flare of 23-July-2002. We suggest that the observed spectrum can be reproduced using two different parameterized models of the flare electron content. The first model assumes that the flare-accelerated electron spectrum consists of a single power law with a low-energy cutoff fixed below the range of fitted X-ray energies, interacting with a non-uniformly ionized target. The second model assumes that the flare-accelerated electron spectrum has a broken power law and a low-energy cutoff, and interacts with a fully ionized target plasma. The low-energy cutoff in this model is a parameter used in fitting the data. We introduce and use Bayesian model comparison techniques to decide which model best explains the observed data. This work is funded by the NASA Solar and Heliospheric Physics program.
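The core of the comparison, marginal likelihoods that automatically penalize extra parameters, can be sketched with two toy spectral models: a power law with fixed index versus one with a free index. The data, prior ranges, and brute-force grid integration are illustrative assumptions, not the RHESSI fitting pipeline.

```python
import math
import random

random.seed(2)

# Synthetic "spectrum": power law with index 2 plus Gaussian noise.
xs = [1.0 + 0.5 * i for i in range(20)]
sigma = 0.01
ys = [x ** -2.0 + random.gauss(0, sigma) for x in xs]

def log_lik(A, g):
    return sum(-0.5 * ((y - A * x ** -g) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for x, y in zip(xs, ys))

A_grid = [0.9 + 0.001 * i for i in range(201)]            # flat prior on A

def log_evidence(g_grid):
    """Marginal likelihood under flat priors, by grid integration."""
    terms = [log_lik(A, g) for A in A_grid for g in g_grid]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms) / len(terms))

# Model 1: spectral index fixed at 2 (one free parameter, A).
lz1 = log_evidence([2.0])
# Model 2: spectral index also free over [1.5, 2.5] (two free parameters).
lz2 = log_evidence([1.5 + 0.01 * i for i in range(101)])

log_bayes_factor = lz1 - lz2                               # > 0 favors model 1
print(round(log_bayes_factor, 2))
```

Because the data were generated with index 2, the extra freedom of model 2 typically buys little likelihood but pays an Occam penalty, so the log Bayes factor tends to favor the simpler model; this is the automatic accounting for parameter number and type mentioned in the abstract.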
Bayesian Lasso for Semiparametric Structural Equation Models
Guo, Ruixin; Zhu, Hongtu; Chow, Sy-Miin; Ibrahim, Joseph G.
2011-01-01
Summary There has been great interest in developing nonlinear structural equation models and associated statistical inference procedures, including estimation and model selection methods. In this paper a general semiparametric structural equation model (SSEM) is developed in which the structural equation is composed of nonparametric functions of exogenous latent variables and fixed covariates on a set of latent endogenous variables. A basis representation is used to approximate these nonparametric functions in the structural equation and the Bayesian Lasso method coupled with a Markov Chain Monte Carlo (MCMC) algorithm is used for simultaneous estimation and model selection. The proposed method is illustrated using a simulation study and data from the Affective Dynamics and Individual Differences (ADID) study. Results demonstrate that our method can accurately estimate the unknown parameters and correctly identify the true underlying model. PMID:22376150
A BAYESIAN STATISTICAL APPROACH FOR THE EVALUATION OF CMAQ
Bayesian statistical methods are used to evaluate Community Multiscale Air Quality (CMAQ) model simulations of sulfate aerosol over a section of the eastern US for 4-week periods in summer and winter 2001. The observed data come from two U.S. Environmental Protection Agency data ...
A Hierarchical Bayesian Model for Crowd Emotions
Urizar, Oscar J.; Baig, Mirza S.; Barakova, Emilia I.; Regazzoni, Carlo S.; Marcenaro, Lucio; Rauterberg, Matthias
2016-01-01
Estimation of emotions is an essential aspect in developing intelligent systems intended for crowded environments. However, emotion estimation in crowds remains a challenging problem due to the complexity in which human emotions are manifested and the capability of a system to perceive them in such conditions. This paper proposes a hierarchical Bayesian model to learn, in an unsupervised manner, the behavior of individuals and of the crowd as a single entity, and to explore the relation between behavior and emotions to infer emotional states. Information about the motion patterns of individuals is described using a self-organizing map, and a hierarchical Bayesian network builds probabilistic models to identify behaviors and infer the emotional state of individuals and the crowd. This model is trained and tested using data produced from simulated scenarios that resemble real-life environments. The conducted experiments tested the efficiency of our method to learn, detect and associate behaviors with emotional states, yielding accuracy levels of 74% for individuals and 81% for the crowd, similar in performance to existing methods for pedestrian behavior detection but with novel concepts regarding the analysis of crowds. PMID:27458366
Defining statistical perceptions with an empirical Bayesian approach
NASA Astrophysics Data System (ADS)
Tajima, Satohiro
2013-04-01
Extracting statistical structures (including textures or contrasts) from a natural stimulus is a central challenge in both biological and engineering contexts. This study interprets the process of statistical recognition in terms of hyperparameter estimations and free-energy minimization procedures with an empirical Bayesian approach. This mathematical interpretation resulted in a framework for relating physiological insights in animal sensory systems to the functional properties of recognizing stimulus statistics. We applied the present theoretical framework to two typical models of natural images that are encoded by a population of simulated retinal neurons, and demonstrated that the resulting cognitive performances could be quantified with the Fisher information measure. The current enterprise yielded predictions about the properties of human texture perception, suggesting that the perceptual resolution of image statistics depends on visual field angles, internal noise, and neuronal information processing pathways, such as the magnocellular, parvocellular, and koniocellular systems. Furthermore, the two conceptually similar natural-image models were found to yield qualitatively different predictions, striking a note of warning against confusing the two models when describing a natural image.
Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G
2014-01-01
Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: (1) an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; (2) Bayesian models display high sensitivity to error assumptions and structural choices; (3) source apportionment results differ between Bayesian and frequentist approaches.
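A stripped-down version of such a mixing model, and of an OFAT perturbation of its error assumption, can be sketched as follows. The tracer signatures, true proportions, and error levels are hypothetical, not the River Blackwater data; the renormalized random-walk proposal makes the sampler approximate rather than exact.

```python
import math
import random

random.seed(3)

# Tracer signatures for three sources (two hypothetical geochemical tracers).
S = [[10.0, 2.0],    # arable topsoil
     [4.0, 8.0],     # road verge
     [1.5, 1.0]]     # subsurface material
p_true = [0.15, 0.09, 0.76]
mix = [sum(p * S[s][j] for s, p in enumerate(p_true)) + random.gauss(0, 0.1)
       for j in range(2)]

def log_lik(p, sigma):
    pred = [sum(pi * S[s][j] for s, pi in enumerate(p)) for j in range(2)]
    return sum(-0.5 * ((m - f) / sigma) ** 2 for m, f in zip(mix, pred))

def median_subsurface(sigma):
    """Metropolis on the simplex (flat prior; renormalized random-walk
    proposal, approximate but adequate for a sketch)."""
    p, kept = [1.0 / 3] * 3, []
    for it in range(20000):
        q = [max(1e-9, pi + random.gauss(0, 0.03)) for pi in p]
        tot = sum(q)
        q = [qi / tot for qi in q]
        if math.log(random.random()) < log_lik(q, sigma) - log_lik(p, sigma):
            p = q
        if it >= 5000:
            kept.append(p[2])
    kept.sort()
    return kept[len(kept) // 2]

# One factor at a time: change only the assumed measurement-error level.
m_tight = median_subsurface(sigma=0.1)
m_loose = median_subsurface(sigma=1.0)
print(round(m_tight, 2), round(m_loose, 2))
```

Changing only the assumed measurement error shifts the posterior median for the subsurface contribution, which is the kind of sensitivity the study's 13 model versions quantify systematically.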
Bayesian inference and model comparison for metallic fatigue data
NASA Astrophysics Data System (ADS)
Babuška, Ivo; Sawlan, Zaid; Scavino, Marco; Szabó, Barna; Tempone, Raúl
2016-06-01
In this work, we present a statistical treatment of stress-life (S-N) data drawn from a collection of records of fatigue experiments that were performed on 75S-T6 aluminum alloys. Our main objective is to predict the fatigue life of materials by providing a systematic approach to model calibration, model selection and model ranking with reference to S-N data. To this purpose, we consider fatigue-limit models and random fatigue-limit models that are specially designed to allow the treatment of the run-outs (right-censored data). We first fit the models to the data by maximum likelihood methods and estimate the quantiles of the life distribution of the alloy specimen. To assess the robustness of the estimation of the quantile functions, we obtain bootstrap confidence bands by stratified resampling with respect to the cycle ratio. We then compare and rank the models by classical measures of fit based on information criteria. We also consider a Bayesian approach that provides, under the prior distribution of the model parameters selected by the user, their simulation-based posterior distributions. We implement and apply Bayesian model comparison methods, such as Bayes factor ranking and predictive information criteria based on cross-validation techniques under various a priori scenarios.
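Ranking models by classical information criteria, one step of the workflow above, can be sketched with two nested Gaussian models for synthetic S-N data. The Basquin-type data-generating relation and its coefficients are illustrative assumptions, not the 75S-T6 records.

```python
import math
import random

random.seed(4)

# Synthetic S-N data: log-life roughly linear in log-stress
# (a Basquin-type relation with illustrative coefficients).
logS = [math.log(200 + 20 * i) for i in range(30)]
logN = [14.0 - 2.5 * s + random.gauss(0, 0.3) for s in logS]
n = len(logS)

def criteria(resid, k_params):
    """Gaussian max-likelihood fit and the resulting (AIC, BIC)."""
    s2 = sum(r * r for r in resid) / n              # MLE of noise variance
    ll = -0.5 * n * (math.log(2 * math.pi * s2) + 1)
    k = k_params + 1                                 # + the variance parameter
    return 2 * k - 2 * ll, k * math.log(n) - 2 * ll

# Model A: constant mean, no stress dependence.
mu = sum(logN) / n
aicA, bicA = criteria([y - mu for y in logN], 1)

# Model B: linear in log-stress (closed-form simple regression).
ms = sum(logS) / n
beta = (sum((s - ms) * (y - mu) for s, y in zip(logS, logN))
        / sum((s - ms) ** 2 for s in logS))
alpha = mu - beta * ms
aicB, bicB = criteria([y - (alpha + beta * s) for s, y in zip(logS, logN)], 2)

print("linear model preferred:", aicB < aicA and bicB < bicA)
```

Both AIC and BIC trade fit against parameter count; here the slope term earns its keep, so the linear model is preferred under either criterion. Run-out (right-censored) specimens, central to the paper's fatigue-limit models, are deliberately omitted from this sketch.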
A Bayesian approach for structure learning in oscillating regulatory networks
Trejo Banos, Daniel; Millar, Andrew J.; Sanguinetti, Guido
2015-01-01
Motivation: Oscillations lie at the core of many biological processes, from the cell cycle, to circadian oscillations and developmental processes. Time-keeping mechanisms are essential to enable organisms to adapt to varying conditions in environmental cycles, from day/night to seasonal. Transcriptional regulatory networks are one of the mechanisms behind these biological oscillations. However, while identifying cyclically expressed genes from time series measurements is relatively easy, determining the structure of the interaction network underpinning the oscillation is a far more challenging problem. Results: Here, we explicitly leverage the oscillatory nature of the transcriptional signals and present a method for reconstructing network interactions tailored to this special but important class of genetic circuits. Our method is based on projecting the signal onto a set of oscillatory basis functions using a Discrete Fourier Transform. We build a Bayesian Hierarchical model within a frequency domain linear model in order to enforce sparsity and incorporate prior knowledge about the network structure. Experiments on real and simulated data show that the method can lead to substantial improvements over competing approaches if the oscillatory assumption is met, and remains competitive also in cases it is not. Availability: DSS, experiment scripts and data are available at http://homepages.inf.ed.ac.uk/gsanguin/DSS.zip. Contact: d.trejo-banos@sms.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26177966
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology. PMID:23687472
A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2012-01-01
A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for…
Predicting coastal cliff erosion using a Bayesian probabilistic model
Hapke, C.; Plant, N.
2010-01-01
Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70-90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
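The basic mechanics of such a Bayesian network, combining conditional probability tables and updating on observed evidence, can be sketched with a two-parent toy network. The variables and CPT values are hypothetical illustrations, not the calibrated Southern California model.

```python
# Hypothetical two-parent discrete network: erosion depends on rock strength
# and wave-impact hours; all probability values are illustrative.
p_strength = {"weak": 0.4, "strong": 0.6}
p_waves = {"low": 0.5, "high": 0.5}
p_erode = {("weak", "low"): 0.3, ("weak", "high"): 0.8,
           ("strong", "low"): 0.05, ("strong", "high"): 0.3}

def posterior_strength_given_erosion():
    """P(strength | erosion=yes) by enumeration over the joint distribution."""
    joint = {}
    for s, ps in p_strength.items():
        for w, pw in p_waves.items():
            joint[(s, w)] = ps * pw * p_erode[(s, w)]
    z = sum(joint.values())
    return {s: sum(v for (si, _), v in joint.items() if si == s) / z
            for s in p_strength}

post = posterior_strength_given_erosion()
print(round(post["weak"], 3))
```

Observing an erosion event shifts belief toward the weak-lithology state; the study's network performs the same kind of probabilistic inference with cliff geometry, lithology, prior retreat rate, and wave impact hours as inputs.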
Bayesian Models of Graphs, Arrays and Other Exchangeable Random Structures.
Orbanz, Peter; Roy, Daniel M
2015-02-01
The natural habitat of most Bayesian methods is data represented by exchangeable sequences of observations, for which de Finetti's theorem provides the theoretical foundation. Dirichlet process clustering, Gaussian process regression, and many other parametric and nonparametric Bayesian models fall within the remit of this framework; many problems arising in modern data analysis do not. This article provides an introduction to Bayesian models of graphs, matrices, and other data that can be modeled by random structures. We describe results in probability theory that generalize de Finetti's theorem to such data and discuss their relevance to nonparametric Bayesian modeling. With the basic ideas in place, we survey example models available in the literature; applications of such models include collaborative filtering, link prediction, and graph and network analysis. We also highlight connections to recent developments in graph theory and probability, and sketch the more general mathematical foundation of Bayesian methods for other types of data beyond sequences and arrays. PMID:26353253
A Bayesian Analysis of Finite Mixtures in the LISREL Model.
ERIC Educational Resources Information Center
Zhu, Hong-Tu; Lee, Sik-Yum
2001-01-01
Proposes a Bayesian framework for estimating finite mixtures of the LISREL model. The model augments the observed data of the manifest variables with the latent variables and allocation variables and uses the Gibbs sampler to obtain the Bayesian solution. Discusses other associated statistical inferences. (SLD)
Assessing global vegetation activity using spatio-temporal Bayesian modelling
NASA Astrophysics Data System (ADS)
Mulder, Vera L.; van Eck, Christel M.; Friedlingstein, Pierre; Regnier, Pierre A. G.
2016-04-01
This work demonstrates the potential of modelling vegetation activity using a hierarchical Bayesian spatio-temporal model. This approach allows modelling changes in vegetation and climate simultaneously in space and time. Changes of vegetation activity such as phenology are modelled as a dynamic process depending on climate variability in both space and time. Additionally, differences in observed vegetation status can be attributed to other abiotic ecosystem properties, e.g. soil and terrain properties. Although these properties do not change in time, they do change in space and may provide valuable information in addition to the climate dynamics. The spatio-temporal Bayesian models were calibrated at a regional scale because the local trends in space and time can be better captured by the model. The regional subsets were defined according to the SREX segmentation, as defined by the IPCC. Each region is considered to be relatively homogeneous in terms of large-scale climate and biomes, still capturing small-scale (grid-cell level) variability. Modelling within these regions is hence expected to be less uncertain due to the absence of these large-scale patterns, compared to a global approach. This overall modelling approach allows the comparison of model behavior for the different regions and may provide insights on the main dynamic processes driving the interaction between vegetation and climate within different regions. The data employed in this study encompass the global datasets for soil properties (SoilGrids), terrain properties (Global Relief Model based on SRTM DEM and ETOPO), monthly time series of satellite-derived vegetation indices (GIMMS NDVI3g) and climate variables (Princeton Meteorological Forcing Dataset). The findings demonstrated the potential of a spatio-temporal Bayesian modelling approach for assessing vegetation dynamics at a regional scale. The observed interrelationships of the employed data and the different spatial and temporal trends support
Polygenic Modeling with Bayesian Sparse Linear Mixed Models
Zhou, Xiang; Carbonetto, Peter; Stephens, Matthew
2013-01-01
Both linear mixed models (LMMs) and sparse regression models are widely used in genetics applications, including, recently, polygenic modeling in genome-wide association studies. These two approaches make very different assumptions, so are expected to perform well in different situations. However, in practice, for a given dataset one typically does not know which assumptions will be more accurate. Motivated by this, we consider a hybrid of the two, which we refer to as a “Bayesian sparse linear mixed model” (BSLMM) that includes both these models as special cases. We address several key computational and statistical issues that arise when applying BSLMM, including appropriate prior specification for the hyper-parameters and a novel Markov chain Monte Carlo algorithm for posterior inference. We apply BSLMM and compare it with other methods for two polygenic modeling applications: estimating the proportion of variance in phenotypes explained (PVE) by available genotypes, and phenotype (or breeding value) prediction. For PVE estimation, we demonstrate that BSLMM combines the advantages of both standard LMMs and sparse regression modeling. For phenotype prediction it considerably outperforms either of the other two methods, as well as several other large-scale regression methods previously suggested for this problem. Software implementing our method is freely available from http://stephenslab.uchicago.edu/software.html. PMID:23408905
NASA Astrophysics Data System (ADS)
Mendes, B. S.; Draper, D.
2008-12-01
The issue of model uncertainty and model choice is central in any groundwater modeling effort [Neuman and Wierenga, 2003]; among the several approaches to the problem we favour using Bayesian statistics because it is a method that integrates in a natural way uncertainties (arising from any source) and experimental data. In this work, we experiment with several Bayesian approaches to model choice, focusing primarily on demonstrating the usefulness of the Reversible Jump Markov Chain Monte Carlo (RJMCMC) simulation method [Green, 1995]; this is an extension of the now-common MCMC methods. Standard MCMC techniques approximate posterior distributions for quantities of interest, often by creating a random walk in parameter space; RJMCMC allows the random walk to take place between parameter spaces with different dimensionalities. This fact allows us to explore state spaces that are associated with different deterministic models for experimental data. Our work is exploratory in nature; we restrict our study to comparing two simple transport models applied to a data set gathered to estimate the breakthrough curve for a tracer compound in groundwater. One model has a mean surface based on a simple advection-dispersion differential equation; the second model's mean surface is also governed by a differential equation, but in two dimensions. We focus on artificial data sets (in which truth is known) to see if model identification is done correctly, but we also address the issues of over- and under-parameterization, and we compare RJMCMC's performance with other traditional methods for model selection and propagation of model uncertainty, including Bayesian model averaging, BIC, and DIC. References: Neuman and Wierenga (2003). A Comprehensive Strategy of Hydrogeologic Modeling and Uncertainty Analysis for Nuclear Facilities and Sites. NUREG/CR-6805, Division of Systems Analysis and Regulatory Effectiveness, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission.
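A minimal RJMCMC sketch, jumping between a one-parameter model (intercept only) and a two-parameter model (intercept plus slope), shows the trans-dimensional move in its simplest form. Proposing the slope from its own N(0,1) prior makes the proposal density cancel the prior in the acceptance ratio (Jacobian 1), a common simplification; the data and priors are hypothetical, not the transport models of the study.

```python
import math
import random

random.seed(5)

# Data with a genuine slope, so the sampler should settle in model 2.
xs = [i / 10 for i in range(30)]
ys = [0.5 + 1.0 * x + random.gauss(0, 0.3) for x in xs]
sig = 0.3

def ll(mu, b):
    return sum(-0.5 * ((y - mu - b * x) / sig) ** 2 for x, y in zip(xs, ys))

def lp_b(v):
    """log N(0, 1) prior density for the slope in model 2."""
    return -0.5 * v * v - 0.5 * math.log(2 * math.pi)

mu, b, in_m2, visits_m2 = 0.0, 0.0, False, 0
for it in range(20000):
    # within-model random-walk update of the shared intercept (b = 0 in model 1)
    mu_new = mu + random.gauss(0, 0.1)
    if math.log(random.random()) < ll(mu_new, b) - ll(mu, b):
        mu = mu_new
    if in_m2:
        # within-model update of the slope
        b_new = b + random.gauss(0, 0.1)
        if math.log(random.random()) < (ll(mu, b_new) + lp_b(b_new)
                                        - ll(mu, b) - lp_b(b)):
            b = b_new
        # death move: drop the slope; the N(0,1) birth-proposal density
        # cancels the N(0,1) prior, leaving a pure likelihood ratio.
        if math.log(random.random()) < ll(mu, 0.0) - ll(mu, b):
            in_m2, b = False, 0.0
    else:
        # birth move: propose the slope from its prior (Jacobian = 1)
        b_prop = random.gauss(0, 1)
        if math.log(random.random()) < ll(mu, b_prop) - ll(mu, 0.0):
            in_m2, b = True, b_prop
    if it >= 5000 and in_m2:
        visits_m2 += 1

post_m2 = visits_m2 / 15000
print(round(post_m2, 2))
```

The fraction of post-burn-in iterations spent in model 2 estimates its posterior model probability, the quantity RJMCMC delivers directly and which BIC or DIC only approximate.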
A sensorimotor paradigm for Bayesian model selection.
Genewein, Tim; Braun, Daniel A
2012-01-01
Sensorimotor control is thought to rely on predictive internal models in order to cope efficiently with uncertain environments. Recently, it has been shown that humans not only learn different internal models for different tasks, but that they also extract common structure between tasks. This raises the question of how the motor system selects between different structures or models, when each model can be associated with a range of different task-specific parameters. Here we design a sensorimotor task that requires subjects to compensate for visuomotor shifts in a three-dimensional virtual reality setup, where one of the dimensions can be mapped to a model variable and the other dimension to the parameter variable. By introducing probe trials that are neutral in the parameter dimension, we can directly test for model selection. We found that model selection procedures based on Bayesian statistics provided a better explanation for subjects' choice behavior than simple non-probabilistic heuristics. Our experimental design lends itself to the general study of model selection in a sensorimotor context, as it allows model and parameter variables to be queried separately from subjects. PMID:23125827
Detecting qualitative interaction: a Bayesian approach.
Bayman, Emine Ozgür; Chaloner, Kathryn; Cowles, Mary Kathryn
2010-02-20
Differences in treatment effects between centers in a multi-center trial may be important. These differences represent treatment by subgroup interaction. Peto defines qualitative interaction (QI) to occur when the simple treatment effect in one subgroup has a different sign than in another subgroup: this interaction is important. Interaction where the treatment effects are of the same sign in all subgroups is called quantitative and is often not important because the treatment recommendation is identical in all cases. A hierarchical model is used here with exchangeable mean responses to each treatment between subgroups. The posterior probability of QI and the corresponding Bayes factor are proposed as a diagnostic and as a test statistic. The model is motivated by two multi-center trials with binary responses. The frequentist power and size of the test using the Bayes factor are examined and compared with two other commonly used tests. The impact of imbalance between the sample sizes in each subgroup on power is examined, and the test based on the Bayes factor typically has better power for unbalanced designs, especially for small sample sizes. An exact test based on the Bayes factor is also suggested assuming the hierarchical model. The Bayes factor provides a concise summary of the evidence for or against QI. It is shown by example that it is easily adapted to summarize the evidence for 'clinically meaningful QI,' defined as the simple effects being of opposite signs and larger in absolute value than a minimal clinically meaningful effect.
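The posterior probability of QI has a simple closed form under independent normal priors on each subgroup's true effect (a simplification of the paper's exchangeable hierarchical prior; all numbers below are invented):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_qualitative_interaction(effects, ses, tau=2.0):
    """P(true treatment effects do not all share one sign | data), under
    independent N(0, tau^2) priors on each subgroup's effect and normal
    likelihoods for the observed effects (a simplified, non-hierarchical
    stand-in for the paper's exchangeable model)."""
    p_pos = []
    for d, se in zip(effects, ses):
        v = 1.0 / (1.0 / tau ** 2 + 1.0 / se ** 2)  # posterior variance
        m = v * d / se ** 2                          # posterior mean
        p_pos.append(norm_cdf(m / math.sqrt(v)))     # P(effect > 0 | data)
    all_pos = math.prod(p_pos)
    all_neg = math.prod(1 - p for p in p_pos)
    return 1.0 - all_pos - all_neg

# Observed effects of opposite sign in two centers:
p_qi = prob_qualitative_interaction([2.0, -1.5], [0.5, 0.5])
# Under the symmetric prior, the prior probability of opposite signs is 0.5,
# so the Bayes factor for QI is the posterior-to-prior odds ratio.
prior_qi = 0.5
bf = (p_qi / (1 - p_qi)) / (prior_qi / (1 - prior_qi))
print(round(p_qi, 3), bf > 1)
```

A Bayes factor well above 1 here summarizes strong evidence for QI, mirroring the diagnostic role the abstract describes.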
Walsh, Stephen J.; Whitney, Paul D.
2012-12-14
Bayesian networks have attained widespread use in data analysis and decision making. Well studied topics include: efficient inference, evidence propagation, parameter learning from data for complete and incomplete data scenarios, expert elicitation for calibrating Bayesian network probabilities, and structure learning. It is not uncommon for the researcher to assume the structure of the Bayesian network or to glean the structure from expert elicitation or domain knowledge. In this scenario, the model may be calibrated through learning the parameters from relevant data. There is a lack of work on model diagnostics for fitted Bayesian networks; this is the contribution of this paper. We key on the definition of (conditional) independence to develop a graphical diagnostic method which indicates if the conditional independence assumptions imposed when one assumes the structure of the Bayesian network are supported by the data. We develop the approach theoretically and describe a Monte Carlo method to generate uncertainty measures for the consistency of the data with conditional independence assumptions under the model structure. We describe how this theoretical information and the data are presented in a graphical diagnostic tool. We demonstrate the approach through data simulated from Bayesian networks under different conditional independence assumptions. We also apply the diagnostic to a real world data set. The results indicate that our approach is a reasonable way of visualizing and inspecting the conditional independence assumption of a Bayesian network given data.
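The diagnostic idea (checking whether the data support a conditional independence implied by an assumed structure) can be sketched with a permutation-based Monte Carlo test on conditional mutual information; this is a generic stand-in for the paper's procedure, with invented binary variables.

```python
import math, random

random.seed(0)

def flip(p):
    return 1 if random.random() < p else 0

# Simulate a chain A -> B -> C, under which A and C are conditionally
# independent given B, and a variant with a direct A -> C dependence.
n = 2000
A = [flip(0.5) for _ in range(n)]
B = [a ^ flip(0.2) for a in A]
C_chain = [b ^ flip(0.2) for b in B]    # satisfies A _|_ C | B
C_direct = [a ^ flip(0.1) for a in A]   # violates it

def cmi(a, c, b):
    """Conditional mutual information I(A;C|B) for binary variables (nats)."""
    total, m = 0.0, len(a)
    for bv in (0, 1):
        idx = [i for i in range(m) if b[i] == bv]
        nb = len(idx)
        if nb == 0:
            continue
        cnt = {}
        for i in idx:
            cnt[(a[i], c[i])] = cnt.get((a[i], c[i]), 0) + 1
        pa = {v: sum(cnt.get((v, w), 0) for w in (0, 1)) / nb for v in (0, 1)}
        pc = {w: sum(cnt.get((v, w), 0) for v in (0, 1)) / nb for w in (0, 1)}
        for (v, w), k in cnt.items():
            p = k / nb
            total += (nb / m) * p * math.log(p / (pa[v] * pc[w]))
    return total

def perm_pvalue(a, c, b, n_perm=200):
    """Permute C within strata of B to build the null distribution of CMI."""
    obs = cmi(a, c, b)
    hits = 0
    for _ in range(n_perm):
        c_perm = list(c)
        for bv in (0, 1):
            idx = [i for i in range(len(b)) if b[i] == bv]
            vals = [c_perm[i] for i in idx]
            random.shuffle(vals)
            for i, v in zip(idx, vals):
                c_perm[i] = v
        hits += cmi(a, c_perm, b) >= obs
    return (hits + 1) / (n_perm + 1)

p_chain = perm_pvalue(A, C_chain, B)
p_direct = perm_pvalue(A, C_direct, B)
print(p_chain, p_direct)
```

The small p-value for the direct-edge variant flags the violated independence assumption, which is the kind of signal the paper's graphical diagnostic visualizes.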
Model parameter updating using Bayesian networks
Treml, C. A.; Ross, Timothy J.
2004-01-01
This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to those of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty; only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.
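The core loop, in which each model execution adapts the conditional probabilities of a discrete BN via pseudo-counts and Bayes' rule then infers the parameter level that best reproduces a reference feature, might look like the following sketch (a hypothetical toy black-box model, not the linac simulation):

```python
import random

random.seed(3)

# Hypothetical black-box model: an output feature that depends noisily on
# one updateable parameter (invented for illustration).
def model(p):
    return 2.0 * p + random.gauss(0, 0.3)

param_bins = [0.0, 0.5, 1.0, 1.5]   # candidate parameter levels (BN node)
out_edges = [1.0, 2.0]              # output-feature bins: <1, 1-2, >2

def out_bin(y):
    return sum(y > e for e in out_edges)

# Dirichlet(1,...,1) pseudo-counts for each conditional P(output | param).
counts = {p: [1.0] * (len(out_edges) + 1) for p in param_bins}

# Each model execution adapts the conditional probabilities of the BN.
for _ in range(300):
    p = random.choice(param_bins)
    counts[p][out_bin(model(p))] += 1

def p_out_given_param(p, k):
    c = counts[p]
    return c[k] / sum(c)

# Invert with Bayes' rule (uniform prior over parameter levels): which
# level most likely produced the reference feature, here an experimental
# value of 2.1?
ref_bin = out_bin(2.1)
post = {p: p_out_given_param(p, ref_bin) for p in param_bins}
z = sum(post.values())
post = {p: v / z for p, v in post.items()}
best = max(post, key=post.get)
print(best)
```

The posterior over parameter levels plays the role of the inferred model inputs, and its spread supplies the output reliabilities the abstract mentions.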
An evolutionary based Bayesian design optimization approach under incomplete information
NASA Astrophysics Data System (ADS)
Srivastava, Rupesh; Deb, Kalyanmoy
2013-02-01
Design optimization in the absence of complete information about uncertain quantities has been gaining consideration recently, as expensive repetitive computation tasks are becoming tractable due to the advent of faster and parallel computers. This work uses Bayesian inference to quantify design reliability when only sample measurements of the uncertain quantities are available. A generalized Bayesian reliability-based design optimization algorithm has been proposed and implemented for numerical as well as engineering design problems. The approach uses an evolutionary algorithm (EA) to obtain a trade-off front between design objectives and reliability. The Bayesian approach provides a well-defined link between the amount of available information and the reliability through a confidence measure, and the EA acts as an efficient optimizer for a discrete and multi-dimensional objective space. Additionally, a GPU-based parallelization study shows a computational speed-up of close to 100 times in a simulated scenario in which the constraint qualification checks may be time consuming and could render a sequential implementation impractical for large sample sets. These results show promise for the use of a parallel implementation of EAs in handling design optimization problems under uncertainties.
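The link between the amount of available data and a reliability "confidence measure" can be sketched with a conjugate Beta-Bernoulli model (an illustrative simplification of the paper's generalized approach; the trial counts and targets below are invented):

```python
import random

random.seed(7)

def reliability_confidence(n_trials, n_failures, p_max, n_draws=100000):
    """Posterior P(failure probability <= p_max) under a uniform Beta(1, 1)
    prior, given Bernoulli failure data. More sample measurements sharpen
    the posterior, raising (or lowering) the stated confidence."""
    a, b = 1 + n_failures, 1 + n_trials - n_failures
    hits = sum(random.betavariate(a, b) <= p_max for _ in range(n_draws))
    return hits / n_draws

# 0 failures observed in 30 sample evaluations of a candidate design:
conf = reliability_confidence(30, 0, p_max=0.10)
print(round(conf, 2))
```

An optimizer can then trade this confidence off against the design objectives, which is the role the EA plays in the paper.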
Estimating anatomical trajectories with Bayesian mixed-effects modeling
Ziegler, G.; Penny, W.D.; Ridgway, G.R.; Ourselin, S.; Friston, K.J.
2015-01-01
We introduce a mass-univariate framework for the analysis of whole-brain structural trajectories using longitudinal Voxel-Based Morphometry data and Bayesian inference. Our approach to developmental and aging longitudinal studies characterizes heterogeneous structural growth/decline between and within groups. In particular, we propose a probabilistic generative model that parameterizes individual and ensemble average changes in brain structure using linear mixed-effects models of age and subject-specific covariates. Model inversion uses Expectation Maximization (EM), while voxelwise (empirical) priors on the size of individual differences are estimated from the data. Bayesian inference on individual and group trajectories is realized using Posterior Probability Maps (PPM). In addition to parameter inference, the framework affords comparisons of models with varying combinations of model order for fixed and random effects using model evidence. We validate the model in simulations and real MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. We further demonstrate how subject specific characteristics contribute to individual differences in longitudinal volume changes in healthy subjects, Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD). PMID:26190405
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive materials detection. Compared with commonly adopted detection methods based on classical statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analyzing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation platform implementing the sequential Bayesian methodology was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were generated using Monte Carlo sampling to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
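The event-by-event update at the heart of a sequential Bayesian detector can be sketched assuming simple homogeneous Poisson arrivals (the rates and counts below are invented, not the paper's detector parameters):

```python
import math, random

random.seed(2)

def sequential_posterior(dts, lam_bkg, lam_src, prior=0.5):
    """Update P(source present) event by event from photon inter-arrival
    times, comparing background-only Poisson arrivals (rate lam_bkg) with
    background-plus-source arrivals (rate lam_bkg + lam_src)."""
    lam1 = lam_bkg + lam_src
    log_odds = math.log(prior / (1 - prior))
    for dt in dts:
        # Exponential inter-arrival likelihood ratio for this event.
        log_odds += math.log(lam1 / lam_bkg) - (lam1 - lam_bkg) * dt
        yield 1.0 / (1.0 + math.exp(-log_odds))

# Simulate 50 arrivals from a source-plus-background stream (5 counts/s
# total against 1 count/s background) and watch the posterior converge.
dts = [random.expovariate(5.0) for _ in range(50)]
posterior = list(sequential_posterior(dts, lam_bkg=1.0, lam_src=4.0))
print(round(posterior[-1], 3))
```

Because the posterior accumulates evidence per event rather than waiting for a full spectrum, a decision threshold can be reached quickly even at low total counts, which is the advantage the abstract highlights.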
Bayesian Models for fMRI Data Analysis
Zhang, Linlin; Guindani, Michele; Vannucci, Marina
2015-01-01
Functional magnetic resonance imaging (fMRI), a noninvasive neuroimaging method that provides an indirect measure of neuronal activity by detecting blood flow changes, has experienced an explosive growth in the past years. Statistical methods play a crucial role in understanding and analyzing fMRI data. Bayesian approaches, in particular, have shown great promise in applications. A remarkable feature of fully Bayesian approaches is that they allow a flexible modeling of spatial and temporal correlations in the data. This paper provides a review of the most relevant models developed in recent years. We divide methods according to the objective of the analysis. We start from spatio-temporal models for fMRI data that detect task-related activation patterns. We then address the very important problem of estimating brain connectivity. We also touch upon methods that focus on making predictions of an individual's brain activity or a clinical or behavioral response. We conclude with a discussion of recent integrative models that aim at combining fMRI data with other imaging modalities, such as EEG/MEG and DTI data, measured on the same subjects. We also briefly discuss the emerging field of imaging genetics. PMID:25750690
NASA Astrophysics Data System (ADS)
Zhu, G. F.; Li, X.; Su, Y. H.; Zhang, K.; Bai, Y.; Ma, J. Z.; Li, C. B.; Hu, X. L.; He, J. H.
2014-01-01
Based on direct measurements of half-hourly canopy evapotranspiration (ET; W m-2) using the eddy covariance (EC) system and daily soil evaporation (E; mm d-1) using microlysimeters over a crop ecosystem in arid northwest China from 27 May to 14 September in 2013, a Bayesian method was used to simultaneously parameterize the soil surface and canopy resistances in the Shuttleworth-Wallace (S-W) model. The posterior distributions of the parameters in most cases were well updated by the multiple measurement dataset, with relatively narrow high-probability intervals. There was a good agreement between measured and simulated values of half-hourly ET and daily E, with linear regressions of y = 0.84x + 0.18 (R2 = 0.83) and y = 1.01x + 0.01 (R2 = 0.82), respectively. The underestimation of ET by the S-W model was mainly attributed to micro-scale advection, which can contribute added energy in the form of downward sensible heat fluxes to the ET process. Therefore, the advection process should be taken into account in simulating ET over heterogeneous land surfaces. Also, underestimations were observed on or shortly after rainy days due to direct evaporation of liquid water intercepted in the canopy. Thus, the canopy interception model should be coupled to the S-W model in long-term ET simulations.
NASA Astrophysics Data System (ADS)
Zhu, G. F.; Li, X.; Su, Y. H.; Zhang, K.; Bai, Y.; Ma, J. Z.; Li, C. B.; Hu, X. L.; He, J. H.
2014-07-01
Based on direct measurements of half-hourly canopy evapotranspiration (ET; W m-2) using the eddy covariance (EC) system and daily soil evaporation (E; mm day-1) using microlysimeters over a crop ecosystem in arid northwestern China from 27 May to 14 September in 2013, a Bayesian method was used to simultaneously parameterize the soil surface and canopy resistances in the Shuttleworth-Wallace (S-W) model. Four of the six parameters showed relatively large uncertainty reductions (> 50%), and their posterior distributions became approximately symmetric with distinctive modes. There was a moderately good agreement between measured and simulated values of half-hourly ET and daily E, with linear regressions of y = 0.84x + 0.18 (R2 = 0.83) and y = 1.01x + 0.01 (R2 = 0.82), respectively. The underestimation of ET by the S-W model was possibly attributable to microscale advection, which can contribute added energy in the form of downward sensible heat fluxes to the ET process. Therefore, the advection process should be taken into account in simulating ET over heterogeneous land surfaces. Also, underestimations were observed on or shortly after rainy days, which may be due to direct evaporation of liquid water intercepted in the canopy. Thus, the canopy interception model should be coupled to the S-W model in long-term ET simulations.
A Bayesian network approach to the database search problem in criminal proceedings
2012-01-01
Background The ‘database search problem’, that is, the strengthening of a case - in terms of probative value - against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixture of uniform distributions.
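As a concrete instance, the standard normal arises as a scale mixture of uniforms with a Gamma(3/2, scale 2) mixing distribution; a few lines of sampling can check the representation empirically:

```python
import math, random

random.seed(4)

def normal_via_uniform_mixture(mu=0.0, sigma=1.0):
    """Draw from N(mu, sigma^2) via its scale-mixture-of-uniforms form:
    y | u ~ Uniform(mu - sigma*sqrt(u), mu + sigma*sqrt(u)), with
    u ~ Gamma(shape 3/2, scale 2)."""
    u = random.gammavariate(1.5, 2.0)
    h = sigma * math.sqrt(u)
    return random.uniform(mu - h, mu + h)

draws = [normal_via_uniform_mixture() for _ in range(200000)]
m = sum(draws) / len(draws)
v = sum((d - m) ** 2 for d in draws) / len(draws)
print(round(m, 2), round(v, 2))
```

Conditioning on the latent scale u turns a normal likelihood into a uniform one, which is what makes Gibbs-style Bayesian computation convenient for the heteroscedastic and skewed variants the abstract mentions.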
Chiu, Weihsueh A.; Okino, Miles S.; Evans, Marina V.
2009-11-15
We have developed a comprehensive, Bayesian, PBPK model-based analysis of the population toxicokinetics of trichloroethylene (TCE) and its metabolites in mice, rats, and humans, considering a wider range of physiological, chemical, in vitro, and in vivo data than any previously published analysis of TCE. The toxicokinetics of the 'population average,' its population variability, and their uncertainties are characterized in an approach that strives to be maximally transparent and objective. Estimates of experimental variability and uncertainty were also included in this analysis. The experimental database was expanded to include virtually all available in vivo toxicokinetic data, which permitted, in rats and humans, the specification of separate datasets for model calibration and evaluation. The combination of these approaches with PBPK analysis provides substantial support for the model predictions. In addition, we feel confident that the approach employed also yields an accurate characterization of the uncertainty in metabolic pathways for which available data were sparse or relatively indirect, such as GSH conjugation and respiratory tract metabolism. Key conclusions from the model predictions include the following: (1) as expected, TCE is substantially metabolized, primarily by oxidation at doses below saturation; (2) GSH conjugation and subsequent bioactivation in humans appear to be 10- to 100-fold greater than previously estimated; and (3) mice had the greatest rate of respiratory tract oxidative metabolism compared to rats and humans. In a situation such as TCE, in which there is a large database of studies coupled with complex toxicokinetics, the Bayesian approach provides a systematic method of simultaneously estimating model parameters and characterizing their uncertainty and variability. However, care needs to be taken in its implementation to ensure biological consistency, transparency, and objectivity.
Quantum-Like Bayesian Networks for Modeling Decision Making
Moreira, Catarina; Wichert, Andreas
2016-01-01
In this work, we explore an alternative quantum structure to perform quantum probabilistic inferences to accommodate the paradoxical findings of the Sure Thing Principle. We propose a Quantum-Like Bayesian Network, which consists of replacing classical probabilities by quantum probability amplitudes. However, since this approach suffers from the problem of exponential growth of quantum parameters, we also propose a similarity heuristic that automatically fits quantum parameters through vector similarities. This makes the proposed model general and predictive, in contrast to the current state-of-the-art models, which cannot be generalized to more complex decision scenarios and only offer explanations for the observed paradoxes. In the end, the model that we propose amounts to a nonparametric method for estimating inference effects from a statistical point of view. It is a statistical model that is simpler than the previous quantum dynamic and quantum-like models proposed in the literature. We tested the proposed network with several empirical data sets from the literature, mainly from the Prisoner's Dilemma game and the Two Stage Gambling game. The results obtained show that the proposed quantum Bayesian Network is a general method that can accommodate violations of the laws of classical probability theory and make accurate predictions regarding human decision-making in these scenarios. PMID:26858669
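The quantum-like replacement of probabilities by amplitudes can be sketched for the law of total probability, where an interference term parameterized by a phase theta appears (a generic illustration with invented numbers, not the paper's fitted parameters):

```python
import math

def classical_total_prob(p_a, p_b_given_a, p_b_given_not_a):
    """Classical law of total probability."""
    return p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

def quantum_total_prob(p_a, p_b_given_a, p_b_given_not_a, theta):
    """Quantum-like law of total probability: the two classical paths
    become probability amplitudes, and squaring their sum produces an
    interference term controlled by the phase theta."""
    t1 = p_a * p_b_given_a
    t2 = (1 - p_a) * p_b_given_not_a
    return t1 + t2 + 2 * math.sqrt(t1 * t2) * math.cos(theta)

pc = classical_total_prob(0.5, 0.8, 0.4)
# theta = pi/2 zeroes the interference term and recovers the classical law;
# other phases reproduce violations like those seen in two-stage gambles.
pq_classical = quantum_total_prob(0.5, 0.8, 0.4, math.pi / 2)
pq_interfering = quantum_total_prob(0.5, 0.8, 0.4, 2.0)
print(round(pc, 2), round(pq_classical, 2), round(pq_interfering, 2))
```

Fitting theta (here via the paper's similarity heuristic rather than free parameters) is what lets the network accommodate Sure-Thing-Principle violations.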
Uncovering Transcriptional Regulatory Networks by Sparse Bayesian Factor Model
NASA Astrophysics Data System (ADS)
Meng, Jia; Zhang, Jianqiu(Michelle); Qi, Yuan(Alan); Chen, Yidong; Huang, Yufei
2010-12-01
The problem of uncovering transcriptional regulation by transcription factors (TFs) based on microarray data is considered. A novel Bayesian sparse correlated rectified factor model (BSCRFM) is proposed that models the unknown TF protein-level activity, the correlated regulation between TFs, and the sparse nature of TF-regulated genes. The model admits prior knowledge from existing databases regarding TF-regulated target genes through a sparse prior, and through a Gibbs sampling algorithm developed for the model, a context-specific transcriptional regulatory network specific to the experimental condition of the microarray data can be obtained. The proposed model and the Gibbs sampling algorithm were evaluated on simulated systems, and the results demonstrated the validity and effectiveness of the proposed approach. The model was then applied to breast cancer microarray data from patients with Estrogen Receptor positive (ER+) and Estrogen Receptor negative (ER-) status, respectively.
Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.
2013-01-01
The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.
Bayesian Case-deletion Model Complexity and Information Criterion
Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Qingxia
2015-01-01
We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example. PMID:26180578
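For comparison, the DIC mentioned above can be computed directly from posterior draws; in this conjugate normal-mean sketch (an invented toy example), the effective number of parameters p_D comes out close to 1, as expected for a one-parameter model:

```python
import math, random

random.seed(5)

# Data from N(mu, 1) with a vague conjugate prior mu ~ N(0, 100).
ys = [random.gauss(1.0, 1.0) for _ in range(40)]
n = len(ys)
post_var = 1.0 / (n + 1.0 / 100.0)
post_mean = post_var * sum(ys)

def deviance(mu):
    """D(mu) = -2 log p(y | mu) for the N(mu, 1) likelihood."""
    return sum((y - mu) ** 2 + math.log(2 * math.pi) for y in ys)

# p_D = posterior mean deviance minus deviance at the posterior mean;
# DIC = D(posterior mean) + 2 * p_D.
draws = [random.gauss(post_mean, math.sqrt(post_var)) for _ in range(20000)]
mean_dev = sum(deviance(mu) for mu in draws) / len(draws)
p_d = mean_dev - deviance(post_mean)
dic = deviance(post_mean) + 2 * p_d
print(round(p_d, 2))
```

The BCMC measures the abstract proposes play the same role as p_D here, quantifying effective model complexity, but are built from case-deletion influence rather than the deviance difference.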
Entropic Priors and Bayesian Model Selection
NASA Astrophysics Data System (ADS)
Brewer, Brendon J.; Francis, Matthew J.
2009-12-01
We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian ``Occam's Razor.'' This is illustrated with a simple example involving what Jaynes called a ``sure thing'' hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative ``sure thing'' hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst cosmologists: is dark energy a cosmological constant, or has it evolved with time in some way? And how shall we decide, when the data are in?
Bayesian analysis of a disability model for lung cancer survival.
Armero, C; Cabras, S; Castellanos, M E; Perra, S; Quirós, A; Oruezábal, M J; Sánchez-Rubio, J
2016-02-01
Bayesian reasoning, survival analysis and multi-state models are used to assess survival times for Stage IV non-small-cell lung cancer patients and the evolution of the disease over time. Bayesian estimation is done using minimum informative priors for the Weibull regression survival model, leading to an automatic inferential procedure. Markov chain Monte Carlo methods have been used for approximating posterior distributions, and the Bayesian information criterion has been considered for covariate selection. In particular, the posterior distribution of the transition probabilities, resulting from the multi-state model, constitutes a useful tool for helping oncologists and patients make efficient and effective decisions.
Approximate Bayesian computation for forward modeling in cosmology
NASA Astrophysics Data System (ADS)
Akeret, Joël; Refregier, Alexandre; Amara, Adam; Seehars, Sebastian; Hasner, Caspar
2015-08-01
Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, however, the likelihood function may be unavailable or intractable due to non-Gaussian errors, non-linear measurement processes, or complex data formats such as catalogs and maps. In these cases, mock data sets can often be produced through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations, and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte-Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release.
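The basic rejection form of ABC can be sketched on a Gaussian toy model like the one used above: pretend the likelihood is intractable, simulate data forward, and keep parameter draws whose summary statistic lands close to the observed one (the tolerance and sample sizes below are invented; the paper uses the more efficient PMC variant):

```python
import random

random.seed(6)

# 'Observed' data from the true model N(2.0, 1); we pretend the likelihood
# is intractable and only forward simulation is available.
obs = [random.gauss(2.0, 1.0) for _ in range(100)]
s_obs = sum(obs) / len(obs)              # summary statistic: sample mean

def simulate(theta, n=100):
    """Forward-model a dataset and return its summary statistic."""
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

accepted = []
for _ in range(20000):
    theta = random.uniform(-5.0, 5.0)    # draw from a flat prior
    # Keep theta when simulated and observed summaries are within epsilon.
    if abs(simulate(theta) - s_obs) < 0.1:
        accepted.append(theta)

est = sum(accepted) / len(accepted)      # approximate posterior mean
print(round(est, 1))
```

Shrinking epsilon sharpens the approximation at the cost of more rejections, which is the trade-off that PMC-style sequential schemes are designed to manage.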
A Bayesian experimental design approach to structural health monitoring
Farrar, Charles; Flynn, Eric; Todd, Michael
2010-01-01
Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.
Nonparametric Bayesian Modeling for Automated Database Schema Matching
Ferragut, Erik M; Laska, Jason A
2015-01-01
The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than existing instance-based matching algorithms, in part because of the use of nonparametric Bayesian models.
Bayesian Dose-Response Modeling in Sparse Data
NASA Astrophysics Data System (ADS)
Kim, Steven B.
This book discusses Bayesian dose-response modeling in small samples applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they have disagreeing prior opinions. In this regard, we consider compromising the disagreement and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics, which reflects the perspective of trial participants. The second level is population-level ethics, which reflects the perspective of future patients. We extensively compare two existing statistical methods which focus on each perspective and propose a new method which balances the two conflicting perspectives. In toxicology studies, experimental units are living animals. Here we focus on a potential non-monotonic dose-response relationship known as hormesis. Briefly, hormesis is a phenomenon characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessments, the estimation of a parameter known as a benchmark dose can be highly sensitive to a class of assumptions, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as a possibility. In addition, we discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory. Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a
Bayesian model reduction and empirical Bayes for group (DCM) studies.
Friston, Karl J; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E; van Wijk, Bernadette C M; Ziegler, Gabriel; Zeidman, Peter
2016-03-01
This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level - e.g., dynamic causal models - and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction.
Calibrating Bayesian Network Representations of Social-Behavioral Models
Whitney, Paul D.; Walsh, Stephen J.
2010-04-08
While human behavior has long been studied, recent and ongoing advances in computational modeling present opportunities for recasting research outcomes in human behavior. In this paper we describe how Bayesian networks can represent outcomes of human behavior research. We demonstrate a Bayesian network that represents political radicalization research – and show a corresponding visual representation of aspects of this research outcome. Since Bayesian networks can be quantitatively compared with external observations, the representation can also be used for empirical assessments of the research which the network summarizes. For a political radicalization model based on published research, we show this empirical comparison with data taken from the Minorities at Risk Organizational Behaviors database.
Bayesian Inference for Generalized Linear Models for Spiking Neurons
Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias
2010-01-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
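A minimal sketch of the kind of regularized inference described above: a logistic GLM fit by MAP under an isotropic Gaussian prior, with a crude diagonal Laplace approximation standing in for the Expectation Propagation posterior used in the paper. The simulated two-dimensional stimulus, the true weights, and all tuning constants are illustrative assumptions.

```python
import math
import random

def fit_bayesian_logistic_glm(X, y, prior_var=1.0, lr=0.5, iters=2000):
    """MAP fit of a logistic GLM with an isotropic Gaussian prior, plus a crude
    diagonal-Hessian (Laplace) approximation of the posterior variances."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [-wj / prior_var for wj in w]       # gradient of the Gaussian log-prior
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(d):
                grad[j] += (yi - p) * xi[j]        # log-likelihood gradient
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    # Diagonal of the negative Hessian at the mode -> approximate posterior variances
    h = [1.0 / prior_var] * d
    for xi in X:
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        for j in range(d):
            h[j] += p * (1 - p) * xi[j] ** 2
    return w, [1.0 / hj for hj in h]

# Simulate spike/silence responses to a 2-dimensional stimulus (true weights 2, -1)
rng = random.Random(42)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(300)]
y = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(2 * x1 - x2))) else 0 for x1, x2 in X]
w_map, w_var = fit_bayesian_logistic_glm(X, y)
```

The approximate variances give the Bayesian confidence intervals mentioned in the abstract; a full EP treatment would additionally yield off-diagonal covariance terms and a posterior-mean (rather than MAP) point estimate.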
Model Selection in Historical Research Using Approximate Bayesian Computation
Rubio-Campillo, Xavier
2016-01-01
Formal Models and History: Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study: This work examines an alternative approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester’s laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact: Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
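A toy sketch of ABC-based model selection as used in the study above: acceptance counts under each candidate model approximate posterior model probabilities, and their ratio approximates a Bayes factor under equal model priors. The two Gaussian "models", tolerance, and scalar summary statistic here are hypothetical stand-ins, not the Lanchester variants from the paper.

```python
import random

def abc_model_probabilities(models, observed_summary, tol=0.2, n_draws=4000, seed=0):
    """ABC model selection: simulate a summary statistic under each candidate model
    and turn the acceptance counts into approximate posterior model probabilities
    (count ratios approximate Bayes factors under equal model priors)."""
    rng = random.Random(seed)
    counts = {}
    for name, simulate in models.items():
        counts[name] = sum(
            1 for _ in range(n_draws) if abs(simulate(rng) - observed_summary) <= tol
        )
    total = sum(counts.values()) or 1   # guard against zero acceptances
    return {name: c / total for name, c in counts.items()}

# Two toy candidate models distinguished only by the summary they predict
models = {
    "model_a": lambda rng: rng.gauss(0.0, 1.0),
    "model_b": lambda rng: rng.gauss(3.0, 1.0),
}
probs = abc_model_probabilities(models, observed_summary=0.1)
```

With an observed summary near 0, draws from `model_a` are accepted far more often, so the approximate posterior probability concentrates on it, mirroring how the study's battle dataset favoured the fatigue variant.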
Prospective evaluation of a Bayesian model to predict organizational change.
Molfenter, Todd; Gustafson, Dave; Kilo, Chuck; Bhattacharya, Abhik; Olsson, Jesper
2005-01-01
This research examines a subjective Bayesian model's ability to predict organizational change outcomes and sustainability of those outcomes for project teams participating in a multi-organizational improvement collaborative. PMID:16093893
Advances in Bayesian Model Based Clustering Using Particle Learning
Merl, D M
2009-11-19
Recent work by Carvalho, Johannes, Lopes and Polson and Carvalho, Lopes, Polson and Taddy introduced a sequential Monte Carlo (SMC) alternative to traditional iterative Monte Carlo strategies (e.g. MCMC and EM) for Bayesian inference for a large class of dynamic models. The basis of SMC techniques involves representing the underlying inference problem as one of state space estimation, thus giving way to inference via particle filtering. The key insight of Carvalho et al. was to construct the sequence of filtering distributions so as to make use of the posterior predictive distribution of the observable, a distribution usually only accessible in certain Bayesian settings. Access to this distribution allows a reversal of the usual propagate and resample steps characteristic of many SMC methods, thereby alleviating to a large extent many problems associated with particle degeneration. Furthermore, Carvalho et al. point out that for many conjugate models the posterior distribution of the static variables can be parametrized in terms of [recursively defined] sufficient statistics of the previously observed data. For models where such sufficient statistics exist, particle learning, as it is being called, is especially well suited for the analysis of streaming data due to the relative invariance of its algorithmic complexity with the number of data observations. Through a particle learning approach, a statistical model can be fit to data as the data is arriving, allowing at any instant during the observation process direct quantification of uncertainty surrounding underlying model parameters. Here we describe the use of a particle learning approach for fitting a standard Bayesian semiparametric mixture model as described in Carvalho, Lopes, Polson and Taddy. In Section 2 we briefly review the previously presented particle learning algorithm for the case of a Dirichlet process mixture of multivariate normals. In Section 3 we describe several novel extensions to the original
Bayesian Approach for Reliability Assessment of Sunshield Deployment on JWST
NASA Technical Reports Server (NTRS)
Kaminskiy, Mark P.; Evans, John W.; Gallo, Luis D.
2013-01-01
Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications, for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a Bayesian approach for reliability estimation of spacecraft deployment was developed for this purpose. This approach was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the observatory's telescope and science instruments. In order to collect the prior information on deployable systems, detailed studies of "heritage information" were conducted extending over 45 years of spacecraft launches. The NASA Goddard Space Flight Center (GSFC) Spacecraft Operational Anomaly and Reporting System (SOARS) data were then used to estimate the parameters of the conjugate Beta prior distribution for anomaly and failure occurrence, as the most consistent set of available data that could be matched to launch histories. This allows for an empirical Bayesian prediction of the risk of an anomaly occurrence during the complex Sunshield deployment, with credibility limits, using prior deployment data and test information.
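The conjugate Beta-Binomial update underlying an empirical Bayes prediction of this kind can be sketched as follows. The heritage-derived prior counts and the new test outcomes below are hypothetical numbers, and the credibility limits are computed by simple Monte Carlo quantiles rather than the exact Beta quantile function.

```python
import random

def beta_update(alpha0, beta0, anomalies, deployments):
    """Conjugate Beta-Binomial update: a heritage-based prior Beta(alpha0, beta0)
    is combined with newly observed deployment outcomes."""
    return alpha0 + anomalies, beta0 + deployments - anomalies

def beta_mean(a, b):
    # Posterior mean of the anomaly probability
    return a / (a + b)

def credible_interval(a, b, level=0.95, n_draws=20000, seed=0):
    # Monte Carlo quantiles of the Beta posterior (credibility limits)
    rng = random.Random(seed)
    draws = sorted(rng.betavariate(a, b) for _ in range(n_draws))
    lo = draws[int(n_draws * (1 - level) / 2)]
    hi = draws[int(n_draws * (1 + level) / 2)]
    return lo, hi

# Hypothetical numbers: heritage data suggesting ~2 anomalies per 40 deployments
# form the prior; 1 anomaly is then observed in 10 new deployment tests.
a, b = beta_update(2, 38, 1, 10)
interval = credible_interval(a, b)
```

The posterior mean and interval shrink toward the heritage record when new test data are sparse, which is the point of borrowing strength from 45 years of launch history.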
Modeling hypoxia in the Chesapeake Bay: Ensemble estimation using a Bayesian hierarchical model
NASA Astrophysics Data System (ADS)
Stow, Craig A.; Scavia, Donald
2009-02-01
Quantifying parameter and prediction uncertainty in a rigorous framework can be an important component of model skill assessment. Generally, models with lower uncertainty will be more useful for prediction and inference than models with higher uncertainty. Ensemble estimation, an idea with deep roots in the Bayesian literature, can be useful to reduce model uncertainty. It is based on the idea that simultaneously estimating common or similar parameters among models can result in more precise estimates. We demonstrate this approach using the Streeter-Phelps dissolved oxygen sag model fit to 29 years of data from Chesapeake Bay. Chesapeake Bay has a long history of bottom water hypoxia and several models are being used to assist management decision-making in this system. The Bayesian framework is particularly useful in a decision context because it can combine both expert-judgment and rigorous parameter estimation to yield model forecasts and a probabilistic estimate of the forecast uncertainty.
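For reference, the Streeter-Phelps dissolved-oxygen sag model fit in the study above has the closed form D(t) = (kd·L0/(kr−kd))(e^{−kd·t} − e^{−kr·t}) + D0·e^{−kr·t}. A direct implementation, with illustrative parameter values rather than the Chesapeake Bay estimates, is:

```python
import math

def streeter_phelps_deficit(t, L0, D0, kd, kr):
    """Dissolved-oxygen deficit D(t) of the classic Streeter-Phelps sag equation,
    with initial BOD L0, initial deficit D0, deoxygenation rate kd and
    reaeration rate kr (rates per day)."""
    if abs(kr - kd) < 1e-12:  # degenerate case kr == kd
        return (kd * L0 * t + D0) * math.exp(-kd * t)
    return (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
        + D0 * math.exp(-kr * t)

# Example: travel time (days) to the maximum deficit for a zero initial deficit
L0, D0, kd, kr = 10.0, 0.0, 0.3, 0.5   # illustrative values only
t_crit = math.log(kr / kd) / (kr - kd)  # critical time when D0 = 0
d_max = streeter_phelps_deficit(t_crit, L0, D0, kd, kr)
```

In the ensemble-estimation setting, `kd` and `kr` would be treated as uncertain parameters shared (or partially pooled) across years, with the Bayesian hierarchy quantifying forecast uncertainty.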
A Semiparametric Bayesian Model for Detecting Synchrony Among Multiple Neurons
Shahbaba, Babak; Zhou, Bo; Lan, Shiwei; Ombao, Hernando; Moorman, David; Behseta, Sam
2015-01-01
We propose a scalable semiparametric Bayesian model to capture dependencies among multiple neurons by detecting their co-firing (possibly with some lag time) patterns over time. After discretizing time so there is at most one spike at each interval, the resulting sequence of 1’s (spike) and 0’s (silence) for each neuron is modeled using the logistic function of a continuous latent variable with a Gaussian process prior. For multiple neurons, the corresponding marginal distributions are coupled to their joint probability distribution using a parametric copula model. The advantages of our approach are as follows: the nonparametric component (i.e., the Gaussian process model) provides a flexible framework for modeling the underlying firing rates; the parametric component (i.e., the copula model) allows us to make inference regarding both contemporaneous and lagged relationships among neurons; using the copula model, we construct multivariate probabilistic models by separating the modeling of univariate marginal distributions from the modeling of dependence structure among variables; our method is easy to implement using a computationally efficient sampling algorithm that can be easily extended to high dimensional problems. Using simulated data, we show that our approach could correctly capture temporal dependencies in firing rates and identify synchronous neurons. We also apply our model to spike train data obtained from prefrontal cortical areas. PMID:24922500
A probabilistic approach to quantum Bayesian games of incomplete information
NASA Astrophysics Data System (ADS)
Iqbal, Azhar; Chappell, James M.; Li, Qiang; Pearce, Charles E. M.; Abbott, Derek
2014-12-01
A Bayesian game is a game of incomplete information in which the rules of the game are not fully known to all players. We consider the Bayesian game of Battle of the Sexes, which has several Bayesian Nash equilibria, and investigate its outcome when the underlying probability set is obtained from generalized Einstein-Podolsky-Rosen experiments. We find that this probability set, which may become non-factorizable, results in a unique Bayesian Nash equilibrium of the game.
Continuous event monitoring via a Bayesian predictive approach.
Di, Jianing; Wang, Daniel; Brashear, H Robert; Dragalin, Vladimir; Krams, Michael
2016-01-01
In clinical trials, continuous monitoring of event incidence rate plays a critical role in making timely decisions affecting trial outcome. For example, continuous monitoring of adverse events protects the safety of trial participants, while continuous monitoring of efficacy events helps identify early signals of efficacy or futility. Because the endpoint of interest is often the event incidence associated with a given length of treatment duration (e.g., incidence proportion of an adverse event with 2 years of dosing), assessing the event proportion before reaching the intended treatment duration becomes challenging, especially when the event onset profile evolves over time with accumulated exposure. In particular, in the earlier part of the study, ignoring censored subjects may result in significant bias in estimating the cumulative event incidence rate. Such a problem is addressed using a predictive approach in the Bayesian framework. In the proposed approach, experts' prior knowledge about both the frequency and timing of the event occurrence is combined with observed data. More specifically, during any interim look, each event-free subject will be counted with a probability that is derived using prior knowledge. The proposed approach is particularly useful in early stage studies for signal detection based on limited information. But it can also be used as a tool for safety monitoring (e.g., data monitoring committee) during later stage trials. Application of the approach is illustrated using a case study where the incidence rate of an adverse event is continuously monitored during an Alzheimer's disease clinical trial. The performance of the proposed approach is also assessed and compared with other Bayesian and frequentist methods via simulation.
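One way to read the counting rule described above is as follows: each event-free subject at an interim look contributes the conditional probability of a later event, derived from a prior incidence rate and a prior onset-time distribution. The sketch below is an illustrative reconstruction under that reading, with entirely hypothetical numbers; the paper's actual model may differ in detail.

```python
def predictive_incidence(n_events, censor_times, n_total, horizon, prior_rate, onset_cdf):
    """Predictive estimate of cumulative event incidence by `horizon`:
    observed events count as 1; each still event-free (censored) subject counts
    with the conditional probability of a later event, derived from prior
    knowledge of event frequency (prior_rate) and timing (onset_cdf)."""
    weighted = float(n_events)
    for t in censor_times:
        p_event_later = prior_rate * (onset_cdf(horizon) - onset_cdf(t))
        p_event_free_so_far = 1.0 - prior_rate * onset_cdf(t)
        weighted += p_event_later / p_event_free_so_far
    return weighted / n_total

# Hypothetical prior: 20% lifetime incidence, onsets uniform over a 24-month course
uniform_onset = lambda t: min(t / 24.0, 1.0)
estimate = predictive_incidence(
    n_events=2, censor_times=[6.0, 12.0], n_total=10,
    horizon=24.0, prior_rate=0.2, onset_cdf=uniform_onset,
)
```

The estimate exceeds the naive proportion of observed events (2/10) because the two censored subjects are partially counted, reducing the early-look bias the abstract describes.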
Advanced REACH Tool: a Bayesian model for occupational exposure assessment.
McNally, Kevin; Warren, Nicholas; Fransman, Wouter; Entink, Rinke Klein; Schinkel, Jody; van Tongeren, Martie; Cherrie, John W; Kromhout, Hans; Schneider, Thomas; Tielemans, Erik
2014-06-01
This paper describes a Bayesian model for the assessment of inhalation exposures in an occupational setting; the methodology underpins a freely available web-based application for exposure assessment, the Advanced REACH Tool (ART). The ART is a higher tier exposure tool that combines disparate sources of information within a Bayesian statistical framework. The information is obtained from expert knowledge expressed in a calibrated mechanistic model of exposure assessment, data on inter- and intra-individual variability in exposures from the literature, and context-specific exposure measurements. The ART provides central estimates and credible intervals for different percentiles of the exposure distribution, for full-shift and long-term average exposures. The ART can produce exposure estimates in the absence of measurements, but the precision of the estimates improves as more data become available. The methodology presented in this paper is able to utilize partially analogous data, a novel approach designed to make efficient use of a sparsely populated measurement database although some additional research is still required before practical implementation. The methodology is demonstrated using two worked examples: an exposure to copper pyrithione in the spraying of antifouling paints and an exposure to ethyl acetate in shoe repair. PMID:24665110
Bayesian Structural Equation Modeling: A More Flexible Representation of Substantive Theory
ERIC Educational Resources Information Center
Muthen, Bengt; Asparouhov, Tihomir
2012-01-01
This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed…
Using Bayesian Networks to Model Hierarchical Relationships in Epidemiological Studies
2011-01-01
OBJECTIVES To propose an alternative procedure, based on a Bayesian network (BN), for estimation and prediction, and to discuss its usefulness for taking into account the hierarchical relationships among covariates. METHODS The procedure is illustrated by modeling the risk of diarrhea infection for 2,740 children aged 0 to 59 months in Cameroon. We compare the procedure with a standard logistic regression and with a model based on multi-level logistic regression. RESULTS The standard logistic regression approach is inadequate, or at least incomplete, in that it does not attempt to account for potentially causal relationships between risk factors. The multi-level logistic regression does model the hierarchical structure, but does so in a piecewise manner; the resulting estimates and interpretations differ from those of the BN approach proposed here. An advantage of the BN approach is that it enables one to determine the probability that a risk factor (and/or the outcome) is in any specific state, given the states of the others. The currently available approaches can only predict the outcome (disease), given the states of the covariates. CONCLUSION A major advantage of BNs is that they can deal with more complex interrelationships between variables whereas competing approaches deal at best only with hierarchical ones. We propose that BN be considered as well as a worthwhile method for summarizing the data in epidemiological studies whose aim is understanding the determinants of diseases and quantifying their effects. PMID:21779534
Carabin, Hélène; Escalona, Marisela; Marshall, Clare; Vivas-Martínez, Sarai; Botto, Carlos; Joseph, Lawrence; Basáñez, María-Gloria
2003-01-01
OBJECTIVE: To develop a Bayesian hierarchical model for human onchocerciasis with which to explore the factors that influence prevalence of microfilariae in the Amazonian focus of onchocerciasis and predict the probability of any community being at least mesoendemic (>20% prevalence of microfilariae), and thus in need of priority ivermectin treatment. METHODS: Models were developed with data from 732 individuals aged ≥15 years who lived in 29 Yanomami communities along four rivers of the south Venezuelan Orinoco basin. The models' abilities to predict prevalences of microfilariae in communities were compared. The deviance information criterion, Bayesian P-values, and residual values were used to select the best model with an approximate cross-validation procedure. FINDINGS: A three-level model that acknowledged clustering of infection within communities performed best, with host age and sex included at the individual level, a river-dependent altitude effect at the community level, and additional clustering of communities along rivers. This model correctly classified 25/29 (86%) villages with respect to their need for priority ivermectin treatment. CONCLUSION: Bayesian methods are a flexible and useful approach for public health research and control planning. Our model acknowledges the clustering of infection within communities, allows investigation of links between individual- or community-specific characteristics and infection, incorporates additional uncertainty due to missing covariate data, and informs policy decisions by predicting the probability that a new community is at least mesoendemic. PMID:12973640
Textual and visual content-based anti-phishing: a Bayesian approach.
Zhang, Haijun; Liu, Gang; Chow, Tommy W S; Liu, Wenyin
2011-10-01
A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold. This is required in the classifier for determining the class of the web page and identifying whether the web page is phishing or not. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure the visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, the Bayes theory is used to synthesize the classification results from textual and visual content. The effectiveness of our proposed approach was examined in a large-scale dataset collected from real phishing cases. Experimental results demonstrated that the text classifier and the image classifier we designed deliver promising results, the fusion algorithm outperforms either of the individual classifiers, and our model can be adapted to different phishing cases.
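The two Bayesian ingredients named above, a naive Bayes text score and a Bayes-rule fusion of classifier outputs, can be sketched as follows. The toy word counts are invented, and conditional independence of the text and image classifiers is assumed for the fusion step; the paper's image classifier (earth mover's distance) is not reproduced here.

```python
import math

def phishing_log_odds(words, phish_counts, legit_counts, prior_phish=0.5):
    """Laplace-smoothed naive Bayes log-odds that a page is phishing, given its words."""
    log_odds = math.log(prior_phish / (1.0 - prior_phish))
    vocab = set(phish_counts) | set(legit_counts)
    n_phish = sum(phish_counts.values()) + len(vocab)   # +len(vocab): Laplace smoothing
    n_legit = sum(legit_counts.values()) + len(vocab)
    for w in words:
        log_odds += math.log((phish_counts.get(w, 0) + 1) / n_phish)
        log_odds -= math.log((legit_counts.get(w, 0) + 1) / n_legit)
    return log_odds

def fuse(p_text, p_image, prior=0.5):
    """Bayes-rule fusion of two conditionally independent classifier posteriors."""
    prior_odds = prior / (1.0 - prior)
    lr_text = (p_text / (1.0 - p_text)) / prior_odds    # likelihood ratios
    lr_image = (p_image / (1.0 - p_image)) / prior_odds
    odds = prior_odds * lr_text * lr_image
    return odds / (1.0 + odds)

phish_counts = {"verify": 5, "account": 5}   # toy training counts
legit_counts = {"news": 5, "sports": 5}
score = phishing_log_odds(["verify", "account"], phish_counts, legit_counts)
```

Fusing two agreeing classifiers drives the combined posterior above either individual one, which is the behaviour the paper reports for its data fusion algorithm.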
Integrated Bayesian network framework for modeling complex ecological issues.
Johnson, Sandra; Mengersen, Kerrie
2012-07-01
The management of environmental problems is multifaceted, requiring varied and sometimes conflicting objectives and perspectives to be considered. Bayesian network (BN) modeling facilitates the integration of information from diverse sources and is well suited to tackling the management challenges of complex environmental problems. However, combining several perspectives in one model can lead to large, unwieldy BNs that are difficult to maintain and understand. Conversely, an oversimplified model may lead to an unrealistic representation of the environmental problem. Environmental managers require the current research and available knowledge about an environmental problem of interest to be consolidated in a meaningful way, thereby enabling the assessment of potential impacts and different courses of action. Previous investigations of the environmental problem of interest may have already resulted in the construction of several disparate ecological models. On the other hand, the opportunity may exist to initiate this modeling. In the first instance, the challenge is to integrate existing models and to merge the information and perspectives from these models. In the second instance, the challenge is to include different aspects of the environmental problem incorporating both the scientific and management requirements. Although the paths leading to the combined model may differ for these 2 situations, the common objective is to design an integrated model that captures the available information and research, yet is simple to maintain, expand, and refine. BN modeling is typically an iterative process, and we describe a heuristic method, the iterative Bayesian network development cycle (IBNDC), for the development of integrated BN models that are suitable for both situations outlined above. The IBNDC approach facilitates object-oriented BN (OOBN) modeling, arguably viewed as the next logical step in adaptive management modeling, and that embraces iterative development
Bridging the gap between GLUE and formal statistical approaches: approximate Bayesian computation
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.
2013-12-01
In recent years, a strong debate has emerged in the hydrologic literature regarding how to properly treat nontraditional error residual distributions and quantify parameter and predictive uncertainty. In particular, there is strong disagreement whether such an uncertainty framework should have its roots within a proper statistical (Bayesian) context using Markov chain Monte Carlo (MCMC) simulation techniques, or whether such a framework should be based on a quite different philosophy and implement informal likelihood functions and simplistic search methods to summarize parameter and predictive distributions. This paper is a follow-up of our previous work published in Vrugt and Sadegh (2013) and demonstrates that approximate Bayesian computation (ABC) bridges the gap between formal and informal statistical model-data fitting approaches. The ABC methodology has recently emerged in the fields of biology and population genetics and relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics that measure the distance of each model simulation to the data. This paper further studies the theoretical and numerical equivalence of formal Bayesian approaches and informal ones, in particular generalized likelihood uncertainty estimation (GLUE), using discharge and forcing data from different watersheds in the United States. We demonstrate that the limits of acceptability approach of GLUE is a special variant of ABC if each discharge observation of the calibration data set is used as a summary diagnostic.
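The "limits of acceptability" idea, under which a parameter draw is behavioural only if every simulated value falls within a tolerance of its observation, is exactly an ABC accept/reject rule with one summary statistic per observation. Below is a minimal sketch using an invented linear discharge model, not the watershed models used in the paper.

```python
import random

def limits_of_acceptability(simulate, sample_prior, observations, tol, n_draws=5000, seed=0):
    """GLUE's limits-of-acceptability as a special case of ABC: a parameter draw is
    behavioural only if every simulated value lies within +/- tol of its observation."""
    rng = random.Random(seed)
    behavioural = []
    for _ in range(n_draws):
        theta = sample_prior(rng)
        sim = simulate(theta)
        if all(abs(s - o) <= tol for s, o in zip(sim, observations)):
            behavioural.append(theta)
    return behavioural

# Toy rainfall-runoff stand-in: discharge grows linearly with time, q(t) = theta * t
times = [1.0, 2.0, 3.0, 4.0, 5.0]
rng = random.Random(7)
observations = [2.0 * t + rng.gauss(0.0, 0.05) for t in times]
draws = limits_of_acceptability(
    simulate=lambda theta: [theta * t for t in times],
    sample_prior=lambda r: r.uniform(0.0, 5.0),
    observations=observations, tol=1.0,
)
```

The retained behavioural draws form the GLUE/ABC posterior sample; shrinking `tol` toward the observation error recovers an increasingly formal Bayesian treatment, which is the bridge the paper describes.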
A Bayesian Approach to Real-Time Earthquake Phase Association
NASA Astrophysics Data System (ADS)
Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.
2014-12-01
Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach, based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity, has been in use now for many years. One of the most significant problems that has emerged over time with this approach is related to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel, Bayesian association algorithm, which looks at the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one to many relations (one earthquake, many phases), during the association process the situation is quite different. Both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we are using to address this is a responsive, NoSQL graph database where the earthquake-phase associations are represented as intersecting Bayesian Learning Networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.
Parameterizing Bayesian network Representations of Social-Behavioral Models by Expert Elicitation
Walsh, Stephen J.; Dalton, Angela C.; Whitney, Paul D.; White, Amanda M.
2010-05-23
Bayesian networks provide a general framework with which to model many natural phenomena. The mathematical nature of Bayesian networks enables a plethora of model validation and calibration techniques, e.g., parameter estimation, goodness-of-fit tests, and diagnostic checking of the model assumptions. However, they are not free of shortcomings. Parameter estimation from relevant extant data is a common approach to calibrating the model parameters. In practice it is not uncommon to find oneself lacking adequate data to reliably estimate all model parameters. In this paper we present the early development of a novel application of conjoint analysis as a method for eliciting and modeling expert opinions and using the results in a methodology for calibrating the parameters of a Bayesian network.
Placek, Ben; Knuth, Kevin H.; Angerhausen, Daniel E-mail: kknuth@albany.edu
2014-11-10
EXONEST is an algorithm dedicated to detecting and characterizing the photometric signatures of exoplanets, which include reflection and thermal emission, Doppler boosting, and ellipsoidal variations. Using Bayesian inference, we can test between competing models that describe the data as well as estimate model parameters. We demonstrate this approach by testing circular versus eccentric planetary orbital models, as well as testing for the presence or absence of four photometric effects. In addition to using Bayesian model selection, a unique aspect of EXONEST is the potential capability to distinguish between reflective and thermal contributions to the light curve. A case study is presented using Kepler data recorded from the transiting planet KOI-13b. By considering only the nontransiting portions of the light curve, we demonstrate that it is possible to estimate the photometrically relevant model parameters of KOI-13b. Furthermore, Bayesian model testing confirms that the orbit of KOI-13b has a detectable eccentricity.
A Bayesian approach to linear regression in astronomy
NASA Astrophysics Data System (ADS)
Sereno, Mauro
2016-01-01
Linear regression is common in astronomical analyses. I discuss a Bayesian hierarchical modelling of data with heteroscedastic and possibly correlated measurement errors and intrinsic scatter. The method fully accounts for time evolution. The slope, the normalization, and the intrinsic scatter of the relation can evolve with the redshift. The intrinsic distribution of the independent variable is approximated using a mixture of Gaussian distributions whose means and standard deviations depend on time. The method can address scatter in the measured independent variable (a kind of Eddington bias), selection effects in the response variable (Malmquist bias), and departure from linearity in the form of a knee. I tested the method with toy models and simulations and quantified the effect of biases and inefficient modelling. The R-package LIRA (LInear Regression in Astronomy) is made available to perform the regression.
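The core of such a regression, inferring slope, intercept, and intrinsic scatter jointly with known heteroscedastic measurement errors, can be sketched with a random-walk Metropolis sampler. This is a minimal illustration on synthetic data with flat priors, not the LIRA package or its full hierarchical model:

```python
import math
import random

random.seed(0)

# Synthetic data: y = a + b*x with intrinsic scatter plus known,
# heteroscedastic measurement errors on y (toy setup).
a_true, b_true, scatter = 1.0, 2.0, 0.3
x = [i / 10 for i in range(30)]
err = [0.1 + 0.05 * random.random() for _ in x]
y = [a_true + b_true * xi + random.gauss(0, scatter) + random.gauss(0, e)
     for xi, e in zip(x, err)]

def log_post(a, b, s):
    if s <= 0:
        return -math.inf
    lp = 0.0
    for xi, yi, ei in zip(x, y, err):
        var = s * s + ei * ei      # intrinsic scatter + measurement error
        lp += -0.5 * ((yi - a - b * xi) ** 2 / var + math.log(var))
    return lp                       # flat priors, for brevity

# Random-walk Metropolis over (a, b, s).
cur = (0.0, 0.0, 1.0)
cur_lp = log_post(*cur)
samples = []
for step in range(20000):
    prop = tuple(c + random.gauss(0, 0.05) for c in cur)
    lp = log_post(*prop)
    if random.random() < math.exp(min(0.0, lp - cur_lp)):
        cur, cur_lp = prop, lp
    if step > 5000:                 # discard burn-in
        samples.append(cur)

b_hat = sum(s[1] for s in samples) / len(samples)
print(round(b_hat, 1))              # posterior mean slope, near b_true
```

Treating the total variance as the sum of intrinsic scatter and per-point measurement variance is what distinguishes this from ordinary least squares, which would conflate the two error sources.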
Modeling the Climatology of Tornado Occurrence with Bayesian Inference
NASA Astrophysics Data System (ADS)
Cheng, Vincent Y. S.
-related variables are more uniform across seasons. The residual variability of the same modeling framework (a reflection of the fidelity of the statistical formulation considered) is subsequently used to delineate distinct geographical patterns of tornado activity. This piece of information provides the foundation for the Bayesian hierarchical prognostic model presented in the third chapter of my dissertation. The results of the latter approach reinforce my earlier finding that the spatial variability of the annual and warm seasonal tornado occurrence is well explained by convective available potential energy and storm relative helicity alone, while vertical wind shear is better at reproducing the cool season tornado activity. The Bayesian hierarchical modeling framework offers a promising methodological tool for understanding regional tornado environments and obtaining reliable predictions in North America.
Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory Theory
ERIC Educational Resources Information Center
Gopnik, Alison; Wellman, Henry M.
2012-01-01
We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework…
ERIC Educational Resources Information Center
Story, Roger E.
1996-01-01
Discussion of the use of Latent Semantic Indexing to determine relevancy in information retrieval focuses on statistical regression and Bayesian methods. Topics include keyword searching; a multiple regression model; how the regression model can aid search methods; and limitations of this approach, including complexity, linearity, and…
Understanding the formation and evolution of interstellar ices: a Bayesian approach
Makrymallis, Antonios; Viti, Serena
2014-10-10
Understanding the physical conditions of dark molecular clouds and star-forming regions is an inverse problem subject to complicated chemistry that varies nonlinearly with both time and the physical environment. In this paper, we apply a Bayesian approach based on a Markov chain Monte Carlo (MCMC) method for solving the nonlinear inverse problems encountered in astrochemical modeling. We use observations for ice and gas species in dark molecular clouds and a time-dependent, gas-grain chemical model to infer the values of the physical and chemical parameters that characterize quiescent regions of molecular clouds. We show evidence that in high-dimensional problems, MCMC algorithms provide a more efficient and complete solution than more classical strategies. The results of our MCMC method enable us to derive statistical estimates and uncertainties for the physical parameters of interest as a result of the Bayesian treatment.
Tang, An-Min; Tang, Nian-Sheng
2015-02-28
We propose a semiparametric multivariate skew-normal joint model for multivariate longitudinal and multivariate survival data. One main feature of the posited model is that we relax the commonly used normality assumption for random effects and within-subject error by using a centered Dirichlet process prior to specify the random effects distribution and using a multivariate skew-normal distribution to specify the within-subject error distribution and model trajectory functions of longitudinal responses semiparametrically. A Bayesian approach is proposed to simultaneously obtain Bayesian estimates of unknown parameters, random effects and nonparametric functions by combining the Gibbs sampler and the Metropolis-Hastings algorithm. Particularly, a Bayesian local influence approach is developed to assess the effect of minor perturbations to within-subject measurement error and random effects. Several simulation studies and an example are presented to illustrate the proposed methodologies. PMID:25404574
Lee, Sik-Yum; Song, Xin-Yuan
2004-05-01
Missing data are very common in behavioural and psychological research. In this paper, we develop a Bayesian approach in the context of a general nonlinear structural equation model with missing continuous and ordinal categorical data. In the development, the missing data are treated as latent quantities, and provision for the incompleteness of the data is made by a hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm. We show by means of a simulation study that the Bayesian estimates are accurate. A Bayesian model comparison procedure based on the Bayes factor and path sampling is proposed. The required observations from the posterior distribution for computing the Bayes factor are simulated by the hybrid algorithm in Bayesian estimation. Our simulation results indicate that the correct model is selected more frequently when the incomplete records are used in the analysis than when they are ignored. The methodology is further illustrated with a real data set from a study concerned with an AIDS preventative intervention for Filipina sex workers.
A Bayesian modelling framework for tornado occurrences in North America.
Cheng, Vincent Y S; Arhonditsis, George B; Sills, David M L; Gough, William A; Auld, Heather
2015-01-01
Tornadoes represent one of nature's most hazardous phenomena that have been responsible for significant destruction and devastating fatalities. Here we present a Bayesian modelling approach for elucidating the spatiotemporal patterns of tornado activity in North America. Our analysis shows a significant increase in the Canadian Prairies and the Northern Great Plains during the summer, indicating a clear transition of tornado activity from the United States to Canada. The linkage between monthly-averaged atmospheric variables and likelihood of tornado events is characterized by distinct seasonality; the convective available potential energy is the predominant factor in the summer; vertical wind shear appears to have a strong signature primarily in the winter and secondarily in the summer; and storm relative environmental helicity is most influential in the spring. The present probabilistic mapping can be used to draw inference on the likelihood of tornado occurrence in any location in North America within a selected time period of the year. PMID:25807465
Evaluating impacts using a BACI design, ratios, and a Bayesian approach with a focus on restoration.
Conner, Mary M; Saunders, W Carl; Bouwes, Nicolaas; Jordan, Chris
2015-10-01
Before-after-control-impact (BACI) designs are an effective method to evaluate natural and human-induced perturbations on ecological variables when treatment sites cannot be randomly chosen. While effect sizes of interest can be tested with frequentist methods, using Bayesian Markov chain Monte Carlo (MCMC) sampling methods, probabilities of effect sizes, such as a ≥20 % increase in density after restoration, can be directly estimated. Although BACI and Bayesian methods are used widely for assessing natural and human-induced impacts for field experiments, the application of hierarchical Bayesian modeling with MCMC sampling to BACI designs is less common. Here, we combine these approaches and extend the typical presentation of results with an easy-to-interpret ratio, which provides an answer to the main study question: "How much impact did a management action or natural perturbation have?" As an example of this approach, we evaluate the impact of a restoration project, which implemented beaver dam analogs, on survival and density of juvenile steelhead. Results indicated the probabilities of a ≥30 % increase were high for survival and density after the dams were installed, 0.88 and 0.99, respectively, while probabilities for a higher increase of ≥50 % were variable, 0.17 and 0.82, respectively. This approach demonstrates a useful extension of Bayesian methods that can easily be generalized to other study designs from simple (e.g., single factor ANOVA, paired t test) to more complicated block designs (e.g., crossover, split-plot). This approach is valuable for estimating the probabilities of restoration impacts or other management actions. PMID:27613291
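The ratio-based summary described here, the probability that an effect exceeds a chosen threshold, falls directly out of MCMC output: compute the ratio draw by draw and count exceedances. The posterior draws below are hypothetical stand-ins, not the study's data:

```python
import random

random.seed(42)

# Hypothetical posterior draws of mean density before and after a
# restoration treatment (stand-ins for real MCMC output).
before = [random.gauss(10.0, 1.0) for _ in range(5000)]
after = [random.gauss(14.0, 1.5) for _ in range(5000)]

# The interpretable ratio: how much did density change after treatment?
ratios = [a / b for a, b in zip(after, before)]

def prob_increase(threshold):
    """P(ratio >= threshold), e.g. threshold=1.3 for a >=30% increase."""
    return sum(r >= threshold for r in ratios) / len(ratios)

print(round(prob_increase(1.3), 2))   # P(>=30% increase)
print(round(prob_increase(1.5), 2))   # P(>=50% increase)
```

Because each ratio is computed within a single posterior draw, the resulting exceedance probabilities automatically propagate the joint uncertainty in both quantities, something a frequentist point estimate of the ratio does not provide.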
A Bayesian Nonparametric Approach to Image Super-Resolution.
Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid
2015-02-01
Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.
D. L. Kelly
2007-06-01
Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked subject of model validation is discussed, and summary statistics for judging the model’s ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.
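The model-validation step emphasized here, judging replication of observed data via the posterior predictive distribution, can be sketched for a simple repairable-component example. The conjugate Gamma-Poisson setup and the failure counts below are illustrative assumptions, not the report's WinBUGS models or data:

```python
import math
import random

random.seed(7)

# Hypothetical failure counts per year for a repairable component.
obs = [3, 1, 4, 2, 6, 2]

# Conjugate posterior for a Poisson rate with a Gamma(1, 1) prior:
# lambda | data ~ Gamma(1 + sum(obs), rate = 1 + len(obs)).
shape, rate = 1 + sum(obs), 1 + len(obs)

def poisson(lam):
    # Knuth's algorithm, adequate for small means.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def stat(data):
    return max(data)            # summary statistic for the check

# Posterior predictive p-value: how often do replicated data sets look
# at least as extreme as the observed one under the fitted model?
reps, extreme = 5000, 0
for _ in range(reps):
    lam = random.gammavariate(shape, 1 / rate)   # draw from posterior
    rep = [poisson(lam) for _ in range(len(obs))]
    if stat(rep) >= stat(obs):
        extreme += 1

p_value = extreme / reps
print(round(p_value, 2))
```

A p-value near 0 or 1 would flag that the model fails to replicate the chosen feature of the data; values in the middle indicate no evidence of misfit for that statistic.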
Bayesian Estimation and Uncertainty Quantification in Differential Equation Models
NASA Astrophysics Data System (ADS)
Bhaumik, Prithwish
In engineering, physics, the biomedical sciences, pharmacokinetics and pharmacodynamics (PKPD), and many other fields, the regression function is often specified as the solution of a system of ordinary differential equations (ODEs) given by df_theta(t)/dt = F(t, f_theta(t), theta), t ∈ [0, 1]; here F is a known, appropriately smooth vector-valued function. Our interest lies in estimating theta from the noisy data. A two-step approach to solve this problem consists of first fitting the data nonparametrically, and then estimating the parameter by minimizing the distance between the nonparametrically estimated derivative and the derivative suggested by the system of ODEs. In Chapter 2 we consider a Bayesian analog of the two-step approach by putting a finite random series prior on the regression function using a B-spline basis. We establish a Bernstein-von Mises theorem for the posterior distribution of the parameter of interest induced from that on the regression function, with the n^(-1/2) contraction rate. Although this approach is computationally fast, the Bayes estimator is not asymptotically efficient. This can be remedied by directly considering the distance between the function in the nonparametric model and a Runge-Kutta (RK4) approximate solution of the ODE while inducing the posterior distribution on the parameter, as done in Chapter 3. We also study the asymptotic properties of a direct Bayesian method obtained from the approximate likelihood given by the RK4 method in Chapter 3. Chapters 4 and 5 contain extensions of the methods discussed so far to higher-order ODEs and partial differential equations (PDEs), respectively. The scope of some future work is discussed in Chapter 6.
A Flexible Bayesian Model for Testing for Transmission Ratio Distortion
Casellas, Joaquim; Manunza, Arianna; Mercader, Anna; Quintanilla, Raquel; Amills, Marcel
2014-01-01
Current statistical approaches to investigate the nature and magnitude of transmission ratio distortion (TRD) are scarce and restricted to the most common experimental designs such as F2 populations and backcrosses. In this article, we describe a new Bayesian approach to check TRD within a given biallelic genetic marker in a diploid species, providing a highly flexible framework that can accommodate any kind of population structure. This model relies on the genotype of each offspring and thus integrates all available information from either the parents’ genotypes or population-specific allele frequencies and yields TRD estimates that can be corroborated by the calculation of a Bayes factor (BF). This approach has been evaluated on simulated data sets with appealing statistical performance. As a proof of concept, we have also tested TRD in a porcine population with five half-sib families and 352 offspring. All boars and piglets were genotyped with the Porcine SNP60 BeadChip, whereas genotypes from the sows were not available. The SNP-by-SNP screening of the pig genome revealed 84 SNPs with decisive evidences of TRD (BF > 100) after accounting for multiple testing. Many of these regions contained genes related to biological processes (e.g., nucleosome assembly and co-organization, DNA conformation and packaging, and DNA complex assembly) that are critically associated with embryonic viability. The implementation of this method, which overcomes many of the limitations of previous approaches, should contribute to fostering research on TRD in both model and nonmodel organisms. PMID:25271302
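The Bayes factor logic behind a TRD test at a single biallelic marker can be illustrated with a toy marginal-likelihood calculation: Mendelian transmission (p = 0.5) versus a uniform prior on the transmission probability. The counts are invented and the setup omits the paper's family-structure modeling:

```python
from math import comb

# Toy counts: k of n transmitted alleles are allele "A" at one marker
# (illustrative, not the study's data or its full genotype-based model).
n, k = 200, 125

# H0: Mendelian transmission, p = 0.5 exactly.
m0 = comb(n, k) * 0.5 ** n

# H1: p ~ Uniform(0, 1). The beta-binomial marginal likelihood is
# comb(n, k) * B(k + 1, n - k + 1), which simplifies to 1 / (n + 1).
m1 = 1 / (n + 1)

bf = m1 / m0            # Bayes factor in favor of distortion
print(round(bf, 1))
```

With these counts the observed 62.5% transmission rate yields a Bayes factor well above 1, favoring distortion; the paper's threshold of BF > 100 for "decisive" evidence follows the conventional Jeffreys-style scale.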
Modelling blood-brain barrier partitioning using Bayesian neural nets.
Winkler, David A; Burden, Frank R
2004-07-01
We have employed three families of molecular descriptors, together with Bayesian regularized neural nets, to model the partitioning of a diverse range of drugs and other small molecules across the blood-brain barrier (BBB). The relative efficacy of each descriptor class is compared, and the advantages of flexible, parsimonious, model-free mapping methods, like Bayesian neural nets, illustrated. The relative importance of the molecular descriptors for the most predictive BBB model was determined by use of automatic relevance determination (ARD), and compared with the important descriptors from other literature models of BBB partitioning.
Bayesian model selection applied to artificial neural networks used for water resources modeling
NASA Astrophysics Data System (ADS)
Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.
2008-04-01
Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.
Bayesian latent structure models with space-time-dependent covariates.
Cai, Bo; Lawson, Andrew B; Hossain, Md Monir; Choi, Jungsoon
2012-04-01
Spatial-temporal data requires flexible regression models which can model the dependence of responses on space- and time-dependent covariates. In this paper, we describe a semiparametric space-time model from a Bayesian perspective. Nonlinear time dependence of covariates and the interactions among the covariates are constructed by local linear and piecewise linear models, allowing for more flexible orientation and position of the covariate plane by using time-varying basis functions. Space-varying covariate linkage coefficients are also incorporated to allow for the variation of space structures across the geographical location. The formulation accommodates uncertainty in the number and locations of the piecewise basis functions to characterize the global effects, spatially structured and unstructured random effects in relation to covariates. The proposed approach relies on variable selection-type mixture priors for uncertainty in the number and locations of basis functions and in the space-varying linkage coefficients. A simulation example is presented to evaluate the performance of the proposed approach with the competing models. A real data example is used for illustration.
A Bayesian Semiparametric Model for Radiation Dose-Response Estimation.
Furukawa, Kyoji; Misumi, Munechika; Cologne, John B; Cullings, Harry M
2016-06-01
In evaluating the risk of exposure to health hazards, characterizing the dose-response relationship and estimating acceptable exposure levels are the primary goals. In analyses of health risks associated with exposure to ionizing radiation, while there is a clear agreement that moderate to high radiation doses cause harmful effects in humans, little has been known about the possible biological effects at low doses, for example, below 0.1 Gy, which is the dose range relevant to most radiation exposures of concern today. A conventional approach to radiation dose-response estimation based on simple parametric forms, such as the linear nonthreshold model, can be misleading in evaluating the risk and, in particular, its uncertainty at low doses. As an alternative approach, we consider a Bayesian semiparametric model that has a connected piece-wise-linear dose-response function with prior distributions having an autoregressive structure among the random slope coefficients defined over closely spaced dose categories. With a simulation study and application to analysis of cancer incidence data among Japanese atomic bomb survivors, we show that this approach can produce smooth and flexible dose-response estimation while reasonably handling the risk uncertainty at low doses and elsewhere. With relatively few assumptions and modeling options to be made by the analyst, the method can be particularly useful in assessing risks associated with low-dose radiation exposures. PMID:26581473
The Appeal to Expert Opinion: Quantitative Support for a Bayesian Network Approach.
Harris, Adam J L; Hahn, Ulrike; Madsen, Jens K; Hsu, Anne S
2016-08-01
The appeal to expert opinion is an argument form that uses the verdict of an expert to support a position or hypothesis. A previous scheme-based treatment of the argument form is formalized within a Bayesian network that is able to capture the critical aspects of the argument form, including the central considerations of the expert's expertise and trustworthiness. We propose this as an appropriate normative framework for the argument form, enabling the development and testing of quantitative predictions as to how people evaluate this argument, suggesting that such an approach might be beneficial to argumentation research generally. We subsequently present two experiments as an example of the potential for future research in this vein, demonstrating that participants' quantitative ratings of the convincingness of a proposition that has been supported with an appeal to expert opinion were broadly consistent with the predictions of the Bayesian model.
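The normative core of such a network, updating belief in a hypothesis from an expert's report while accounting for the expert's reliability, can be computed by direct enumeration over a tiny three-node net. All probabilities below are made-up illustrations, not the paper's elicited values, and expertise and trustworthiness are collapsed into a single reliability node:

```python
from itertools import product

# Toy network: hypothesis H -> expert's report E, moderated by the
# expert's reliability R (all numbers are illustrative assumptions).
p_h = 0.5                       # prior on the hypothesis
p_r = 0.8                       # prior that the expert is reliable

def p_e_given(h, r):
    # A reliable expert mostly tracks the truth; an unreliable one guesses.
    return (0.9 if h else 0.1) if r else 0.5

# P(H = true | expert asserts E) by summing out R (numerator) and
# both H and R (normalizing constant).
num = sum(p_h * (p_r if r else 1 - p_r) * p_e_given(True, r)
          for r in (True, False))
den = sum((p_h if h else 1 - p_h) * (p_r if r else 1 - p_r) * p_e_given(h, r)
          for h, r in product((True, False), repeat=2))
print(round(num / den, 3))      # prints 0.82
```

Lowering p_r weakens the testimony's force: as reliability drops toward chance, the posterior falls back toward the prior, which is exactly the qualitative behavior the scheme-based critical questions are meant to capture.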
On the Bayesian Nonparametric Generalization of IRT-Type Models
ERIC Educational Resources Information Center
San Martin, Ernesto; Jara, Alejandro; Rolin, Jean-Marie; Mouchart, Michel
2011-01-01
We study the identification and consistency of Bayesian semiparametric IRT-type models, where the uncertainty on the abilities' distribution is modeled using a prior distribution on the space of probability measures. We show that for the semiparametric Rasch Poisson counts model, simple restrictions ensure the identification of a general…
Bayesian Network Models for Local Dependence among Observable Outcome Variables
ERIC Educational Resources Information Center
Almond, Russell G.; Mulder, Joris; Hemat, Lisa A.; Yan, Duanli
2009-01-01
Bayesian network models offer a large degree of flexibility for modeling dependence among observables (item outcome variables) from the same task, which may be dependent. This article explores four design patterns for modeling locally dependent observations: (a) no context--ignores dependence among observables; (b) compensatory context--introduces…
Semiparametric Thurstonian Models for Recurrent Choices: A Bayesian Analysis
ERIC Educational Resources Information Center
Ansari, Asim; Iyengar, Raghuram
2006-01-01
We develop semiparametric Bayesian Thurstonian models for analyzing repeated choice decisions involving multinomial, multivariate binary or multivariate ordinal data. Our modeling framework has multiple components that together yield considerable flexibility in modeling preference utilities, cross-sectional heterogeneity and parameter-driven…
On the Adequacy of Bayesian Evaluations of Categorization Models: Reply to Vanpaemel and Lee (2012)
ERIC Educational Resources Information Center
Wills, Andy J.; Pothos, Emmanuel M.
2012-01-01
Vanpaemel and Lee (2012) argued, and we agree, that the comparison of formal models can be facilitated by Bayesian methods. However, Bayesian methods neither precede nor supplant our proposals (Wills & Pothos, 2012), as Bayesian methods can be applied both to our proposals and to their polar opposites. Furthermore, the use of Bayesian methods to…
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
Toward diagnostic model calibration and evaluation: Approximate Bayesian computation
NASA Astrophysics Data System (ADS)
Vrugt, Jasper A.; Sadegh, Mojtaba
2013-07-01
The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. Gupta et al. (2008) have recently proposed steps (amongst others) toward the development of a more robust and powerful method of model evaluation. Their diagnostic approach uses signature behaviors and patterns observed in the input-output data to illuminate to what degree a representation of the real world has been adequately achieved and how the model should be improved for the purpose of learning and scientific discovery. In this paper, we introduce approximate Bayesian computation (ABC) as a vehicle for diagnostic model evaluation. This statistical methodology relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a clearer and more compelling diagnostic power than some average measure of the size of the error residuals. Two illustrative case studies are used to demonstrate that ABC is relatively easy to implement, and readily employs signature based indices to analyze and pinpoint which part of the model is malfunctioning and in need of further improvement.
Bridging groundwater models and decision support with a Bayesian network
Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert
2013-01-01
Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor into the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system because of its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague Island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.
Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes
Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D.
2016-01-01
This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. PMID:26993062
A Hierarchical Bayesian Approach to Ecological Count Data: A Flexible Tool for Ecologists
Fordyce, James A.; Gompert, Zachariah; Forister, Matthew L.; Nice, Chris C.
2011-01-01
Many ecological studies use the analysis of count data to arrive at biologically meaningful inferences. Here, we introduce a hierarchical Bayesian approach to count data. This approach has the advantage over traditional approaches in that it directly estimates the parameters of interest at both the individual-level and population-level, appropriately models uncertainty, and allows for comparisons among models, including those that exceed the complexity of many traditional approaches, such as ANOVA or non-parametric analogs. As an example, we apply this method to oviposition preference data for butterflies in the genus Lycaeides. Using this method, we estimate the parameters that describe preference for each population, compare the preference hierarchies among populations, and explore various models that group populations that share the same preference hierarchy. PMID:22132077
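A minimal conjugate sketch of Bayesian inference for count data follows; the Gamma prior and the egg counts below are hypothetical, and the paper's hierarchical model adds individual- and population-level structure on top of this single update.

```python
# With a Gamma(a, b) prior on a Poisson rate and observed counts, the
# posterior is Gamma(a + sum(counts), b + n): a closed-form Bayesian update.

def gamma_poisson_posterior(a, b, counts):
    """Return posterior (shape, rate) for a Poisson mean with a Gamma prior."""
    return a + sum(counts), b + len(counts)

counts = [3, 5, 2, 4, 6]            # hypothetical counts, e.g. eggs laid per trial
shape, rate = gamma_poisson_posterior(1.0, 1.0, counts)
posterior_mean = shape / rate       # (1 + 20) / (1 + 5) = 3.5
print(posterior_mean)
```

Hierarchical versions replace the fixed prior parameters with population-level distributions, which is what lets the approach compare preference hierarchies across populations.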
Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation
ERIC Educational Resources Information Center
Ross, Steven J.; Mackey, Beth
2015-01-01
This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…
A Bayesian network model for biomarker-based dose response.
Hack, C Eric; Haber, Lynne T; Maier, Andrew; Shulte, Paul; Fowler, Bruce; Lotz, W Gregory; Savage, Russell E
2010-07-01
A Bayesian network model was developed to integrate diverse types of data to conduct an exposure-dose-response assessment for benzene-induced acute myeloid leukemia (AML). The network approach was used to evaluate and compare individual biomarkers and quantitatively link the biomarkers along the exposure-disease continuum. The network was used to perform the biomarker-based dose-response analysis, and various other approaches to the dose-response analysis were conducted for comparison. The network-derived benchmark concentration was approximately an order of magnitude lower than that from the usual exposure concentration versus response approach, which suggests that the presence of more information in the low-dose region (where changes in biomarkers are detectable but effects on AML mortality are not) helps inform the description of the AML response at lower exposures. This work provides a quantitative approach for linking changes in biomarkers of effect both to exposure information and to changes in disease response. Such linkage can provide a scientifically valid point of departure that incorporates precursor dose-response information without being dependent on the difficult issue of a definition of adversity for precursors.
Bayesian Analysis of Structural Equation Models with Nonlinear Covariates and Latent Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2006-01-01
In this article, we formulate a nonlinear structural equation model (SEM) that can accommodate covariates in the measurement equation and nonlinear terms of covariates and exogenous latent variables in the structural equation. The covariates can come from continuous or discrete distributions. A Bayesian approach is developed to analyze the…
The Bayesian Evaluation of Categorization Models: Comment on Wills and Pothos (2012)
ERIC Educational Resources Information Center
Vanpaemel, Wolf; Lee, Michael D.
2012-01-01
Wills and Pothos (2012) reviewed approaches to evaluating formal models of categorization, raising a series of worthwhile issues, challenges, and goals. Unfortunately, in discussing these issues and proposing solutions, Wills and Pothos (2012) did not consider Bayesian methods in any detail. This means not only that their review excludes a major…
A Bayesian Joint Model of Menstrual Cycle Length and Fecundity
Lum, Kirsten J.; Sundaram, Rajeshwari; Louis, Germaine M. Buck; Louis, Thomas A.
2015-01-01
Menstrual cycle length (MCL) has been shown to play an important role in couple fecundity, which is the biologic capacity for reproduction irrespective of pregnancy intentions. However, a comprehensive assessment of its role requires a fecundity model that accounts for male and female attributes and the couple’s intercourse pattern relative to the ovulation day. To this end, we employ a Bayesian joint model for MCL and pregnancy. MCLs follow a scale multiplied (accelerated) mixture model with Gaussian and Gumbel components; the pregnancy model includes MCL as a covariate and computes the cycle-specific probability of pregnancy in a menstrual cycle conditional on the pattern of intercourse and no previous fertilization. Day-specific fertilization probability is modeled using natural cubic splines. We analyze data from the Longitudinal Investigation of Fertility and the Environment Study (the LIFE Study), a couple-based prospective pregnancy study, and find a statistically significant quadratic relation between fecundity and menstrual cycle length, after adjustment for intercourse pattern and other attributes, including male semen quality, both partners' ages, and active smoking status (determined by baseline cotinine level 100 ng/mL). We compare results to those produced by a more basic model and show the advantages of a more comprehensive approach. PMID:26295923
Improving standard practices for prediction in ungauged basins: Bayesian approach
NASA Astrophysics Data System (ADS)
Prieto, Cristina; Le-Vine, Nataliya; García, Eduardo; Medina, Raúl
2015-04-01
In hydrological modelling, the prediction of flows in ungauged basins is still a challenge. Among the different alternatives to quantify and reduce the uncertainty in the predictions, a Bayesian framework has proven to be advantageous. This framework allows flow prediction in ungauged basins based on regionalised hydrological indices. Being grounded on probability theory, the procedure requires a number of assumptions and decisions to be made. Among the most important ones are 1) selection of representative hydrological signatures, 2) selection of the regionalisation model's functional form, and 3) a 'perfect' model/input assumption. The contribution of this research is to address these three assumptions. First, to reduce an extensive set of available hydrological signatures, we select a compact orthogonal set of information pieces using Principal Component Analysis. This advances the standard practice of semi-empirical selection of individual hydrological signatures. Second, we use functional-form-assumption-free Random Forests to regionalise the selected information. This allows the traditional assumption of a linear regression between catchment properties and characteristics of hydrological response to be relaxed. And third, we propose utilising non-traditional metrics to flag up possible model/input errors: the Bayes factor and a newly proposed 'Suitability' test. This addresses the typically unrealistic assumption that the model is 'perfect' and the input is noise-free. The proposed methodological developments are illustrated for the empirical challenge of flow prediction in rivers in Northern Spain.
More Bayesian Transdimensional Inversion for Thermal History Modelling (Invited)
NASA Astrophysics Data System (ADS)
Gallagher, K.
2013-12-01
Since the publication of Dodson (1973) quantifying the relationship between geochronological ages and closure temperatures, an ongoing concern in thermochronology is the reconstruction of thermal histories consistent with the measured data. Extracting this thermal history information is best treated as an inverse problem, given the complex relationship between the observations and the thermal history. When solving the inverse problem (i.e. finding acceptable thermal histories), stochastic sampling methods have often been used, as these are relatively global when searching the model space. However, the issue remains how best to estimate those parts of the thermal history unconstrained by independent information, i.e. what is required to fit the data? To solve this general problem, we use a Bayesian transdimensional Markov chain Monte Carlo method, which has been integrated into user-friendly software, QTQt (Quantitative Thermochronology with Qt), which runs on both Macintosh and PC. The Bayesian approach allows us to consider a wide range of possible thermal histories as general prior information on time, temperature (and temperature offset for multiple samples in a vertical profile). We can also incorporate more focussed geological constraints in terms of more specific priors. In this framework, it is the data themselves (and their errors) that determine the complexity of the thermal history solutions. For example, more precise data will justify a more complex solution, while noisier data will favour simpler solutions. We can express complexity in terms of the number of time-temperature points defining the total thermal history. Another useful feature of this method is that we can easily deal with imprecise parameter values (e.g. kinetics, data errors) by drawing samples from a user-specified probability distribution, rather than using a single value. Finally, the method can be applied to either single samples, or multiple samples (from a borehole or
Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique
2014-01-01
Background: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. Conclusion/Significance: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in
Bayesian Estimation of the DINA Model with Gibbs Sampling
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2015-01-01
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models
ERIC Educational Resources Information Center
Price, Larry R.
2012-01-01
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
Bayesian Semiparametric Structural Equation Models with Latent Variables
ERIC Educational Resources Information Center
Yang, Mingan; Dunson, David B.
2010-01-01
Structural equation models (SEMs) with latent variables are widely useful for sparse covariance structure modeling and for inferring relationships among latent variables. Bayesian SEMs are appealing in allowing for the incorporation of prior information and in providing exact posterior distributions of unknowns, including the latent variables. In…
Application Bayesian Model Averaging method for ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF models. The WRF models have 35 vertical levels and a 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with each individual ensemble member, with weights that reflect the member's relative skill. As a test we chose a case with a heat wave and convective weather conditions in the Poland area from 23 July to 1 August 2013. From 23 July to 29 July 2013, temperatures oscillated below or above 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in hospitalized patients with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses was recorded over Poland, causing a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and led to injuries and a direct threat to life. A comparison of the meteorological data from the ensemble system with data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. Then, the data obtained from single ensemble members and the median from the WRF BMA system are evaluated on the basis of the deterministic statistical errors Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). To evaluation
NASA Astrophysics Data System (ADS)
Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.
2015-12-01
Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
Bayesian methods for characterizing unknown parameters of material models
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
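The prior-to-posterior update at the heart of such a framework can be sketched with a conjugate normal-normal model; the prior, noise variance, and peak-force measurements below are hypothetical, and the paper's models (e.g. spatially correlated conductivity fields) are far richer.

```python
# Normal-normal conjugate update: unknown mean, known measurement noise
# variance. Posterior precision is the sum of prior and data precisions.

def normal_posterior(prior_mean, prior_var, data, noise_var):
    """Posterior (mean, variance) for an unknown mean given noisy measurements."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
    return post_mean, post_var

# Hypothetical prior belief about a weld's peak force (kN),
# sharpened by three measurements.
mean, var = normal_posterior(5.0, 4.0, [6.2, 5.8, 6.0], 1.0)
print(round(mean, 2), round(var, 2))
```

The posterior mean lands between the prior mean and the sample mean, weighted by their precisions, and the posterior variance shrinks with each measurement; that is the mechanism by which the measurements "inform" the material-property parameters.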
NASA Astrophysics Data System (ADS)
Hobson, Michael P.; Jaffe, Andrew H.; Liddle, Andrew R.; Mukherjee, Pia; Parkinson, David
2014-02-01
Preface; Part I. Methods: 1. Foundations and algorithms John Skilling; 2. Simple applications of Bayesian methods D. S. Sivia and Steve Rawlings; 3. Parameter estimation using Monte Carlo sampling Antony Lewis and Sarah Bridle; 4. Model selection and multi-model inference Andrew R. Liddle, Pia Mukherjee and David Parkinson; 5. Bayesian experimental design and model selection forecasting Roberto Trotta, Martin Kunz, Pia Mukherjee and David Parkinson; 6. Signal separation in cosmology M. P. Hobson, M. A. J. Ashdown and V. Stolyarov; Part II. Applications: 7. Bayesian source extraction M. P. Hobson, Graça Rocha and R. Savage; 8. Flux measurement Daniel Mortlock; 9. Gravitational wave astronomy Neil Cornish; 10. Bayesian analysis of cosmic microwave background data Andrew H. Jaffe; 11. Bayesian multilevel modelling of cosmological populations Thomas J. Loredo and Martin A. Hendry; 12. A Bayesian approach to galaxy evolution studies Stefano Andreon; 13. Photometric redshift estimation: methods and applications Ofer Lahav, Filipe B. Abdalla and Manda Banerji; Index.
NASA Astrophysics Data System (ADS)
Hobson, Michael P.; Jaffe, Andrew H.; Liddle, Andrew R.; Mukherjee, Pia; Parkinson, David
2009-12-01
Preface; Part I. Methods: 1. Foundations and algorithms John Skilling; 2. Simple applications of Bayesian methods D. S. Sivia and Steve Rawlings; 3. Parameter estimation using Monte Carlo sampling Antony Lewis and Sarah Bridle; 4. Model selection and multi-model inference Andrew R. Liddle, Pia Mukherjee and David Parkinson; 5. Bayesian experimental design and model selection forecasting Roberto Trotta, Martin Kunz, Pia Mukherjee and David Parkinson; 6. Signal separation in cosmology M. P. Hobson, M. A. J. Ashdown and V. Stolyarov; Part II. Applications: 7. Bayesian source extraction M. P. Hobson, Graça Rocha and R. Savage; 8. Flux measurement Daniel Mortlock; 9. Gravitational wave astronomy Neil Cornish; 10. Bayesian analysis of cosmic microwave background data Andrew H. Jaffe; 11. Bayesian multilevel modelling of cosmological populations Thomas J. Loredo and Martin A. Hendry; 12. A Bayesian approach to galaxy evolution studies Stefano Andreon; 13. Photometric redshift estimation: methods and applications Ofer Lahav, Filipe B. Abdalla and Manda Banerji; Index.
A Bayesian Model of Category-Specific Emotional Brain Responses
Wager, Tor D.; Kang, Jian; Johnson, Timothy D.; Nichols, Thomas E.; Satpute, Ajay B.; Barrett, Lisa Feldman
2015-01-01
Understanding emotion is critical for a science of healthy and disordered brain function, but the neurophysiological basis of emotional experience is still poorly understood. We analyzed human brain activity patterns from 148 studies of emotion categories (2159 total participants) using a novel hierarchical Bayesian model. The model allowed us to classify which of five categories—fear, anger, disgust, sadness, or happiness—is engaged by a study with 66% accuracy (43-86% across categories). Analyses of the activity patterns encoded in the model revealed that each emotion category is associated with unique, prototypical patterns of activity across multiple brain systems including the cortex, thalamus, amygdala, and other structures. The results indicate that emotion categories are not contained within any one region or system, but are represented as configurations across multiple brain networks. The model provides a precise summary of the prototypical patterns for each emotion category, and demonstrates that a sufficient characterization of emotion categories relies on (a) differential patterns of involvement in neocortical systems that differ between humans and other species, and (b) distinctive patterns of cortical-subcortical interactions. Thus, these findings are incompatible with several contemporary theories of emotion, including those that emphasize emotion-dedicated brain systems and those that propose emotion is localized primarily in subcortical activity. They are consistent with componential and constructionist views, which propose that emotions are differentiated by a combination of perceptual, mnemonic, prospective, and motivational elements. Such brain-based models of emotion provide a foundation for new translational and clinical approaches. PMID:25853490
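The classification step can be caricatured with a naive-Bayes computation; the regions, likelihoods, and priors below are hypothetical and far simpler than the paper's hierarchical spatial model of activation patterns.

```python
import math

# Hypothetical P(region active | emotion) likelihoods and flat priors.
likelihood = {
    "fear":    {"amygdala": 0.8, "insula": 0.4},
    "disgust": {"amygdala": 0.3, "insula": 0.9},
}
prior = {"fear": 0.5, "disgust": 0.5}

def classify(active_regions):
    """Pick the emotion category with the highest posterior log-probability."""
    scores = {}
    for emo in prior:
        logp = math.log(prior[emo])
        for region, p in likelihood[emo].items():
            logp += math.log(p if region in active_regions else 1 - p)
        scores[emo] = logp
    return max(scores, key=scores.get)

print(classify({"insula"}))     # insula-dominant pattern
print(classify({"amygdala"}))   # amygdala-dominant pattern
```

The paper's point survives even in this caricature: no single region decides the category; it is the configuration of activity across regions that carries the diagnostic information.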
NASA Astrophysics Data System (ADS)
Smith, T. J.; Marshall, L. A.; Sharma, A.
2011-12-01
Hydrologic modelers are confronted with the challenge of producing estimates of the uncertainty associated with model predictions across a wide array of watersheds, often under very limited data conditions. Statistical methods for hydrologic modeling have evolved rapidly over the recent past in response to these challenges, ranging from improved strategies for estimating optimal parameter values and predictive uncertainty to approaches that aim to link model parameters to watershed characteristics and allow parameters to be transferred to data-poor watersheds. However, despite the advances that have been made in the application of such statistical tools, there remains significant work to be done, particularly regarding the quantification/transfer of predictive uncertainty at/to data-limited locations. Here, we present a hierarchical Bayesian modeling technique referred to as Bayes Empirical Bayes (BEB) as a means of addressing the difficulties in making reliable hydrologic predictions under uncertainty in data-limited watersheds. The BEB technique utilizes formal hierarchical Bayesian analysis (specifically the resultant posterior probability distributions for each estimated model parameter) to pool information from auxiliary watersheds to generate informed probability distributions for each parameter at a data-limited watershed of interest. Such a method has thus far been untested in hydrologic applications but has been used more extensively in ecological studies. This technique represents a significant departure from earlier attempts to make predictions in data-limited watersheds, both in its usage of available data and in its ability to simultaneously quantify predictive uncertainty directly. By utilizing the Bayesian toolkit under a hierarchical approach, information available from auxiliary watersheds can be integrated and summarized into the prediction at the site of interest.
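The pooling idea can be sketched by summarizing auxiliary-watershed posteriors into an informed prior for the data-limited site; the parameter values below are hypothetical, and the full BEB method pools entire posterior distributions rather than just their means.

```python
import statistics

# Hypothetical posterior means of one model parameter (e.g. a recession
# coefficient) estimated at five data-rich auxiliary watersheds.
aux_posterior_means = [0.42, 0.55, 0.48, 0.61, 0.50]

# Summarize them into an informed prior for the ungauged site of interest.
prior_mean = statistics.mean(aux_posterior_means)
prior_sd = statistics.stdev(aux_posterior_means)
print(round(prior_mean, 3), round(prior_sd, 3))
```

The resulting prior is far tighter than an uninformative one, which is what lets the data-limited site "borrow strength" from its neighbours while still quantifying predictive uncertainty.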
ERIC Educational Resources Information Center
Shuck, Brad; Zigarmi, Drea; Owen, Jesse
2015-01-01
Purpose: The purpose of this study was to empirically examine the utility of self-determination theory (SDT) within the engagement-performance linkage. Design/methodology/approach: Bayesian multi-measurement mediation modeling was used to estimate the relation between SDT, engagement and a proxy measure of performance (e.g. work intentions) (N =…
Bayesian Local Contamination Models for Multivariate Outliers
Page, Garritt L.; Dunson, David B.
2013-01-01
In studies where data are generated from multiple locations or sources it is common for there to exist observations that are quite unlike the majority. Motivated by the application of establishing a reference value in an inter-laboratory setting when outlying labs are present, we propose a local contamination model that is able to accommodate unusual multivariate realizations in a flexible way. The proposed method models the process level of a hierarchical model using a mixture with a parametric component and a possibly nonparametric contamination. Much of the flexibility in the methodology is achieved by allowing varying random subsets of the elements in the lab-specific mean vectors to be allocated to the contamination component. Computational methods are developed and the methodology is compared to three other possible approaches using a simulation study. We apply the proposed method to a NIST/NOAA sponsored inter-laboratory study which motivated the methodological development. PMID:24363465
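The posterior probability that an observation belongs to the contamination component follows from Bayes' rule on the mixture; the two-normal mixture and its parameters below are a hypothetical simplification of the paper's flexible, possibly nonparametric contamination.

```python
import math

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def contamination_prob(x, pi=0.1, mu=0.0, sigma=1.0, sigma_c=5.0):
    """P(contaminated | x) for the mixture pi*N(mu, sigma_c^2) + (1-pi)*N(mu, sigma^2)."""
    pc = pi * normal_pdf(x, mu, sigma_c)        # contamination component
    pn = (1 - pi) * normal_pdf(x, mu, sigma)    # well-behaved component
    return pc / (pc + pn)

print(round(contamination_prob(0.2), 3))  # typical lab value: low probability
print(round(contamination_prob(6.0), 3))  # outlying lab value: near 1
```

Downweighting labs by this posterior probability, rather than discarding them outright, is what makes the reference-value estimate robust to outlying laboratories.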
Forecasting unconventional resource productivity - A spatial Bayesian model
NASA Astrophysics Data System (ADS)
Montgomery, J.; O'sullivan, F.
2015-12-01
Today's low prices mean that unconventional oil and gas development requires ever-greater efficiency and better development decision-making. Inter- and intra-field variability in well productivity, a major contemporary driver of uncertainty regarding resource size and its economics, is driven by factors including geological conditions, well and completion design (which companies vary as they seek to optimize their performance), and uncertainty about the nature of fracture propagation. Geological conditions are often not well understood early in development campaigns, yet critical assessments and decisions must nevertheless be made regarding the value of drilling an area and the placement of wells. In these situations, location provides a reasonable proxy for geology and "rock quality." We propose a spatial Bayesian model for forecasting acreage quality, which improves decision-making by leveraging available production data and provides a framework for statistically studying the influence of different parameters on well productivity. Our approach consists of subdividing a field into sections and forming prior distributions for productivity in each section based on knowledge about the overall field. Production data from wells are used to update these estimates in a Bayesian fashion, improving model accuracy far more rapidly and with less sensitivity to outliers than a model that simply establishes an "average" productivity in each section. Additionally, forecasts using this model capture the importance of uncertainty, whether due to a lack of information or for areas that demonstrate greater geological risk. We demonstrate the forecasting utility of this method using public data and also provide examples of how information from this model can be combined with knowledge about a field's geology or changes in technology to better quantify development risk. This approach represents an important shift in the way that production data is used to guide
Bayesian Joint Modelling for Object Localisation in Weakly Labelled Images.
Shi, Zhiyuan; Hospedales, Timothy M; Xiang, Tao
2015-10-01
We address the problem of localisation of objects as bounding boxes in images and videos with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. In this paper, a novel framework based on Bayesian joint topic modelling is proposed, which differs significantly from the existing ones in that: (1) All foreground object classes are modelled jointly in a single generative model that encodes multiple object co-existence so that "explaining away" inference can resolve ambiguity and lead to better learning and localisation. (2) Image backgrounds are shared across classes to better learn varying surroundings and "push out" objects of interest. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Moreover, the Bayesian formulation enables the exploitation of various types of prior knowledge to compensate for the limited supervision offered by weakly labelled data, as well as Bayesian domain adaptation for transfer learning. Extensive experiments on the PASCAL VOC, ImageNet and YouTube-Object videos datasets demonstrate the effectiveness of our Bayesian joint model for weakly supervised object localisation. PMID:26340253
Point source moment tensor inversion through a Bayesian hierarchical model
NASA Astrophysics Data System (ADS)
Mustać, Marija; Tkalčić, Hrvoje
2016-01-01
Characterization of seismic sources is an important aspect of seismology. Parameter uncertainties in such inversions are essential for estimating solution robustness, but are rarely available. We have developed a non-linear moment tensor inversion method in a probabilistic Bayesian framework that also accounts for noise in the data. The method is designed for point source inversion using waveform data of moderate-size earthquakes and explosions at regional distances. This probabilistic approach results in an ensemble of models, whose density is proportional to the parameter probability distribution and quantifies parameter uncertainties. Furthermore, we invert for noise in the data, allowing it to determine the model complexity. We implement an empirical noise covariance matrix that accounts for the interdependence of observational errors present in waveform data. After we demonstrate the feasibility of the approach on synthetic data, we apply it to a Long Valley Caldera, CA, earthquake with well-documented anomalous (non-double-couple) radiation from previous studies. We confirm a statistically significant isotropic component in the source without a trade-off with the compensated linear vector dipole component.
Capturing changes in flood risk with Bayesian approaches for flood damage assessment
NASA Astrophysics Data System (ADS)
Vogel, Kristin; Schröter, Kai; Kreibich, Heidi; Thieken, Annegret; Müller, Meike; Sieg, Tobias; Laudan, Jonas; Kienzler, Sarah; Weise, Laura; Merz, Bruno; Scherbaum, Frank
2016-04-01
Flood risk is a function of hazard as well as of exposure and vulnerability. All three components change over space and time and have to be considered for reliable damage estimation and risk analysis, since these are the basis for efficient, adaptable risk management. Hitherto, models for estimating flood damage are comparatively simple and cannot sufficiently account for changing conditions. The Bayesian network approach allows for multivariate modeling of complex systems without relying on expert knowledge about physical constraints. In a Bayesian network each model component is considered to be a random variable. The way these variables interact can be learned from observations or defined by expert knowledge; a combination of both is also possible. Moreover, the probabilistic framework captures uncertainties related to the prediction and provides a probability distribution for the damage instead of a point estimate. The graphical representation of Bayesian networks helps to study the change of probabilities under changing circumstances and may thus simplify the communication between scientists and public authorities. In the framework of the DFG Research Training Group "NatRiskChange" we aim to develop Bayesian networks for flood damage and vulnerability assessments of residential buildings and companies under changing conditions. A Bayesian network learned from data, collected over the last 15 years in flooded regions in the Elbe and Danube catchments (Germany), reveals the impact of many variables, such as building characteristics, precaution and warning situation, on flood damage to residential buildings. While the handling of incomplete and hybrid (discrete mixed with continuous) data is the most challenging issue in the study on residential buildings, a similar study, which focuses on the vulnerability of small to medium-sized companies, bears new challenges. Relying on a much smaller data set for the determination of the model
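The Bayesian-network mechanics described above can be illustrated with a toy two-parent network. The structure and every probability below are invented for illustration; the actual networks in the study involve many more variables learned from the Elbe and Danube data.

```python
# Toy discrete Bayesian network: damage class depends on precaution
# and warning quality. All conditional probabilities are invented.
p_precaution = {True: 0.4, False: 0.6}
p_warning = {True: 0.7, False: 0.3}          # True = timely warning
# P(Damage = high | Precaution, Warning)
p_high = {(True, True): 0.1, (True, False): 0.3,
          (False, True): 0.4, (False, False): 0.7}

def prob_high_damage(warning):
    """P(Damage=high | Warning=warning), marginalizing over precaution."""
    return sum(p_precaution[prec] * p_high[(prec, warning)]
               for prec in (True, False))

p_given_warning = prob_high_damage(True)     # with a timely warning
p_no_warning = prob_high_damage(False)
# marginal probability of high damage across warning situations
p_marginal = sum(p_warning[w] * prob_high_damage(w) for w in (True, False))
```

Changing any entry of the conditional tables and recomputing shows how the network propagates "changing circumstances" into a revised damage distribution rather than a point estimate.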
Improving default risk prediction using Bayesian model uncertainty techniques.
Kazemi, Reza; Mosleh, Ali
2012-11-01
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. PMID:23163724
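One simple way to combine several agencies' default-probability estimates with historical-accuracy weights is linear pooling in log-odds space. This is an illustrative stand-in with invented numbers, not the article's Bayesian framework, which models agency accuracy more fully.

```python
import math

# Hypothetical sketch: pool default-probability estimates from three
# rating agencies, weighting each in log-odds space by a made-up
# historical-accuracy weight (weights sum to 1).
estimates = {"agency_a": 0.020, "agency_b": 0.035, "agency_c": 0.050}
accuracy_w = {"agency_a": 0.5, "agency_b": 0.3, "agency_c": 0.2}

logit = lambda p: math.log(p / (1 - p))
pooled_logit = sum(accuracy_w[a] * logit(p) for a, p in estimates.items())
pooled_pd = 1 / (1 + math.exp(-pooled_logit))  # back to a probability
```

The pooled estimate lands between the most and least pessimistic agencies, pulled toward the historically more accurate one.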
Quantifying Uncertainty in Velocity Models using Bayesian Methods
NASA Astrophysics Data System (ADS)
Hobbs, R.; Caiado, C.; Majdański, M.
2008-12-01
Quantifying uncertainty in models derived from observed data is a major issue. Public and political understanding of uncertainty is poor, and for industry an inadequate assessment of risk costs money. In this talk we examine the geological structure of the subsurface; however, our principal exploration tool, controlled source seismology, gives its data in time. Inversion tools exist to map these data into a depth model, but a full exploration of the uncertainty of the model is rarely done because robust strategies do not exist for large, non-linear, complex systems. There are two principal sources of uncertainty: the first comes from the input data, which are noisy and bandlimited; the second, and more sinister, is from the model parameterisation and the forward algorithms themselves, which approximate the physics to make the problem tractable. To address these issues we propose a Bayesian approach. One philosophy is to estimate the uncertainty in a possible model derived using standard inversion tools. During the inversion stage we can use our geological prejudice to derive an acceptable model. Then we use a local random walk based on the Metropolis-Hastings algorithm to explore the model space immediately around a possible solution. For models with a limited number of parameters we can use the forward modelling step from the inversion code. However, as the number of parameters increases and/or the cost of the forward modelling step becomes significant, we need to use fast emulators to act as proxies so that a sufficient number of iterations can be performed on which to base our statistical measures of uncertainty. In this presentation we show examples of uncertainty estimation using both pre- and post-critical seismic data. In particular, we demonstrate uncertainty introduced by the approximation of the physics by using a tomographic inversion of bandlimited data and show that uncertainty increases as the central frequency of the data decreases. This is consistent with the
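The local random-walk exploration described above can be sketched with a one-parameter Metropolis-Hastings sampler. The quadratic "misfit" below is a toy stand-in for the real forward-modelling step (equivalent to a Gaussian posterior centred on an assumed velocity of 2.5 km/s with standard deviation 0.1).

```python
import math
import random

random.seed(0)

# Toy posterior: stand-in for the data misfit around an inversion solution.
def log_post(v):
    return -0.5 * ((v - 2.5) / 0.1) ** 2

v, step = 2.5, 0.05          # start the walk at the inversion solution
samples = []
for _ in range(20000):
    prop = v + random.gauss(0.0, step)       # local random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(v):
        v = prop                             # Metropolis accept
    samples.append(v)

burn = samples[5000:]                        # discard burn-in
mean_v = sum(burn) / len(burn)
sd_v = (sum((s - mean_v) ** 2 for s in burn) / len(burn)) ** 0.5
```

The sample spread (`sd_v`) is the uncertainty estimate; in the real application each `log_post` evaluation would call the forward model or a fast emulator.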
NASA Astrophysics Data System (ADS)
Dries, M.; Trager, S. C.; Koopmans, L. V. E.
2016-08-01
Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov Chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age, and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities, and IMFs. When systematic uncertainties are not significant, we are able to reconstruct the input parameters that were used to create the mock populations. Our results show that if systematic uncertainties do play a role, this may introduce a bias on the results. Therefore, it is important to objectively compare different ingredients of SPS models. Through its Bayesian framework, our model is well-suited for this.
NASA Astrophysics Data System (ADS)
Dries, M.; Trager, S. C.; Koopmans, L. V. E.
2016-11-01
Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach, we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities and IMFs. When systematic uncertainties are not significant, we are able to reconstruct the input parameters that were used to create the mock populations. Our results show that if systematic uncertainties do play a role, this may introduce a bias on the results. Therefore, it is important to objectively compare different ingredients of SPS models. Through its Bayesian framework, our model is well suited for this.
Probabilistic detection of volcanic ash using a Bayesian approach
NASA Astrophysics Data System (ADS)
Mackie, Shona; Watson, Matthew
2014-03-01
Airborne volcanic ash can pose a hazard to aviation, agriculture, and both human and animal health. It is therefore important that ash clouds are monitored both day and night, even when they travel far from their source. Infrared satellite data provide perhaps the only means of doing this, and since the hugely expensive ash crisis that followed the 2010 Eyjafjallajökull eruption, much research has been carried out into techniques for discriminating ash in such data and for deriving key properties. Such techniques are generally specific to data from particular sensors, and most approaches result in a binary classification of pixels into "ash" and "ash free" classes with no indication of the classification certainty for individual pixels. Furthermore, almost all operational methods rely on expert-set thresholds to determine what constitutes "ash" and can therefore be criticized for being subjective and dependent on expertise that may not remain with an institution. Very few existing methods exploit available contemporaneous atmospheric data to inform the detection, despite the sensitivity of most techniques to atmospheric parameters. The Bayesian method proposed here does exploit such data and gives a probabilistic, physically based classification. We provide an example of the method's implementation for a scene containing both land and sea observations, and a large area of desert dust (often misidentified as ash by other methods). The technique has already been successfully applied to other detection problems in remote sensing, and this work shows that it will be a useful and effective tool for ash detection.
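A probabilistic per-pixel classification of this kind reduces to Bayes' rule over class-conditional likelihoods. The densities and prior below are invented for illustration; the paper's method builds its likelihoods from physical radiative-transfer considerations and contemporaneous atmospheric data.

```python
import math

# Toy Bayes-rule pixel classifier on a brightness-temperature
# difference (BTD); all densities and the prior are invented.
def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

prior_ash = 0.05                              # ash pixels are rare a priori
def p_ash_given_obs(btd):
    like_ash = normal_pdf(btd, -1.5, 0.8)     # ash tends toward negative BTD
    like_clear = normal_pdf(btd, 0.5, 0.8)
    num = like_ash * prior_ash
    return num / (num + like_clear * (1 - prior_ash))

p_ashy = p_ash_given_obs(-2.0)    # strongly negative BTD: likely ash
p_clear = p_ash_given_obs(0.6)    # positive BTD: likely ash-free
```

Unlike a threshold test, the output is a probability per pixel, so downstream users can set their own risk tolerance.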
Helle, Inari; Ahtiainen, Heini; Luoma, Emilia; Hänninen, Maria; Kuikka, Sakari
2015-08-01
Large-scale oil accidents can inflict substantial costs to the society, as they typically result in expensive oil combating and waste treatment operations and have negative impacts on recreational and environmental values. Cost-benefit analysis (CBA) offers a way to assess the economic efficiency of management measures capable of mitigating the adverse effects. However, the irregular occurrence of spills combined with uncertainties related to the possible effects makes the analysis a challenging task. We develop a probabilistic modeling approach for a CBA of oil spill management and apply it in the Gulf of Finland, the Baltic Sea. The model has a causal structure, and it covers a large number of factors relevant to the realistic description of oil spills, as well as the costs of oil combating operations at open sea, shoreline clean-up, and waste treatment activities. Further, to describe the effects on environmental benefits, we use data from a contingent valuation survey. The results encourage seeking for cost-effective preventive measures, and emphasize the importance of the inclusion of the costs related to waste treatment and environmental values in the analysis. Although the model is developed for a specific area, the methodology is applicable also to other areas facing the risk of oil spills as well as to other fields that need to cope with the challenging combination of low probabilities, high losses and major uncertainties. PMID:25983196
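The cost-benefit comparison under irregular, uncertain spills can be sketched with a small Monte Carlo calculation. All figures below are hypothetical; the paper's model is a causal probabilistic model with many more factors, including contingent-valuation estimates of environmental values.

```python
import random

random.seed(1)

# Monte Carlo sketch of a probabilistic cost-benefit comparison: a
# preventive measure has a fixed annual cost but reduces both the
# spill probability and the expected damages. All numbers invented.
def expected_annual_cost(p_spill, combating, waste, env_damage, n=100000):
    total = 0.0
    for _ in range(n):
        if random.random() < p_spill:
            # lognormal variability in realized damages per spill
            scale = random.lognormvariate(0.0, 0.5)
            total += scale * (combating + waste + env_damage)
    return total / n

baseline = expected_annual_cost(0.01, 50e6, 20e6, 30e6)
with_measure = expected_annual_cost(0.004, 40e6, 15e6, 25e6) + 2e5  # + measure cost
net_benefit = baseline - with_measure
```

Because losses are rare but large, the expected-cost estimate is noisy; this is exactly the low-probability, high-loss setting the abstract highlights.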
Open Source Bayesian Models. 3. Composite Models for Prediction of Binned Responses
2016-01-01
Bayesian models constructed from structure-derived fingerprints have been a popular and useful method for drug discovery research when applied to bioactivity measurements that can be effectively classified as active or inactive. The results can be used to rank candidate structures according to their probability of activity, and this ranking benefits from the high degree of interpretability when structure-based fingerprints are used, making the results chemically intuitive. Besides selecting an activity threshold, building a Bayesian model is fast and requires few or no parameters or user intervention. The method also does not suffer from such acute overtraining problems as quantitative structure–activity relationships or quantitative structure–property relationships (QSAR/QSPR). This makes it an approach highly suitable for automated workflows that are independent of user expertise or prior knowledge of the training data. We now describe a new method for creating a composite group of Bayesian models to extend the method to work with multiple states, rather than just binary. Incoming activities are divided into bins, each covering a mutually exclusive range of activities. For each of these bins, a Bayesian model is created to model whether or not the compound belongs in the bin. Analyzing putative molecules using the composite model involves making a prediction for each bin and examining the relative likelihood for each assignment, for example, highest value wins. The method has been evaluated on a collection of hundreds of data sets extracted from ChEMBL v20 and validated data sets for ADME/Tox and bioactivity. PMID:26750305
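The composite-model idea can be sketched with one Laplace-corrected Bernoulli naive Bayes model per activity bin, assigning a new compound to the bin whose model scores it highest. The fingerprints and training data below are toys; the paper's models use structure-derived fingerprints over ChEMBL-scale data.

```python
import math

# Toy fingerprints: each compound is a set of "on" bits; three bins.
train = {
    "low":  [{0, 1}, {0, 2}, {1, 2}],
    "mid":  [{2, 3}, {3, 4}, {2, 4}],
    "high": [{4, 5}, {5, 6}, {4, 6}],
}
N_BITS = 7

def bit_probs(members):
    # P(bit set | in bin), with Laplace smoothing
    return [(sum(1 for fp in members if b in fp) + 1) / (len(members) + 2)
            for b in range(N_BITS)]

models = {b: bit_probs(m) for b, m in train.items()}

def score(fp, probs):
    # log-likelihood of the fingerprint under one bin's model
    return sum(math.log(p if b in fp else 1 - p)
               for b, p in enumerate(probs))

def classify(fp):
    # composite prediction: highest-scoring bin wins
    return max(models, key=lambda b: score(fp, models[b]))

pred = classify({4, 5})
```

Comparing the per-bin scores (rather than only the argmax) gives the relative likelihood profile across bins that the abstract describes.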
A Bayesian approach for convex combination of two Gumbel-Barnett copulas
NASA Astrophysics Data System (ADS)
Fernández, M.; González-López, V. A.
2013-10-01
In this paper, a new Bayesian approach was applied to model the dependence between two variables of interest in public policy, "Gonorrhea Rates per 100,000 Population" and "400% Federal Poverty Level and over," using a small number of paired observations (one pair for each U.S. state). We use a mixture of Gumbel-Barnett copulas, which is suitable for representing situations with weak and negative dependence, the case treated here. The methodology even allows the dependence between the variables to be predicted from one year to the next, showing whether the dependence has changed.
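The mixture density can be written out directly from the standard Gumbel-Barnett form C(u,v) = uv·exp(-θ·ln(u)·ln(v)) with θ in (0, 1]; differentiating twice gives the density used below. The weights and θ values are illustrative, not the fitted ones from the paper.

```python
import math

# Gumbel-Barnett copula density, obtained by differentiating
# C(u,v) = u*v*exp(-theta*ln(u)*ln(v)) with respect to u and v.
def gb_density(u, v, theta):
    lu, lv = math.log(u), math.log(v)
    return math.exp(-theta * lu * lv) * ((1 - theta * lu) * (1 - theta * lv) - theta)

# Convex combination of two Gumbel-Barnett copula densities.
def mix_density(u, v, w, th1, th2):
    return w * gb_density(u, v, th1) + (1 - w) * gb_density(u, v, th2)

d = mix_density(0.3, 0.7, 0.6, 0.2, 0.9)   # illustrative parameters
```

The negative-dependence character shows up in the density itself: for a large θ, jointly high (u, v) values are less likely than discordant ones.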
ERIC Educational Resources Information Center
Aslan, Burak Galip; Öztürk, Özlem; Inceoglu, Mustafa Murat
2014-01-01
Considering the increasing importance of adaptive approaches in CALL systems, this study implemented a machine learning based student modeling middleware with Bayesian networks. The profiling approach of the student modeling system is based on Felder and Silverman's Learning Styles Model and Felder and Soloman's Index of Learning Styles…
Bayesian IRT Guessing Models for Partial Guessing Behaviors
ERIC Educational Resources Information Center
Cao, Jing; Stokes, S. Lynne
2008-01-01
According to the recent Nation's Report Card, 12th-graders failed to produce gains on the 2005 National Assessment of Educational Progress (NAEP) despite earning better grades on average. One possible explanation is that 12th-graders were not motivated taking the NAEP, which is a low-stakes test. We develop three Bayesian IRT mixture models to…
Shortlist B: A Bayesian Model of Continuous Speech Recognition
ERIC Educational Resources Information Center
Norris, Dennis; McQueen, James M.
2008-01-01
A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward…
Bayesian shared frailty models for regional inference about wildlife survival
Heisey, D.M.
2012-01-01
One can joke that 'exciting statistics' is an oxymoron, but it is neither a joke nor an exaggeration to say that these are exciting times to be involved in statistical ecology. As Halstead et al.'s (2012) paper nicely exemplifies, recently developed Bayesian analyses can now be used to extract insights from data using techniques that would have been unavailable to the ecological researcher just a decade ago. Some object to this, implying that the subjective priors of the Bayesian approach are the pathway to perdition (e.g. Lele & Dennis, 2009). It is reasonable to ask whether these new approaches are really giving us anything that we could not obtain with traditional tried-and-true frequentist approaches. I believe the answer is a clear yes.
Lin, Lin; Chan, Cliburn; West, Mike
2016-01-01
We discuss the evaluation of subsets of variables for the discriminative evidence they provide in multivariate mixture modeling for classification. The novel development of Bayesian classification analysis presented is partly motivated by problems of design and selection of variables in biomolecular studies, particularly involving widely used assays of large-scale single-cell data generated using flow cytometry technology. For such studies and for mixture modeling generally, we define discriminative analysis that overlays fitted mixture models using a natural measure of concordance between mixture component densities, and define an effective and computationally feasible method for assessing and prioritizing subsets of variables according to their roles in discrimination of one or more mixture components. We relate the new discriminative information measures to Bayesian classification probabilities and error rates, and exemplify their use in Bayesian analysis of Dirichlet process mixture models fitted via Markov chain Monte Carlo methods as well as using a novel Bayesian expectation-maximization algorithm. We present a series of theoretical and simulated data examples to fix concepts and exhibit the utility of the approach, and compare with prior approaches. We demonstrate application in the context of automatic classification and discriminative variable selection in high-throughput systems biology using large flow cytometry datasets. PMID:26040910
Estimating genealogies from linked marker data: a Bayesian approach
Gasbarra, Dario; Pirinen, Matti; Sillanpää, Mikko J; Arjas, Elja
2007-01-01
Background Answers to several fundamental questions in statistical genetics would ideally require knowledge of the ancestral pedigree and of the gene flow therein. A few examples of such questions are haplotype estimation, relatedness and relationship estimation, gene mapping by combining pedigree and linkage disequilibrium information, and estimation of population structure. Results We present a probabilistic method for genealogy reconstruction. Starting with a group of genotyped individuals from some population isolate, we explore the state space of their possible ancestral histories under our Bayesian model by using Markov chain Monte Carlo (MCMC) sampling techniques. The main contribution of our work is the development of sampling algorithms in the resulting vast state space with highly dependent variables. The main drawback is the computational complexity that limits the time horizon within which explicit reconstructions can be carried out in practice. Conclusion The estimates for IBD (identity-by-descent) and haplotype distributions are tested in several settings using simulated data. The results appear to be promising for a further development of the method. PMID:17961219
A Bayesian A-optimal and model robust design criterion.
Zhou, Xiaojie; Joseph, Lawrence; Wolfson, David B; Bélisle, Patrick
2003-12-01
Suppose that the true model underlying a set of data is one of a finite set of candidate models, and that parameter estimation for this model is of primary interest. With this goal, optimal design must depend on a loss function across all possible models. A common method that accounts for model uncertainty is to average the loss over all models; this is the basis of what is known as Läuter's criterion. We generalize Läuter's criterion and show that it can be placed in a Bayesian decision theoretic framework, by extending the definition of Bayesian A-optimality. We use this generalized A-optimality to find optimal design points in an environmental safety setting. In estimating the smallest detectable trace limit in a water contamination problem, we obtain optimal designs that are quite different from those suggested by standard A-optimality.
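Averaging a design criterion over candidate models can be sketched for a toy one-factor design. Here the loss for each model is taken as trace((XᵀX)⁻¹), the classical A-criterion for a linear model with unit error variance; the weights and candidate models are illustrative, and the paper's generalized criterion involves a fuller decision-theoretic loss.

```python
# Sketch of a model-averaged (Lauter-style) A-type criterion: candidate
# models are linear and quadratic in x; the criterion averages
# trace((X'X)^-1) over illustrative model weights.
def design_matrix(xs, degree):
    return [[x ** k for k in range(degree + 1)] for x in xs]

def xtx(X):
    p = len(X[0])
    return [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]

def trace_inverse(M):
    # Gauss-Jordan inversion; fine for the tiny matrices used here
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return sum(A[i][n + i] for i in range(n))

def averaged_a_criterion(xs, model_weights):
    return sum(w * trace_inverse(xtx(design_matrix(xs, deg)))
               for deg, w in model_weights.items())

weights = {1: 0.5, 2: 0.5}   # prior probability of linear vs quadratic model
spread = averaged_a_criterion([-1.0, 0.0, 1.0, -1.0, 0.0, 1.0], weights)
clumped = averaged_a_criterion([-0.1, 0.0, 0.1, -0.1, 0.0, 0.1], weights)
```

A spread-out design scores far better (lower) than a clumped one under both candidate models, so the averaged criterion prefers it regardless of which model is true.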
Fu, Zhibiao; Baker, Daniel; Cheng, Aili; Leighton, Julie; Appelbaum, Edward; Aon, Juan
2016-05-01
The principle of quality by design (QbD) has been widely applied to biopharmaceutical manufacturing processes. Process characterization is an essential step in implementing the QbD concept to establish the design space and to define the proven acceptable ranges (PAR) for critical process parameters (CPPs). In this study, we present the characterization of a Saccharomyces cerevisiae fermentation process using risk assessment analysis, statistical design of experiments (DoE), and the multivariate Bayesian predictive approach. The critical quality attributes (CQAs) and CPPs were identified with a risk assessment. The statistical model for each attribute was established using the results from the DoE study, with consideration given to interactions between CPPs. Both the conventional overlapping contour plot and the multivariate Bayesian predictive approaches were used to establish the region of process operating conditions where all attributes met their specifications simultaneously. The quantitative Bayesian predictive approach was chosen to define the PARs for the CPPs, which apply to the manufacturing control strategy. Experience from the 10,000 L manufacturing-scale process validation, including 64 continued process verification batches, indicates that the CPPs remain under a state of control and within the established PARs. The end product quality attributes were within their drug substance specifications. The probability generated with the Bayesian approach was also used as a tool to assess CPP deviations. This approach can be extended to the characterization of other production processes and to the quantification of a reliable operating region. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:799-812, 2016.
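The Bayesian predictive idea behind the PAR definition can be sketched with a toy two-attribute model: at each candidate operating condition, simulate the posterior predictive for the quality attributes and keep conditions where the joint probability of meeting all specs is high. The response surfaces, noise levels, and specs below are all invented.

```python
import random

random.seed(2)

# Toy posterior-predictive simulation: two invented response surfaces
# (titer and purity) as functions of temperature and pH, plus
# predictive noise; specs are also invented.
def joint_prob_in_spec(temp, ph, n=20000):
    hits = 0
    for _ in range(n):
        titer = 5.0 + 0.8 * (temp - 30) - 0.5 * (ph - 6.8) + random.gauss(0, 0.3)
        purity = 97.0 - 0.4 * abs(temp - 30) - 2.0 * abs(ph - 6.8) + random.gauss(0, 0.5)
        if titer >= 4.5 and purity >= 95.0:   # both CQAs must pass jointly
            hits += 1
    return hits / n

good = joint_prob_in_spec(30.0, 6.8)   # near the center of the toy design space
bad = joint_prob_in_spec(27.0, 7.6)    # far from it
```

Scanning this joint probability over a grid of conditions and thresholding it (e.g. at 0.95) carves out a multivariate acceptable region, unlike overlapping single-attribute contour plots, which ignore the joint probability.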
Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring
NASA Astrophysics Data System (ADS)
Huff, Daniel W.
Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean square error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. With this approach, the real time
MacNeilage, Paul R.; Ganesan, Narayan; Angelaki, Dora E.
2008-01-01
Spatial orientation is the sense of body orientation and self-motion relative to the stationary environment, fundamental to normal waking behavior and control of everyday motor actions including eye movements, postural control, and locomotion. The brain achieves spatial orientation by integrating visual, vestibular, and somatosensory signals. Over the past years, considerable progress has been made toward understanding how these signals are processed by the brain using multiple computational approaches that include frequency domain analysis, the concept of internal models, observer theory, Bayesian theory, and Kalman filtering. Here we put these approaches in context by examining the specific questions that can be addressed by each technique and some of the scientific insights that have resulted. We conclude with a recent application of particle filtering, a probabilistic simulation technique that aims to generate the most likely state estimates by incorporating internal models of sensor dynamics and physical laws and noise associated with sensory processing as well as prior knowledge or experience. In this framework, priors for low angular velocity and linear acceleration can explain the phenomena of velocity storage and frequency segregation, both of which have been modeled previously using arbitrary low-pass filtering. How Kalman and particle filters may be implemented by the brain is an emerging field. Unlike past neurophysiological research that has aimed to characterize mean responses of single neurons, investigations of dynamic Bayesian inference should attempt to characterize population activities that constitute probabilistic representations of sensory and prior information. PMID:18842952
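The particle-filtering idea, including a zero-centered (low angular velocity) prior, can be sketched in one dimension. All noise levels and observations below are invented; the actual models are multidimensional with internal models of sensor dynamics.

```python
import math
import random

random.seed(3)

# Minimal bootstrap particle filter for a 1-D angular-velocity estimate.
N = 5000
particles = [random.gauss(0.0, 1.0) for _ in range(N)]  # low-velocity prior

def step(particles, obs, proc_sd=0.1, obs_sd=0.5):
    # propagate, weight by the observation likelihood, then resample
    moved = [p + random.gauss(0.0, proc_sd) for p in particles]
    weights = [math.exp(-0.5 * ((obs - p) / obs_sd) ** 2) for p in moved]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(moved, weights=probs, k=len(moved))

for obs in [0.9, 1.1, 1.0, 0.95, 1.05]:   # invented noisy sensor readings
    particles = step(particles, obs)

estimate = sum(particles) / len(particles)
```

The zero-centered prior initially biases the estimate toward zero, and repeated consistent observations pull it toward the measurements; that tension is the mechanism behind phenomena like velocity storage that the abstract attributes to such priors.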
Assessment of uncertainty in chemical models by Bayesian probabilities: Why, when, how?
Sahlin, Ullrika
2015-07-01
A prediction of a chemical property or activity is subject to uncertainty. Which types of uncertainty to consider, whether to account for them in a differentiated manner, and with which methods, depends on the practical context. In chemical modelling, general guidance on the assessment of uncertainty is hindered by the high variety in underlying modelling algorithms, high-dimensionality problems, the acknowledgement of both qualitative and quantitative dimensions of uncertainty, and the fact that statistics offers alternative principles for uncertainty quantification. Here, a view of the assessment of uncertainty in predictions is presented with the aim of overcoming these issues. The assessment sets out to quantify uncertainty representing error in predictions and is based on probability modelling of errors, where uncertainty is measured by Bayesian probabilities. Even though well motivated, the choice to use Bayesian probabilities is a challenge to statistics and chemical modelling. Fully Bayesian modelling, Bayesian meta-modelling and bootstrapping are discussed as possible approaches. Deciding how to assess uncertainty is an active choice, and should not be constrained by traditions or lack of validated and reliable ways of doing it.
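Of the approaches mentioned, bootstrapping is the simplest to sketch: resample the training data, refit the model each time, and read predictive uncertainty off the spread of refitted predictions. The sketch below uses a toy least-squares line; the function name and interval levels are illustrative assumptions, not the paper's procedure.

```python
import random
import statistics

def bootstrap_prediction_interval(xs, ys, x_new, n_boot=2000, seed=1):
    """Resample (x, y) pairs with replacement, refit a least-squares line on
    each resample, and return an approximate 95% interval for the prediction
    at x_new from the empirical quantiles of the refitted predictions."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [xs[i] for i in idx]
        by = [ys[i] for i in idx]
        mx, my = statistics.fmean(bx), statistics.fmean(by)
        sxx = sum((v - mx) ** 2 for v in bx)
        if sxx == 0:  # degenerate resample: all x identical, skip
            continue
        slope = sum((bx[i] - mx) * (by[i] - my) for i in range(n)) / sxx
        preds.append(my + slope * (x_new - mx))
    preds.sort()
    return preds[int(0.025 * len(preds))], preds[int(0.975 * len(preds))]
```

On noise-free linear data the interval collapses to the true prediction; on noisy data its width quantifies the model-refit component of predictive error.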
Bayesian state space models for dynamic genetic network construction across multiple tissues.
Liang, Yulan; Kelemen, Arpad
2016-08-01
Construction of gene-gene interaction networks and potential pathways is a challenging and important problem in genomic research for complex diseases, and estimating the dynamic changes of the temporal correlations and non-stationarity are the keys in this process. In this paper, we develop dynamic state space models with hierarchical Bayesian settings to tackle this challenge for inferring the dynamic profiles and genetic networks associated with disease treatments. We treat both the stochastic transition matrix and the observation matrix as time-variant and include temporal correlation structures in the covariance matrix estimations in the multivariate Bayesian state space models. The unevenly spaced short time courses with unseen time points are treated as hidden state variables. Hierarchical Bayesian approaches with various prior and hyper-prior models, together with Markov chain Monte Carlo and Gibbs sampling algorithms, are used to estimate the model parameters and the hidden state variables. We apply the proposed hierarchical Bayesian state space models to multiple-tissue (liver, skeletal muscle, and kidney) Affymetrix time course data sets following corticosteroid (CS) drug administration. Both simulation and real data analysis results show that the genomic changes over time and the gene-gene interactions in response to CS treatment can be well captured by the proposed models. The proposed dynamic hierarchical Bayesian state space modeling approaches could be expanded and applied to other large-scale genomic data, such as next-generation sequencing (NGS) data combined with real-time and time-varying electronic health record (EHR) data, for more comprehensive and robust systematic and network-based analysis, in order to transform big biomedical data into predictions and diagnostics for precision medicine and personalized healthcare with better decision making and patient outcomes. PMID:27343475
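The backbone of any such state space model is the forward filtering recursion. The following is a minimal scalar Kalman filter for the linear-Gaussian special case, an illustration rather than the authors' hierarchical model, which additionally places priors on (and samples) the transition and observation matrices and covariances; the function name is an assumption.

```python
def kalman_filter_1d(ys, a, c, q, r, m0, p0):
    """Forward Kalman filter for the scalar state space model
    x_t = a*x_{t-1} + N(0, q),  y_t = c*x_t + N(0, r).
    Returns the filtered posterior means E[x_t | y_1..y_t]."""
    m, p = m0, p0
    means = []
    for y in ys:
        # predict step: push state estimate through the transition model
        m_pred = a * m
        p_pred = a * a * p + q
        # update step: correct with the observation via the Kalman gain
        k = p_pred * c / (c * c * p_pred + r)
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
    return means
```

When the observation noise r is tiny, the gain approaches 1/c and the filtered mean tracks the observations almost exactly, a useful sanity check.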
Probabilistic detection of volcanic ash using a Bayesian approach
Mackie, Shona; Watson, Matthew
2014-01-01
Airborne volcanic ash can pose a hazard to aviation, agriculture, and both human and animal health. It is therefore important that ash clouds are monitored both day and night, even when they travel far from their source. Infrared satellite data provide perhaps the only means of doing this, and since the hugely expensive ash crisis that followed the 2010 Eyjafjallajökull eruption, much research has been carried out into techniques for discriminating ash in such data and for deriving key properties. Such techniques are generally specific to data from particular sensors, and most approaches result in a binary classification of pixels into “ash” and “ash free” classes with no indication of the classification certainty for individual pixels. Furthermore, almost all operational methods rely on expert-set thresholds to determine what constitutes “ash” and can therefore be criticized for being subjective and dependent on expertise that may not remain with an institution. Very few existing methods exploit available contemporaneous atmospheric data to inform the detection, despite the sensitivity of most techniques to atmospheric parameters. The Bayesian method proposed here does exploit such data and gives a probabilistic, physically based classification. We provide an example of the method's implementation for a scene containing both land and sea observations and a large area of desert dust (often misidentified as ash by other methods). The technique has already been successfully applied to other detection problems in remote sensing, and this work shows that it will be a useful and effective tool for ash detection. Key Points: (1) presentation of a probabilistic volcanic ash detection scheme; (2) a method for calculating the probability density function for ash observations; (3) demonstration of a remote sensing technique for monitoring volcanic ash hazards. PMID:25844278
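The core of a probabilistic (rather than thresholded) per-pixel classification is just Bayes' rule over class-conditional likelihoods. The sketch below assumes a single observed channel with Gaussian class-conditional densities; the function name, the Gaussian choice, and the example numbers are illustrative assumptions, not the paper's physically based densities.

```python
import math

def posterior_ash_probability(obs, mu_ash, sd_ash, mu_clear, sd_clear, prior_ash):
    """Posterior P(ash | observation) via Bayes' rule for one pixel, using
    Gaussian class-conditional likelihoods for a single-channel observation."""
    def gauss(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))
    like_ash = gauss(obs, mu_ash, sd_ash)
    like_clear = gauss(obs, mu_clear, sd_clear)
    num = like_ash * prior_ash
    return num / (num + like_clear * (1.0 - prior_ash))
```

Unlike a hard threshold, a pixel lying between the two class means returns an intermediate probability, making the classification certainty explicit.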
Bayesian clustering of fuzzy feature vectors using a quasi-likelihood approach.
Marttinen, Pekka; Tang, Jing; De Baets, Bernard; Dawyndt, Peter; Corander, Jukka
2009-01-01
Bayesian model-based classifiers, both unsupervised and supervised, have been studied extensively and their value and versatility have been demonstrated on a wide spectrum of applications within science and engineering. A majority of the classifiers are built on the assumption of intrinsic discreteness of the considered data features or on the discretization of them prior to the modeling. On the other hand, Gaussian mixture classifiers have also been utilized to a large extent for continuous features in the Bayesian framework. Often the primary reason for discretization in the classification context is the simplification of the analytical and numerical properties of the models. However, the discretization can be problematic due to its ad hoc nature and the decreased statistical power to detect the correct classes in the resulting procedure. We introduce an unsupervised classification approach for fuzzy feature vectors that utilizes a discrete model structure while preserving the continuous characteristics of data. This is achieved by replacing the ordinary likelihood by a binomial quasi-likelihood to yield an analytical expression for the posterior probability of a given clustering solution. The resulting model can be justified from an information-theoretic perspective. Our method is shown to yield highly accurate clusterings for challenging synthetic and empirical data sets. PMID:19029547
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
The behavior of complex aerospace systems is governed by numerous parameters. For safety analysis it is important to understand how the system behaves with respect to these parameter values. In particular, understanding the boundaries between safe and unsafe regions is of major importance. In this paper, we describe a hierarchical Bayesian statistical modeling approach for the online detection and characterization of such boundaries. Our method for classification with active learning uses a particle filter-based model and a boundary-aware metric for best performance. From a library of candidate shapes incorporated with domain expert knowledge, the location and parameters of the boundaries are estimated using advanced Bayesian modeling techniques. The results of our boundary analysis are then provided in a form understandable by the domain expert. We illustrate our approach using a simulation model of a NASA neuro-adaptive flight control system, as well as a system for the detection of separation violations in the terminal airspace.
Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements
Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.
2014-12-01
As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation of existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves similar accuracy as the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.
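The signature-grouping idea can be illustrated with a toy sketch: summarize each peptide by a discrete statistical signature (for example, the sign of its change in each comparison), find the dominant signature for the protein, and flag peptides that disagree as candidate proteoform evidence. This is a deliberately simplified illustration of the idea, not the BP-Quant Bayesian model; the function name is an assumption.

```python
from collections import Counter

def flag_proteoform_peptides(signatures):
    """Given one discrete signature per peptide (e.g. a tuple of -1/0/+1
    values, one per statistical comparison), identify the dominant signature
    and return the indices of peptides whose signature differs from it."""
    counts = Counter(signatures)
    dominant, _ = counts.most_common(1)[0]
    return [i for i, s in enumerate(signatures) if s != dominant]
```

Peptides sharing the dominant signature would then drive the protein-level abundance estimate, while flagged peptides are treated as evidence of a distinct proteoform rather than noise.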
Phosphorus load estimation in the Saginaw River, MI using a Bayesian hierarchical/multilevel model.
Cha, YoonKyung; Stow, Craig A; Reckhow, Kenneth H; DeMarchi, Carlo; Johengen, Thomas H
2010-05-01
We propose the use of a Bayesian hierarchical/multilevel ratio approach to estimate the annual riverine phosphorus loads in the Saginaw River, Michigan, from 1968 to 2008. The ratio estimator is known to be an unbiased, precise approach for differing flow-concentration relationships and sampling schemes. A Bayesian model can explicitly address the uncertainty in prediction by using a posterior predictive distribution, while in comparison, a Bayesian hierarchical technique can overcome the limitation of interpreting the estimated annual loads inferred from small sample sizes by borrowing strength from the underlying population shared by the years of interest. Thus, by combining the ratio estimator with the Bayesian hierarchical modeling framework, long-term load estimation can be addressed with explicit quantification of uncertainty. Our study results indicate a slight decrease in total phosphorus load early in the series. The estimated ratio parameter, which can be interpreted as a flow-weighted concentration, shows a clearer decrease, damping the noise that yearly flow variation adds to the load. Despite the reductions, it is not likely that Saginaw Bay meets its target phosphorus load of 440 tonnes/yr. Throughout the decades, the probabilities of Saginaw Bay not complying with the target load are estimated as 1.00, 0.50, 0.57 and 0.36 in 1977, 1987, 1997, and 2007, respectively. We show that the Bayesian hierarchical model results in reasonable goodness-of-fit to the observations whether or not individual loads are aggregated. Also, this modeling approach can substantially reduce uncertainties associated with small sample sizes, both in the estimated parameters and loads. PMID:20382406
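The classical ratio estimator underlying this approach is simple to state: estimate the flow-weighted concentration from the sampled days, then scale it by the full-year flow volume. The sketch below shows that point estimate only; the paper's contribution is wrapping it in a hierarchical Bayesian model, which this sketch does not attempt. The function name and units convention are assumptions.

```python
def ratio_load_estimate(sample_conc, sample_flow, annual_flow_total):
    """Ratio estimator of an annual load: flow-weighted concentration from
    the sampled days, scaled by the full-year flow volume. Concentration and
    flow units must be consistent so conc * flow gives load per unit time."""
    sampled_load = sum(c * q for c, q in zip(sample_conc, sample_flow))
    flow_weighted_conc = sampled_load / sum(sample_flow)
    return flow_weighted_conc * annual_flow_total
```

Because the estimator conditions on flow, years sampled mostly on high-flow days are not biased upward the way a simple mean-concentration-times-mean-flow estimate would be.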
A Bayesian approach to estimating causal vaccine effects on binary post-infection outcomes.
Zhou, Jincheng; Chu, Haitao; Hudgens, Michael G; Halloran, M Elizabeth
2016-01-15
To estimate causal effects of vaccine on post-infection outcomes, Hudgens and Halloran (2006) defined a post-infection causal vaccine efficacy estimand VE_I based on the principal stratification framework. They also derived closed forms for the maximum likelihood estimators of the causal estimand under some assumptions. Extending their research, we propose a Bayesian approach to estimating the causal vaccine effects on binary post-infection outcomes. The identifiability of the causal vaccine effect VE_I is discussed under different assumptions on selection bias. The performance of the proposed Bayesian method is compared with the maximum likelihood method through simulation studies and two case studies - a clinical trial of a rotavirus vaccine candidate and a field study of pertussis vaccination. For both case studies, the Bayesian approach provided similar inference as the frequentist analysis. However, simulation studies with small sample sizes suggest that the Bayesian approach provides smaller bias and shorter confidence interval length.
A Bayesian approach for calculating variable total maximum daily loads and uncertainty assessment.
Chen, Dingjiang; Dahlgren, Randy A; Shen, Yena; Lu, Jun
2012-07-15
To account for both variability and uncertainty in nonpoint source pollution, a one-dimensional water quality model was integrated with Bayesian statistics and load duration curve methods to develop a variable total maximum daily load (TMDL) for total nitrogen (TN). Bayesian statistics was adopted to inversely calibrate the unknown parameters in the model, i.e., the area-specific export rate (E) and the in-stream loss rate coefficient (K) for TN, from the stream monitoring data. Prior distributions for E and K based on published measurements were developed to support the Bayesian parameter calibration. The resulting E and K values were then used in the water quality model to simulate the catchment TN export load, TMDL, and required load reduction, along with their uncertainties, in the ChangLe River agricultural watershed in eastern China. Results indicated that the export load, TMDL, and required load reduction for TN increased synchronously with increasing stream water discharge. The uncertainties associated with these estimates also showed temporal variability, with higher uncertainties for the high-flow regime and lower uncertainties for the low-flow regime. To assure 90% compliance with the targeted in-stream TN concentration of 2.0 mg L⁻¹, the required load reduction was determined to be 1.7 × 10³, 4.6 × 10³, and 14.6 × 10³ kg TN d⁻¹ for the low, median, and high flow regimes, respectively. The integrated modeling approach developed in this study allows decision makers to determine the required load reduction for different TN compliance levels while incorporating both flow-dependent variability and uncertainty assessment to support practical adaptive implementation of TMDL programs.
Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
Exemplar models as a mechanism for performing Bayesian inference.
Shi, Lei; Griffiths, Thomas L; Feldman, Naomi H; Sanborn, Adam N
2010-08-01
Probabilistic models have recently received much attention as accounts of human cognition. However, most research in which probabilistic models have been used has been focused on formulating the abstract problems behind cognitive tasks and their optimal solutions, rather than on mechanisms that could implement these solutions. Exemplar models are a successful class of psychological process models in which an inventory of stored examples is used to solve problems such as identification, categorization, and function learning. We show that exemplar models can be used to perform a sophisticated form of Monte Carlo approximation known as importance sampling and thus provide a way to perform approximate Bayesian inference. Simulations of Bayesian inference in speech perception, generalization along a single dimension, making predictions about everyday events, concept learning, and reconstruction from memory show that exemplar models can often account for human performance with only a few exemplars, for both simple and relatively complex prior distributions. These results suggest that exemplar models provide a possible mechanism for implementing at least some forms of Bayesian inference. PMID:20702863
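The importance-sampling reading of exemplar models is easy to make concrete: exemplars stored from experience play the role of samples from the prior, each is weighted by its likelihood under the current observation, and the weighted average approximates a posterior expectation. The sketch below is a generic illustration of that mechanism, not the authors' simulations; the function name and Gaussian likelihood are assumptions.

```python
import math

def exemplar_posterior_mean(exemplars, likelihood, data):
    """Importance sampling with exemplars drawn from the prior: weight each
    stored exemplar h by the likelihood p(data | h), then return the
    normalized weighted mean as the approximate posterior expectation."""
    weights = [likelihood(data, h) for h in exemplars]
    total = sum(weights)
    return sum(h * w for h, w in zip(exemplars, weights)) / total
```

With a symmetric exemplar inventory and an observation at its center, the estimate sits at the center; an observation off to one side pulls the estimate toward it, exactly the shrinkage-toward-evidence behavior the abstract describes.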
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models
Daunizeau, J.; Friston, K.J.; Kiebel, S.J.
2009-01-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power. PMID:19862351
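The free-energy bound at the heart of such schemes can be written compactly. In standard variational notation (this is the generic identity, not a formula quoted from the paper: θ the unknown model variables, q the approximate posterior, m the model), the log evidence decomposes as:

```latex
\ln p(y \mid m) = \mathcal{F}(q) + \mathrm{KL}\big[\, q(\theta) \,\big\|\, p(\theta \mid y, m) \big],
\qquad
\mathcal{F}(q) = \big\langle \ln p(y, \theta \mid m) \big\rangle_{q} - \big\langle \ln q(\theta) \big\rangle_{q}.
```

Since the KL divergence is non-negative, F(q) is a lower bound on the log evidence; the deterministic updates maximise F, which simultaneously tightens the bound (useful for model comparison) and drives q toward the true posterior (useful for identification).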
A Semi-parametric Bayesian Approach for Differential Expression Analysis of RNA-seq Data
Liu, Fangfang; Wang, Chong
2016-01-01
RNA-sequencing (RNA-seq) technologies have revolutionized the way agricultural biologists study gene expression as well as generated a tremendous amount of data waiting for analysis. Detecting differentially expressed genes is one of the fundamental steps in RNA-seq data analysis. In this paper, we model the count data from RNA-seq experiments with a Poisson-Gamma hierarchical model, or equivalently, a negative binomial (NB) model. We derive a semi-parametric Bayesian approach with a Dirichlet process as the prior model for the distribution of fold changes between the two treatment means. An inference strategy using Gibbs algorithm is developed for differential expression analysis. The results of several simulation studies show that our proposed method outperforms other methods including the popularly applied edgeR and DESeq methods. We also discuss an application of our method to a dataset that compares gene expression between bundle sheath and mesophyll cells in maize leaves. PMID:27570441
Bayesian methods for assessing system reliability: models and computation.
Graves, T. L.; Hamada, Michael
2004-01-01
There are many challenges with assessing the reliability of a system today. These challenges arise because a system may be aging and full system tests may be too expensive or can no longer be performed. Without full system testing, one must integrate (1) all science and engineering knowledge, models and simulations, (2) information and data at various levels of the system, e.g., subsystems and components and (3) information and data from similar systems, subsystems and components. The analyst must work with various data types, consider how the data are collected, account for measurement bias and uncertainty, deal with model and simulation uncertainty, and incorporate expert knowledge. Bayesian hierarchical modeling provides a rigorous way to combine information from multiple sources and different types of information. However, an obstacle to applying Bayesian methods is the need to develop new software to analyze novel statistical models. We discuss a new statistical modeling environment, YADAS, that facilitates the development of Bayesian statistical analyses. It includes classes that help analysts specify new models, as well as classes that support the creation of new analysis algorithms. We illustrate these concepts using several examples.
Kharroubi, Samer A; Brennan, Alan; Strong, Mark
2011-01-01
Expected value of sample information (EVSI) involves simulating data collection, Bayesian updating, and reexamining decisions. Bayesian updating in incomplete data models typically requires Markov chain Monte Carlo (MCMC). This article describes a revision to a form of Bayesian Laplace approximation for EVSI computation to support decisions in incomplete data models. The authors develop the approximation, setting out the mathematics for the likelihood and log posterior density function, which are necessary for the method. They compare the accuracy of EVSI estimates in a case study cost-effectiveness model using first- and second-order versions of their approximation formula and traditional Monte Carlo. Computational efficiency gains depend on the complexity of the net benefit functions, the number of inner-level Monte Carlo samples used, and the requirement or otherwise for MCMC methods to produce the posterior distributions. This methodology provides a new and valuable approach for EVSI computation in health economic decision models and potential wider benefits in many fields requiring Bayesian approximation. PMID:21512189
HIBAYES: Global 21-cm Bayesian Monte-Carlo Model Fitting
NASA Astrophysics Data System (ADS)
Zwart, Jonathan T. L.; Price, Daniel; Bernardi, Gianni
2016-06-01
HIBAYES implements fully-Bayesian extraction of the sky-averaged (global) 21-cm signal from the Cosmic Dawn and Epoch of Reionization in the presence of foreground emission. User-defined likelihood and prior functions are called by the sampler PyMultiNest (ascl:1606.005) in order to jointly explore the full (signal plus foreground) posterior probability distribution and evaluate the Bayesian evidence for a given model. Implemented models, for simulation and fitting, include gaussians (HI signal) and polynomials (foregrounds). Some simple plotting and analysis tools are supplied. The code can be extended to other models (physical or empirical), to incorporate data from other experiments, or to use alternative Monte-Carlo sampling engines as required.
Bayesian point event modeling in spatial and environmental epidemiology.
Lawson, Andrew B
2012-10-01
This paper reviews the current state of point event modeling in spatial epidemiology from a Bayesian perspective. Point event (or case event) data arise when geo-coded addresses of disease events are available. Often, this level of spatial resolution would not be accessible due to medical confidentiality constraints. However, for the examination of small spatial scales, it is important to be capable of examining point process data directly. Models for such data are usually formulated based on point process theory. In addition, special conditioning arguments can lead to simpler Bernoulli likelihoods and logistic spatial models. Goodness-of-fit diagnostics and Bayesian residuals are also considered. Applications within putative health hazard risk assessment, cluster detection, and linkage to environmental risk fields (misalignment) are considered.
Integrated survival analysis using an event-time approach in a Bayesian framework
Walsh, Daniel P.; Dreitz, Victoria J.; Heisey, Dennis M.
2015-01-01
Event-time or continuous-time statistical approaches have been applied throughout the biostatistical literature and have led to numerous scientific advances. However, these techniques have traditionally relied on knowing failure times. This has limited application of these analyses, particularly within the ecological field, where fates of marked animals may be unknown. To address these limitations, we developed an integrated approach within a Bayesian framework to estimate hazard rates in the face of unknown fates. We combine failure/survival times from individuals whose fates are known, including times that are interval-censored, with information from those whose fates are unknown, and model the process of detecting animals with unknown fates. This provides the foundation for our integrated model and permits necessary parameter estimation. We provide the Bayesian model, its derivation, and use simulation techniques to investigate the properties and performance of our approach under several scenarios. Lastly, we apply our estimation technique using a piece-wise constant hazard function to investigate the effects of year, age, chick size and sex, sex of the tending adult, and nesting habitat on mortality hazard rates of the endangered mountain plover (Charadrius montanus) chicks. Traditional models were inappropriate for this analysis because fates of some individual chicks were unknown due to failed radio transmitters. Simulations revealed biases of posterior mean estimates were minimal (≤ 4.95%), and posterior distributions behaved as expected, with RMSE of the estimates decreasing as sample sizes, detection probability, and survival increased. We determined mortality hazard rates for plover chicks were highest at <5 days old and were lower for chicks with larger birth weights and/or whose nest was within agricultural habitats. Based on its performance, our approach greatly expands the range of problems for which event-time analyses can be used by eliminating the
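The piece-wise constant hazard function used in the plover application has a closed-form survival curve: S(t) = exp(-∫₀ᵗ h(u) du), where the integral is a sum of rate-times-duration terms over the intervals crossed. The sketch below computes that survival probability; it is a generic building block, not the authors' full detection-and-fate model, and the function name is an assumption.

```python
import math

def survival_probability(t, breakpoints, hazards):
    """Survival S(t) = exp(-cumulative hazard) for a piece-wise constant
    hazard. breakpoints: ascending interval endpoints; hazards: one rate per
    interval, with len(hazards) == len(breakpoints) + 1 (the last rate
    extends beyond the final breakpoint)."""
    cum = 0.0
    prev = 0.0
    for b, h in zip(breakpoints, hazards):
        if t <= b:
            return math.exp(-(cum + h * (t - prev)))
        cum += h * (b - prev)
        prev = b
    return math.exp(-(cum + hazards[-1] * (t - prev)))
```

For example, a higher rate on the first interval (chicks <5 days old) and a lower rate afterwards yields a survival curve that drops steeply early and then flattens.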
Nursing Home Care Quality: Insights from a Bayesian Network Approach
ERIC Educational Resources Information Center
Goodson, Justin; Jang, Wooseung; Rantz, Marilyn
2008-01-01
Purpose: The purpose of this research is twofold. The first purpose is to utilize a new methodology (Bayesian networks) for aggregating various quality indicators to measure the overall quality of care in nursing homes. The second is to provide new insight into the relationships that exist among various measures of quality and how such measures…
Bayesian model updating using incomplete modal data without mode matching
NASA Astrophysics Data System (ADS)
Sun, Hao; Büyüköztürk, Oral
2016-04-01
This study investigates a new probabilistic strategy for model updating using incomplete modal data. A hierarchical Bayesian inference is employed to model the updating problem. A Markov chain Monte Carlo technique with adaptive random-work steps is used to draw parameter samples for uncertainty quantification. Mode matching between measured and predicted modal quantities is not required through model reduction. We employ an iterated improved reduced system technique for model reduction. The reduced model retains the dynamic features as close as possible to those of the model before reduction. The proposed algorithm is finally validated by an experimental example.
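The sampler in this abstract is an adaptive random-walk Markov chain Monte Carlo scheme; a non-adaptive sketch of the random-walk Metropolis core (toy standard-normal target, not the paper's structural posterior) looks like:

```python
import math
import random

def metropolis(logpost, x0, steps=20000, scale=0.5, seed=1):
    """Random-walk Metropolis sampler. The paper's variant additionally
    tunes `scale` adaptively during sampling; this sketch keeps it fixed."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = logpost(prop)
        # Accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: a standard-normal "posterior" for one updating parameter
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
```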
Motor unit number estimation--a Bayesian approach.
Ridall, P Gareth; Pettitt, Anthony N; Henderson, Robert D; McCombe, Pamela A
2006-12-01
of the threshold, the variability between and within single MUAPs, and baseline variability. Our model not only gives the most probable number of motor units but also provides information about both the population of units and individual units. We use Markov chain Monte Carlo to obtain information about the characteristics of individual motor units and about the population of motor units, and the Bayesian information criterion for MUNE. We test our method of MUNE on three subjects. Our method provides a reproducible estimate for a patient with stable but severe ALS. In a serial study, we demonstrate a decline in motor unit numbers in a patient with rapidly advancing disease. Finally, with our last patient, we show that our method has the capacity to estimate a larger number of motor units. PMID:17156299
A Bayesian Approach to Identifying New Risk Factors for Dementia
Wen, Yen-Hsia; Wu, Shihn-Sheng; Lin, Chun-Hung Richard; Tsai, Jui-Hsiu; Yang, Pinchen; Chang, Yang-Pei; Tseng, Kuan-Hua
2016-01-01
Dementia is one of the most disabling and burdensome health conditions worldwide. In this study, we identified new potential risk factors for dementia from nationwide longitudinal population-based data by using Bayesian statistics. We first tested the consistency of the results obtained using Bayesian statistics with those obtained using classical frequentist probability for 4 recognized risk factors for dementia, namely severe head injury, depression, diabetes mellitus, and vascular diseases. Then, we used Bayesian statistics to verify 2 new potential risk factors for dementia, namely hearing loss and senile cataract, determined from Taiwan's National Health Insurance Research Database. We included a total of 6546 (6.0%) patients diagnosed with dementia. We observed older age, female sex, and lower income as independent risk factors for dementia. Moreover, we verified the 4 recognized risk factors for dementia in the older Taiwanese population; their odds ratios (ORs) ranged from 3.469 to 1.207. Furthermore, we observed that hearing loss (OR = 1.577) and senile cataract (OR = 1.549) were associated with an increased risk of dementia. We found that the results obtained using Bayesian statistics for assessing risk factors for dementia, such as head injury, depression, diabetes mellitus, and vascular diseases, were consistent with those obtained using classical frequentist probability. Moreover, hearing loss and senile cataract were found to be potential risk factors for dementia in the older Taiwanese population. Bayesian statistics could help clinicians explore other potential risk factors for dementia and develop appropriate treatment strategies for these patients. PMID:27227925
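The study's actual regression model is not given in the abstract; as a minimal sketch of a Bayesian odds-ratio calculation of the kind described, with hypothetical 2x2 counts and flat Beta(1, 1) priors on the exposure probabilities:

```python
import random

def posterior_odds_ratio(cases_exp, cases_n, ctrls_exp, ctrls_n,
                         draws=20000, seed=0):
    """Monte Carlo posterior for an odds ratio, using independent
    Beta(1, 1) priors on exposure probability in cases and controls."""
    rng = random.Random(seed)
    ors = []
    for _ in range(draws):
        p1 = rng.betavariate(1 + cases_exp, 1 + cases_n - cases_exp)
        p0 = rng.betavariate(1 + ctrls_exp, 1 + ctrls_n - ctrls_exp)
        ors.append((p1 / (1 - p1)) / (p0 / (1 - p0)))
    ors.sort()
    # Posterior median and central 95% credible interval
    return ors[draws // 2], (ors[int(0.025 * draws)], ors[int(0.975 * draws)])

# Hypothetical counts: 30/100 cases exposed vs. 15/100 controls exposed
median_or, ci95 = posterior_odds_ratio(30, 100, 15, 100)
```

Unlike a frequentist OR with a Wald interval, the output is a full posterior, so the credible interval needs no normal approximation.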
Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2015-01-01
The use of automated systems in airliners has increased fuel efficiency, added extra capabilities, enhanced safety and reliability, and provided improved passenger comfort since their introduction in the late 1980s. However, the original benefits of automation, including reduced flight crew workload, human error, and training requirements, were not achieved as originally expected. Instead, automation introduced new failure modes, redistributed (and sometimes increased) workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for this increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Over-reliance on automation has also been linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human error are often caused by interactions between humans and automated systems (e.g., a breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased complexity and reliance on automation baseline model, named FLAP (FLightdeck Automation Problems). The model development process starts with a comprehensive literature review, followed by the construction of a framework of high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin software v7.8. The effects of automation on the flight crew are incorporated into the model, including flight skill degradation and increased cognitive demand and training requirements, along with their interactions. Besides flight crew deficiencies, automation system
Assessment of substitution model adequacy using frequentist and Bayesian methods.
Ripplinger, Jennifer; Sullivan, Jack
2010-12-01
In order to have confidence in model-based phylogenetic methods, such as maximum likelihood (ML) and Bayesian analyses, one must use an appropriate model of molecular evolution identified using statistically rigorous criteria. Although model selection methods such as the likelihood ratio test and the Akaike information criterion are widely used in the phylogenetic literature, they lack the ability to reject all candidate models if none provides an adequate fit to the data. There are two methods, however, that assess absolute model adequacy: the frequentist Goldman-Cox (GC) test and Bayesian posterior predictive simulations (PPSs), both commonly used in conjunction with the multinomial log-likelihood test statistic. In this study, we use empirical and simulated data to evaluate the adequacy of common substitution models using both frequentist and Bayesian methods and compare the results with those obtained with model selection methods. In addition, we investigate the relationship between model adequacy and performance in ML and Bayesian analyses in terms of topology, branch lengths, and bipartition support. We show that tests of model adequacy based on the multinomial likelihood often fail to reject simple substitution models, especially when the models incorporate among-site rate variation (ASRV), and normally fail to reject less complex models than those chosen by model selection methods. In addition, we find that PPSs often fail to reject simpler models than the GC test. Use of the simplest substitution models not rejected based on fit normally results in similar, but not identical, estimates of tree topology and branch lengths. In addition, use of the simplest adequate substitution models can affect estimates of bipartition support, although these differences are often small, with the largest differences confined to poorly supported nodes. We also find that alternative assumptions about ASRV can affect tree topology, tree length, and bipartition support. Our
Toni, Tina; Welch, David; Strelkowa, Natalja; Ipsen, Andreas; Stumpf, Michael P.H.
2008-01-01
Approximate Bayesian computation (ABC) methods can be used to evaluate posterior distributions without having to calculate likelihoods. In this paper, we discuss and apply an ABC method based on sequential Monte Carlo (SMC) to estimate parameters of dynamical models. We show that ABC SMC provides information about the inferability of parameters and model sensitivity to changes in parameters, and tends to perform better than other ABC approaches. The algorithm is applied to several well-known biological systems, for which parameters and their credible intervals are inferred. Moreover, we develop ABC SMC as a tool for model selection; given a range of different mathematical descriptions, ABC SMC is able to choose the best model using the standard Bayesian model selection apparatus. PMID:19205079
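The accept/reject core that ABC SMC builds on can be shown with plain ABC rejection; the SMC refinement propagates a particle population through a decreasing tolerance schedule, but the likelihood-free comparison step is the same. A toy sketch (hypothetical normal-mean model, not one of the paper's biological systems):

```python
import random

def abc_rejection(observed, simulate, prior_sample, eps, n_accept, seed=0):
    """Plain ABC rejection: keep parameter draws whose simulated summary
    statistic falls within eps of the observed one. No likelihood
    evaluation is ever required, only forward simulation."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed) < eps:
            accepted.append(theta)
    return accepted

# Toy model: summary ~ Normal(theta, 0.1), Uniform(0, 2) prior on theta
post = abc_rejection(
    observed=1.0,
    simulate=lambda th, rng: th + rng.gauss(0.0, 0.1),
    prior_sample=lambda rng: rng.uniform(0.0, 2.0),
    eps=0.05, n_accept=500)
```

The accepted draws approximate the posterior; as `eps` shrinks, the approximation improves but the acceptance rate drops, which is the inefficiency ABC SMC addresses.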
A hierarchical Bayesian-MAP approach to inverse problems in imaging
NASA Astrophysics Data System (ADS)
Raj, Raghu G.
2016-07-01
We present a novel approach to inverse problems in imaging based on a hierarchical Bayesian-MAP (HB-MAP) formulation. In this paper we specifically focus on the difficult and basic inverse problem of multi-sensor (tomographic) imaging wherein the source object of interest is viewed from multiple directions by independent sensors. Given the measurements recorded by these sensors, the problem is to reconstruct the image (of the object) with a high degree of fidelity. We employ a probabilistic graphical modeling extension of the compound Gaussian distribution as a global image prior into a hierarchical Bayesian inference procedure. Since the prior employed by our HB-MAP algorithm is general enough to subsume a wide class of priors including those typically employed in compressive sensing (CS) algorithms, HB-MAP algorithm offers a vehicle to extend the capabilities of current CS algorithms to include truly global priors. After rigorously deriving the regression algorithm for solving our inverse problem from first principles, we demonstrate the performance of the HB-MAP algorithm on Monte Carlo trials and on real empirical data (natural scenes). In all cases we find that our algorithm outperforms previous approaches in the literature including filtered back-projection and a variety of state-of-the-art CS algorithms. We conclude with directions of future research emanating from this work.
Denoising peptide tandem mass spectra for spectral libraries: a Bayesian approach.
Shao, Wenguang; Lam, Henry
2013-07-01
With the rapid accumulation of data from shotgun proteomics experiments, it has become feasible to build comprehensive and high-quality spectral libraries of tandem mass spectra of peptides. A spectral library condenses experimental data into a retrievable format and can be used to aid peptide identification by spectral library searching. A key step in spectral library building is spectrum denoising, which is best accomplished by merging multiple replicates of the same peptide ion into a consensus spectrum. However, this approach cannot be applied to "singleton spectra," for which only one observed spectrum is available for the peptide ion. We developed a method, based on a Bayesian classifier, for denoising peptide tandem mass spectra. The classifier accounts for relationships between peaks, and can be trained on the fly from consensus spectra and immediately applied to denoise singleton spectra, without hard-coded knowledge about peptide fragmentation. A linear regression model was also trained to predict the number of useful "signal" peaks in a spectrum, thereby obviating the need for arbitrary thresholds for peak filtering. This Bayesian approach accumulates weak evidence systematically to boost the discrimination power between signal and noise peaks, and produces readily interpretable conditional probabilities that offer valuable insights into peptide fragmentation behaviors. By cross validation, spectra denoised by this method were shown to retain more signal peaks, and have higher spectral similarities to replicates, than those filtered by intensity only.
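The abstract does not spell out the classifier; as a rough sketch assuming discrete-valued peak features (hypothetical feature labels, and simpler than the paper's actual feature set, which includes inter-peak relationships), a naive-Bayes signal/noise scorer with add-one smoothing trained from labeled peaks might look like:

```python
import math

def train(peaks):
    """peaks: list of (feature_dict, is_signal) pairs. Returns a scorer
    giving the naive-Bayes log-odds that a peak is signal, with add-one
    smoothing so unseen feature values do not zero out the probability."""
    counts = {True: {}, False: {}}
    totals = {True: 0, False: 0}
    for feats, label in peaks:
        totals[label] += 1
        for kv in feats.items():
            counts[label][kv] = counts[label].get(kv, 0) + 1

    def score(feats):
        s = math.log((totals[True] + 1) / (totals[False] + 1))  # prior odds
        for kv in feats.items():
            p_sig = (counts[True].get(kv, 0) + 1) / (totals[True] + 2)
            p_noise = (counts[False].get(kv, 0) + 1) / (totals[False] + 2)
            s += math.log(p_sig / p_noise)  # accumulate weak evidence
        return s

    return score
```

Each feature contributes a log-likelihood ratio, matching the abstract's description of weak evidence accumulated systematically into an interpretable probability.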
Bayesian Inference of High-Dimensional Dynamical Ocean Models
NASA Astrophysics Data System (ADS)
Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.
2015-12-01
This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: (i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); (ii) assimilate data using Bayes' law with these pdfs; (iii) predict the future data that optimally reduce uncertainties; and (iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.
Wu, Wei; Chen, Zhe; Gao, Shangkai; Brown, Emery N.
2011-01-01
Multichannel electroencephalography (EEG) offers a non-invasive tool to explore spatio-temporal dynamics of brain activity. With EEG recordings consisting of multiple trials, traditional signal processing approaches that ignore inter-trial variability in the data may fail to accurately estimate the underlying spatio-temporal brain patterns. Moreover, precise characterization of such inter-trial variability per se can be of high scientific value in establishing the relationship between brain activity and behavior. In this paper, a statistical modeling framework is introduced for learning spatiotemporal decomposition of multiple-trial EEG data recorded under two contrasting experimental conditions. By modeling the variance of source signals as random variables varying across trials, the proposed two-stage hierarchical Bayesian model is able to capture inter-trial amplitude variability in the data in a sparse way where a parsimonious representation of the data can be obtained. A variational Bayesian (VB) algorithm is developed for statistical inference of the hierarchical model. The efficacy of the proposed modeling framework is validated with the analysis of both synthetic and real EEG data. In the simulation study we show that even at low signal-to-noise ratios our approach is able to recover with high precision the underlying spatiotemporal patterns and the evolution of source amplitude across trials; on two brain-computer interface (BCI) data sets we show that our VB algorithm can extract physiologically meaningful spatio-temporal patterns and make more accurate predictions than other two widely used algorithms: the common spatial patterns (CSP) algorithm and the Infomax algorithm for independent component analysis (ICA). The results demonstrate that our statistical modeling framework can serve as a powerful tool for extracting brain patterns, characterizing trial-to-trial brain dynamics, and decoding brain states by exploiting useful structures in the data. PMID
RELION: Implementation of a Bayesian approach to cryo-EM structure determination
Scheres, Sjors H.W.
2012-01-01
RELION, for REgularized LIkelihood OptimizatioN, is an open-source computer program for the refinement of macromolecular structures by single-particle analysis of electron cryo-microscopy (cryo-EM) data. Whereas alternative approaches often rely on user expertise for the tuning of parameters, RELION uses a Bayesian approach to infer parameters of a statistical model from the data. This paper describes developments that reduce the computational costs of the underlying maximum a posteriori (MAP) algorithm, as well as statistical considerations that yield new insights into the accuracy with which the relative orientations of individual particles may be determined. A so-called gold-standard Fourier shell correlation (FSC) procedure to prevent overfitting is also described. The resulting implementation yields high-quality reconstructions and reliable resolution estimates with minimal user intervention and at acceptable computational costs. PMID:23000701
A Bayesian Approach for Apparent Inter-plate Coupling in the Central Andes Subduction Zone
NASA Astrophysics Data System (ADS)
Ortega Culaciati, F. H.; Simons, M.; Genrich, J. F.; Galetzka, J.; Comte, D.; Glass, B.; Leiva, C.; Gonzalez, G.; Norabuena, E. O.
2010-12-01
We aim to characterize the extent of apparent plate coupling on the subduction zone megathrust with the eventual goal of understanding spatial variations of fault zone rheology, inferring relationships between apparent coupling and the rupture zone of big earthquakes, as well as the implications for earthquake and tsunami hazard. Unlike previous studies, we approach the problem from a Bayesian perspective, allowing us to completely characterize the model parameter space by obtaining a posteriori estimates of the range of allowable models instead of seeking a single optimum model. Two important features of the Bayesian approach are the ease with which any physically plausible a priori information can be incorporated, and the ability to perform the inversion without regularization, other than that imposed by the way in which we parameterize the forward model. Adopting a simple kinematic back-slip model and a 3D geometry of the inter-plate contact zone, we can estimate the probability of apparent coupling (Pc) along the plate interface that is consistent with a priori information (e.g., approximate rake of back-slip) and available geodetic measurements. More generally, the Bayesian approach adopted here is applicable to any region and eventually would allow one to evaluate the spatial relationship between various inferred distributions of fault behavior (e.g., seismic rupture, postseismic creep, and apparent interseismic coupling) in a quantifiable manner. We apply this methodology to evaluate the state of apparent inter-seismic coupling in the Chilean-Peruvian subduction margin (12 S - 25 S). As observational constraints, we use previously published horizontal velocities from campaign GPS [Kendrick et al., 2001, 2006] as well as 3 component velocities from a recently established continuous GPS network in the region (CAnTO). We compare results from both joint and independent use of these data sets. We obtain patch like features for Pc with higher values located above 60 km
Bayesian regression model for seasonal forecast of precipitation over Korea
NASA Astrophysics Data System (ADS)
Jo, Seongil; Lim, Yaeji; Lee, Jaeyong; Kang, Hyun-Suk; Oh, Hee-Seok
2012-08-01
In this paper, we apply three different Bayesian methods to the seasonal forecasting of precipitation in a region around Korea (32.5°N-42.5°N, 122.5°E-132.5°E). We focus on summer-season (June-July-August; JJA) precipitation for the period 1979-2007, using the precipitation produced by the Global Data Assimilation and Prediction System (GDAPS) as predictors. Through cross-validation, we demonstrate improved seasonal precipitation forecasts in terms of root mean squared error (RMSE) and linear error in probability space score (LEPS). The proposed methods yield an RMSE of 1.09 and a LEPS of 0.31 between the predicted and observed precipitation, while prediction using the GDAPS output alone produces an RMSE of 1.20 and a LEPS of 0.33 for CPC Merged Analyzed Precipitation (CMAP) data. For station-measured precipitation data, the RMSE and LEPS of the proposed Bayesian methods are 0.53 and 0.29, while those of the GDAPS output are 0.66 and 0.33, respectively. The methods seem to capture the spatial pattern of the observed precipitation. The Bayesian paradigm incorporates model uncertainty as an integral part of modeling in a natural way. We provide a probabilistic forecast integrating model uncertainty.
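The abstracts above do not give their regression forms; the conjugate Bayesian linear regression that underlies many such forecast-calibration models (predictor = model output, response = observation) has a closed-form posterior. A minimal sketch, assuming a known noise variance and a zero-mean Gaussian prior on the weights:

```python
import numpy as np

def bayes_linreg(X, y, sigma2, tau2):
    """Conjugate posterior for weights w under y ~ N(Xw, sigma2*I) with a
    N(0, tau2*I) prior: returns the posterior mean and covariance."""
    d = X.shape[1]
    prec = X.T @ X / sigma2 + np.eye(d) / tau2   # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y / sigma2)
    return mean, cov
```

The posterior covariance is what turns a point forecast into the probabilistic forecast the abstract describes: predictive uncertainty at a new input x* is `x*^T cov x* + sigma2`.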
Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.
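The update at the heart of a Bayesian network like the one described is just Bayes' law over discretized states. A two-node sketch (hypothetical discretization of offshore condition and observed nearshore wave height; the paper's network has many more nodes):

```python
def posterior(prior, likelihood, evidence):
    """Exact inference in a two-node discrete Bayesian network:
    P(state | evidence) is proportional to P(state) * P(evidence | state)."""
    joint = {s: prior[s] * likelihood[s][evidence] for s in prior}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

# Hypothetical tables: offshore condition vs. observed nearshore height
prior = {"calm": 0.7, "storm": 0.3}
likelihood = {"calm": {"low": 0.9, "high": 0.1},
              "storm": {"low": 0.2, "high": 0.8}}
post = posterior(prior, likelihood, "high")
```

Noisy or sparse boundary data simply broaden the likelihood rows, and the posterior spread then carries the input uncertainty through to the prediction, as the abstract emphasizes.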
Some comments on misspecification of priors in Bayesian modelling of measurement error problems.
Richardson, S; Leblond, L
In this paper we discuss some aspects of misspecification of prior distributions in the context of Bayesian modelling of measurement error problems. A Bayesian approach to the treatment of common measurement error situations encountered in epidemiology has been recently proposed. Its implementation involves, first, the structural specification, through conditional independence relationships, of three submodels (a measurement model, an exposure model and a disease model) and, second, the choice of functional forms for the distributions involved in the submodels. We present some results indicating how the estimation of the regression parameters of interest, which is carried out using Gibbs sampling, can be influenced by a misspecification of the parametric shape of the prior distribution of exposure. PMID:9004392
Dissecting Magnetar Variability with Bayesian Hierarchical Models
NASA Astrophysics Data System (ADS)
Huppenkothen, Daniela; Brewer, Brendon J.; Hogg, David W.; Murray, Iain; Frean, Marcus; Elenbaas, Chris; Watts, Anna L.; Levin, Yuri; van der Horst, Alexander J.; Kouveliotou, Chryssa
2015-09-01
Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behavior, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favored models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture aftershocks. Using Markov Chain Monte Carlo sampling augmented with reversible jumps between models with different numbers of parameters, we characterize the posterior distributions of the model parameters and the number of components per burst. We relate these model parameters to physical quantities in the system, and show for the first time that the variability within a burst does not conform to predictions from ideas of self-organized criticality. We also examine how well the properties of the spikes fit the predictions of simplified cascade models for the different trigger mechanisms.
Bayesian model selection of template forward models for EEG source reconstruction.
Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan
2014-06-01
Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction.
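Bayesian model selection of the kind used above ranks forward models by their evidence (marginal likelihood). The PEB free-energy machinery is involved, but the principle can be shown with a conjugate toy example where the evidence is available in closed form (hypothetical binomial data, not EEG):

```python
from math import lgamma, exp

def log_marglik_beta_binom(k, n, a, b):
    """Log marginal likelihood (model evidence) of k successes in n trials
    under a Beta(a, b) prior on the success probability. The binomial
    coefficient C(n, k) is omitted: it is identical for both models on the
    same data, so it cancels in the Bayes factor."""
    def lbeta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return lbeta(a + k, b + n - k) - lbeta(a, b)

# Compare a flat Beta(1, 1) prior against a prior concentrated near 0.5,
# for data with 9 successes in 10 trials
log_bf = log_marglik_beta_binom(9, 10, 1, 1) - log_marglik_beta_binom(9, 10, 50, 50)
bf = exp(log_bf)  # Bayes factor in favor of the flat-prior model
```

The model whose prior assigns more mass to parameter values that explain the data wins, which is exactly how the evidence arbitrates between BEM and FDM head models given real ERP data.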
Utilizing Gaussian Markov random field properties of Bayesian animal models.
Steinsland, Ingelin; Jensen, Henrik
2010-09-01
In this article, we demonstrate how Gaussian Markov random field properties give large computational benefits and new opportunities for the Bayesian animal model. We make inference by computing the posteriors for important quantitative genetic variables. For the single-trait animal model, a nonsampling-based approximation is presented. For the multitrait model, we set up a robust and fast Markov chain Monte Carlo algorithm. The proposed methodology was used to analyze quantitative genetic properties of morphological traits of a wild house sparrow population. Results for single- and multitrait models were compared.
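The computational benefit the authors exploit comes from a standard GMRF property: the full conditional of one component depends only on the corresponding row of the (sparse) precision matrix. A small sketch of that identity (generic bivariate example, not the animal-model pedigree structure):

```python
import numpy as np

def conditional(Q, mu, i, x):
    """Full conditional of x_i for x ~ N(mu, Q^{-1}), given the rest of x.
    Only row i of the precision matrix Q enters, which is what makes
    computations cheap when Q is sparse."""
    mu = np.asarray(mu, dtype=float)
    x = np.asarray(x, dtype=float)
    rest = [j for j in range(len(mu)) if j != i]
    var = 1.0 / Q[i, i]
    mean = mu[i] - var * (Q[i, rest] @ (x[rest] - mu[rest]))
    return mean, var
```

In an animal model, Q's sparsity mirrors the pedigree (nonzeros only between relatives), so these conditionals, and hence Gibbs-style MCMC updates, cost far less than working with the dense covariance.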
NASA Astrophysics Data System (ADS)
Wong, K. K.; Marouf, E. A.
2004-12-01
ring features. We also use the Bayesian approach to combine in a single step inversion of (simulated) extinction and diffraction lobe observations to recover the particle size distribution over the centimeter to several meters size range without assuming an explicit model. Only the first-order scattering approximation has been considered in our investigation so far, an idealization to be removed in future work.
A Bayesian hierarchical approach for combining case-control and prospective studies.
Müller, P; Parmigiani, G; Schildkraut, J; Tardella, L
1999-09-01
Motivated by the absolute risk predictions required in medical decision making and patient counseling, we propose an approach for the combined analysis of case-control and prospective studies of disease risk factors. The approach is hierarchical to account for parameter heterogeneity among studies and among sampling units of the same study. It is based on modeling the retrospective distribution of the covariates given the disease outcome, a strategy that greatly simplifies both the combination of prospective and retrospective studies and the computation of Bayesian predictions in the hierarchical case-control context. Retrospective modeling differentiates our approach from most current strategies for inference on risk factors, which are based on the assumption of a specific prospective model. To ensure modeling flexibility, we propose using a mixture model for the retrospective distributions of the covariates. This leads to a general nonlinear regression family for the implied prospective likelihood. After introducing and motivating our proposal, we present simple results that highlight its relationship with existing approaches, develop Markov chain Monte Carlo methods for inference and prediction, and present an illustration using ovarian cancer data. PMID:11315018
A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.
Chung, Michael Jae-Yoon; Friesen, Abram L; Fox, Dieter; Meltzoff, Andrew N; Rao, Rajesh P N
2015-01-01
A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration. PMID:26536366
Bayesian analysis of binary prediction tree models for retrospectively sampled outcomes.
Pittman, Jennifer; Huang, Erich; Nevins, Joseph; Wang, Quanli; West, Mike
2004-10-01
Classification tree models are flexible analysis tools which have the ability to evaluate interactions among predictors as well as generate predictions for responses of interest. We describe Bayesian analysis of a specific class of tree models in which binary response data arise from a retrospective case-control design. We are also particularly interested in problems with potentially very many candidate predictors. This scenario is common in studies concerning gene expression data, which is a key motivating example context. Innovations here include the introduction of tree models that explicitly address and incorporate the retrospective design, and the use of nonparametric Bayesian models involving Dirichlet process priors on the distributions of predictor variables. The model specification influences the generation of trees through Bayes' factor based tests of association that determine significant binary partitions of nodes during a process of forward generation of trees. We describe this constructive process and discuss questions of generating and combining multiple trees via Bayesian model averaging for prediction. Additional discussion of parameter selection and sensitivity is given in the context of an example which concerns prediction of breast tumour status utilizing high-dimensional gene expression data; the example demonstrates the exploratory/explanatory uses of such models as well as their primary utility in prediction. Shortcomings of the approach and comparison with alternative tree modelling algorithms are also discussed, as are issues of modelling and computational extensions.
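The Bayes-factor test of association that drives node splitting can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' code: it compares the marginal likelihood of a split (separate Beta-Binomial models for the two child nodes, under uniform Beta(1,1) priors) against a single pooled model.

```python
from math import lgamma

def log_betabinom_marginal(k, n, a=1.0, b=1.0):
    """Log marginal likelihood of k successes in n Bernoulli trials
    under a Beta(a, b) prior on the success probability."""
    def lbeta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return lbeta(k + a, n - k + b) - lbeta(a, b)

def log_bayes_factor(k_left, n_left, k_right, n_right):
    """Log Bayes factor: split model (separate rates per child node)
    versus the no-split (pooled) model."""
    split = (log_betabinom_marginal(k_left, n_left)
             + log_betabinom_marginal(k_right, n_right))
    no_split = log_betabinom_marginal(k_left + k_right, n_left + n_right)
    return split - no_split

# A strongly differentiating binary partition favours the split model:
print(log_bayes_factor(18, 20, 2, 20) > 0)  # True
```

A balanced split (identical success rates in both children) yields a negative log Bayes factor, so the partition is correctly rejected.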
A Bayesian Nonparametric Meta-Analysis Model
ERIC Educational Resources Information Center
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.
2015-01-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…
Constraining East Antarctic mass trends using a Bayesian inference approach
NASA Astrophysics Data System (ADS)
Martin-Español, Alba; Bamber, Jonathan L.
2016-04-01
East Antarctica is an order of magnitude larger than its western neighbour and the Greenland ice sheet. It has the greatest potential to contribute to sea level rise of any source, including non-glacial contributors. It is, however, the most challenging ice mass to constrain because of a range of factors, including the relative paucity of in-situ observations and the poor signal-to-noise ratio of Earth Observation data such as satellite altimetry and gravimetry. A recent study using satellite radar and laser altimetry (Zwally et al. 2015) concluded that the East Antarctic Ice Sheet (EAIS) had been accumulating mass at a rate of 136±28 Gt/yr for the period 2003-08. Here, we use a Bayesian hierarchical model, which has been tested on, and applied to, the whole of Antarctica, to investigate the impact of different assumptions regarding the origin of elevation changes of the EAIS. We combined GRACE, satellite laser and radar altimeter data and GPS measurements to solve simultaneously for surface processes (primarily surface mass balance, SMB), ice dynamics and glacio-isostatic adjustment over the period 2003-13. The hierarchical model partitions mass trends between SMB and ice dynamics based on physical principles and measures of statistical likelihood. Without imposing the division between these processes, the model apportions about a third of the mass trend to ice dynamics, +18 Gt/yr, and two thirds, +39 Gt/yr, to SMB. The total mass trend for that period for the EAIS was 57±20 Gt/yr. Over the period 2003-08, we obtain an ice dynamic trend of 12 Gt/yr and a SMB trend of 15 Gt/yr, with a total mass trend of 27 Gt/yr. We then imposed the condition that the surface mass balance is tightly constrained by the regional climate model RACMO2.3 and allowed height changes due to ice dynamics to occur in areas of low surface velocities (<10 m/yr), such as those in the interior of East Antarctica (a condition similar to that used in Zwally et al. 2015). The model must find a solution that
Bayesian Geostatistical Modeling of Leishmaniasis Incidence in Brazil
Karagiannis-Voules, Dimitrios-Alexios; Scholte, Ronaldo G. C.; Guimarães, Luiz H.; Utzinger, Jürg; Vounatsou, Penelope
2013-01-01
Background Leishmaniasis is endemic in 98 countries with an estimated 350 million people at risk and approximately 2 million cases annually. Brazil is one of the most severely affected countries. Methodology We applied Bayesian geostatistical negative binomial models to analyze reported incidence data of cutaneous and visceral leishmaniasis in Brazil covering a 10-year period (2001–2010). Particular emphasis was placed on spatial and temporal patterns. The models were fitted using integrated nested Laplace approximations to perform fast approximate Bayesian inference. Bayesian variable selection was employed to determine the most important climatic, environmental, and socioeconomic predictors of cutaneous and visceral leishmaniasis. Principal Findings For both types of leishmaniasis, precipitation and socioeconomic proxies were identified as important risk factors. The predicted numbers of cases in 2010 were 30,189 (standard deviation [SD]: 7,676) for cutaneous leishmaniasis and 4,889 (SD: 288) for visceral leishmaniasis. Our risk maps predicted the highest numbers of infected people in the states of Minas Gerais and Pará for visceral and cutaneous leishmaniasis, respectively. Conclusions/Significance Our spatially explicit, high-resolution incidence maps identified priority areas where leishmaniasis control efforts should be targeted with the ultimate goal to reduce disease incidence. PMID:23675545
Model Reduction of a Transient Groundwater-Flow Model for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Boyce, S. E.; Yeh, W. W.
2011-12-01
A Bayesian inverse problem requires many repeated model simulations to characterize an unknown parameter's posterior probability distribution. It is computationally infeasible to solve a Bayesian inverse problem for a discretized groundwater flow model with high-dimensional parameter and state spaces. Model reduction has been shown to reduce the dimension of a groundwater model by several orders of magnitude and is well suited for Bayesian inverse problems. A projection-based model reduction approach is proposed to reduce the parameter and state dimensions of a groundwater model. Previous work has done this by using a greedy algorithm for the selection of parameter vectors that make up a basis and their corresponding steady-state solutions for a state basis. The proposed method extends this idea to transient models by sequentially assembling the parameter and state projection bases through the greedy algorithm. The method begins with a parameter basis consisting of a single vector that is equal to one or an accepted series of values. A set of state vectors that are solutions to the groundwater model using this parameter vector at appropriate times is called the parameter snapshot set. The appropriate times for the parameter snapshot set are determined by maximizing the set's minimum singular value. This optimization is similar to those used in experimental design for maximizing information. The two bases are made orthonormal by a QR decomposition and applied to the full groundwater model to form a reduced model. The parameter basis is extended with a new parameter vector that maximizes the error between the full model and the reduced model at a set of observation times. The new parameter vector represents where the reduced model is least accurate in representing the original full model. The corresponding parameter snapshot set's appropriate times are found using a greedy algorithm, which sequentially chooses times that have maximum error between the full and
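The orthonormalization step used to build the projection bases can be sketched with modified Gram-Schmidt, a numerically stabler form of the QR step. This is an illustrative toy, not the study's implementation: it builds an orthonormal basis from snapshot vectors (dropping linearly dependent ones) and projects a full-model state onto it.

```python
def mgs_orthonormalize(snapshots, tol=1e-12):
    """Modified Gram-Schmidt: turn snapshot vectors into an orthonormal basis."""
    basis = []
    for v in snapshots:
        w = list(v)
        for q in basis:
            dot = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - dot * qi for wi, qi in zip(w, q)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > tol:                      # drop linearly dependent snapshots
            basis.append([wi / norm for wi in w])
    return basis

def project(v, basis):
    """Coordinates of a full-model state v in the reduced basis."""
    return [sum(vi * qi for vi, qi in zip(v, q)) for q in basis]

# The third snapshot is a linear combination of the first two, so only
# two basis vectors survive; the state dimension drops from 3 to 2.
basis = mgs_orthonormalize([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 2.0, 0.0]])
print(len(basis), project([3.0, 4.0, 0.0], basis))
```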
A Bayesian Approach for Instrumental Variable Analysis with Censored Time-to-Event Outcome
Li, Gang; Lu, Xuyang
2014-01-01
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes, in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time-to-event outcome with censored data remain underdeveloped. This paper proposes a Bayesian approach for IV analysis with censored time-to-event outcome by using a two-stage linear model. A Markov Chain Monte Carlo sampling method is developed for parameter estimation for both normal and non-normal linear models with elliptically contoured error distributions. Performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves coverage probability of the estimated causal effect, compared to the method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. PMID:25393617
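The two-stage linear structure underlying IV analysis can be illustrated with classical two-stage least squares on simulated data. This is a frequentist sketch for intuition only (the paper's method is Bayesian, with MCMC and censoring): stage 1 predicts the endogenous covariate from the instrument; stage 2 regresses the outcome on the fitted values, which removes the confounding bias.

```python
import random

def ols_slope(x, y):
    """Slope and intercept of a simple least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return b, my - b * mx

def two_stage_iv(z, x, y):
    """Two-stage IV: stage 1 fits x on the instrument z; stage 2
    regresses the outcome y on the stage-1 fitted values."""
    b1, a1 = ols_slope(z, x)
    x_hat = [a1 + b1 * zi for zi in z]
    b2, _ = ols_slope(x_hat, y)
    return b2

# Simulated data with an unobserved confounder u; the true causal effect is 2.0.
random.seed(1)
z = [random.gauss(0, 1) for _ in range(5000)]
u = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + ui + random.gauss(0, 0.5) for zi, ui in zip(z, u)]
y = [2.0 * xi + 3.0 * ui + random.gauss(0, 0.5) for xi, ui in zip(x, u)]
naive, _ = ols_slope(x, y)
print(two_stage_iv(z, x, y), naive)  # IV estimate near 2.0; naive OLS biased upward
```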
NASA Astrophysics Data System (ADS)
Stockton, T.; Black, P.; Tauxe, J.; Catlett, K.
2004-12-01
Bayesian decision analysis provides a unified framework for coherent decision-making. Two key components of Bayesian decision analysis are probability distributions and utility functions. Calculating posterior distributions and performing decision analysis can be computationally challenging, especially for complex environmental models. In addition, probability distributions and utility functions for environmental models must be specified through expert elicitation, stakeholder consensus, or data collection, all of which have their own set of technical and political challenges. Nevertheless, a grand appeal of the Bayesian approach for environmental decision-making is the explicit treatment of uncertainty, including expert judgment. The impact of expert judgment on the environmental decision process, though integral, goes largely unassessed. Regulations and orders of the Environmental Protection Agency, Department of Energy, and Nuclear Regulatory Commission require assessing the impact on human health of radioactive waste contamination over periods of up to ten thousand years. Towards this end, complex environmental simulation models are used to assess "risk" to human and ecological health from migration of radioactive waste. As the computational burden of environmental modeling is continually reduced, probabilistic process modeling using Monte Carlo simulation is becoming routine for propagating uncertainty from model inputs through model predictions. The utility of a Bayesian approach to environmental decision-making is discussed within the context of a buried radioactive waste example. This example highlights the desirability and difficulties of merging the cost of monitoring, the cost of the decision analysis, the cost and viability of clean-up, and the probability of human health impacts within a rigorous decision framework.
Two levels of Bayesian model averaging for optimal control of stochastic systems
NASA Astrophysics Data System (ADS)
Darwen, Paul J.
2013-02-01
Bayesian model averaging provides the best possible estimate of a model, given the data. This article uses that approach twice: once to get a distribution of plausible models of the world, and again to find a distribution of plausible control functions. The resulting ensemble gives control instructions different from simply taking the single best-fitting model and using it to find a single lowest-error control function for that single model. The only drawback is, of course, the need for more computer time: this article demonstrates that the required computer time is feasible. The test problem here is from flood control and risk management.
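A single level of the averaging step can be sketched as follows. This is an illustrative toy, not the article's flood-control code: model weights are posterior model probabilities computed from (log) marginal likelihoods under equal model priors, and the ensemble prediction is the weight-averaged prediction rather than that of the single best-fitting model.

```python
from math import exp

def bma_weights(log_evidences):
    """Posterior model probabilities from log marginal likelihoods,
    assuming equal prior probability for each model (log-sum-exp trick)."""
    m = max(log_evidences)
    w = [exp(le - m) for le in log_evidences]
    s = sum(w)
    return [wi / s for wi in w]

def bma_prediction(predictions, log_evidences):
    """Evidence-weighted average prediction over the model ensemble."""
    w = bma_weights(log_evidences)
    return sum(wi * p for wi, p in zip(w, predictions))

# Three plausible models; the third fits poorly, so it barely contributes.
preds = [1.0, 1.5, 4.0]
log_ev = [-10.0, -10.5, -14.0]
print(bma_prediction(preds, log_ev))  # between 1.0 and 1.5, near the better models
```

Note the ensemble answer differs from simply using the single best model (which would predict 1.0): plausible-but-not-best models still shift the result.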
Bayesian sensitivity analysis of bifurcating nonlinear models
NASA Astrophysics Data System (ADS)
Becker, W.; Worden, K.; Rowson, J.
2013-01-01
Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models which cannot be dealt with using a single GP, although how to manage bifurcation boundaries that are not parallel to the coordinate axes remains an open problem.
Bayesian Thurstonian models for ranking data using JAGS.
Johnson, Timothy R; Kuhn, Kristine M
2013-09-01
A Thurstonian model for ranking data assumes that observed rankings are consistent with those of a set of underlying continuous variables. This model is appealing since it renders ranking data amenable to familiar models for continuous response variables-namely, linear regression models. To date, however, the use of Thurstonian models for ranking data has been very rare in practice. One reason for this may be that inferences based on these models require specialized technical methods. These methods have been developed to address computational challenges involved in these models but are not easy to implement without considerable technical expertise and are not widely available in software packages. To address this limitation, we show that Bayesian Thurstonian models for ranking data can be very easily implemented with the JAGS software package. We provide JAGS model files for Thurstonian ranking models for general use, discuss their implementation, and illustrate their use in analyses. PMID:23539504
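The generative side of the Thurstonian model is easy to state in code. This is a hypothetical sketch in Python rather than the JAGS model files the authors provide: each item receives a latent continuous utility drawn from a normal distribution, and the observed ranking simply orders the items by their sampled utilities.

```python
import random

def sample_ranking(means, sd=1.0, rng=random):
    """Thurstonian generative model: latent utility ~ Normal(mean, sd) per
    item; the observed ranking orders items by utility (rank 1 = largest)."""
    utilities = [rng.gauss(m, sd) for m in means]
    order = sorted(range(len(means)), key=lambda i: -utilities[i])
    ranking = [0] * len(means)
    for rank, item in enumerate(order, start=1):
        ranking[item] = rank
    return ranking

random.seed(7)
means = [2.0, 0.0, -2.0]  # item 0 usually preferred, item 2 usually last
rankings = [sample_ranking(means) for _ in range(1000)]
top_share = sum(r[0] == 1 for r in rankings) / len(rankings)
print(top_share)  # item 0 is ranked first in most samples
```

Inference reverses this process: given observed rankings, the latent means are estimated, which is what makes linear-model machinery applicable to ranking data.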
Models and simulation of 3D neuronal dendritic trees using Bayesian networks.
López-Cruz, Pedro L; Bielza, Concha; Larrañaga, Pedro; Benavides-Piccione, Ruth; DeFelipe, Javier
2011-12-01
Neuron morphology is crucial for neuronal connectivity and brain information processing. Computational models are important tools for studying dendritic morphology and its role in brain function. We applied a class of probabilistic graphical models called Bayesian networks to generate virtual dendrites from layer III pyramidal neurons from three different regions of the neocortex of the mouse. A set of 41 morphological variables were measured from the 3D reconstructions of real dendrites and their probability distributions used in a machine learning algorithm to induce the model from the data. A simulation algorithm is also proposed to obtain new dendrites by sampling values from Bayesian networks. The main advantage of this approach is that it takes into account and automatically locates the relationships between variables in the data instead of using predefined dependencies. Therefore, the methodology can be applied to any neuronal class while at the same time exploiting class-specific properties. Also, a Bayesian network was defined for each part of the dendrite, allowing the relationships to change in the different sections and to model heterogeneous developmental factors or spatial influences. Several univariate statistical tests and a novel multivariate test based on Kullback-Leibler divergence estimation confirmed that virtual dendrites were similar to real ones. The analyses of the models showed relationships that conform to current neuroanatomical knowledge and support model correctness. At the same time, studying the relationships in the models can help to identify new interactions between variables related to dendritic morphology.
Calibrating Subjective Probabilities Using Hierarchical Bayesian Models
NASA Astrophysics Data System (ADS)
Merkle, Edgar C.
A body of psychological research has examined the correspondence between a judge's subjective probability of an event's outcome and the event's actual outcome. The research generally shows that subjective probabilities are noisy and do not match the "true" probabilities. However, subjective probabilities are still useful for forecasting purposes if they bear some relationship to true probabilities. The purpose of the current research is to exploit relationships between subjective probabilities and outcomes to create improved, model-based probabilities for forecasting. Once the model has been trained in situations where the outcome is known, it can then be used in forecasting situations where the outcome is unknown. These concepts are demonstrated using experimental psychology data, and potential applications are discussed.
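The core idea, learning a mapping from stated probabilities to outcome frequencies where outcomes are known, then applying it for forecasting, can be illustrated with a deliberately simple, non-hierarchical stand-in: a histogram calibration table. This toy (not Merkle's hierarchical Bayesian model) bins past subjective probabilities and replaces each stated probability with the empirical outcome rate of its bin.

```python
def histogram_recalibrate(history, n_bins=5):
    """Build a calibration table from (stated probability, outcome) pairs
    and return a function mapping new stated probabilities to the
    empirical outcome rate observed in the matching bin."""
    bins = [[] for _ in range(n_bins)]
    for p, outcome in history:
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append(outcome)
    table = [sum(b) / len(b) if b else None for b in bins]
    def recalibrate(p):
        return table[min(int(p * n_bins), n_bins - 1)]
    return recalibrate

# An overconfident judge: events called "90%" happen only 70% of the time.
history = ([(0.9, 1)] * 7 + [(0.9, 0)] * 3
           + [(0.1, 0)] * 7 + [(0.1, 1)] * 3)
recal = histogram_recalibrate(history)
print(recal(0.9), recal(0.1))  # 0.7 0.3
```

The hierarchical Bayesian version improves on this by pooling information across judges and smoothing sparse bins, but the input/output contract is the same.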
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs are heavily relying on simulation models which are subjected to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer frame, where each layer targets a source of uncertainty. The HBMA framework provides an insight to uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights in different hierarchy levels and assessing the relative importance of models in each level. To account for uncertainty, we employ a chance constrained (CC) programming for stochastic remediation design. Chance constrained programming was implemented traditionally to account for parameter uncertainty. Recently, many studies suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, the HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances for two reasons. First, considering the single best model, variances that stem from uncertainty in the model structure will be ignored. Second, considering the best model with non
DPpackage: Bayesian Semi- and Nonparametric Modeling in R
Jara, Alejandro; Hanson, Timothy E.; Quintana, Fernando A.; Müller, Peter; Rosner, Gary L.
2011-01-01
Data analysis sometimes requires the relaxation of parametric assumptions in order to gain modeling flexibility and robustness against mis-specification of the probability model. In the Bayesian context, this is accomplished by placing a prior distribution on a function space, such as the space of all probability distributions or the space of all regression functions. Unfortunately, posterior distributions ranging over function spaces are highly complex and hence sampling methods play a key role. This paper provides an introduction to a simple, yet comprehensive, set of programs for the implementation of some Bayesian non- and semi-parametric models in R, DPpackage. Currently DPpackage includes models for marginal and conditional density estimation, ROC curve analysis, interval-censored data, binary regression data, item response data, longitudinal and clustered data using generalized linear mixed models, and regression data using generalized additive models. The package also contains functions to compute pseudo-Bayes factors for model comparison, and for eliciting the precision parameter of the Dirichlet process prior. To maximize computational efficiency, the actual sampling for each model is carried out using compiled FORTRAN. PMID:21796263
Fully Bayesian mixture model for differential gene expression: simulations and model checks.
Lewin, Alex; Bochkina, Natalia; Richardson, Sylvia
2007-01-01
We present a Bayesian hierarchical model for detecting differentially expressed genes using a mixture prior on the parameters representing differential effects. We formulate an easily interpretable 3-component mixture to classify genes as over-expressed, under-expressed and non-differentially expressed, and model gene variances as exchangeable to allow for variability between genes. We show how the proportion of differentially expressed genes, and the mixture parameters, can be estimated in a fully Bayesian way, extending previous approaches where this proportion was fixed and empirically estimated. Good estimates of the false discovery rates are also obtained. Different parametric families for the mixture components can lead to quite different classifications of genes for a given data set. Using Affymetrix data from a knockout and wild-type mouse experiment, we show how predictive model checks can be used to guide the choice between possible mixture priors. These checks show that extending the mixture model to allow extra variability around zero instead of the usual point mass null fits the data better. A software package for R is available.
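The classification step of such a mixture model reduces to computing posterior component probabilities for each gene's differential effect. The sketch below is illustrative only (the mixture weights, component parameters, and normal components are made-up stand-ins, not the paper's fitted model):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of a Normal(mu, sigma) distribution at x."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def classify_gene(effect, weights, components):
    """Posterior probability that an observed differential effect came from
    each mixture component (Bayes' rule over the 3-component mixture)."""
    likes = [w * normal_pdf(effect, mu, sd)
             for w, (mu, sd) in zip(weights, components)]
    total = sum(likes)
    return [l / total for l in likes]

# Hypothetical 3-component mixture: under-expressed, null, over-expressed.
weights = [0.1, 0.8, 0.1]
components = [(-2.0, 1.0), (0.0, 0.2), (2.0, 1.0)]
post = classify_gene(3.0, weights, components)
print(post)  # mass concentrates on the over-expressed component
```

In the full model, the weights and component parameters are themselves estimated, which is what allows the proportion of differentially expressed genes to be learned rather than fixed.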
Majumdar, Anandamayee; Gries, Corinna
2010-01-01
Lately, bivariate zero-inflated (BZI) regression models have been used in many instances in the medical sciences to model excess zeros. Examples include the BZI Poisson (BZIP) and BZI negative binomial (BZINB) models. Such formulations vary in their basic modeling aspects and use the EM algorithm (Dempster, Laird and Rubin, 1977) for parameter estimation. A different modeling formulation in the Bayesian context is given by Dagne (2004). We extend the modeling to a more general setting for multivariate ZIP models for count data with excess zeros, as proposed by Li, Lu, Park, Kim, Brinkley and Peterson (1999), focusing on a particular bivariate regression formulation. For the basic formulation in the case of bivariate data, we assume that the Xi are (latent) independent Poisson random variables with parameters λi, i = 0, 1, 2. A bivariate count response vector (Y1, Y2) follows a mixture of four distributions: p0 is the mixing probability of a point mass distribution at (0, 0); p1 is the mixing probability that Y2 = 0 while Y1 = X0 + X1; p2 is the mixing probability that Y1 = 0 while Y2 = X0 + X2; and finally (1 - p0 - p1 - p2) is the mixing probability that Yi = Xi + X0, i = 1, 2. This choice of parameters {pi, λi, i = 0, 1, 2} ensures that the marginal distributions of the Yi are zero-inflated Poisson(λ0 + λi). All the parameters thus introduced are allowed to depend on covariates through canonical-link generalized linear models (McCullagh and Nelder, 1989). This flexibility allows for a range of real-life applications, especially in the medical and biological fields, where the counts are bivariate in nature (with strong association between the processes) and where there is an excess of zeros in one or both processes. Our contribution in this paper is to employ a fully Bayesian approach consolidating the work of Dagne (2004) and Li et al. (1999), generalizing the modeling and sampling-based methods described by Ghosh, Mukhopadhyay and Lu (2006) to estimate the
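The four-component mixture described in the abstract is easy to simulate, which makes its structure concrete: the shared latent count X0 induces the association between the two outcomes, and the point mass plus the structural-zero components generate the excess zeros. The sketch below is illustrative (parameter values are made up; the Poisson sampler is Knuth's, adequate for small rates):

```python
import random
from math import exp

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for the small rates used here)."""
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_bzip(p0, p1, p2, lam0, lam1, lam2, rng):
    """One draw from the four-component bivariate ZIP mixture:
    X0, X1, X2 are independent Poisson(lam0), Poisson(lam1), Poisson(lam2)."""
    u = rng.random()
    x0, x1, x2 = poisson(lam0, rng), poisson(lam1, rng), poisson(lam2, rng)
    if u < p0:
        return 0, 0                      # point mass at (0, 0)
    if u < p0 + p1:
        return x0 + x1, 0                # Y2 structurally zero
    if u < p0 + p1 + p2:
        return 0, x0 + x2                # Y1 structurally zero
    return x0 + x1, x0 + x2              # both counts share X0

rng = random.Random(3)
draws = [sample_bzip(0.3, 0.1, 0.1, 1.0, 0.5, 0.5, rng) for _ in range(20000)]
zero_pairs = sum(y1 == 0 and y2 == 0 for y1, y2 in draws) / len(draws)
print(zero_pairs)  # well above the Poisson zero mass: the "excess zeros"
```

Inference goes the other way: given observed (Y1, Y2) pairs, the mixing probabilities and rates (each possibly regressed on covariates) are estimated, by EM in the classical treatments or by MCMC in the fully Bayesian approach of this paper.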
ERIC Educational Resources Information Center
Kelava, Augustin; Nagengast, Benjamin
2012-01-01
Structural equation models with interaction and quadratic effects have become a standard tool for testing nonlinear hypotheses in the social sciences. Most of the current approaches assume normally distributed latent predictor variables. In this article, we present a Bayesian model for the estimation of latent nonlinear effects when the latent…
Bayesian scrutiny of simple rainfall-runoff models used in forest water management
NASA Astrophysics Data System (ADS)
Greenwood, Ashley J. B.; Schoups, Gerrit; Campbell, Edward P.; Lane, Patrick N. J.
2014-05-01
Simple rainfall-runoff models used in the assessment of land-use change and to support forest water management are subjected to a selection process which scrutinises their veracity and integrity. Veracity, the ability of a model to provide meaningful information, is assessed using performance criteria incorporating a popular mean square error (MSE) approach, empirical distribution functions, and information criteria. Integrity, a model's plausibility as reflected in its ability to extract information from data, is assessed using a Bayesian approach. A delayed-rejection, adaptive Metropolis algorithm is used with a generalised likelihood to calibrate the models. Predictive uncertainty is assessed using a split-sample procedure which uses high-runoff data for calibration and drier data for validation. A simple multiplicative latent variable is used to accommodate input uncertainty in rainfall data, enabling a distinction to be made between uncertainty associated with the data, the parameters, and the models themselves. The study demonstrates the focus provided by setting model evaluation in a philosophical context, the benefits of using a more meaningful range of performance criteria than MSE-based approaches, and the insights into integrity provided by Bayesian analyses. A hyperbolic tangent model is selected as the best of five candidates for its superior veracity and integrity under Australian conditions. Models with extensive application in South Africa, Australia and the USA are rejected. Challenges to applying this approach in water management are identified in the pragmatic nature of the sector, its capacity constraints, and a tendency of researchers to place confidence in accepted methods at the expense of rigour.
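The calibration step can be illustrated with a plain random-walk Metropolis sampler on a synthetic hyperbolic tangent rainfall-runoff relation. The model form q = a·tanh(b·r), the data, and the flat priors are all assumptions of this sketch; the paper itself uses the more sophisticated delayed-rejection adaptive Metropolis with a generalised likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observations": runoff q from a tanh response to rainfall r,
# plus Gaussian noise. Model form and parameter values are illustrative.
r = rng.uniform(0, 50, 100)
a_true, b_true, sigma = 12.0, 0.05, 0.5
q = a_true * np.tanh(b_true * r) + rng.normal(0, sigma, r.size)

def log_post(theta):
    a, b = theta
    if a <= 0 or b <= 0:                      # flat prior on positive values
        return -np.inf
    resid = q - a * np.tanh(b * r)
    return -0.5 * np.sum(resid**2) / sigma**2

# Plain random-walk Metropolis: keep only the accept/reject core of the
# delayed-rejection adaptive scheme used in the paper.
theta = np.array([5.0, 0.1])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0, [0.2, 0.002])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5_000:]               # drop burn-in
print(chain.mean(axis=0))
```

The multiplicative rainfall-error latent variable in the paper would enter by replacing r with m·r and sampling m alongside a and b.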
Bayesian partial linear model for skewed longitudinal data.
Tang, Yuanyuan; Sinha, Debajyoti; Pati, Debdeep; Lipsitz, Stuart; Lipshultz, Steven
2015-07-01
Unlike the majority of current statistical models and methods, which focus on the mean response for highly skewed longitudinal data, we present a novel model for such data that accommodates a partially linear median regression function, a skewed error distribution, and within-subject association structures. We provide theoretical justifications for our methods, including asymptotic properties of the posterior and the associated semiparametric Bayesian estimators. We also provide simulation studies to investigate the finite-sample properties of our methods. Several advantages of our method over existing methods are demonstrated via analysis of a cardiotoxicity study of children of HIV-infected mothers.
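Why target the median rather than the mean for skewed responses? A quick numerical illustration (the lognormal errors below are invented; the paper's actual model is semiparametric Bayesian, and this sketch only motivates the choice of target):

```python
import numpy as np

rng = np.random.default_rng(3)

# Heavily right-skewed errors, centred so that the median error is zero.
# The mean error remains far from zero, so a mean-regression function
# would be pulled into the long right tail; the median is unaffected.
errors = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)
errors -= np.median(errors)
print(round(np.median(errors), 3), round(np.mean(errors), 3))
```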
Bayesian hierarchical modeling for detecting safety signals in clinical trials.
Xia, H Amy; Ma, Haijun; Carlin, Bradley P
2011-09-01
Detection of safety signals from clinical trial adverse event data is critical in drug development, but carries a challenging statistical multiplicity problem. Bayesian hierarchical mixture modeling is appealing for its ability to borrow strength across subgroups in the data, as well as moderate extreme findings most likely due merely to chance. We implement such a model for subject incidence (Berry and Berry, 2004) using a binomial likelihood, and extend it to subject-year adjusted incidence rate estimation under a Poisson likelihood. We use simulation to choose a signal detection threshold, and illustrate some effective graphics for displaying the flagged signals.
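The borrowing-strength idea can be shown with a much simpler stand-in for the Berry-Berry hierarchical mixture: empirical-Bayes beta-binomial shrinkage of adverse-event incidence rates. The counts below are invented for illustration:

```python
import numpy as np

# Hypothetical adverse-event data: number of subjects (out of n per arm)
# reporting each of six AE types; the counts are invented for illustration.
n = 200
events = np.array([3, 5, 4, 2, 30, 6])   # AE type 5 looks like a signal

# Empirical-Bayes beta-binomial shrinkage: a much simpler stand-in for the
# Berry-Berry hierarchical mixture, shown only to illustrate how borrowing
# strength across AE types moderates extreme observed rates.
p_hat = events / n
m, v = p_hat.mean(), p_hat.var()
common = m * (1 - m) / v - 1             # method-of-moments Beta fit
alpha, beta = m * common, (1 - m) * common
shrunk = (events + alpha) / (n + alpha + beta)
print(np.round(shrunk, 3))
```

Rare AE rates are pulled up toward the overall mean and the extreme rate is pulled down, exactly the moderation of chance findings the abstract describes; the full hierarchical mixture additionally puts a point mass on "no treatment effect" per AE.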
Zhang, Hua; Huo, Mingdong; Chao, Jianqian; Liu, Pei
2016-01-01
Background Hepatitis B virus (HBV) infection is a major public health problem; timely antiviral treatment can significantly prevent the progression of HBV-related liver damage by slowing or stopping viral replication. In this study we applied a Bayesian approach to cost-effectiveness analysis, using Markov Chain Monte Carlo (MCMC) simulation to supply the relevant evidence inputs to the model, to evaluate the cost-effectiveness of entecavir (ETV) and lamivudine (LVD) therapy for chronic hepatitis B (CHB) in Jiangsu, China, thus providing information to the public health system for CHB therapy. Methods An eight-stage Markov model was developed, and a hypothetical cohort of 35-year-old HBeAg-positive patients with CHB was entered into the model. Treatment regimens were LVD 100 mg daily and ETV 0.5 mg daily. The transition parameters were derived either from systematic reviews of the literature or from previous economic studies. The outcome measures were life-years, quality-adjusted life-years (QALYs), and the expected costs associated with the treatments and disease progression. All Bayesian analyses were implemented using WinBUGS version 1.4. Results Expected cost, life expectancy, and QALYs decreased with age, while cost-effectiveness increased with age. The expected cost of ETV was lower than that of LVD, while its life expectancy and QALYs were higher, so the ETV strategy was more cost-effective. Group-level costs and benefits from the Monte Carlo simulation were very close to the exact results, but the standard deviation within each group indicated large differences between individual patients. Conclusions Compared with lamivudine, entecavir is the more cost-effective option. CHB patients should begin antiviral treatment as soon as possible, since treatment is more cost-effective at younger ages. The Monte Carlo simulation reproduced the cost and effectiveness distributions, indicating that our Markov model is robust. PMID:27574976
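A Markov cohort model of the kind described runs a state-occupancy vector through a transition matrix, accumulating discounted costs and QALYs each cycle. The toy three-state version below uses made-up states, probabilities, costs, and utilities, not the paper's eight-stage inputs:

```python
import numpy as np

# Toy three-state Markov cohort model (CHB -> cirrhosis -> death) run in
# one-year cycles; all numbers are illustrative placeholders.
P = np.array([[0.90, 0.08, 0.02],    # from CHB
              [0.00, 0.85, 0.15],    # from cirrhosis
              [0.00, 0.00, 1.00]])   # death is absorbing
cost = np.array([1000.0, 4000.0, 0.0])   # annual cost per state
utility = np.array([0.85, 0.60, 0.0])    # QALY weight per state

cohort = np.array([1.0, 0.0, 0.0])       # everyone starts in CHB
total_cost = total_qaly = 0.0
discount = 0.03
for year in range(40):
    d = 1.0 / (1 + discount) ** year     # discount factor for this cycle
    total_cost += d * cohort @ cost
    total_qaly += d * cohort @ utility
    cohort = cohort @ P                  # advance one cycle
print(round(total_cost, 1), round(total_qaly, 2))
```

Comparing two treatments amounts to running this loop twice with treatment-specific transition matrices and costs, then comparing the incremental cost per QALY gained.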
Chain Graph Models to Elicit the Structure of a Bayesian Network
Stefanini, Federico M.
2014-01-01
Bayesian networks are possibly the most successful graphical models to build decision support systems. Building the structure of large networks is still a challenging task, but Bayesian methods are particularly suited to exploit experts' degree of belief in a quantitative way while learning the network structure from data. In this paper details are provided about how to build a prior distribution on the space of network structures by eliciting a chain graph model on structural reference features. Several structural features expected to be often useful during the elicitation are described. The statistical background needed to effectively use this approach is summarized, and some potential pitfalls are illustrated. Finally, a few seminal contributions from the literature are reformulated in terms of structural features. PMID:24688427
From least squares to multilevel modeling: A graphical introduction to Bayesian inference
NASA Astrophysics Data System (ADS)
Loredo, Thomas J.
2016-01-01
This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
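The "big picture" workflow the tutorial describes (write down prior times likelihood, normalize, then summarize via marginalization) can be shown on a grid for a single Poisson rate; the data and flat prior below are illustrative:

```python
import numpy as np

# Grid-based Bayesian inference for a Poisson rate: evaluate
# prior x likelihood on a grid of candidate rates, normalize,
# then summarize the posterior numerically.
counts = np.array([3, 7, 4, 6, 5])            # observed Poisson counts
grid = np.linspace(0.01, 20, 2000)            # candidate rates lambda

# log-likelihood sum_i (y_i * log(lam) - lam), up to a constant in lam
log_like = np.sum(counts[:, None] * np.log(grid) - grid, axis=0)
log_post = log_like                           # flat prior adds a constant
post = np.exp(log_post - log_post.max())      # subtract max for stability
post /= post.sum()                            # normalize on the grid

mean = np.sum(grid * post)
print(round(mean, 2))  # near the conjugate Gamma(26, 5) posterior mean 5.2
```

The same pattern scales conceptually to the multilevel models in the talk: the grid is replaced by MCMC samples, and marginalization becomes simply ignoring the nuisance coordinates of each sample.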
O'Hare, A; Orton, R J; Bessell, P R; Kao, R R
2014-05-22
Fitting models with Bayesian likelihood-based parameter inference is becoming increasingly important in infectious disease epidemiology. Detailed datasets present the opportunity to identify subsets of these data that capture important characteristics of the underlying epidemiology. One such dataset describes the epidemic of bovine tuberculosis (bTB) in British cattle, which is also an important exemplar of a disease with a wildlife reservoir (the Eurasian badger). Here, we evaluate a set of nested dynamic models of bTB transmission, including individual- and herd-level transmission heterogeneity and assuming minimal prior knowledge of the transmission and diagnostic test parameters. We performed a likelihood-based bootstrapping operation on the model to infer parameters based only on the recorded numbers of cattle testing positive for bTB at the start of each herd outbreak, considering high- and low-risk areas separately. Models without herd heterogeneity are preferred in both areas, though there is some evidence for super-spreading cattle. Similar to previous studies, we found low test s