Science.gov

Sample records for bayesian statistical control

  1. [Bayesian statistic: an approach fitted to clinic].

    PubMed

    Meyer, N; Vinzio, S; Goichot, B

    2009-03-01

    Bayesian statistics has enjoyed growing, though still limited, success. This is surprising, since Bayes' theorem, on which the paradigm relies, is frequently used by clinicians. There is a direct link between the routine diagnostic test and Bayesian statistics: Bayes' theorem, which allows one to compute the positive and negative predictive values of a test. The principle of this theorem is extended to simple statistical situations as an introduction to Bayesian statistics. The conceptual simplicity of Bayesian statistics should make for greater acceptance in the biomedical world.
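
    As a worked illustration of the link described above (not taken from the paper), the short sketch below applies Bayes' theorem to compute the positive and negative predictive values of a hypothetical diagnostic test from its sensitivity, specificity, and the disease prevalence; all numbers are made up.

    def predictive_values(sensitivity, specificity, prevalence):
        """Return (PPV, NPV) of a diagnostic test via Bayes' theorem."""
        p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
        ppv = sensitivity * prevalence / p_pos                    # P(disease | positive)
        p_neg = specificity * (1 - prevalence) + (1 - sensitivity) * prevalence
        npv = specificity * (1 - prevalence) / p_neg              # P(no disease | negative)
        return ppv, npv

    # Hypothetical test: 90% sensitive, 95% specific, 2% prevalence -> PPV ~ 0.27, NPV ~ 0.998
    print(predictive_values(0.90, 0.95, 0.02))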

  2. Bayesian Statistics: A Place in Educational Research?

    ERIC Educational Resources Information Center

    Diamond, James

    The use of Bayesian statistics as the basis of classical analysis of data is described. Bayesian analysis is a set of procedures for changing opinions about a given phenomenon based upon rational observation of a set of data. The Bayesian arrives at a set of prior beliefs regarding some states of nature; he observes data in a study and then…

  3. Bayesian Statistics for Biological Data: Pedigree Analysis

    ERIC Educational Resources Information Center

    Stanfield, William D.; Carlton, Matthew A.

    2004-01-01

    Bayes' formula is applied to the biological problem of pedigree analysis to show that Bayes' formula and non-Bayesian or "classical" methods of probability calculation give different answers. First-year college biology students can be introduced to Bayesian statistics.

  4. Philosophy and the practice of Bayesian statistics

    PubMed Central

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2015-01-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework. PMID:22364575

  5. Philosophy and the practice of Bayesian statistics.

    PubMed

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2013-02-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.

  6. Approximate Bayesian computation with functional statistics.

    PubMed

    Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K

    2013-03-26

    Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
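
    As a rough illustration of the general idea (not the authors' optimized procedure), the sketch below runs ABC rejection for a toy dispersal model, using the empirical survival function as a functional summary statistic and a crude weighted Euclidean distance; the model, grid, and weights are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(theta, n=200):
        """Toy model: exponential dispersal distances with rate theta."""
        return rng.exponential(1.0 / theta, size=n)

    def summary(x, grid=np.linspace(0.1, 5.0, 20)):
        """Functional summary statistic: empirical survival function on a grid."""
        return np.array([(x > g).mean() for g in grid])

    obs = simulate(theta=1.5)                        # pretend these are the observations
    s_obs = summary(obs)
    weights = 1.0 / (s_obs * (1 - s_obs) + 1e-3)     # crude precision weights

    draws, dists = [], []
    for _ in range(5000):
        theta = rng.uniform(0.1, 5.0)                # draw from the prior
        d = np.sqrt(np.sum(weights * (summary(simulate(theta)) - s_obs) ** 2))
        draws.append(theta)
        dists.append(d)

    keep = np.argsort(dists)[:250]                   # accept the closest 5% of simulations
    posterior_sample = np.array(draws)[keep]
    print(posterior_sample.mean(), posterior_sample.std())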

  7. Bayesian Cosmological inference beyond statistical isotropy

    NASA Astrophysics Data System (ADS)

    Souradeep, Tarun; Das, Santanu; Wandelt, Benjamin

    2016-10-01

    With the advent of rich data sets, the computational challenge of inference in cosmology has come to rely on stochastic sampling methods. First, I review the widely used MCMC approach for inferring cosmological parameters and present an improved adaptive implementation, SCoPE, developed by our group. Next, I present a general method for Bayesian inference of the underlying covariance structure of random fields on a sphere. We employ the Bipolar Spherical Harmonic (BipoSH) representation of general covariance structure on the sphere. We illustrate the efficacy of the method with a principled approach to assess violation of statistical isotropy (SI) in the sky maps of Cosmic Microwave Background (CMB) fluctuations. The general, principled approach to Bayesian inference of the covariance structure of a random field on a sphere presented here has great potential for application to many other aspects of cosmology and astronomy, as well as to more distant areas of research such as geosciences and climate modelling.

  8. Bayesian statistical studies of the Ramachandran distribution.

    PubMed

    Pertsemlidis, Alexander; Zelinka, Jan; Fondon, John W; Henderson, R Keith; Otwinowski, Zbyszek

    2005-01-01

    We describe a method for the generation of knowledge-based potentials and apply it to the observed torsional angles of known protein structures. The potential is derived using Bayesian reasoning, and is useful as a prior for further such reasoning in the presence of additional data. The potential takes the form of a probability density function, which is described by a small number of coefficients with the number of necessary coefficients determined by tests based on statistical significance and entropy. We demonstrate the methods in deriving one such potential corresponding to two dimensions, the Ramachandran plot. In contrast to traditional histogram-based methods, the function is continuous and differentiable. These properties allow us to use the function as a force term in the energy minimization of appropriately described structures. The method can easily be extended to other observable angles and higher dimensions, or to include sequence dependence and should find applications in structure determination and validation.

  9. Bayesian statistics in medicine: a 25 year review.

    PubMed

    Ashby, Deborah

    2006-11-15

    This review examines the state of Bayesian thinking as Statistics in Medicine was launched in 1982, reflecting particularly on its applicability and uses in medical research. It then looks at each subsequent five-year epoch, with a focus on papers appearing in Statistics in Medicine, putting these in the context of major developments in Bayesian thinking and computation with reference to important books, landmark meetings and seminal papers. It charts the growth of Bayesian statistics as it is applied to medicine and makes predictions for the future. From sparse beginnings, where Bayesian statistics was barely mentioned, Bayesian statistics has now permeated all the major areas of medical statistics, including clinical trials, epidemiology, meta-analyses and evidence synthesis, spatial modelling, longitudinal modelling, survival modelling, molecular genetics and decision-making in respect of new technologies.

  10. Melvin R. Novick: His Work in Bayesian Statistics.

    ERIC Educational Resources Information Center

    Lindley, D. V.

    1987-01-01

    Discusses Melvin R. Novick's work in the area of Bayesian statistics. This area of statistics was seen as a powerful scientific tool that allows educational researchers to have a better understanding of their data. (RB)

  11. Bayesian statistics in medical devices: innovation sparked by the FDA.

    PubMed

    Campbell, Gregory

    2011-09-01

    Bayesian statistical methodology has been used for more than 10 years in medical device premarket submissions to the U.S. Food and Drug Administration (FDA). A complete list of the publicly available information associated with these FDA applications is presented. In addition to the increasing number of Bayesian methodological papers in the statistical journals, a number of successful Bayesian clinical trials in the biomedical journals have been recently reported. Some challenges that require more methodological development are discussed. The promise of using Bayesian methods for incorporation of prior information as well as for conducting adaptive trials is great.

  12. Teaching Bayesian Statistics to Undergraduate Students through Debates

    ERIC Educational Resources Information Center

    Stewart, Sepideh; Stewart, Wayne

    2014-01-01

    This paper describes a lecturer's approach to teaching Bayesian statistics to students who were only exposed to the classical paradigm. The study shows how the lecturer extended himself by making use of ventriloquist dolls to grab hold of students' attention and embed important ideas in revealing the differences between the Bayesian and classical…

  13. Travel time inversion by Bayesian Inferential Statistics

    NASA Astrophysics Data System (ADS)

    Mauerberger, Stefan; Holschneider, Matthias

    2015-04-01

    We present a fully Bayesian approach in inferential statistics for determining the posterior probability distribution of quantities that are not directly observable (e.g., Earth model parameters). In contrast to deterministic methods established in geophysics, we follow a probabilistic approach focused on exploring the variabilities and uncertainties of model parameters. The aim of this work is to quantify how well a parameter is determined by a priori knowledge (e.g., existing Earth models, rock properties) combined with data obtained from multiple sources (e.g., seismograms, well logs, outcrops). We therefore treat the system in question as a Gaussian random process. As a consequence, the randomness in the system is completely determined by its mean and covariance function. Our prior knowledge of this conceptual random process is represented by its a priori mean. The associated covariance function - which quantifies the statistical correlation between two different points - forms the basis of rules for interpolating values at points for which there are no observations. Measurements are assumed to be linear (or linearized) observational functionals of the system. The quantities we want to predict are taken to be linear functionals, too. Due to linearity, observables and predictions are Gaussian distributed as well, depending on the prior mean and covariance function. The presented approach calculates the prediction's conditional probability distribution posterior to a set of measurements. That conditional probability combines observations and prior knowledge, yielding the posterior distribution, an updated distribution for the predictions. As proof of concept, the estimation of propagation velocities in a simple travel-time model is presented. The two observational functionals considered are travel times and point measurements of the velocity model. The system in question is the underlying velocity model itself, assumed to be a Gaussian random process
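
    In the linear-Gaussian setting described above, the update has a closed form. With prior mean $m$ and covariance $K$, a linear observation operator $L$ with noise covariance $\Sigma$, and a linear prediction functional $P$, the posterior for the prediction $f = Pu$ given data $y = Lu + \varepsilon$ is Gaussian with

    $$\mathbb{E}[f \mid y] = Pm + PKL^{\top}\,(LKL^{\top} + \Sigma)^{-1}(y - Lm), \qquad \mathrm{Cov}[f \mid y] = PKP^{\top} - PKL^{\top}\,(LKL^{\top} + \Sigma)^{-1}LKP^{\top}.$$

    (This is the generic Gaussian conditioning formula; the notation is ours, not the authors'.)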

  14. Bayesian Analysis of Order-Statistics Models for Ranking Data.

    ERIC Educational Resources Information Center

    Yu, Philip L. H.

    2000-01-01

    Studied the order-statistics models, extending the usual normal order-statistics model into one in which the underlying random variables followed a multivariate normal distribution. Used a Bayesian approach and the Gibbs sampling technique. Applied the proposed method to analyze presidential election data from the American Psychological…

  15. Bayesian Statistical Inference for Coefficient Alpha. ACT Research Report Series.

    ERIC Educational Resources Information Center

    Li, Jun Corser; Woodruff, David J.

    Coefficient alpha is a simple and very useful index of test reliability that is widely used in educational and psychological measurement. Classical statistical inference for coefficient alpha is well developed. This paper presents two methods for Bayesian statistical inference for a single sample alpha coefficient. An approximate analytic method…

  16. A BAYESIAN STATISTICAL APPROACH FOR THE EVALUATION OF CMAQ

    EPA Science Inventory

    Bayesian statistical methods are used to evaluate Community Multiscale Air Quality (CMAQ) model simulations of sulfate aerosol over a section of the eastern US for 4-week periods in summer and winter 2001. The observed data come from two U.S. Environmental Protection Agency data ...

  17. Some Bayesian statistical techniques useful in estimating frequency and density

    USGS Publications Warehouse

    Johnson, D.H.

    1977-01-01

    This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which insures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
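
    A minimal sketch of a Bayesian credible interval for frequency of occurrence, assuming the conjugate Beta-binomial setup (the prior parameters and survey counts below are hypothetical, not from the paper):

    from scipy.stats import beta

    def bayes_frequency_interval(occupied, n_plots, a=1.0, b=1.0, level=0.95):
        """Posterior mean and credible interval for frequency of occurrence.

        A binomial likelihood with a Beta(a, b) prior gives a
        Beta(a + occupied, b + n_plots - occupied) posterior.
        """
        post = beta(a + occupied, b + n_plots - occupied)
        lo, hi = post.ppf((1 - level) / 2), post.ppf(1 - (1 - level) / 2)
        return post.mean(), (lo, hi)

    # Hypothetical survey: species detected on 3 of 40 sample plots.
    print(bayes_frequency_interval(3, 40))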

  18. Bayesian Case Influence Measures for Statistical Models with Missing Data

    PubMed Central

    Zhu, Hongtu; Ibrahim, Joseph G.; Cho, Hyunsoon; Tang, Niansheng

    2011-01-01

    We examine three Bayesian case influence measures including the φ-divergence, Cook's posterior mode distance and Cook's posterior mean distance for identifying a set of influential observations for a variety of statistical models with missing data including models for longitudinal data and latent variable models in the absence/presence of missing data. Since it can be computationally prohibitive to compute these Bayesian case influence measures in models with missing data, we derive simple first-order approximations to the three Bayesian case influence measures by using the Laplace approximation formula and examine the applications of these approximations to the identification of influential sets. All of the computations for the first-order approximations can be easily done using Markov chain Monte Carlo samples from the posterior distribution based on the full data. Simulated data and an AIDS dataset are analyzed to illustrate the methodology. PMID:23399928

  19. Human Motion Retrieval Based on Statistical Learning and Bayesian Fusion

    PubMed Central

    Xiao, Qinkun; Song, Ren

    2016-01-01

    A novel motion retrieval approach based on statistical learning and Bayesian fusion is presented. The approach includes two primary stages. (1) In the learning stage, fuzzy clustering is utilized firstly to get the representative frames of motions, and the gesture features of the motions are extracted to build a motion feature database. Based on the motion feature database and statistical learning, the probability distribution function of different motion classes is obtained. (2) In the motion retrieval stage, the query motion feature is extracted firstly according to stage (1). Similarity measurements are then conducted employing a novel method that combines category-based motion similarity distances with similarity distances based on canonical correlation analysis. The two motion distances are fused using Bayesian estimation, and the retrieval results are ranked according to the fused values. The effectiveness of the proposed method is verified experimentally. PMID:27732673

  20. Bayesian modeling of flexible cognitive control

    PubMed Central

    Jiang, Jiefeng; Heller, Katherine; Egner, Tobias

    2014-01-01

    “Cognitive control” describes endogenous guidance of behavior in situations where routine stimulus-response associations are suboptimal for achieving a desired goal. The computational and neural mechanisms underlying this capacity remain poorly understood. We examine recent advances stemming from the application of a Bayesian learner perspective that provides optimal prediction for control processes. In reviewing the application of Bayesian models to cognitive control, we note that an important limitation in current models is a lack of a plausible mechanism for the flexible adjustment of control over conflict levels changing at varying temporal scales. We then show that flexible cognitive control can be achieved by a Bayesian model with a volatility-driven learning mechanism that modulates dynamically the relative dependence on recent and remote experiences in its prediction of future control demand. We conclude that the emergent Bayesian perspective on computational mechanisms of cognitive control holds considerable promise, especially if future studies can identify neural substrates of the variables encoded by these models, and determine the nature (Bayesian or otherwise) of their neural implementation. PMID:24929218

  1. Spectral Analysis of B Stars: An Application of Bayesian Statistics

    NASA Astrophysics Data System (ADS)

    Mugnes, J.-M.; Robert, C.

    2012-12-01

    To better understand the processes involved in stellar physics, it is necessary to obtain accurate stellar parameters (effective temperature, surface gravity, abundances…). Spectral analysis is a powerful tool for investigating stars, but it is also vital to reduce uncertainties at a decent computational cost. Here we present a spectral analysis method based on a combination of Bayesian statistics and grids of synthetic spectra obtained with TLUSTY. This method simultaneously constrains the stellar parameters by using all the lines accessible in observed spectra and thus greatly reduces uncertainties and improves the overall spectrum fitting. Preliminary results are shown using spectra from the Observatoire du Mont-Mégantic.
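
    A generic sketch of the grid-based posterior such an analysis rests on (illustrative only, not the authors' TLUSTY pipeline): with Gaussian flux errors, each synthetic spectrum on the grid receives a weight proportional to its prior times exp(-chi^2/2), and stellar parameters follow by marginalizing the normalized grid posterior.

    import numpy as np

    def grid_posterior(obs_flux, obs_sigma, grid_fluxes, log_prior):
        """Posterior weights over a grid of synthetic spectra.

        grid_fluxes: array (n_models, n_wavelengths) of synthetic spectra
        log_prior:   array (n_models,) of log prior weights for the grid points
        """
        chi2 = np.sum(((obs_flux - grid_fluxes) / obs_sigma) ** 2, axis=1)
        log_post = log_prior - 0.5 * chi2
        log_post -= log_post.max()          # guard against underflow
        post = np.exp(log_post)
        return post / post.sum()            # post[i]: probability of grid model i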

  2. Bayesian statistics and information fusion for GPS-denied navigation

    NASA Astrophysics Data System (ADS)

    Copp, Brian Lee

    It is well known that satellite navigation systems are vulnerable to disruption due to jamming, spoofing, or obstruction of the signal. The desire for robust navigation of aircraft in GPS-denied environments has motivated the development of feature-aided navigation systems, in which measurements of environmental features are used to complement the dead reckoning solution produced by an inertial navigation system. Examples of environmental features which can be exploited for navigation include star positions, terrain elevation, terrestrial wireless signals, and features extracted from photographic data. Feature-aided navigation represents a particularly challenging estimation problem because the measurements are often strongly nonlinear, and the quality of the navigation solution is limited by the knowledge of nuisance parameters which may be difficult to model accurately. As a result, integration approaches based on the Kalman filter and its variants may fail to give adequate performance. This project develops a framework for the integration of feature-aided navigation techniques using Bayesian statistics. In this approach, the probability density function for aircraft horizontal position (latitude and longitude) is approximated by a two-dimensional point mass function defined on a rectangular grid. Nuisance parameters are estimated using a hypothesis based approach (Multiple Model Adaptive Estimation) which continuously maintains an accurate probability density even in the presence of strong nonlinearities. The effectiveness of the proposed approach is illustrated by the simulated use of terrain referenced navigation and wireless time-of-arrival positioning to estimate a reference aircraft trajectory. Monte Carlo simulations have shown that accurate position estimates can be obtained in terrain referenced navigation even with a strongly nonlinear altitude bias. The integration of terrain referenced and wireless time-of-arrival measurements is described along with
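
    The measurement-update step of such a point-mass (grid) filter is straightforward to sketch; the example below uses a made-up terrain function and a single hypothetical elevation measurement, and omits the prediction step and the multiple-model estimation of nuisance parameters described above.

    import numpy as np

    # Point-mass filter measurement update on a horizontal-position grid.
    lat = np.linspace(40.0, 41.0, 201)
    lon = np.linspace(-105.0, -104.0, 201)
    LON, LAT = np.meshgrid(lon, lat)

    prior = np.full(LAT.shape, 1.0 / LAT.size)       # uniform prior over the grid

    def terrain_map(lat_deg, lon_deg):               # stand-in terrain model (hypothetical)
        return 1500 + 300 * np.sin(6 * lat_deg) * np.cos(6 * lon_deg)

    z, sigma = 1620.0, 15.0                          # measured terrain elevation and noise std
    likelihood = np.exp(-0.5 * ((z - terrain_map(LAT, LON)) / sigma) ** 2)

    posterior = prior * likelihood
    posterior /= posterior.sum()                     # renormalize the point-mass function
    i, j = np.unravel_index(posterior.argmax(), posterior.shape)
    print("MAP position:", LAT[i, j], LON[i, j])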

  3. Fully Bayesian inference for structural MRI: application to segmentation and statistical analysis of T2-hypointensities.

    PubMed

    Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark

    2013-01-01

    Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.

  4. Bayesian Tracking of Emerging Epidemics Using Ensemble Optimal Statistical Interpolation

    PubMed Central

    Cobb, Loren; Krishnamurthy, Ashok; Mandel, Jan; Beezley, Jonathan D.

    2014-01-01

    We present a preliminary test of the Ensemble Optimal Statistical Interpolation (EnOSI) method for the statistical tracking of an emerging epidemic, with a comparison to its popular relative for Bayesian data assimilation, the Ensemble Kalman Filter (EnKF). The spatial data for this test was generated by a spatial susceptible-infectious-removed (S-I-R) epidemic model of an airborne infectious disease. Both tracking methods in this test employed Poisson rather than Gaussian noise, so as to handle epidemic data more accurately. The EnOSI and EnKF tracking methods worked well on the main body of the simulated spatial epidemic, but the EnOSI was able to detect and track a distant secondary focus of infection that the EnKF missed entirely. PMID:25113590

  5. Bayesian tracking of emerging epidemics using ensemble optimal statistical interpolation.

    PubMed

    Cobb, Loren; Krishnamurthy, Ashok; Mandel, Jan; Beezley, Jonathan D

    2014-07-01

    We present a preliminary test of the Ensemble Optimal Statistical Interpolation (EnOSI) method for the statistical tracking of an emerging epidemic, with a comparison to its popular relative for Bayesian data assimilation, the Ensemble Kalman Filter (EnKF). The spatial data for this test was generated by a spatial susceptible-infectious-removed (S-I-R) epidemic model of an airborne infectious disease. Both tracking methods in this test employed Poisson rather than Gaussian noise, so as to handle epidemic data more accurately. The EnOSI and EnKF tracking methods worked well on the main body of the simulated spatial epidemic, but the EnOSI was able to detect and track a distant secondary focus of infection that the EnKF missed entirely.

  6. Defining statistical perceptions with an empirical Bayesian approach

    NASA Astrophysics Data System (ADS)

    Tajima, Satohiro

    2013-04-01

    Extracting statistical structures (including textures or contrasts) from a natural stimulus is a central challenge in both biological and engineering contexts. This study interprets the process of statistical recognition in terms of hyperparameter estimations and free-energy minimization procedures with an empirical Bayesian approach. This mathematical interpretation resulted in a framework for relating physiological insights in animal sensory systems to the functional properties of recognizing stimulus statistics. We applied the present theoretical framework to two typical models of natural images that are encoded by a population of simulated retinal neurons, and demonstrated that the resulting cognitive performances could be quantified with the Fisher information measure. The current enterprise yielded predictions about the properties of human texture perception, suggesting that the perceptual resolution of image statistics depends on visual field angles, internal noise, and neuronal information processing pathways, such as the magnocellular, parvocellular, and koniocellular systems. Furthermore, the two conceptually similar natural-image models were found to yield qualitatively different predictions, striking a note of warning against confusing the two models when describing a natural image.

  7. Teaching Bayesian Statistics in a Health Research Methodology Program

    ERIC Educational Resources Information Center

    Pullenayegum, Eleanor M.; Thabane, Lehana

    2009-01-01

    Despite the appeal of Bayesian methods in health research, they are not widely used. This is partly due to a lack of courses in Bayesian methods at an appropriate level for non-statisticians in health research. Teaching such a course can be challenging because most statisticians have been taught Bayesian methods using a mathematical approach, and…

  8. Bayesian inference on the sphere beyond statistical isotropy

    SciTech Connect

    Das, Santanu; Souradeep, Tarun; Wandelt, Benjamin D. E-mail: wandelt@iap.fr

    2015-10-01

    We present a general method for Bayesian inference of the underlying covariance structure of random fields on a sphere. We employ the Bipolar Spherical Harmonic (BipoSH) representation of general covariance structure on the sphere. We illustrate the efficacy of the method as a principled approach to assess violation of statistical isotropy (SI) in the sky maps of Cosmic Microwave Background (CMB) fluctuations. SI violation in observed CMB maps arises due to known physical effects such as Doppler boost and weak lensing; as-yet-unknown theoretical possibilities such as cosmic topology and subtle violations of the cosmological principle; as well as expected observational artefacts of scanning the sky with a non-circular beam, masking, foreground residuals, anisotropic noise, etc. We explicitly demonstrate the recovery of the input SI violation signals with their full statistics in simulated CMB maps. Our formalism easily adapts to exploring parametric physical models with non-SI covariance, as we illustrate for the inference of the parameters of a Doppler-boosted sky map. Our approach promises to provide a robust quantitative evaluation of the evidence for SI-violation-related anomalies in the CMB sky by estimating the BipoSH spectra along with their complete posterior.

  9. Online Detection and Modeling of Safety Boundaries for Aerospace Application Using Bayesian Statistics

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    The behavior of complex aerospace systems is governed by numerous parameters. For safety analysis it is important to understand how the system behaves with respect to these parameter values. In particular, understanding the boundaries between safe and unsafe regions is of major importance. In this paper, we describe a hierarchical Bayesian statistical modeling approach for the online detection and characterization of such boundaries. Our method for classification with active learning uses a particle filter-based model and a boundary-aware metric for best performance. From a library of candidate shapes incorporated with domain expert knowledge, the location and parameters of the boundaries are estimated using advanced Bayesian modeling techniques. The results of our boundary analysis are then provided in a form understandable by the domain expert. We illustrate our approach using a simulation model of a NASA neuro-adaptive flight control system, as well as a system for the detection of separation violations in the terminal airspace.

  10. Bayesian Statistical Inference in Psychology: Comment on Trafimow (2003)

    ERIC Educational Resources Information Center

    Lee, Michael D.; Wagenmakers, Eric-Jan

    2005-01-01

    D. Trafimow presented an analysis of null hypothesis significance testing (NHST) using Bayes's theorem. Among other points, he concluded that NHST is logically invalid, but that logically valid Bayesian analyses are often not possible. The latter conclusion reflects a fundamental misunderstanding of the nature of Bayesian inference. This view…

  11. Bayesian statistical ionospheric tomography improved by incorporating ionosonde measurements

    NASA Astrophysics Data System (ADS)

    Norberg, Johannes; Virtanen, Ilkka I.; Roininen, Lassi; Vierinen, Juha; Orispää, Mikko; Kauristie, Kirsti; Lehtinen, Markku S.

    2016-04-01

    We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters and use the Gaussian Markov random fields as a sparse matrix approximation for the numerical computations. This results in a computationally efficient tomographic inversion algorithm with clear probabilistic interpretation. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT ultra-high-frequency incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that in comparison to the alternative prior information sources, ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the altitude distribution of electron density. With an ionosonde at continuous disposal, the presented method enhances stand-alone near-real-time ionospheric tomography for the given conditions significantly.

  12. Ockham's razor and Bayesian analysis. [statistical theory for systems evaluation

    NASA Technical Reports Server (NTRS)

    Jefferys, William H.; Berger, James O.

    1992-01-01

    'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.
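
    One common way to make the argument concrete (a textbook Laplace-approximation sketch, not specific to this paper) is to write the evidence for a model $M$ with one adjustable parameter $\theta$ as

    $$p(D \mid M) = \int p(D \mid \theta, M)\, p(\theta \mid M)\, d\theta \;\approx\; p(D \mid \hat{\theta}, M)\,\frac{\sigma_{\mathrm{post}}}{\sigma_{\mathrm{prior}}},$$

    where $\hat{\theta}$ is the best-fit value and the approximation holds up to an order-one constant. The second factor, the Occam factor, is at most one and shrinks as the prior range of the adjustable parameter grows, so a model with fewer (or more tightly constrained) adjustable parameters automatically gains posterior probability whenever it fits the data comparably well.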

  13. VizieR Online Data Catalog: Bayesian statistics for massive stars (Mugnes+, 2015)

    NASA Astrophysics Data System (ADS)

    Mugnes, J.-M.; Robert, C.

    2016-06-01

    We use our spectral analysis method based on Bayesian statistics that simultaneously constrains four stellar parameters (effective temperature, surface gravity, projected rotational velocity, and microturbulence velocity) on a sample of B stars. (2 data files).

  14. Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach

    NASA Astrophysics Data System (ADS)

    Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.

    2010-12-01

    Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice such as the Antarctic Ice Sheet or the paleo ice sheets covering extensive parts of the Eurasian and Amerasian Arctic respectively, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalski Ice Tongue/East Antarctica), or feed large ice shelves (as is the case for e.g. the Siple Coast and the Ross Ice Shelf/West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, contributing eventually to sea level rise. Investigations of ice stream retreat mechanisms are, however, incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, thus calling for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves by using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective accounting for the fact that currently, ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existent in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic Ocean basin during former glacial periods. Modeled Arctic Ocean ice shelf configurations are compared with geological spatial

  15. TOWARDS A BAYESIAN PERSPECTIVE ON STATISTICAL DISCLOSURE LIMITATION

    EPA Science Inventory

    National statistical offices and other organizations collect data on individual subjects (persons, businesses, organizations), typically while assuring the subject that data pertaining to them will be held confidential. These data provide the raw material for statistical data pro...

  16. Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics

    NASA Astrophysics Data System (ADS)

    Prix, Reinhard; Krishnan, Badri

    2009-10-01

    We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.
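
    Schematically (our paraphrase of the construction described above), if $\Lambda(x;\mathcal{A})$ denotes the likelihood ratio of the signal-plus-noise to the noise-only hypothesis as a function of the amplitude parameters $\mathcal{A}$, the two statistics differ in how they eliminate $\mathcal{A}$:

    $$\mathcal{F}(x) = \max_{\mathcal{A}} \ln \Lambda(x;\mathcal{A}), \qquad B(x) = \int \Lambda(x;\mathcal{A})\, p(\mathcal{A})\, d\mathcal{A}.$$

    The F-statistic maximizes over the four amplitude parameters (which, as the abstract notes, amounts to a Bayes factor with an unphysical prior in amplitude space), while the B-statistic marginalizes them against a physically motivated isotropic prior on source orientation.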

  17. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty.

  18. Bayesian statistics for the calibration of the LISA Pathfinder experiment

    NASA Astrophysics Data System (ADS)

    Armano, M.; Audley, H.; Auger, G.; Binetruy, P.; Born, M.; Bortoluzzi, D.; Brandt, N.; Bursi, A.; Caleno, M.; Cavalleri, A.; Cesarini, A.; Cruise, M.; Danzmann, K.; Diepholz, I.; Dolesi, R.; Dunbar, N.; Ferraioli, L.; Ferroni, V.; Fitzsimons, E.; Freschi, M.; García Marirrodriga, C.; Gerndt, R.; Gesa, L.; Gibert, F.; Giardini, D.; Giusteri, R.; Grimani, C.; Harrison, I.; Heinzel, G.; Hewitson, M.; Hollington, D.; Hueller, M.; Huesler, J.; Inchauspé, H.; Jennrich, O.; Jetzer, P.; Johlander, B.; Karnesis, N.; Kaune, B.; Korsakova, N.; Killow, C.; Lloro, I.; Maarschalkerweerd, R.; Madden, S.; Mance, D.; Martin, V.; Martin-Porqueras, F.; Mateos, I.; McNamara, P.; Mendes, J.; Mitchell, E.; Moroni, A.; Nofrarias, M.; Paczkowski, S.; Perreur-Lloyd, M.; Pivato, P.; Plagnol, E.; Prat, P.; Ragnit, U.; Ramos-Castro, J.; Reiche, J.; Romera Perez, J. A.; Robertson, D.; Rozemeijer, H.; Russano, G.; Sarra, P.; Schleicher, A.; Slutsky, J.; Sopuerta, C. F.; Sumner, T.; Texier, D.; Thorpe, J.; Trenkel, C.; Tu, H. B.; Vitale, S.; Wanner, G.; Ward, H.; Waschke, S.; Wass, P.; Wealthy, D.; Wen, S.; Weber, W.; Wittchen, A.; Zanoni, C.; Ziegler, T.; Zweifel, P.

    2015-05-01

    The main goal of LISA Pathfinder (LPF) mission is to estimate the acceleration noise models of the overall LISA Technology Package (LTP) experiment on-board. This will be of crucial importance for the future space-based Gravitational-Wave (GW) detectors, like eLISA. Here, we present the Bayesian analysis framework to process the planned system identification experiments designed for that purpose. In particular, we focus on the analysis strategies to predict the accuracy of the parameters that describe the system in all degrees of freedom. The data sets were generated during the latest operational simulations organised by the data analysis team and this work is part of the LTPDA Matlab toolbox.

  19. Statistical detection of EEG synchrony using empirical Bayesian inference.

    PubMed

    Singh, Archana K; Asoh, Hideki; Takeda, Yuji; Phillips, Steven

    2015-01-01

    There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, high-dimensionality in PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power, because they fail to exploit the complex dependence structure between hypotheses that vary across spectral, temporal, and spatial dimensions. Previously, we showed that a hierarchical FDR and optimal discovery procedures could be effectively applied for PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) for PLV synchrony analysis to compute FDR as a posterior probability that an observed statistic belongs to a null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach for PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results showed that locFDR can effectively control false positives without compromising the power of PLV synchrony inference. Applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any.
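
    For reference, Efron's local false discovery rate treats the observed statistics $z$ as draws from a two-group mixture and reports, for each one, the posterior probability of the null:

    $$f(z) = \pi_0 f_0(z) + (1 - \pi_0) f_1(z), \qquad \mathrm{locfdr}(z) = \Pr(\mathrm{null} \mid z) = \frac{\pi_0 f_0(z)}{f(z)},$$

    where $\pi_0$ is the proportion of true nulls, $f_0$ the null density, and $f_1$ the non-null density; in this setting these quantities are estimated empirically from the ensemble of PLV statistics.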

  20. BAYESIAN STATISTICAL APPROACHES FOR THE EVALUATION OF CMAQ

    EPA Science Inventory

    This research focuses on the application of spatial statistical techniques for the evaluation of the Community Multiscale Air Quality (CMAQ) model. The upcoming release version of the CMAQ model was run for the calendar year 2001 and is in the process of being evaluated by EPA an...

  1. Making species salinity sensitivity distributions reflective of naturally occurring communities: using rapid testing and Bayesian statistics.

    PubMed

    Hickey, Graeme L; Kefford, Ben J; Dunlop, Jason E; Craig, Peter S

    2008-11-01

    Species sensitivity distributions (SSDs) may accurately predict the proportion of species in a community that are at hazard from environmental contaminants only if they contain sensitivity data from a large sample of species representative of the mix of species present in the locality or habitat of interest. With current widely accepted ecotoxicological methods, however, this rarely occurs. Two recent suggestions address this problem. First, use rapid toxicity tests, which are less rigorous than conventional tests, to approximate experimentally the sensitivity of many species quickly and in approximate proportion to naturally occurring communities. Second, use expert judgements regarding the sensitivity of higher taxonomic groups (e.g., orders) and Bayesian statistical methods to construct SSDs that reflect the richness (or perceived importance) of these groups. Here, we describe and analyze several models from a Bayesian perspective to construct SSDs from data derived using rapid toxicity testing, combining both rapid test data and expert opinion. We compare these new models with two frequentist approaches, Kaplan-Meier and a log-normal distribution, using a large data set on the salinity sensitivity of freshwater macroinvertebrates from Victoria (Australia). The frequentist log-normal analysis produced a SSD that overestimated the hazard to species relative to the Kaplan-Meier and Bayesian analyses. Of the Bayesian analyses investigated, the introduction of a weighting factor to account for the richness (or importance) of taxonomic groups influenced the calculated hazard to species. Furthermore, Bayesian methods allowed us to determine credible intervals representing SSD uncertainty. We recommend that rapid tests, expert judgements, and novel Bayesian statistical methods be used so that SSDs reflect communities of organisms found in nature.

  2. Structural mapping in statistical word problems: A relational reasoning approach to Bayesian inference.

    PubMed

    Johnson, Eric D; Tubau, Elisabet

    2016-09-27

    Presenting natural frequencies facilitates Bayesian inferences relative to using percentages. Nevertheless, many people, including highly educated and skilled reasoners, still fail to provide Bayesian responses to these computationally simple problems. We show that the complexity of relational reasoning (e.g., the structural mapping between the presented and requested relations) can help explain the remaining difficulties. With a non-Bayesian inference that required identical arithmetic but afforded a more direct structural mapping, performance was universally high. Furthermore, reducing the relational demands of the task through questions that directed reasoners to use the presented statistics, as compared with questions that prompted the representation of a second, similar sample, also significantly improved reasoning. Distinct error patterns were also observed between these presented- and similar-sample scenarios, which suggested differences in relational-reasoning strategies. On the other hand, while higher numeracy was associated with better Bayesian reasoning, higher-numerate reasoners were not immune to the relational complexity of the task. Together, these findings validate the relational-reasoning view of Bayesian problem solving and highlight the importance of considering not only the presented task structure, but also the complexity of the structural alignment between the presented and requested relations.
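
    A standard worked example of the facilitation effect (hypothetical numbers, not taken from this study): suppose 1% of patients have a disease, the test detects 80% of true cases, and 10% of healthy patients nonetheless test positive. In natural frequencies, out of 1,000 patients, 10 have the disease and 8 of them test positive, while 99 of the 990 healthy patients also test positive, so the probability of disease given a positive test is 8/(8 + 99) ≈ 7.5%. The same answer in percentage format requires explicitly applying Bayes' theorem, $P(D \mid +) = \frac{0.80 \times 0.01}{0.80 \times 0.01 + 0.10 \times 0.99} \approx 0.075$, a computation at which many reasoners fail.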

  3. Bayesian Bigot? Statistical Discrimination, Stereotypes, and Employer Decision Making.

    PubMed

    Pager, Devah; Karafin, Diana

    2009-01-01

    Much of the debate over the underlying causes of discrimination centers on the rationality of employer decision making. Economic models of statistical discrimination emphasize the cognitive utility of group estimates as a means of dealing with the problems of uncertainty. Sociological and social-psychological models, by contrast, question the accuracy of group-level attributions. Although mean differences may exist between groups on productivity-related characteristics, these differences are often inflated in their application, leading to much larger differences in individual evaluations than would be warranted by actual group-level trait distributions. In this study, the authors examine the nature of employer attitudes about black and white workers and the extent to which these views are calibrated against their direct experiences with workers from each group. They use data from fifty-five in-depth interviews with hiring managers to explore employers' group-level attributions and their direct observations to develop a model of attitude formation and employer learning.

  4. Bayesian Bigot? Statistical Discrimination, Stereotypes, and Employer Decision Making

    PubMed Central

    Pager, Devah; Karafin, Diana

    2010-01-01

    Much of the debate over the underlying causes of discrimination centers on the rationality of employer decision making. Economic models of statistical discrimination emphasize the cognitive utility of group estimates as a means of dealing with the problems of uncertainty. Sociological and social-psychological models, by contrast, question the accuracy of group-level attributions. Although mean differences may exist between groups on productivity-related characteristics, these differences are often inflated in their application, leading to much larger differences in individual evaluations than would be warranted by actual group-level trait distributions. In this study, the authors examine the nature of employer attitudes about black and white workers and the extent to which these views are calibrated against their direct experiences with workers from each group. They use data from fifty-five in-depth interviews with hiring managers to explore employers’ group-level attributions and their direct observations to develop a model of attitude formation and employer learning. PMID:20686633

  5. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.

    PubMed

    Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J

    2015-07-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf.

  6. Bayesian Statistical Analysis Applied to NAA Data for Neutron Flux Spectrum Determination

    NASA Astrophysics Data System (ADS)

    Chiesa, D.; Previtali, E.; Sisti, M.

    2014-04-01

    In this paper, we present a statistical method, based on Bayesian statistics, to evaluate the neutron flux spectrum from the activation data of different isotopes. The experimental data were acquired during a neutron activation analysis (NAA) experiment [A. Borio di Tigliole et al., Absolute flux measurement by NAA at the Pavia University TRIGA Mark II reactor facilities, ENC 2012 - Transactions Research Reactors, ISBN 978-92-95064-14-0, 22 (2012)] performed at the TRIGA Mark II reactor of Pavia University (Italy). In order to evaluate the neutron flux spectrum, subdivided in energy groups, we must solve a system of linear equations containing the grouped cross sections and the activation rate data. We solve this problem with Bayesian statistical analysis, including the uncertainties of the coefficients and the a priori information about the neutron flux. A program for the analysis of Bayesian hierarchical models, based on Markov Chain Monte Carlo (MCMC) simulations, is used to define the problem statistical model and solve it. The energy group fluxes and their uncertainties are then determined with great accuracy and the correlations between the groups are analyzed. Finally, the dependence of the results on the prior distribution choice and on the group cross section data is investigated to confirm the reliability of the analysis.

  7. A joint inter- and intrascale statistical model for Bayesian wavelet based image denoising.

    PubMed

    Pizurica, Aleksandra; Philips, Wilfried; Lemahieu, Ignace; Acheroy, Marc

    2002-01-01

    This paper presents a new wavelet-based image denoising method, which extends a "geometrical" Bayesian framework. The new method combines three criteria for distinguishing supposedly useful coefficients from noise: coefficient magnitudes, their evolution across scales and spatial clustering of large coefficients near image edges. These three criteria are combined in a Bayesian framework. The spatial clustering properties are expressed in a prior model. The statistical properties concerning coefficient magnitudes and their evolution across scales are expressed in a joint conditional model. The three main novelties with respect to related approaches are (1) the interscale-ratios of wavelet coefficients are statistically characterized and different local criteria for distinguishing useful coefficients from noise are evaluated, (2) a joint conditional model is introduced, and (3) a novel anisotropic Markov random field prior model is proposed. The results demonstrate an improved denoising performance over related earlier techniques.

  8. A new model test in high energy physics in frequentist and Bayesian statistical formalisms

    NASA Astrophysics Data System (ADS)

    Kamenshchikov, A.

    2017-01-01

    Testing a new physical model against observed experimental data is a typical problem for modern experiments in high energy physics (HEP). A solution of the problem may be provided by two alternative statistical formalisms, namely frequentist and Bayesian, both of which are widespread in contemporary HEP searches. A characteristic experimental situation is modeled from general considerations, and both approaches are utilized in order to test a new model. The results are juxtaposed, demonstrating their consistency in this work. The effect of a systematic uncertainty treatment in the statistical analysis is also considered.

  9. Boosting Bayesian parameter inference of stochastic differential equation models with methods from statistical physics

    NASA Astrophysics Data System (ADS)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir, at the scale of observation. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail for models of this kind, because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with a growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem to the problem of simulating the dynamics of a statistical mechanics system and give us access to most sophisticated methods

  10. UNITY: Confronting Supernova Cosmology's Statistical and Systematic Uncertainties in a Unified Bayesian Framework

    NASA Astrophysics Data System (ADS)

    Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The

    2015-11-01

    While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.

  11. Dissolution curve comparisons through the F(2) parameter, a Bayesian extension of the f(2) statistic.

    PubMed

    Novick, Steven; Shen, Yan; Yang, Harry; Peterson, John; LeBlond, Dave; Altan, Stan

    2015-01-01

    Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is for justifying a biowaiver for post-approval changes, which requires establishing equivalence between the new and old products. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to or in many cases superior to the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
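    For reference, the conventional f2 similarity factor that the F2 parameter extends is computed from the mean squared difference between two dissolution profiles measured at common time points. A minimal Python implementation is sketched below; the example profiles are hypothetical.

```python
import numpy as np

def f2_statistic(reference, test):
    """Classical f2 similarity factor for two dissolution profiles
    measured at the same time points (illustrative, not the Bayesian F2)."""
    reference, test = np.asarray(reference, float), np.asarray(test, float)
    msd = np.mean((reference - test) ** 2)          # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Mean percent dissolved at 15, 30, 45, 60 min (hypothetical profiles).
ref = [35, 60, 80, 92]
new = [30, 55, 77, 90]
print(f2_statistic(ref, new))   # f2 >= 50 is the usual similarity criterion
```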

  12. Spatial heterogeneity and risk factors for stunting among children under age five in Ethiopia: A Bayesian geo-statistical model

    PubMed Central

    Hagos, Seifu; Hailemariam, Damen; WoldeHanna, Tasew; Lindtjørn, Bernt

    2017-01-01

    Background Understanding the spatial distribution of stunting and the underlying factors operating at meso-scale is of paramount importance for designing and implementing interventions. Yet little is known about the spatial distribution of stunting, and some discrepancies are documented on the relative importance of reported risk factors. Therefore, the present study aims at exploring the spatial distribution of stunting at meso- (district) scale, and evaluates the effect of spatial dependency on the identification of risk factors and their relative contribution to the occurrence of stunting and severe stunting in a rural area of Ethiopia. Methods A community-based cross-sectional study was conducted to measure the occurrence of stunting and severe stunting among children aged 0–59 months. Additionally, we collected relevant information on anthropometric measures, dietary habits, and parent- and child-related demographic and socio-economic status. Latitude and longitude of surveyed households were also recorded. Anselin's local Moran's I was calculated to investigate the spatial variation of stunting prevalence and identify potential local pockets (hotspots) of high prevalence. Finally, we employed a Bayesian geo-statistical model, which accounted for the spatial dependency structure in the data, to identify potential risk factors for stunting in the study area. Results Overall, the prevalence of stunting and severe stunting in the district was 43.7% [95% CI: 40.9, 46.4] and 21.3% [95% CI: 19.5, 23.3], respectively. We identified statistically significant clusters of high prevalence of stunting (hotspots) in the eastern part of the district and clusters of low prevalence (cold spots) in the western part. We found that including the spatial structure of the data in the Bayesian model improved the fit of the stunting model. The Bayesian geo-statistical model indicated that the risk of stunting increased as the child’s age increased (OR 4.74; 95% Bayesian credible

  13. A Bayesian Formulation of Behavioral Control

    ERIC Educational Resources Information Center

    Huys, Quentin J. M.; Dayan, Peter

    2009-01-01

    Helplessness, a belief that the world is not subject to behavioral control, has long been central to our understanding of depression, and has influenced cognitive theories, animal models and behavioral treatments. However, despite its importance, there is no fully accepted definition of helplessness or behavioral control in psychology or…

  14. Predictive data-derived Bayesian statistic-transport model and simulator of sunken oil mass

    NASA Astrophysics Data System (ADS)

    Echavarria Gregory, Maria Angelica

    Sunken oil is difficult to locate because remote sensing techniques cannot as yet provide views of sunken oil over large areas. Moreover, the oil may re-suspend and sink with changes in salinity, sediment load, and temperature, making deterministic fate models difficult to deploy and calibrate when even the presence of sunken oil is difficult to assess. For these reasons, together with the expense of field data collection, there is a need for a statistical technique integrating limited data collection with stochastic transport modeling. Predictive Bayesian modeling techniques have been developed and demonstrated for exploiting limited information for decision support in many other applications. These techniques are brought here to a multi-modal Lagrangian modeling framework, representing a near-real-time approach to locating and tracking sunken oil driven by intrinsic physical properties of field data collected following a spill, after oil has begun collecting on a relatively flat bay bottom. Methods include (1) development of the conceptual predictive Bayesian model and multi-modal Gaussian computational approach based on theory and literature review; (2) development of an object-oriented programming and combinatorial structure capable of managing data, integration and computation over an uncertain and highly dimensional parameter space; (3) creating a new bi-dimensional approach of the method of images to account for curved shoreline boundaries; (4) confirmation of model capability for locating sunken oil patches using available (partial) real field data and capability for temporal projections near curved boundaries using simulated field data; and (5) development of a stand-alone open-source computer application with graphical user interface capable of calibrating instantaneous oil spill scenarios, obtaining sets of maps of relative probability profiles at different prediction times and user-selected geographic areas and resolution, and capable of performing post

  15. Statistical Inference at Work: Statistical Process Control as an Example

    ERIC Educational Resources Information Center

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace, this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
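    As a concrete example of the workplace inference discussed here, the sketch below computes the center line and 3-sigma limits of a Shewhart individuals control chart, with the process standard deviation estimated from the average moving range; the measurement values are hypothetical and the chart type is chosen only for illustration.

```python
import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals (I) chart: center line and 3-sigma limits,
    with sigma estimated as MR-bar / d2 (d2 = 1.128 for moving ranges of 2)."""
    x = np.asarray(x, float)
    center = x.mean()
    mr_bar = np.mean(np.abs(np.diff(x)))    # average moving range
    sigma_hat = mr_bar / 1.128
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

measurements = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.6, 10.1]
lcl, cl, ucl = individuals_chart_limits(measurements)
out_of_control = [v for v in measurements if not lcl <= v <= ucl]
```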

  16. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to extract weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference (noise) signal and the original signal and removing components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  17. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to extract weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference (noise) signal and the original signal and removing components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.

  18. Assessing compositional variability through graphical analysis and Bayesian statistical approaches: case studies on transgenic crops.

    PubMed

    Harrigan, George G; Harrison, Jay M

    2012-01-01

    New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation, we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA), with its emphasis on visualization and graphics-based approaches, and ii) Bayesian statistical methodology, which provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.

  19. Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models

    PubMed Central

    Burr, Tom

    2013-01-01

    Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example. PMID:24288668
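    The dependence of ABC on the chosen summary statistic can be seen even in a minimal rejection sampler. The hedged Python sketch below keeps the prior draws whose simulated summary lies closest to the observed summary; the Poisson toy model, uniform prior, and acceptance quantile are assumptions for illustration, not the mitochondrial DNA model used in the paper.

```python
import numpy as np

def abc_rejection(observed, summary, simulate, prior_sampler,
                  n_draws=50_000, quantile=0.01, seed=0):
    """Minimal ABC rejection sampler: keep the prior draws whose simulated
    summary statistic lies closest to the observed summary."""
    rng = np.random.default_rng(seed)
    s_obs = summary(observed)
    thetas = np.array([prior_sampler(rng) for _ in range(n_draws)])
    dists = np.array([abs(summary(simulate(t, rng)) - s_obs) for t in thetas])
    keep = dists <= np.quantile(dists, quantile)
    return thetas[keep]

# Toy example: infer a Poisson rate using the sample mean as the summary.
true_rate = 4.0
data = np.random.default_rng(1).poisson(true_rate, size=50)
posterior = abc_rejection(
    observed=data,
    summary=np.mean,
    simulate=lambda lam, rng: rng.poisson(lam, size=50),
    prior_sampler=lambda rng: rng.uniform(0.0, 10.0))
print(posterior.mean(), posterior.std())
```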

  20. Statistical analysis using the Bayesian nonparametric method for irradiation embrittlement of reactor pressure vessels

    NASA Astrophysics Data System (ADS)

    Takamizawa, Hisashi; Itoh, Hiroto; Nishiyama, Yutaka

    2016-10-01

    In order to understand neutron irradiation embrittlement in high fluence regions, statistical analysis using the Bayesian nonparametric (BNP) method was performed for the Japanese surveillance and material test reactor irradiation database. The BNP method is essentially expressed as an infinite summation of normal distributions, with input data being subdivided into clusters with identical statistical parameters, such as mean and standard deviation, for each cluster to estimate shifts in ductile-to-brittle transition temperature (DBTT). The clusters typically depend on chemical compositions, irradiation conditions, and the irradiation embrittlement. Specific variables contributing to the irradiation embrittlement include the content of Cu, Ni, P, Si, and Mn in the pressure vessel steels, neutron flux, neutron fluence, and irradiation temperatures. It was found that the measured shifts of DBTT correlated well with the calculated ones. Data associated with the same materials were subdivided into the same clusters even if neutron fluences were increased.
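    A common off-the-shelf approximation to this kind of Bayesian nonparametric clustering is a truncated Dirichlet-process mixture of normals, available in scikit-learn as BayesianGaussianMixture. The sketch below uses invented stand-in data rather than the surveillance database and only illustrates how such a model selects an effective number of clusters on its own.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical stand-in data: DBTT shifts (in K) from two material groups.
rng = np.random.default_rng(0)
shifts = np.concatenate([rng.normal(20, 5, 80), rng.normal(60, 8, 40)]).reshape(-1, 1)

# Truncated Dirichlet-process mixture of normals: the model decides how many
# of the (up to 10) components to actually use.
dpmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,
    max_iter=500,
    random_state=0,
).fit(shifts)

labels = dpmm.predict(shifts)
print("effective clusters:", np.unique(labels).size)
```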

  1. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described
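    Of the techniques listed above, parallel tempering is particularly easy to sketch. The Python example below runs Metropolis chains at several temperatures on a deliberately bimodal target and occasionally swaps neighbouring replicas; the target density, temperature ladder, and step size are arbitrary illustrative choices.

```python
import numpy as np

def parallel_tempering(log_target, temps, n_steps=20_000, step=0.5, seed=0):
    """Minimal replica-exchange (parallel tempering) Metropolis sampler.
    One chain per temperature; neighbouring chains occasionally swap states."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(temps))              # current state of each replica
    samples = []
    for _ in range(n_steps):
        # Metropolis update within each replica at its own temperature.
        for k, T in enumerate(temps):
            prop = x[k] + rng.normal(0, step)
            if np.log(rng.random()) < (log_target(prop) - log_target(x[k])) / T:
                x[k] = prop
        # Propose a swap between a random pair of neighbouring temperatures.
        k = rng.integers(len(temps) - 1)
        log_alpha = (1 / temps[k] - 1 / temps[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
        if np.log(rng.random()) < log_alpha:
            x[k], x[k + 1] = x[k + 1], x[k]
        samples.append(x[0])              # keep only the T = 1 chain
    return np.array(samples)

# Bimodal target with well-separated modes: hard for plain Metropolis.
log_p = lambda z: np.logaddexp(-0.5 * (z + 4) ** 2, -0.5 * (z - 4) ** 2)
draws = parallel_tempering(log_p, temps=[1.0, 2.0, 4.0, 8.0])
```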

  2. Evaluation of Oceanic Transport Statistics By Use of Transient Tracers and Bayesian Methods

    NASA Astrophysics Data System (ADS)

    Trossman, D. S.; Thompson, L.; Mecking, S.; Bryan, F.; Peacock, S.

    2013-12-01

    Key variables that quantify the time scales over which atmospheric signals penetrate into the oceanic interior and their uncertainties are computed using Bayesian methods and transient tracers from both models and observations. First, the mean residence times, subduction rates, and formation rates of Subtropical Mode Water (STMW) and Subpolar Mode Water (SPMW) in the North Atlantic and Subantarctic Mode Water (SAMW) in the Southern Ocean are estimated by combining a model and observations of chlorofluorocarbon-11 (CFC-11) via Bayesian Model Averaging (BMA), a statistical technique that weights model estimates according to how closely they agree with observations. Second, a Bayesian method is presented to find two oceanic transport parameters associated with the age distribution of ocean waters, the transit-time distribution (TTD), by combining an eddying global ocean model's estimate of the TTD with hydrographic observations of CFC-11, temperature, and salinity. Uncertainties associated with objectively mapping irregularly spaced bottle data are quantified by making use of a thin-plate spline and then propagated via the two Bayesian techniques. It is found that the subduction of STMW, SPMW, and SAMW is mostly an advective process, but up to about one-third of STMW subduction is likely due to non-advective processes. Also, while the formation of STMW is mostly due to subduction, the formation of SPMW is mostly due to other processes. About half of the formation of SAMW is due to subduction and half is due to other processes. A combination of air-sea flux, acting on relatively short time scales, and turbulent mixing, acting on a wide range of time scales, is likely the dominant SPMW erosion mechanism. Air-sea flux is likely responsible for most STMW erosion, and turbulent mixing is likely responsible for most SAMW erosion. Two oceanic transport parameters, the mean age of a water parcel and the half-variance associated with the TTD, estimated using the model's tracers as

  3. Integrating quantitative PCR and Bayesian statistics in quantifying human adenoviruses in small volumes of source water.

    PubMed

    Wu, Jianyong; Gronewold, Andrew D; Rodriguez, Roberto A; Stewart, Jill R; Sobsey, Mark D

    2014-02-01

    Rapid quantification of viral pathogens in drinking and recreational water can help reduce waterborne disease risks. For this purpose, samples in small volume (e.g. 1L) are favored because of the convenience of collection, transportation and processing. However, the results of viral analysis are often subject to uncertainty. To overcome this limitation, we propose an approach that integrates Bayesian statistics, efficient concentration methods, and quantitative PCR (qPCR) to quantify viral pathogens in water. Using this approach, we quantified human adenoviruses (HAdVs) in eighteen samples of source water collected from six drinking water treatment plants. HAdVs were found in seven samples. In the other eleven samples, HAdVs were not detected by qPCR, but might have existed based on Bayesian inference. Our integrated approach that quantifies uncertainty provides a better understanding than conventional assessments of potential risks to public health, particularly in cases when pathogens may present a threat but cannot be detected by traditional methods.
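    A very reduced version of the idea, combining detects and non-detects in a single posterior, can be written as a conjugate gamma-Poisson update for the concentration. This is only a hedged toy sketch, not the authors' model; the prior, reaction volume, and counts below are invented.

```python
from scipy import stats

# Toy sketch: Gamma prior on the virus concentration lambda (genome copies
# per litre), updated with Poisson counts from replicate reactions, each
# analysing `volume` litres of source water. Non-detects (count = 0) still
# contribute information through the likelihood.
prior_shape, prior_rate = 1.0, 0.1      # weakly informative Gamma(1, 0.1) prior
volume = 0.001                          # litres of sample analysed per reaction
counts = [0, 0, 2]                      # copies detected in three reactions

post_shape = prior_shape + sum(counts)
post_rate = prior_rate + volume * len(counts)
posterior = stats.gamma(a=post_shape, scale=1.0 / post_rate)

print("posterior mean copies/L:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```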

  4. A Bayesian statistical model for hybrid metrology to improve measurement accuracy

    NASA Astrophysics Data System (ADS)

    Silver, R. M.; Zhang, N. F.; Barnes, B. M.; Qin, J.; Zhou, H.; Dixson, R.

    2011-05-01

    We present a method to combine measurements from different techniques that reduces uncertainties and can improve measurement throughput. The approach directly integrates the measurement analysis of multiple techniques that can include different configurations or platforms. This approach has immediate application when performing model-based optical critical dimension (OCD) measurements. When modeling optical measurements, a library of curves is assembled through the simulation of a multi-dimensional parameter space. Parametric correlation and measurement noise lead to measurement uncertainty in the fitting process with fundamental limitations resulting from the parametric correlations. A strategy to decouple parametric correlation and reduce measurement uncertainties is described. We develop the rigorous underlying Bayesian statistical model and apply this methodology to OCD metrology. We then introduce an approach to damp the regression process to achieve more stable and rapid regression fitting. These methods that use a priori information are shown to reduce measurement uncertainty and improve throughput while also providing an improved foundation for comprehensive reference metrology.

  5. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling

    PubMed Central

    Onisko, Agnieszka; Druzdzel, Marek J.; Austin, R. Marshall

    2016-01-01

    Background: Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. Aim: The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. Materials and Methods: This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approaches, such as the Kaplan–Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. Results: The main outcomes of our comparison are cervical cancer risk assessments produced by these approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Conclusion: Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome to obtain with classical statistical approaches. PMID:28163973

  6. Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Mbaya, Timmy; Menghoel, Ole

    2011-01-01

    Modern aircraft, both piloted fly-by-wire commercial aircraft and UAVs, increasingly depend on highly complex safety-critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have occurred due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We focus on the approach to developing reliable and robust health models for the combined software and sensor systems.

  7. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of

  8. Incorporating Functional Annotations for Fine-Mapping Causal Variants in a Bayesian Framework Using Summary Statistics.

    PubMed

    Chen, Wenan; McDonnell, Shannon K; Thibodeau, Stephen N; Tillmans, Lori S; Schaid, Daniel J

    2016-11-01

    Functional annotations have been shown to improve both the discovery power and fine-mapping accuracy in genome-wide association studies. However, the optimal strategy to incorporate the large number of existing annotations is still not clear. In this study, we propose a Bayesian framework to incorporate functional annotations in a systematic manner. We compute the maximum a posteriori solution and use cross validation to find the optimal penalty parameters. By extending our previous fine-mapping method CAVIARBF into this framework, we require only summary statistics as input. We also derived an exact calculation of Bayes factors using summary statistics for quantitative traits, which is necessary when a large proportion of trait variance is explained by the variants of interest, such as in fine mapping expression quantitative trait loci (eQTL). We compared the proposed method with PAINTOR using different strategies to combine annotations. Simulation results show that the proposed method achieves the best accuracy in identifying causal variants among the different strategies and methods compared. We also find that for annotations with moderate effects from a large annotation pool, screening annotations individually and then combining the top annotations can produce overly optimistic results. We applied these methods on two real data sets: a meta-analysis result of lipid traits and a cis-eQTL study of normal prostate tissues. For the eQTL data, incorporating annotations significantly increased the number of potential causal variants with high probabilities.

  9. Introduction to Bayesian statistical approaches to compositional analyses of transgenic crops 1. Model validation and setting the stage.

    PubMed

    Harrison, Jay M; Breeze, Matthew L; Harrigan, George G

    2011-08-01

    Statistical comparisons of compositional data generated on genetically modified (GM) crops and their near-isogenic conventional (non-GM) counterparts typically rely on classical significance testing. This manuscript presents an introduction to Bayesian methods for compositional analysis along with recommendations for model validation. The approach is illustrated using protein and fat data from two herbicide tolerant GM soybeans (MON87708 and MON87708×MON89788) and a conventional comparator grown in the US in 2008 and 2009. Guidelines recommended by the US Food and Drug Administration (FDA) in conducting Bayesian analyses of clinical studies on medical devices were followed. This study is the first Bayesian approach to GM and non-GM compositional comparisons. The evaluation presented here supports a conclusion that a Bayesian approach to analyzing compositional data can provide meaningful and interpretable results. We further describe the importance of method validation and approaches to model checking if Bayesian approaches to compositional data analysis are to be considered viable by scientists involved in GM research and regulation.

  10. Rating locomotive crew diesel emission exposure profiles using statistics and Bayesian Decision Analysis.

    PubMed

    Hewett, Paul; Bullock, William H

    2014-01-01

    For more than 20 years CSX Transportation (CSXT) has collected exposure measurements from locomotive engineers and conductors who are potentially exposed to diesel emissions. The database included measurements for elemental and total carbon, polycyclic aromatic hydrocarbons, aromatics, aldehydes, carbon monoxide, and nitrogen dioxide. This database was statistically analyzed and summarized, and the resulting statistics and exposure profiles were compared to relevant occupational exposure limits (OELs) using both parametric and non-parametric descriptive and compliance statistics. Exposure ratings, using the American Industrial Hygiene Association (AIHA) exposure categorization scheme, were determined using both the compliance statistics and Bayesian Decision Analysis (BDA). The statistical analysis of the elemental carbon data (a marker for diesel particulate) strongly suggests that the majority of levels in the cabs of the lead locomotives (n = 156) were less than the California guideline of 0.020 mg/m³. The sample 95th percentile was roughly half the guideline, resulting in an AIHA exposure rating of category 2/3 (determined using BDA). The elemental carbon (EC) levels in the trailing locomotives tended to be greater than those in the lead locomotive; however, locomotive crews rarely ride in the trailing locomotive. Lead locomotive EC levels were similar to those reported by other investigators studying locomotive crew exposures and to levels measured in urban areas. Lastly, both the EC sample mean and 95% UCL were less than the Environmental Protection Agency (EPA) reference concentration of 0.005 mg/m³. With the exception of nitrogen dioxide, the overwhelming majority of the measurements for total carbon, polycyclic aromatic hydrocarbons, aromatics, aldehydes, and combustion gases in the cabs of CSXT locomotives were either non-detects or considerably less than the working OELs for the years represented in the database. When compared to the previous American

  11. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters. I. Statistical and Computational Methods

    NASA Astrophysics Data System (ADS)

    Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.

    2016-07-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).

  12. Bayesian statistical modeling of disinfection byproduct (DBP) bromine incorporation in the ICR database.

    PubMed

    Francis, Royce A; Vanbriesen, Jeanne M; Small, Mitchell J

    2010-02-15

    Statistical models are developed for bromine incorporation in the trihalomethane (THM), trihaloacetic acids (THAA), dihaloacetic acid (DHAA), and dihaloacetonitrile (DHAN) subclasses of disinfection byproducts (DBPs) using distribution system samples from plants applying only free chlorine as a primary or residual disinfectant in the Information Collection Rule (ICR) database. The objective of this study is to characterize the effect of water quality conditions before, during, and post-treatment on distribution system bromine incorporation into DBP mixtures. Bayesian Markov Chain Monte Carlo (MCMC) methods are used to model individual DBP concentrations and estimate the coefficients of the linear models used to predict the bromine incorporation fraction for distribution system DBP mixtures in each of the four priority DBP classes. The bromine incorporation models achieve good agreement with the data. The most important predictors of bromine incorporation fraction across DBP classes are alkalinity, specific UV absorption (SUVA), and the bromide to total organic carbon ratio (Br:TOC) at the first point of chlorine addition. Free chlorine residual in the distribution system, distribution system residence time, distribution system pH, turbidity, and temperature only slightly influence bromine incorporation. The bromide to applied chlorine (Br:Cl) ratio is not a significant predictor of the bromine incorporation fraction (BIF) in any of the four classes studied. These results indicate that removal of natural organic matter and the location of chlorine addition are important treatment decisions that have substantial implications for bromine incorporation into disinfection byproduct in drinking waters.

  13. Viscous magnetization, archaeology and Bayesian statistics of small samples from Israel and England

    NASA Astrophysics Data System (ADS)

    Borradaile, Graham J.

    2003-05-01

    Certain limestones remagnetize viscously and noticeably over archaeological time-intervals, after their reorientation into monuments. The laboratory demagnetization temperatures (TUB) for the VRM increase with installation age, at rates of ~0.07 °C/year for Israeli chalk and ~0.1 °C/year for English chalk. The empirical relationship may be used to date enigmatic buildings or geomorphological features (e.g., land slips). Such correlations also give some insight into the viscous remagnetization process over time intervals τ ≤ 4000 years, which are unobtainable in laboratory studies. The TUB-age relationship for the viscous remagnetization appears to follow a power law, linearized as log10(τ) ≈ b log10(TUB). Different pelagic limestones follow different curves and, whereas conventional regression estimates the power-law exponent b, the small sample size recommends a Bayesian statistical approach. From sites constructed with pelagic chalk from eastern England, precise prior information (b = 0.761) is compared with less precise information for much more ancient sites in northern Israel (b = 0.873). The collective posterior correlation shows a generalized power-law exponent b = 0.849. That regression explains 84.9% of the collective variance in age (r2 = 0.849). Of course, site-specific calibration is required for archaeological age determinations.

  14. Bayesian Atmospheric Radiative Transfer (BART): Model, Statistics Driver, and Application to HD 209458b

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Blecic, Jasmina; Stemm, Madison M.; Lust, Nate B.; Foster, Andrew S.; Rojo, Patricio M.; Loredo, Thomas J.

    2014-11-01

    Multi-wavelength secondary-eclipse and transit depths probe the thermo-chemical properties of exoplanets. In recent years, several research groups have developed retrieval codes to analyze the existing data and study the prospects of future facilities. However, the scientific community has limited access to these packages. Here we premiere the open-source Bayesian Atmospheric Radiative Transfer (BART) code. We discuss the key aspects of the radiative-transfer algorithm and the statistical package. The radiation code includes line databases for all HITRAN molecules, high-temperature H2O, TiO, and VO, and includes a preprocessor for adding additional line databases without recompiling the radiation code. Collision-induced absorption lines are available for H2-H2 and H2-He. The parameterized thermal and molecular abundance profiles can be modified arbitrarily without recompilation. The generated spectra are integrated over arbitrary bandpasses for comparison to data. BART's statistical package, Multi-core Markov-chain Monte Carlo (MC3), is a general-purpose MCMC module. MC3 implements the Differential-evolution Markov-chain Monte Carlo algorithm (ter Braak 2006, 2009). MC3 converges 20-400 times faster than the usual Metropolis-Hastings MCMC algorithm, and in addition uses the Message Passing Interface (MPI) to parallelize the MCMC chains. We apply the BART retrieval code to the HD 209458b data set to estimate the planet's temperature profile and molecular abundances. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.

  15. Deriving boundary layer mixing height from LIDAR measurements using a Bayesian statistical inference method.

    NASA Astrophysics Data System (ADS)

    Riccio, A.; Caporaso, L.; di Giuseppe, F.; Bonafè, G.; Gobbi, G. P.; Angelini, A.

    2010-09-01

    The availability of low-cost commercial LIDARs/ceilometers provides the opportunity to employ these active instruments widely for continuous observation of planetary boundary layer (PBL) evolution, which can serve both air-quality model initialization and the evaluation of numerical weather prediction systems. Their range-corrected signal is proportional to the aerosol backscatter cross section and therefore, in clear conditions, allows the PBL evolution to be tracked using aerosols as markers. The LIDAR signal is then processed to retrieve an estimate of the PBL mixing height. A standard approach uses the so-called wavelet covariance transform (WCT) method, which consists in the convolution of the vertical signal with a step function and is able to detect local discontinuities in the backscatter profile. There are, nevertheless, several drawbacks to consider when the WCT method is employed. Since water droplets may have a very large extinction and backscattering cross section, the presence of rain, clouds or fog decreases the returning signal, causing interference and uncertainty in the mixing-height retrievals. Moreover, if vertical mixing is scarce, aerosols remain suspended in a persistent residual layer that is detected even though it is not significantly connected to the actual mixing height. Finally, multiple layers are also a source of uncertainty. In this work we present a novel methodology to infer the height of planetary boundary layers (PBLs) from LIDAR data which corrects the unrealistic fluctuations introduced by the WCT method. It assimilates WCT PBL-height estimates into a Bayesian statistical inference procedure that includes a physical model for the boundary layer (a bulk model) as the first-guess hypothesis. A hierarchical Bayesian Markov chain Monte Carlo (MCMC) approach is then used to explore the posterior state space and calculate the data likelihood of previously assigned
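    The WCT itself is simple to implement: the backscatter profile is convolved with a Haar step function of a chosen dilation, and the PBL top is usually taken where the transform peaks. The sketch below applies this to a synthetic clear-sky profile; the dilation length and the synthetic profile are illustrative assumptions, and the Bayesian correction described in the abstract is not included.

```python
import numpy as np

def wavelet_covariance_transform(z, backscatter, dilation=200.0):
    """Haar wavelet covariance transform of a backscatter profile.
    Returns W(b) for each candidate height b; the PBL top is usually taken
    where W is largest (strongest drop in aerosol backscatter)."""
    dz = z[1] - z[0]
    w = np.zeros_like(backscatter)
    for i, b in enumerate(z):
        haar = np.where(np.abs(z - b) > dilation / 2, 0.0,
                        np.where(z <= b, 1.0, -1.0))
        w[i] = np.sum(backscatter * haar) * dz / dilation
    return w

# Synthetic clear-sky profile: high aerosol load below ~800 m, clean air above.
z = np.arange(0.0, 3000.0, 10.0)
profile = 1.0 / (1.0 + np.exp((z - 800.0) / 60.0)) \
          + 0.01 * np.random.default_rng(0).normal(size=z.size)
wct = wavelet_covariance_transform(z, profile)
pbl_height = z[np.argmax(wct)]
```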

  16. Bayesian statistical approaches to compositional analyses of transgenic crops 2. Application and validation of informative prior distributions.

    PubMed

    Harrison, Jay M; Breeze, Matthew L; Berman, Kristina H; Harrigan, George G

    2013-03-01

    Bayesian approaches to evaluation of crop composition data allow simpler interpretations than traditional statistical significance tests. An important advantage of Bayesian approaches is that they allow formal incorporation of previously generated data through prior distributions in the analysis steps. This manuscript describes key steps to ensure meaningful and transparent selection and application of informative prior distributions. These include (i) review of previous data in the scientific literature to form the prior distributions, (ii) proper statistical model specification and documentation, (iii) graphical analyses to evaluate the fit of the statistical model to new study data, and (iv) sensitivity analyses to evaluate the robustness of results to the choice of prior distribution. The validity of the prior distribution for any crop component is critical to acceptance of Bayesian approaches to compositional analyses and would be essential for studies conducted in a regulatory setting. Selection and validation of prior distributions for three soybean isoflavones (daidzein, genistein, and glycitein) and two oligosaccharides (raffinose and stachyose) are illustrated in a comparative assessment of data obtained on GM and non-GM soybean seed harvested from replicated field sites at multiple locations in the US during the 2009 growing season.

  17. Bayesian Statistical Analysis of Historical and Late Holocene Rates of Sea-Level Change

    NASA Astrophysics Data System (ADS)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    A fundamental concern associated with climate change is the rate at which sea levels are rising. Studies of past sea level (particularly beyond the instrumental data range) allow modern sea-level rise to be placed in a more complete context. Considering this, we perform a Bayesian statistical analysis on historical and late Holocene rates of sea-level change. The data that form the input to the statistical model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. The aims are to estimate rates of sea-level rise, to determine when modern rates of sea-level rise began and to observe how these rates have been changing over time. Many of the current methods for doing this use simple linear regression to estimate rates. This is often inappropriate as it is too rigid and can ignore uncertainties that arise as part of the data collection exercise. This can lead to overconfidence in the sea-level trends being characterized. The proposed Bayesian model places a Gaussian process prior on the rate process (i.e. the process that determines how rates of sea-level change evolve over time). The likelihood of the observed data is the integral of this process. When dealing with proxy reconstructions, this is set in an errors-in-variables framework so as to take account of age uncertainty. It is also necessary, in this case, for the model to account for glacio-isostatic adjustment, which introduces a covariance between individual age and sea-level observations. This method provides a flexible fit and allows for the direct estimation of the rate process with full consideration of all sources of uncertainty. Analysis of tide-gauge datasets and proxy reconstructions in this way means that changing rates of sea level can be estimated more comprehensively and accurately than previously possible. The model captures the continuous and dynamic evolution of sea-level change and results show that not only are modern sea levels rising but that the rates

  18. An overview of component qualification using Bayesian statistics and energy methods.

    SciTech Connect

    Dohner, Jeffrey Lynn

    2011-09-01

    The overview below is designed to give the reader a limited understanding of Bayesian and Maximum Likelihood (MLE) estimation; a basic understanding of some of the mathematical tools used to evaluate the quality of an estimation; an introduction to energy methods; and a limited discussion of damage potential. The discussion then goes on to present a limited treatment of how energy methods and Bayesian estimation are used together to qualify components. Example problems with solutions have been supplied as a learning aid. Bold letters are used to represent random variables; un-bolded letters represent deterministic values. A concluding section presents a discussion of attributes and concerns.

  19. Statistical Physics for Adaptive Distributed Control

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    A viewgraph presentation on statistical physics for distributed adaptive control is shown. The topics include: 1) The Golden Rule; 2) Advantages; 3) Roadmap; 4) What is Distributed Control? 5) Review of Information Theory; 6) Iterative Distributed Control; 7) Minimizing L(q) Via Gradient Descent; and 8) Adaptive Distributed Control.

  20. A statistical concept to assess the uncertainty in Bayesian model weights and its impact on model ranking

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang

    2015-09-01

    Bayesian model averaging (BMA) ranks the plausibility of alternative conceptual models according to Bayes' theorem. A prior belief about each model's adequacy is updated to a posterior model probability based on the skill to reproduce observed data and on the principle of parsimony. The posterior model probabilities are then used as model weights for model ranking, selection, or averaging. Despite the statistically rigorous BMA procedure, model weights can become uncertain quantities due to measurement noise in the calibration data set or due to uncertainty in model input. Uncertain weights may in turn compromise the reliability of BMA results. We present a new statistical concept to investigate this weighting uncertainty, and thus to assess the significance of model weights and the confidence in model ranking. Our concept is to resample the uncertain input or output data and then to analyze the induced variability in model weights. In the special case of weighting uncertainty due to measurement noise in the calibration data set, we interpret statistics of Bayesian model evidence to assess the distance of a model's performance from the theoretical upper limit. To illustrate our suggested approach, we investigate the reliability of soil-plant model selection following up on a study by Wöhling et al. (2015). Results show that the BMA routine should be equipped with our suggested upgrade to (1) reveal the significant but otherwise undetected impact of measurement noise on model ranking results and (2) decide whether the considered set of models should be extended with better performing alternatives.
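    The core of the proposed concept, resampling the noisy calibration data and observing how the posterior model weights move, can be sketched in a few lines. In the toy example below the model weights come from Gaussian likelihoods of each model's predictions against resampled observations; the three candidate models, noise level, and number of resamples are invented for illustration and are not the soil-plant models of the study.

```python
import numpy as np
from scipy import stats

def bma_weights(obs, predictions, sigma, priors=None):
    """Posterior model weights from Gaussian likelihoods of each model's
    predictions against the calibration data (a simplified BMA sketch)."""
    n_models = predictions.shape[0]
    priors = np.full(n_models, 1.0 / n_models) if priors is None else np.asarray(priors)
    log_ev = np.array([stats.norm.logpdf(obs, m, sigma).sum() for m in predictions])
    w = priors * np.exp(log_ev - log_ev.max())
    return w / w.sum()

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 3, 25))
obs = truth + rng.normal(0, 0.1, truth.size)                     # noisy calibration data
models = np.vstack([truth, truth * 0.9, np.zeros_like(truth)])   # three candidate models

# Perturb the calibration data by realizations of the measurement noise and
# record the induced variability in the model weights.
weight_draws = np.array([
    bma_weights(obs + rng.normal(0, 0.1, obs.size), models, sigma=0.1)
    for _ in range(500)])
print("mean weights:", weight_draws.mean(axis=0))
print("weight std  :", weight_draws.std(axis=0))
```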

  1. Deducing conformational variability of intrinsically disordered proteins from infrared spectroscopy with Bayesian statistics

    PubMed Central

    Sethi, Anurag; Anunciado, Divina; Tian, Jianhui; Vu, Dung M.; Gnanakaran, S.

    2013-01-01

    As it remains practically impossible to generate ergodic ensembles for large intrinsically disordered proteins (IDP) with molecular dynamics (MD) simulations, it becomes critical to compare spectroscopic characteristics of the theoretically generated ensembles to corresponding measurements. We develop a Bayesian framework to infer the ensemble properties of an IDP using a combination of conformations generated by MD simulations and its measured infrared spectrum. We performed 100 different MD simulations totaling more than 10 µs to characterize the conformational ensemble of α-synuclein, a prototypical IDP, in water. These conformations are clustered based on solvent accessibility and helical content. We compute the amide-I band for these clusters and predict the thermodynamic weights of each cluster given the measured amide-I band. Bayesian analysis produces a reproducible and non-redundant set of thermodynamic weights for each cluster, which can then be used to calculate the ensemble properties. In a rigorous validation, these weights reproduce measured chemical shifts. PMID:24187427

  2. Bayesian Statistical Inference in Ion-Channel Models with Exact Missed Event Correction.

    PubMed

    Epstein, Michael; Calderhead, Ben; Girolami, Mark A; Sivilotti, Lucia G

    2016-07-26

    The stochastic behavior of single ion channels is most often described as an aggregated continuous-time Markov process with discrete states. For ligand-gated channels each state can represent a different conformation of the channel protein or a different number of bound ligands. Single-channel recordings show only whether the channel is open or shut: states of equal conductance are aggregated, so transitions between them have to be inferred indirectly. The requirement to filter noise from the raw signal further complicates the modeling process, as it limits the time resolution of the data. The consequence of the reduced bandwidth is that openings or shuttings that are shorter than the resolution cannot be observed; these are known as missed events. Postulated models fitted using filtered data must therefore explicitly account for missed events to avoid bias in the estimation of rate parameters and to assess parameter identifiability accurately. In this article, we present the first, to our knowledge, Bayesian modeling of ion channels with exact missed events correction. Bayesian analysis represents uncertain knowledge of the true value of model parameters by considering these parameters as random variables. This allows us to gain a full appreciation of parameter identifiability and uncertainty when estimating values for model parameters. However, Bayesian inference is particularly challenging in this context as the correction for missed events increases the computational complexity of the model likelihood. Nonetheless, we successfully implemented a two-step Markov chain Monte Carlo method that we called "BICME", which performs Bayesian inference in models of realistic complexity. The method is demonstrated on synthetic and real single-channel data from muscle nicotinic acetylcholine channels. We show that parameter uncertainty can be characterized more accurately than with maximum-likelihood methods. Our code for performing inference in these ion channel

  3. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.

    2015-12-01

    The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost.Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity

  4. An absolute chronology for early Egypt using radiocarbon dating and Bayesian statistical modelling.

    PubMed

    Dee, Michael; Wengrow, David; Shortland, Andrew; Stevenson, Alice; Brock, Fiona; Girdland Flink, Linus; Bronk Ramsey, Christopher

    2013-11-08

    The Egyptian state was formed prior to the existence of verifiable historical records. Conventional dates for its formation are based on the relative ordering of artefacts. This approach is no longer considered sufficient for cogent historical analysis. Here, we produce an absolute chronology for Early Egypt by combining radiocarbon and archaeological evidence within a Bayesian paradigm. Our data cover the full trajectory of Egyptian state formation and indicate that the process occurred more rapidly than previously thought. We provide a timeline for the First Dynasty of Egypt of generational-scale resolution that concurs with prevailing archaeological analysis and produce a chronometric date for the foundation of Egypt that distinguishes between historical estimates.

  5. On the limitations of standard statistical modeling in biological systems: a full Bayesian approach for biology.

    PubMed

    Gomez-Ramirez, Jaime; Sanz, Ricardo

    2013-09-01

    One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist.

  6. Uncovering selection bias in case-control studies using Bayesian post-stratification.

    PubMed

    Geneletti, S; Best, N; Toledano, M B; Elliott, P; Richardson, S

    2013-07-10

    Case-control studies are particularly prone to selection bias, which can affect odds ratio estimation. Approaches to discovering and adjusting for selection bias have been proposed in the literature using graphical and heuristic tools as well as more complex statistical methods. The approach we propose is based on a survey-weighting method termed Bayesian post-stratification and follows from the conditional independences that characterise selection bias. We use our approach to perform a selection bias sensitivity analysis by using ancillary data sources that describe the target case-control population to re-weight the odds ratio estimates obtained from the study. The method is applied to two case-control studies, the first investigating the association between exposure to electromagnetic fields and acute lymphoblastic leukaemia in children and the second investigating the association between maternal occupational exposure to hairspray and a congenital anomaly in male babies called hypospadias. In both case-control studies, our method showed that the odds ratios were only moderately sensitive to selection bias.

  7. STATISTICS OF MEASURING NEUTRON STAR RADII: ASSESSING A FREQUENTIST AND A BAYESIAN APPROACH

    SciTech Connect

    Özel, Feryal; Psaltis, Dimitrios

    2015-09-10

    Measuring neutron star radii with spectroscopic and timing techniques relies on the combination of multiple observables to break the degeneracies between the mass and radius introduced by general relativistic effects. Here, we explore a previously used frequentist and a newly proposed Bayesian framework to obtain the most likely value and the uncertainty in such a measurement. We find that for the expected range of masses and radii and for realistic measurement errors, the frequentist approach suffers from biases that are larger than the accuracy in the radius measurement required to distinguish between the different equations of state. In contrast, in the Bayesian framework, the inferred uncertainties are larger, but the most likely values do not suffer from such biases. We also investigate ways of quantifying the degree of consistency between different spectroscopic measurements from a single source. We show that a careful assessment of the systematic uncertainties in the measurements eliminates the need for introducing ad hoc biases, which lead to artificially large inferred radii.

  8. Back to BaySICS: a user-friendly program for Bayesian Statistical Inference from Coalescent Simulations.

    PubMed

    Sandoval-Castellanos, Edson; Palkopoulou, Eleftheria; Dalén, Love

    2014-01-01

    Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical fundament and exceptional flexibility. However, limited availability of user-friendly programs that perform ABC analysis renders it difficult to implement, and hence programming skills are frequently required. In addition, there is limited availability of programs able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. Although providing specific features that improve inference from datasets with heterochronous data, BaySICS also has several capabilities making it a suitable tool for analysing contemporary genetic datasets. Those capabilities include joint analysis of independent tables, a graphical interface and the implementation of Markov-chain Monte Carlo without likelihoods.
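
    BaySICS itself drives coalescent simulations; the toy below shows only the ABC rejection loop such tools rely on: draw parameters from the prior, simulate data, and keep the draws whose summary statistic falls within a tolerance of the observed one. The exponential "simulator", the flat prior, and the tolerance are illustrative assumptions, not part of the program.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_summary(theta, n=50):
    # Toy simulator: the sample mean plays the role of a coalescent summary
    # statistic such as nucleotide diversity.
    return rng.exponential(theta, size=n).mean()

theta_true = 4.0
s_obs = simulate_summary(theta_true)          # "observed" summary statistic

# ABC rejection sampling: accept prior draws that reproduce the observed summary.
n_draws, tolerance = 50000, 0.1
theta_prior = rng.uniform(0.1, 20.0, size=n_draws)   # flat prior on the parameter
accepted = []
for theta in theta_prior:
    if abs(simulate_summary(theta) - s_obs) < tolerance:
        accepted.append(theta)
accepted = np.array(accepted)

print(f"accepted {accepted.size} of {n_draws} draws")
print(f"approximate posterior mean = {accepted.mean():.2f} (true value {theta_true})")
```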

  9. Applied Behavior Analysis and Statistical Process Control?

    ERIC Educational Resources Information Center

    Hopkins, B. L.

    1995-01-01

    Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…

  10. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional, non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.
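
    A minimal sketch of the surrogate-plus-active-learning idea, assuming a Gaussian-process surrogate (the abstract does not name the statistical model) and a one-dimensional toy "simulation" whose zero crossing plays the role of the safety boundary; new simulation runs are requested where the surrogate is most uncertain. All functions and settings are invented.

```python
import numpy as np

# Stand-in for an expensive simulation: a safety margin whose zero crossing is the boundary.
def simulate(x):
    return np.cos(2.0 * x) - 0.2

def rbf_kernel(a, b, length=0.3, amp=1.0):
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Standard Gaussian-process regression equations with an RBF kernel.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    Kss = rbf_kernel(x_query, x_query)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

x_query = np.linspace(0.0, 2.0, 201)
x_train = np.array([0.0, 1.0, 2.0])          # start from only a few runs
y_train = simulate(x_train)

# Active learning: repeatedly run the simulation where the surrogate is most uncertain.
for _ in range(5):
    mean, std = gp_posterior(x_train, y_train, x_query)
    x_next = x_query[np.argmax(std)]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, simulate(x_next))

mean, std = gp_posterior(x_train, y_train, x_query)
boundary = x_query[np.argmin(np.abs(mean))]   # estimated location of the zero crossing
print(f"estimated safety boundary near x = {boundary:.2f} after {len(x_train)} runs")
```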

  11. Comparing Trend and Gap Statistics across Tests: Distributional Change Using Ordinal Methods and Bayesian Inference

    ERIC Educational Resources Information Center

    Denbleyker, John Nickolas

    2012-01-01

    The shortcomings of the proportion above cut (PAC) statistic used so prominently in the educational landscape renders it a very problematic measure for making correct inferences with student test data. The limitations of PAC-based statistics are more pronounced with cross-test comparisons due to their dependency on cut-score locations. A better…

  12. Bayesian clinical trials in action.

    PubMed

    Lee, J Jack; Chu, Caleb T

    2012-11-10

    Although the frequentist paradigm has been the predominant approach to clinical trial design since the 1940s, it has several notable limitations. Advancements in computational algorithms and computer hardware have greatly enhanced the alternative Bayesian paradigm. Compared with its frequentist counterpart, the Bayesian framework has several unique advantages, and its incorporation into clinical trial design is occurring more frequently. Using an extensive literature review to assess how Bayesian methods are used in clinical trials, we find them most commonly used for dose finding, efficacy monitoring, toxicity monitoring, diagnosis/decision making, and studying pharmacokinetics/pharmacodynamics. The additional infrastructure required for implementing Bayesian methods in clinical trials may include specialized software programs to run the study design, simulation and analysis, and web-based applications, all of which are particularly useful for timely data entry and analysis. Trial success requires not only the development of proper tools but also timely and accurate execution of data entry, quality control, adaptive randomization, and Bayesian computation. The relative merit of the Bayesian and frequentist approaches continues to be the subject of debate in statistics. However, more evidence can be found showing the convergence of the two camps, at least at the practical level. Ultimately, better clinical trial methods lead to more efficient designs, lower sample sizes, more accurate conclusions, and better outcomes for patients enrolled in the trials. Bayesian methods offer attractive alternatives for better trials. More Bayesian trials should be designed and conducted to refine the approach and demonstrate their real benefit in action.
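
    One of the simplest calculations behind the Bayesian efficacy monitoring the review mentions: a Beta-Binomial posterior for a response rate, with a stopping rule based on the posterior probability of beating a historical control rate. The prior, control rate, threshold, and interim counts below are illustrative and not taken from any cited trial.

```python
from scipy.stats import beta

# Prior Beta(a, b) on the response rate of the experimental arm.
a, b = 1.0, 1.0
p_control = 0.20           # historical control response rate
efficacy_threshold = 0.95  # stop for efficacy if P(p > p_control | data) exceeds this

# Interim looks: (number of responders, number of patients) accrued so far.
interim_data = [(3, 10), (8, 20), (14, 30)]

for responders, n in interim_data:
    post = beta(a + responders, b + n - responders)   # conjugate posterior
    prob_better = 1.0 - post.cdf(p_control)
    decision = "stop for efficacy" if prob_better > efficacy_threshold else "continue"
    print(f"n={n:2d}: P(p > {p_control}) = {prob_better:.3f} -> {decision}")
```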

  13. Statistical flaw characterization through Bayesian shape inversion from scattered wave observations

    NASA Astrophysics Data System (ADS)

    McMahan, Jerry A.; Criner, Amanda K.

    2016-02-01

    A method is discussed to characterize the shape of a flaw from noisy far-field measurements of a scattered wave. The scattering model employed is a two-dimensional Helmholtz equation, which quantifies scattering due to interrogating signals from various physical phenomena such as acoustics or electromagnetics. The well-known inherent ill-posedness of the inverse scattering problem is addressed via Bayesian regularization. The method is loosely related to the approach described in [1], which uses the framework of [2] to prove the well-posedness of the infinite-dimensional problem and derive estimates of the error for a particular discretization approach. The method computes the posterior probability density for the flaw shape from the scattered field observations, taking into account prior assumptions which are used to describe any a priori knowledge of the flaw. We describe the computational approach to the forward problem as well as the Markov chain Monte Carlo (MCMC) based approach to approximating the posterior. We present simulation results for some hypothetical flaw shapes with varying levels of observation error and arrangement of observation points. The results show how the posterior probability density can be used to visualize the shape of the flaw, taking into account the quantitative confidence in the quality of the estimation, and how various arrangements of the measurements and interrogating signals affect the estimation.

  14. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    USGS Publications Warehouse

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
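
    A hedged, generic version of the Bayesian grid search described above (not the Stromboli velocity model or array geometry): a Gaussian misfit between observed and predicted slowness vectors defines a posterior over candidate source positions, from which the most likely location and a 90% highest-posterior region can be read off. The straight-ray forward model and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate source positions on a 2D grid (transverse distance, depth), in metres.
x = np.linspace(-300, 300, 121)
z = np.linspace(0, 500, 101)
X, Z = np.meshgrid(x, z, indexing="ij")

array_pos = np.array([600.0, 0.0])   # toy array location on the surface

def predicted_slowness(xs, zs, v=1500.0):
    # Toy forward model: slowness vector of a straight ray from source to array.
    dx, dz = array_pos[0] - xs, array_pos[1] - zs
    r = np.sqrt(dx ** 2 + dz ** 2)
    return np.stack([dx / (r * v), dz / (r * v)], axis=-1)

# Synthetic "observed" slowness with measurement noise.
true_src = (50.0, 200.0)
sigma = 2e-5
s_obs = predicted_slowness(*true_src) + rng.normal(0.0, sigma, size=2)

# Gaussian misfit -> unnormalized log posterior on the grid (flat prior).
s_pred = predicted_slowness(X, Z)
log_post = -0.5 * np.sum(((s_pred - s_obs) / sigma) ** 2, axis=-1)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Maximum a posteriori location and a 90% highest-posterior region.
i, j = np.unravel_index(post.argmax(), post.shape)
order = np.argsort(post.ravel())[::-1]
cum = np.cumsum(post.ravel()[order])
region_cells = np.count_nonzero(cum <= 0.90)
print(f"most likely source: x = {X[i, j]:.0f} m, depth = {Z[i, j]:.0f} m")
print(f"90% region covers {region_cells} of {post.size} grid cells")
```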

  15. A Bayesian statistical analysis of behavioral facilitation associated with deep brain stimulation

    PubMed Central

    Smith, Anne C; Shah, Sudhin A; Hudson, Andrew E; Purpura, Keith P; Victor, Jonathan D; Brown, Emery N; Schiff, Nicholas D

    2009-01-01

    Deep brain stimulation (DBS) is an established therapy for Parkinson’s Disease and is being investigated as a treatment for chronic depression, obsessive compulsive disorder and for facilitating functional recovery of patients in minimally conscious states following brain injury. For all of these applications, quantitative assessments of the behavioral effects of DBS are crucial to determine whether the therapy is effective and, if so, how stimulation parameters can be optimized. Behavioral analyses for DBS are challenging because subject performance is typically assessed from only a small set of discrete measurements made on a discrete rating scale, the time course of DBS effects is unknown, and between-subject differences are often large. We demonstrate how Bayesian state-space methods can be used to characterize the relationship between DBS and behavior comparing our approach with logistic regression in two experiments: the effects of DBS on attention of a macaque monkey performing a reaction-time task, and the effects of DBS on motor behavior of a human patient in a minimally conscious state. The state-space analysis can assess the magnitude of DBS behavioral facilitation (positive or negative) at specific time points and has important implications for developing principled strategies to optimize DBS paradigms. PMID:19576932

  16. Multiple LacI-mediated loops revealed by Bayesian statistics and tethered particle motion

    PubMed Central

    Johnson, Stephanie; van de Meent, Jan-Willem; Phillips, Rob; Wiggins, Chris H.; Lindén, Martin

    2014-01-01

    The bacterial transcription factor LacI loops DNA by binding to two separate locations on the DNA simultaneously. Despite being one of the best-studied model systems for transcriptional regulation, the number and conformations of loop structures accessible to LacI remain unclear, though the importance of multiple coexisting loops has been implicated in interactions between LacI and other cellular regulators of gene expression. To probe this issue, we have developed a new analysis method for tethered particle motion, a versatile and commonly used in vitro single-molecule technique. Our method, vbTPM, performs variational Bayesian inference in hidden Markov models. It learns the number of distinct states (i.e. DNA–protein conformations) directly from tethered particle motion data with better resolution than existing methods, while easily correcting for common experimental artifacts. Studying short (roughly 100 bp) LacI-mediated loops, we provide evidence for three distinct loop structures, more than previously reported in single-molecule studies. Moreover, our results confirm that changes in LacI conformation and DNA-binding topology both contribute to the repertoire of LacI-mediated loops formed in vitro, and provide qualitatively new input for models of looping and transcriptional regulation. We expect vbTPM to be broadly useful for probing complex protein–nucleic acid interactions. PMID:25120267

  17. Neural network uncertainty assessment using Bayesian statistics: a remote sensing application

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.

    2004-01-01

    Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can be used effectively to represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a blackbox model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component

  18. Radiative transfer meets Bayesian statistics: where does a galaxy's [C II] emission come from?

    NASA Astrophysics Data System (ADS)

    Accurso, G.; Saintonge, A.; Bisbas, T. G.; Viti, S.

    2017-01-01

    The [C II] 158 μm emission line can arise in all phases of the interstellar medium (ISM); therefore, being able to disentangle the different contributions is an important yet unresolved problem when undertaking galaxy-wide, integrated [C II] observations. We present a new multiphase 3D radiative transfer interface that couples STARBURST99, a stellar spectrophotometric code, with the photoionization and astrochemistry codes MOCASSIN and 3D-PDR. We model entire star-forming regions, including the ionized, atomic, and molecular phases of the ISM, and apply a Bayesian inference methodology to parametrize how the fraction of the [C II] emission originating from molecular regions, f_{[C II],mol}, varies as a function of typical integrated properties of galaxies in the local Universe. The main parameters responsible for the variations of f_{[C II],mol} are specific star formation rate (SSFR), gas phase metallicity, H II region electron number density (ne), and dust mass fraction. For example, f_{[C II],mol} can increase from 60 to 80 per cent when either ne increases from 10^1.5 to 10^2.5 cm^-3, or SSFR decreases from 10^-9.6 to 10^-10.6 yr^-1. Our model predicts for the Milky Way that f_{[C II],mol} = 75.8 ± 5.9 per cent, in agreement with the measured value of 75 per cent. When applying the new prescription to a complete sample of galaxies from the Herschel Reference Survey, we find that anywhere from 60 to 80 per cent of the total integrated [C II] emission arises from molecular regions.

  19. How Reliable is Bayesian Model Averaging Under Noisy Data? Statistical Assessment and Implications for Robust Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang

    2014-05-01

    Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted by their posterior probability of being the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does not, however, yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as a source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate
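
    A miniature of the brute-force Monte Carlo idea, with one substitution stated plainly: posterior model weights are approximated here via BIC rather than the full Bayesian evidence the abstract implies. Two polynomial regression models are weighted, the data are repeatedly perturbed with fresh noise realizations, and the spread of the weights (the "weighting variance") is reported. Data and noise level are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

x = np.linspace(0, 1, 25)
y_true = 1.0 + 2.0 * x          # data-generating process is linear
sigma = 0.3

def bic_weights(y):
    """Approximate posterior model weights for polynomial models of degree 1 and 2."""
    bics = []
    for degree in (1, 2):
        coeffs = np.polyfit(x, y, degree)
        resid = y - np.polyval(coeffs, x)
        n, k = len(y), degree + 1
        bics.append(n * np.log(np.mean(resid ** 2)) + k * np.log(n))
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    return w / w.sum()

# Repeatedly perturb the observations with new measurement-noise realizations
# and record the induced spread of the model weights.
weights = np.array([bic_weights(y_true + rng.normal(0.0, sigma, size=x.size))
                    for _ in range(500)])
print("mean weights   (linear, quadratic):", weights.mean(axis=0).round(3))
print("weight st.dev. (linear, quadratic):", weights.std(axis=0).round(3))
```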

  20. Statistical quality control through overall vibration analysis

    NASA Astrophysics Data System (ADS)

    Carnero, M.ª Carmen; González-Palma, Rafael; Almorza, David; Mayorga, Pedro; López-Escobar, Carlos

    2010-05-01

    The present study introduces the concept of statistical quality control in automotive wheel bearings manufacturing processes. Defects on products under analysis can have a direct influence on passengers' safety and comfort. At present, the use of vibration analysis on machine tools for quality control purposes is not very extensive in manufacturing facilities. Noise and vibration are common quality problems in bearings. These failure modes likely occur under certain operating conditions and do not require high vibration amplitudes but relate to certain vibration frequencies. The vibration frequencies are affected by the type of surface problems (chattering) of ball races that are generated through grinding processes. The purpose of this paper is to identify grinding process variables that affect the quality of bearings by using statistical principles in the field of machine tools. In addition, the quality results of the finished parts under different combinations of process variables are evaluated. This paper intends to establish the foundations to predict the quality of the products through the analysis of self-induced vibrations during the contact between the grinding wheel and the parts. To achieve this goal, the overall self-induced vibration readings under different combinations of process variables are analysed using statistical tools. The analysis of data and design of experiments follows a classical approach, considering all potential interactions between variables. The analysis of data is conducted through analysis of variance (ANOVA) for data sets that meet normality and homoscedasticity criteria. This paper utilizes different statistical tools to support the conclusions, such as chi-squared, Shapiro-Wilk, symmetry, kurtosis, Cochran, Bartlett, Hartley, and Kruskal-Wallis. The analysis presented is the starting point to extend the use of predictive techniques (vibration analysis) for quality control. This paper demonstrates the existence

  1. Statistical Process Control for KSC Processing

    NASA Technical Reports Server (NTRS)

    Ford, Roger G.; Delgado, Hector; Tilley, Randy

    1996-01-01

    The 1996 Summer Faculty Fellowship Program and Kennedy Space Center (KSC) served as the basis for a research effort into statistical process control for KSC processing. The effort entailed several tasks and goals. The first was to develop a customized statistical process control (SPC) course for the Safety and Mission Assurance Trends Analysis Group. The actual teaching of this course took place over several weeks. In addition, an Internet version of the same course was developed, complete with animation and video excerpts from the course as it was taught at KSC. The application of SPC to shuttle processing took up the rest of the summer research project. This effort entailed the evaluation of SPC use at KSC, both present and potential, due to the change in roles for NASA and the Single Flight Operations Contractor (SFOC). Individual consulting on SPC use was accomplished, as well as an evaluation of SPC software for future KSC use. Finally, the author's orientation to NASA changes, terminology, data formats, and new NASA task definitions will allow future consultation as needs arise.

  2. Bayesian Clinical Trials in Action

    PubMed Central

    Lee, J. Jack; Chu, Caleb T.

    2012-01-01

    Although the frequentist paradigm has been the predominant approach to clinical trial design since the 1940s, it has several notable limitations. The alternative Bayesian paradigm has been greatly enhanced by advancements in computational algorithms and computer hardware. Compared to its frequentist counterpart, the Bayesian framework has several unique advantages, and its incorporation into clinical trial design is occurring more frequently. Using an extensive literature review to assess how Bayesian methods are used in clinical trials, we find them most commonly used for dose finding, efficacy monitoring, toxicity monitoring, diagnosis/decision making, and for studying pharmacokinetics/pharmacodynamics. The additional infrastructure required for implementing Bayesian methods in clinical trials may include specialized software programs to run the study design, simulation, and analysis, and Web-based applications, which are particularly useful for timely data entry and analysis. Trial success requires not only the development of proper tools but also timely and accurate execution of data entry, quality control, adaptive randomization, and Bayesian computation. The relative merit of the Bayesian and frequentist approaches continues to be the subject of debate in statistics. However, more evidence can be found showing the convergence of the two camps, at least at the practical level. Ultimately, better clinical trial methods lead to more efficient designs, lower sample sizes, more accurate conclusions, and better outcomes for patients enrolled in the trials. Bayesian methods offer attractive alternatives for better trials. More such trials should be designed and conducted to refine the approach and demonstrate its real benefit in action. PMID:22711340

  3. Bayesian statistics as a new tool for spectral analysis - I. Application for the determination of basic parameters of massive stars

    NASA Astrophysics Data System (ADS)

    Mugnes, J.-M.; Robert, C.

    2015-11-01

    Spectral analysis is a powerful tool to investigate stellar properties and it has been widely used for decades now. However, the methods considered to perform this kind of analysis are mostly based on iteration among a few diagnostic lines to determine the stellar parameters. While these methods are often simple and fast, they can lead to errors and large uncertainties due to the required assumptions. Here, we present a method based on Bayesian statistics to find simultaneously the best combination of effective temperature, surface gravity, projected rotational velocity, and microturbulence velocity, using all the available spectral lines. Different tests are discussed to demonstrate the strength of our method, which we apply to 54 mid-resolution spectra of field and cluster B stars obtained at the Observatoire du Mont-Mégantic. We compare our results with those found in the literature. Differences are seen which are well explained by the different methods used. We conclude that the B-star microturbulence velocities are often underestimated. We also confirm the trend that B stars in clusters are on average faster rotators than field B stars.

  4. Bayesian Statistical Analyses for Presence of Single Genes Affecting Meat Quality Traits in a Crossed Pig Population

    PubMed Central

    Janss, LLG.; Van-Arendonk, JAM.; Brascamp, E. W.

    1997-01-01

    The presence of single genes affecting meat quality traits was investigated in F(2) individuals of a cross between Chinese Meishan and Western pig lines using phenotypic measurements on 11 traits. A Bayesian approach was used for inference about a mixed model of inheritance, postulating effects of polygenic background genes, action of a biallelic autosomal single gene and various nongenetic effects. Cooking loss, drip loss, two pH measurements, intramuscular fat, shear force and backfat thickness were traits found to be likely influenced by a single gene. In all cases, a recessive allele was found, which likely originates from the Meishan breed and is absent in the Western founder lines. By studying associations between genotypes assigned to individuals based on phenotypic measurements for various traits, it was concluded that cooking loss, two pH measurements and possibly backfat thickness are influenced by one gene, and that a second gene influences intramuscular fat and possibly shear force and drip loss. Statistical findings were supported by demonstrating marked differences in variances of families of fathers inferred as carriers and those inferred as noncarriers. It is concluded that further molecular genetic research effort to map single genes affecting these traits based on the same experimental data has a high probability of success. PMID:9071593

  5. Paleotempestological chronology developed from gas ion source AMS analysis of carbonates determined through real-time Bayesian statistical approach

    NASA Astrophysics Data System (ADS)

    Wallace, D. J.; Rosenheim, B. E.; Roberts, M. L.; Burton, J. R.; Donnelly, J. P.; Woodruff, J. D.

    2014-12-01

    Is a small quantity of high-precision ages more robust than a higher quantity of lower-precision ages for sediment core chronologies? AMS radiocarbon ages have been available to researchers for several decades now, and the precision of the technique has continued to improve. Analysis time and cost are high, though, and projects are often limited in terms of the number of dates that can be used to develop a chronology. The Gas Ion Source at the National Ocean Sciences Accelerator Mass Spectrometry Facility (NOSAMS), while providing lower precision (uncertainty of the order of 100 ¹⁴C years for a sample), is significantly less expensive and far less time-consuming than conventional age dating and offers the unique opportunity for large numbers of ages. Here we couple two approaches, one analytical and one statistical, to investigate the utility of an age model comprised of these lower-precision ages for paleotempestology. We use a gas ion source interfaced to a gas-bench type device to generate radiocarbon dates approximately every 5 minutes while determining the order of sample analysis using the published Bayesian accumulation histories for deposits (Bacon). During two day-long sessions, several dates were obtained from carbonate shells in living position in a sediment core comprised of sapropel gel from Mangrove Lake, Bermuda. Samples were prepared where large shells were available, and the order of analysis was determined by the depth with the highest uncertainty according to Bacon. We present the results of these analyses as well as a prognosis for a future where such age models can be constructed from many dates that are quickly obtained relative to conventional radiocarbon dates. This technique currently is limited to carbonates, but development of a system for organic material dating is underway. We will demonstrate the extent to which sacrificing some analytical precision in favor of more dates improves age models.

  6. Planetary micro-rover operations on Mars using a Bayesian framework for inference and control

    NASA Astrophysics Data System (ADS)

    Post, Mark A.; Li, Junquan; Quine, Brendan M.

    2016-03-01

    With the recent progress toward the application of commercially-available hardware to small-scale space missions, it is now becoming feasible for groups of small, efficient robots based on low-power embedded hardware to perform simple tasks on other planets in the place of large-scale, heavy and expensive robots. In this paper, we describe design and programming of the Beaver micro-rover developed for Northern Light, a Canadian initiative to send a small lander and rover to Mars to study the Martian surface and subsurface. For a small, hardware-limited rover to handle an uncertain and mostly unknown environment without constant management by human operators, we use a Bayesian network of discrete random variables as an abstraction of expert knowledge about the rover and its environment, and inference operations for control. A framework for efficient construction and inference into a Bayesian network using only the C language and fixed-point mathematics on embedded hardware has been developed for the Beaver to make intelligent decisions with minimal sensor data. We study the performance of the Beaver as it probabilistically maps a simple outdoor environment with sensor models that include uncertainty. Results indicate that the Beaver and other small and simple robotic platforms can make use of a Bayesian network to make intelligent decisions in uncertain planetary environments.

  7. Statistical process control for radiotherapy quality assurance.

    PubMed

    Pawlicki, Todd; Whitaker, Matthew; Boyer, Arthur L

    2005-09-01

    Every quality assurance process uncovers random and systematic errors. These errors typically consist of many small random errors and a very small number of large errors that dominate the result. Quality assurance practices in radiotherapy do not adequately differentiate between these two sources of error. The ability to separate these types of errors would allow the dominant source(s) of error to be efficiently detected and addressed. In this work, statistical process control is applied to quality assurance in radiotherapy for the purpose of setting action thresholds that differentiate between random and systematic errors. The theoretical development and implementation of process behavior charts are described. We report on a pilot project in which these techniques are applied to daily output and flatness/symmetry quality assurance for a 10 MV photon beam in our department. This clinical case was followed over 52 days. As part of our investigation, we found that action thresholds set using process behavior charts were able to identify systematic changes in our daily quality assurance process. This is in contrast to action thresholds set using the standard deviation, which did not identify the same systematic changes in the process. The process behavior thresholds calculated from a subset of the data detected a 2% change in the process, whereas with a standard deviation calculation no change was detected. Medical physicists must make decisions on quality assurance data as it is acquired. Process behavior charts help decide when to take action and when to acquire more data before making a change in the process.
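
    A sketch of the individuals/moving-range (X-mR) process behavior chart the abstract refers to, applied to invented daily output measurements with a small systematic shift injected partway through; the 2.66 multiplier is the standard constant for individuals charts. The numbers and the baseline choice are illustrative, not the authors' clinical data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Made-up daily machine-output measurements (per cent of nominal); a systematic
# shift is injected after day 30.
output = 100.0 + rng.normal(0.0, 0.4, size=52)
output[30:] += 1.5

baseline = output[:20]                         # limits computed from an initial subset
moving_range = np.abs(np.diff(baseline))
center = baseline.mean()
ucl = center + 2.66 * moving_range.mean()      # standard X-mR chart constants
lcl = center - 2.66 * moving_range.mean()
print(f"center = {center:.2f}, limits = ({lcl:.2f}, {ucl:.2f})")

for day, value in enumerate(output, start=1):
    if value > ucl or value < lcl:
        print(f"day {day:2d}: {value:6.2f}  outside limits -> investigate")
```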

  8. Bayesian inference for causal mechanisms with application to a randomized study for postoperative pain control.

    PubMed

    Baccini, Michela; Mattei, Alessandra; Mealli, Fabrizia

    2017-03-15

    We conduct principal stratification and mediation analysis to investigate to what extent the positive overall effect of treatment on postoperative pain control is mediated by postoperative self-administration of intravenous analgesia by patients in a prospective, randomized, double-blind study. Using the Bayesian approach for inference, we estimate both associative and dissociative principal strata effects arising in principal stratification, as well as natural effects from mediation analysis. We highlight that principal stratification and mediation analysis focus on different causal estimands, answer different causal questions, and involve different sets of structural assumptions.

  9. Bayesian sample sizes for exploratory clinical trials comparing multiple experimental treatments with a control.

    PubMed

    Whitehead, John; Cleary, Faye; Turner, Amanda

    2015-05-30

    In this paper, a Bayesian approach is developed for simultaneously comparing multiple experimental treatments with a common control treatment in an exploratory clinical trial. The sample size is set to ensure that, at the end of the study, there will be at least one treatment for which the investigators have a strong belief that it is better than control, or else they have a strong belief that none of the experimental treatments are substantially better than control. This criterion bears a direct relationship with conventional frequentist power requirements, while allowing prior opinion to feature in the analysis with a consequent reduction in sample size. If it is concluded that at least one of the experimental treatments shows promise, then it is envisaged that one or more of these promising treatments will be developed further in a definitive phase III trial. The approach is developed in the context of normally distributed responses sharing a common standard deviation regardless of treatment. To begin with, the standard deviation will be assumed known when the sample size is calculated. The final analysis will not rely upon this assumption, although the intended properties of the design may not be achieved if the anticipated standard deviation turns out to be inappropriate. Methods that formally allow for uncertainty about the standard deviation, expressed in the form of a Bayesian prior, are then explored. Illustrations of the sample sizes computed from the new method are presented, and comparisons are made with frequentist methods devised for the same situation.

  10. Analysis of Feature Intervisibility and Cumulative Visibility Using GIS, Bayesian and Spatial Statistics: A Study from the Mandara Mountains, Northern Cameroon

    PubMed Central

    Wright, David K.; MacEachern, Scott; Lee, Jaeyong

    2014-01-01

    The locations of diy-geδ-bay (DGB) sites in the Mandara Mountains, northern Cameroon are hypothesized to occur as a function of their ability to see and be seen from points on the surrounding landscape. A series of geostatistical, two-way and Bayesian logistic regression analyses were performed to test two hypotheses related to the intervisibility of the sites to one another and their visual prominence on the landscape. We determine that the intervisibility of the sites to one another is highly statistically significant when compared to 10 stratified-random permutations of DGB sites. Bayesian logistic regression additionally demonstrates that the visibility of the sites to points on the surrounding landscape is statistically significant. The location of sites appears to have also been selected on the basis of lower slope than random permutations of sites. Using statistical measures, many of which are not commonly employed in archaeological research, to evaluate aspects of visibility on the landscape, we conclude that the placement of DGB sites improved their conspicuousness for enhanced ritual, social cooperation and/or competition purposes. PMID:25383883

  11. Bayesian inversion of marine controlled source electromagnetic data offshore Vancouver Island, Canada

    NASA Astrophysics Data System (ADS)

    Gehrmann, Romina A. S.; Schwalenberg, Katrin; Riedel, Michael; Spence, George D.; Spieß, Volkhard; Dosso, Stan E.

    2016-01-01

    This paper applies nonlinear Bayesian inversion to marine controlled source electromagnetic (CSEM) data collected near two sites of the Integrated Ocean Drilling Program (IODP) Expedition 311 on the northern Cascadia Margin to investigate subseafloor resistivity structure related to gas hydrate deposits and cold vents. The Cascadia margin, off the west coast of Vancouver Island, Canada, has a large accretionary prism where sediments are under pressure due to convergent plate boundary tectonics. Gas hydrate deposits and cold vent structures have previously been investigated by various geophysical methods and seabed drilling. Here, we invert time-domain CSEM data collected at Sites U1328 and U1329 of IODP Expedition 311 using Bayesian methods to derive subsurface resistivity model parameters and uncertainties. The Bayesian information criterion is applied to determine the amount of structure (number of layers in a depth-dependent model) that can be resolved by the data. The parameter space is sampled with the Metropolis-Hastings algorithm in principal-component space, utilizing parallel tempering to ensure wider and efficient sampling and convergence. Nonlinear inversion allows analysis of uncertain acquisition parameters such as time delays between receiver and transmitter clocks as well as input electrical current amplitude. Marginalizing over these instrument parameters in the inversion accounts for their contribution to the geophysical model uncertainties. One-dimensional inversion of time-domain CSEM data collected at measurement sites along a survey line allows interpretation of the subsurface resistivity structure. The data sets can be generally explained by models with 1 to 3 layers. Inversion results at U1329, at the landward edge of the gas hydrate stability zone, indicate a sediment unconformity as well as potential cold vents which were previously unknown. The resistivities generally increase upslope due to sediment erosion along the slope. Inversion

  12. Bayesian learning

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    In 1983 and 1984, the Infrared Astronomical Satellite (IRAS) detected 5,425 stellar objects and measured their infrared spectra. In 1987 a program called AUTOCLASS used Bayesian inference methods to discover the classes present in these data and determine the most probable class of each object, revealing unknown phenomena in astronomy. AUTOCLASS has rekindled the old debate on the suitability of Bayesian methods, which are computationally intensive, interpret probabilities as plausibility measures rather than frequencies, and appear to depend on a subjective assessment of the probability of a hypothesis before the data were collected. Modern statistical methods have, however, recently been shown to also depend on subjective elements. These debates bring into question the whole tradition of scientific objectivity and offer scientists a new way to take responsibility for their findings and conclusions.

  13. Artificial Intelligence Approach to Support Statistical Quality Control Teaching

    ERIC Educational Resources Information Center

    Reis, Marcelo Menezes; Paladini, Edson Pacheco; Khator, Suresh; Sommer, Willy Arno

    2006-01-01

    Statistical quality control--SQC (consisting of Statistical Process Control, Process Capability Studies, Acceptance Sampling and Design of Experiments) is a very important tool to obtain, maintain and improve the Quality level of goods and services produced by an organization. Despite its importance, and the fact that it is taught in technical and…

  14. Hierarchical Bayesian Logistic Regression to forecast metabolic control in type 2 DM patients.

    PubMed

    Dagliati, Arianna; Malovini, Alberto; Decata, Pasquale; Cogni, Giulia; Teliti, Marsida; Sacchi, Lucia; Cerra, Carlo; Chiovato, Luca; Bellazzi, Riccardo

    2016-01-01

    In this work we present our efforts in building a model able to forecast patients' changes in clinical conditions when repeated measurements are available. In this case the available risk calculators are typically not applicable. We propose a Hierarchical Bayesian Logistic Regression model, which takes into account individual and population variability in model parameter estimates. The model is used to predict metabolic control and its variation in type 2 diabetes mellitus. In particular, we analyzed a population of more than 1000 Italian type 2 diabetic patients, collected within the European project Mosaic. The results obtained in terms of the Matthews Correlation Coefficient are significantly better than those obtained with a standard logistic regression model based on data pooling.

  15. Hierarchical Bayesian Logistic Regression to forecast metabolic control in type 2 DM patients

    PubMed Central

    Dagliati, Arianna; Malovini, Alberto; Decata, Pasquale; Cogni, Giulia; Teliti, Marsida; Sacchi, Lucia; Cerra, Carlo; Chiovato, Luca; Bellazzi, Riccardo

    2016-01-01

    In this work we present our efforts in building a model able to forecast patients’ changes in clinical conditions when repeated measurements are available. In this case the available risk calculators are typically not applicable. We propose a Hierarchical Bayesian Logistic Regression model, which takes into account individual and population variability in model parameter estimates. The model is used to predict metabolic control and its variation in type 2 diabetes mellitus. In particular, we analyzed a population of more than 1000 Italian type 2 diabetic patients, collected within the European project Mosaic. The results obtained in terms of the Matthews Correlation Coefficient are significantly better than those obtained with a standard logistic regression model based on data pooling. PMID:28269842

  16. Improving statistical analysis of matched case-control studies.

    PubMed

    Conway, Aaron; Rolley, John X; Fulbrook, Paul; Page, Karen; Thompson, David R

    2013-06-01

    Matched case-control research designs can be useful because matching can increase power due to reduced variability between subjects. However, inappropriate statistical analysis of matched data could result in a change in the strength of association between the dependent and independent variables or a change in the significance of the findings. We sought to ascertain whether matched case-control studies published in the nursing literature utilized appropriate statistical analyses. Of 41 articles identified that met the inclusion criteria, 31 (76%) used an inappropriate statistical test for comparing data derived from case subjects and their matched controls. In response to this finding, we developed an algorithm to support decision-making regarding statistical tests for matched case-control studies.
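
    To make the mismatch concrete, the sketch below contrasts an appropriate analysis of 1:1 matched pairs (McNemar's test on the discordant pairs) with the unpaired chi-square test that ignores matching; the pair counts are invented, not drawn from the reviewed studies.

```python
from scipy.stats import chi2, chi2_contingency

# 1:1 matched pairs cross-classified by exposure (case exposed?, control exposed?):
#                   control exposed   control unexposed
# case exposed            a=40              b=25
# case unexposed          c=10              d=45
a, b, c, d = 40, 25, 10, 45

# Appropriate analysis: McNemar's test, which uses only the discordant pairs (b and c).
mcnemar_stat = (b - c) ** 2 / (b + c)
mcnemar_p = chi2.sf(mcnemar_stat, df=1)
print(f"McNemar: matched OR = {b / c:.2f}, chi2 = {mcnemar_stat:.2f}, p = {mcnemar_p:.4f}")

# Inappropriate analysis: ignore matching and compare exposure in cases vs controls.
cases = [a + b, c + d]        # exposed, unexposed cases
controls = [a + c, b + d]     # exposed, unexposed controls
stat, p, dof, expected = chi2_contingency([cases, controls])
print(f"Unpaired chi-square (ignores matching): chi2 = {stat:.2f}, p = {p:.4f}")
```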

  17. Inference of Environmental Factor-Microbe and Microbe-Microbe Associations from Metagenomic Data Using a Hierarchical Bayesian Statistical Model.

    PubMed

    Yang, Yuqing; Chen, Ning; Chen, Ting

    2017-01-25

    The inference of associations between environmental factors and microbes and among microbes is critical to interpreting metagenomic data, but compositional bias, indirect associations resulting from common factors, and variance within metagenomic sequencing data limit the discovery of associations. To account for these problems, we propose metagenomic Lognormal-Dirichlet-Multinomial (mLDM), a hierarchical Bayesian model with sparsity constraints, to estimate absolute microbial abundance and simultaneously infer both conditionally dependent associations among microbes and direct associations between microbes and environmental factors. We empirically show the effectiveness of the mLDM model using synthetic data, data from the TARA Oceans project, and a colorectal cancer dataset. Finally, we apply mLDM to 16S sequencing data from the western English Channel and report several associations. Our model can be used on both natural environmental and human metagenomic datasets, promoting the understanding of associations in the microbial community.

  18. IZI: INFERRING THE GAS PHASE METALLICITY (Z) AND IONIZATION PARAMETER (q) OF IONIZED NEBULAE USING BAYESIAN STATISTICS

    SciTech Connect

    Blanc, Guillermo A.; Kewley, Lisa; Vogt, Frédéric P. A.; Dopita, Michael A.

    2015-01-10

    We present a new method for inferring the metallicity (Z) and ionization parameter (q) of H II regions and star-forming galaxies using strong nebular emission lines (SELs). We use Bayesian inference to derive the joint and marginalized posterior probability density functions for Z and q given a set of observed line fluxes and an input photoionization model. Our approach allows the use of arbitrary sets of SELs and the inclusion of flux upper limits. The method provides a self-consistent way of determining the physical conditions of ionized nebulae that is not tied to the arbitrary choice of a particular SEL diagnostic and uses all the available information. Unlike theoretically calibrated SEL diagnostics, the method is flexible and not tied to a particular photoionization model. We describe our algorithm, validate it against other methods, and present a tool that implements it called IZI. Using a sample of nearby extragalactic H II regions, we assess the performance of commonly used SEL abundance diagnostics. We also use a sample of 22 local H II regions having both direct and recombination line (RL) oxygen abundance measurements in the literature to study discrepancies in the abundance scale between different methods. We find that oxygen abundances derived through Bayesian inference using currently available photoionization models in the literature can be in good (∼30%) agreement with RL abundances, although some models perform significantly better than others. We also confirm that abundances measured using the direct method are typically ∼0.2 dex lower than both RL and photoionization-model-based abundances.

  19. Modular autopilot design and development featuring Bayesian non-parametric adaptive control

    NASA Astrophysics Data System (ADS)

    Stockton, Jacob

    Over the last few decades, Unmanned Aircraft Systems, or UAS, have become a critical part of the defense of our nation and the growth of the aerospace sector. UAS have great potential for the agricultural industry, first response, and ecological monitoring. However, the wide range of applications requires many mission-specific vehicle platforms. These platforms must operate reliably in a range of environments, and in the presence of significant uncertainties. The accepted practice for enabling autonomously flying UAS today relies on extensive manual tuning of the UAS autopilot parameters, or time-consuming approximate modeling of the dynamics of the UAS. These methods may lead to overly conservative controllers or excessive development times. A comprehensive approach to the development of an adaptive, airframe-independent controller is presented. The control algorithm leverages a nonparametric, Bayesian approach to adaptation, and is used as a cornerstone for the development of a new modular autopilot. Promising simulation results are presented for the adaptive controller, as well as flight test results for the modular autopilot.

  20. Call Admission Control Scheme Based on Statistical Information

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takayuki; Oki, Eiji; Shiomoto, Kohei

    A call admission control (CAC) scheme based on statistical information, called the statistical CAC scheme, is proposed. A conventional scheme needs to manage session information for each link to update the residual bandwidth of a network in real time. This scheme has a scalability problem in terms of network size. The statistical CAC rejects session setup requests in accordance with a pre-computed ratio, called the rejection ratio. The rejection ratio is computed by using statistical information about the bandwidth requested for each link so that the congestion probability is less than an upper bound specified by a network operator. The statistical CAC is more scalable in terms of network size than the conventional scheme because it does not need to keep accommodated session state information. Numerical results show that the statistical CAC, even without exact session state information, only slightly degrades network utilization compared with the conventional scheme.
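
    Not the authors' formulation, but a toy version of the idea, assuming a Gaussian approximation to the admitted load: given the mean and variance of per-session bandwidth and the expected number of requests on a link, search for the largest admission probability whose congestion probability stays below the operator's bound; the rejection ratio is its complement. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Statistical information about one link (illustrative numbers, not from the paper).
n_requests = 200              # expected session setup requests per interval
mean_bw, var_bw = 2.0, 1.0    # per-session bandwidth: mean (Mb/s) and variance
capacity = 350.0              # link capacity (Mb/s)
bound = 1e-3                  # operator's upper bound on the congestion probability

def congestion_probability(admit_prob):
    # Each request is admitted independently with probability admit_prob;
    # the total admitted load is approximated as Gaussian.
    mean_load = n_requests * admit_prob * mean_bw
    var_load = n_requests * (admit_prob * var_bw
                             + admit_prob * (1 - admit_prob) * mean_bw ** 2)
    return norm.sf(capacity, loc=mean_load, scale=np.sqrt(var_load))

# Binary search for the largest admission probability meeting the bound.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if congestion_probability(mid) <= bound:
        lo = mid
    else:
        hi = mid

print(f"admission probability = {lo:.3f}, rejection ratio = {1 - lo:.3f}")
print(f"resulting congestion probability = {congestion_probability(lo):.2e}")
```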

  1. Estimability and simple dynamical analyses of range (range-rate range-difference) observations to artificial satellites. [laser range observations to LAGEOS using non-Bayesian statistics

    NASA Technical Reports Server (NTRS)

    Vangelder, B. H. W.

    1978-01-01

    Non-Bayesian statistics were used in simulation studies centered around laser range observations to LAGEOS. The capabilities of satellite laser ranging, especially in connection with relative station positioning, are evaluated. The satellite measurement system under investigation may fall short in precise determinations of the earth's orientation (precession and nutation) and earth's rotation as opposed to systems such as very long baseline interferometry (VLBI) and lunar laser ranging (LLR). Relative station positioning, determination of (differential) polar motion, positioning of stations with respect to the earth's center of mass and determination of the earth's gravity field should be easily realized by satellite laser ranging (SLR). The last two features should be considered as best (or solely) determinable by SLR in contrast to VLBI and LLR.

  2. Epistatic Module Detection for Case-Control Studies: A Bayesian Model with a Gibbs Sampling Strategy

    PubMed Central

    Tang, Wanwan; Wu, Xuebing; Jiang, Rui; Li, Yanda

    2009-01-01

    The detection of epistatic interactive effects of multiple genetic variants on the susceptibility of human complex diseases is a great challenge in genome-wide association studies (GWAS). Although methods have been proposed to identify such interactions, the lack of an explicit definition of epistatic effects, together with computational difficulties, makes the development of new methods indispensable. In this paper, we introduce epistatic modules to describe epistatic interactive effects of multiple loci on diseases. On the basis of this notion, we put forward a Bayesian marker partition model to explain observed case-control data, and we develop a Gibbs sampling strategy to facilitate the detection of epistatic modules. Comparisons of the proposed approach with three existing methods on seven simulated disease models demonstrate the superior performance of our approach. When applied to a genome-wide case-control data set for Age-related Macular Degeneration (AMD), the proposed approach successfully identifies two known susceptible loci and suggests that a combination of two other loci—one in the gene SGCD and the other in SCAPER—is associated with the disease. Further functional analysis supports the speculation that the interaction of these two genetic variants may be responsible for the susceptibility of AMD. When applied to a genome-wide case-control data set for Parkinson's disease, the proposed method identifies seven suspicious loci that may contribute independently to the disease. PMID:19412524

  3. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to have an uncertainty estimate for the difference between the behaviors of the model and system under test.

  4. Manufacturing Squares: An Integrative Statistical Process Control Exercise

    ERIC Educational Resources Information Center

    Coy, Steven P.

    2016-01-01

    In the exercise, students in a junior-level operations management class are asked to manufacture a simple product. Given product specifications, they must design a production process, create roles and design jobs for each team member, and develop a statistical process control plan that efficiently and effectively controls quality during…

  5. Using Paper Helicopters to Teach Statistical Process Control

    ERIC Educational Resources Information Center

    Johnson, Danny J.

    2011-01-01

    This hands-on project uses a paper helicopter to teach students how to distinguish between common and special causes of variability when developing and using statistical process control charts. It allows the student to experience a process that is out-of-control due to imprecise or incomplete product design specifications and to discover how the…

  6. Bayesian sample size determination for case-control studies when exposure may be misclassified.

    PubMed

    Joseph, Lawrence; Bélisle, Patrick

    2013-12-01

    Odds ratios are frequently used for estimating the effect of an exposure on the probability of disease in case-control studies. In planning such studies, methods for sample size determination are required to ensure sufficient accuracy in estimating odds ratios once the data are collected. Often, the exposure used in epidemiologic studies is not perfectly ascertained. This can arise from recall bias, the use of a proxy exposure measurement, uncertain work exposure history, and laboratory or other errors. The resulting misclassification can have large impacts on the accuracy and precision of estimators, and specialized estimation techniques have been developed to adjust for these biases. However, much less work has been done to account for the anticipated decrease in the precision of estimators at the design stage. Here, we develop methods for sample size determination for odds ratios in the presence of exposure misclassification by using several interval-based Bayesian criteria. By using a series of prototypical examples, we compare sample size requirements after adjustment for misclassification with those required when this problem is ignored. We illustrate the methods by planning a case-control study of the effect of late introduction of peanut to the diet of children on the subsequent development of peanut allergy.

  7. Statistical Design Model (SDM) of satellite thermal control subsystem

    NASA Astrophysics Data System (ADS)

    Mirshams, Mehran; Zabihian, Ehsan; Aarabi Chamalishahi, Mahdi

    2016-07-01

    The thermal control subsystem of a satellite keeps the satellite components within their survival and operating temperature ranges. Its performance plays a key role in satisfying a satellite's operational requirements, and designing this subsystem is part of overall satellite design. However, because of the limited information released by companies and designers, this fundamental subsystem still lacks a specific design process. The aim of this paper is to identify and extract statistical design models of the spacecraft thermal control subsystem using the SDM design method, which analyses statistical data with a particular procedure. Implementing the SDM method requires a complete database; therefore, we first collect spacecraft data and create a database, then extract statistical graphs using Microsoft Excel, from which we further extract mathematical models. The input parameters of the method are the mass, mission, and lifetime of the satellite. The paper first introduces the thermal control subsystem and surveys the hardware used in this subsystem and its variants. Different statistical models are then presented and briefly compared. Finally, a particular statistical model is extracted from the collected statistical data, and the accuracy of the method is tested and verified through a case study: comparison between the specifications of the thermal control subsystem of a fabricated satellite and the analysis results shows the methodology to be effective. Key Words: Thermal control subsystem design, Statistical design model (SDM), Satellite conceptual design, Thermal hardware

  8. Statistical Process Control Charts for Public Health Monitoring

    DTIC Science & Technology

    2014-12-01

    process performance, remove existing sources of natural and unnatural variability, and identify any new sources of variability [1]. Control charts are SPC...can be used and refined over time [4]. The causes of any Phase I points outside the established control limits should be investigated. If the cause is...U.S. Army Public Health Command Statistical Process Control Charts for Public Health Monitoring PHR No. S.0023112 General Medical: 500A, Public

  9. Reinforcement learning for adaptive threshold control of restorative brain-computer interfaces: a Bayesian simulation

    PubMed Central

    Bauer, Robert; Gharabaghi, Alireza

    2015-01-01

    Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information-theory, we provided an explanation for the achieved benefits of adaptive threshold setting. PMID:25729347

  10. Bayesian Approach to Dynamically Controlling Data Collection in P300 Spellers

    PubMed Central

    Throckmorton, Chandra S.; Colwell, Kenneth A.; Ryan, David B.; Sellers, Eric W.; Collins, Leslie M.

    2013-01-01

    P300 spellers provide a noninvasive method of communication for people who may not be able to use other communication aids due to severe neuromuscular disabilities. However, P300 spellers rely on event-related potentials (ERPs) which often have low signal-to-noise ratios (SNRs). In order to improve detection of the ERPs, P300 spellers typically collect multiple measurements of the electroencephalography (EEG) response for each character. The amount of collected data can affect both the accuracy and the communication rate of the speller system. The goal of the present study was to develop an algorithm that would automatically determine the necessary amount of data to collect during operation. Dynamic data collection was controlled by a threshold on the probabilities that each possible character was the target character, and these probabilities were continually updated with each additional measurement. This Bayesian technique differs from other dynamic data collection techniques by relying on a participant-independent, probability-based metric as the stopping criterion. The accuracy and communication rate for dynamic and static data collection in P300 spellers were compared for 26 users. Dynamic data collection resulted in a significant increase in accuracy and communication rate. PMID:23529202
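
    As a rough illustration of the stopping rule described above, the sketch below updates a posterior over candidate characters after each flash and halts once one character's probability crosses a threshold. The Gaussian score model, the 36-character grid, the means, and the 0.95 threshold are assumptions made for illustration, not values from the study.

      import numpy as np

      def dynamic_stopping(scores, flashed_sets, n_chars=36,
                           mu_target=1.0, mu_nontarget=0.0, sigma=1.0, threshold=0.95):
          """Update character posteriors after each flash; stop once one exceeds threshold."""
          log_post = np.full(n_chars, -np.log(n_chars))        # uniform prior over characters
          post = np.exp(log_post)
          for score, flashed in zip(scores, flashed_sets):
              mus = np.where(np.isin(np.arange(n_chars), list(flashed)),
                             mu_target, mu_nontarget)
              log_post += -0.5 * ((score - mus) / sigma) ** 2  # Gaussian log-likelihood update
              post = np.exp(log_post - log_post.max())
              post /= post.sum()
              if post.max() >= threshold:                      # enough evidence: stop collecting
                  break
          return int(post.argmax()), post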

  11. Reinforcement learning for adaptive threshold control of restorative brain-computer interfaces: a Bayesian simulation.

    PubMed

    Bauer, Robert; Gharabaghi, Alireza

    2015-01-01

    Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information-theory, we provided an explanation for the achieved benefits of adaptive threshold setting.

  12. Robust bayesian sensitivity analysis for case-control studies with uncertain exposure misclassification probabilities.

    PubMed

    Mak, Timothy Shin Heng; Best, Nicky; Rushton, Lesley

    2015-05-01

    Exposure misclassification in case-control studies leads to bias in odds ratio estimates. There has been considerable interest recently to account for misclassification in estimation so as to adjust for bias as well as more accurately quantify uncertainty. These methods require users to elicit suitable values or prior distributions for the misclassification probabilities. In the event where exposure misclassification is highly uncertain, these methods are of limited use, because the resulting posterior uncertainty intervals tend to be too wide to be informative. Posterior inference also becomes very dependent on the subjectively elicited prior distribution. In this paper, we propose an alternative "robust Bayesian" approach, where instead of eliciting prior distributions for the misclassification probabilities, a feasible region is given. The extrema of posterior inference within the region are sought using an inequality constrained optimization algorithm. This method enables sensitivity analyses to be conducted in a useful way as we do not need to restrict all of our unknown parameters to fixed values, but can instead consider ranges of values at a time.

  13. Real-time statistical quality control and ARM

    SciTech Connect

    Blough, D.K.

    1992-05-01

    An important component of the Atmospheric Radiation Measurement (ARM) Program is real-time quality control of data obtained from meteorological instruments. It is the goal of the ARM program to enhance the predictive capabilities of global circulation models by incorporating in them more detailed information on the radiative characteristics of the earth's atmosphere. To this end, a number of Cloud and Radiation Testbeds (CART's) will be built at various locations worldwide. Each CART will consist of an array of instruments designed to collect radiative data. The large amount of data obtained from these instruments necessitates real-time processing in order to flag outliers and possible instrument malfunction. The Bayesian dynamic linear model (DLM) proves to be an effective way of monitoring the time series data which each instrument generates. It provides a flexible yet powerful approach to detecting in real-time sudden shifts in a non-stationary multivariate time series. An application of these techniques to data arising from a remote sensing instrument to be used in the CART is provided. Using real data from a wind profiler, the ability of the DLM to detect outliers is studied. 5 refs.
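
    A minimal sketch of the monitoring idea, using the simplest dynamic linear model (a local level observed with noise) and flagging observations whose standardized one-step forecast error is extreme. The variance settings, threshold, and simulated wind data are assumptions for illustration; the operational ARM implementation is multivariate and more elaborate.

      import numpy as np

      def dlm_outlier_flags(y, obs_var=1.0, state_var=0.1, z_crit=3.5):
          m, C = y[0], obs_var              # initial level estimate and its variance
          flags = [False]
          for obs in y[1:]:
              R = C + state_var             # prior variance of the current level
              Q = R + obs_var               # one-step forecast variance
              e = obs - m                   # forecast error (innovation)
              flags.append(abs(e) / np.sqrt(Q) > z_crit)
              A = R / Q                     # adaptive coefficient (Kalman gain)
              m = m + A * e                 # posterior mean of the level
              C = A * obs_var               # posterior variance of the level
          return np.array(flags)

      wind = np.concatenate([np.random.default_rng(0).normal(5.0, 1.0, 200), [25.0]])
      print(np.where(dlm_outlier_flags(wind))[0])   # the spike at index 200 is flagged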

  15. Statistical process control in Deep Space Network operation

    NASA Technical Reports Server (NTRS)

    Hodder, J. A.

    2002-01-01

    This report describes how the Deep Space Mission System (DSMS) Operations Program Office at the Jet Propulsion Laboratory (JPL) uses Statistical Process Control (SPC) to monitor performance and evaluate initiatives for improving processes on the National Aeronautics and Space Administration's (NASA) Deep Space Network (DSN).

  16. Bayesian analysis of genetic interactions in case-control studies, with application to adiponectin genes and colorectal cancer risk.

    PubMed

    Yi, Nengjun; Kaklamani, Virginia G; Pasche, Boris

    2011-01-01

    Complex diseases such as cancers are influenced by interacting networks of genetic and environmental factors. However, a joint analysis of multiple genes and environmental factors is challenging, owing to potentially large numbers of correlated and complex variables. We describe Bayesian generalized linear models for simultaneously analyzing covariates, main effects of numerous loci, gene-gene and gene-environment interactions in population case-control studies. Our Bayesian models use Student-t prior distributions with different shrinkage parameters for different types of effects, allowing reliable estimates of main effects and interactions and hence increasing the power for detection of real signals. We implement a fast and stable algorithm for fitting models by extending available tools for classical generalized linear models to the Bayesian case. We propose a novel method to interpret and visualize models with multiple interactions by computing the average predictive probability. Simulations show that the method has the potential to dissect interacting networks of complex diseases. Application of the method to a large case-control study of adiponectin genes and colorectal cancer risk highlights the previous results and detects new epistatic interactions and sex-specific effects that warrant follow-up in independent studies.

  17. A Statistical Project Control Tool for Engineering Managers

    NASA Technical Reports Server (NTRS)

    Bauch, Garland T.

    2001-01-01

    This slide presentation reviews the use of a Statistical Project Control Tool (SPCT) for managing engineering projects. A literature review pointed to a definition of project success (i.e., a project is successful when the cost, schedule, technical performance, and quality satisfy the customer), to project success factors, and to traditional project control tools and performance measures that are detailed in the report. The essential problem is that, with resources becoming more limited and the number of projects increasing, project failures are becoming more common; existing methods have limitations, and more systematic methods are required. The objective of the work is to provide a new statistical project control tool for project managers. Graphs produced with the SPCT method, plotting results of three successful projects and three failed projects, are reviewed, with success and failure being defined by the owner.

  18. Bayesian evaluation of budgets for endemic disease control: An example using management changes to reduce milk somatic cell count early in the first lactation of Irish dairy cows.

    PubMed

    Archer, S C; Mc Coy, F; Wapenaar, W; Green, M J

    2014-01-01

    The aim of this research was to determine budgets for specific management interventions to control heifer mastitis in Irish dairy herds as an example of evidence synthesis and 1-step Bayesian micro-simulation in a veterinary context. Budgets were determined for different decision makers based on their willingness to pay. Reducing the prevalence of heifers with a high milk somatic cell count (SCC) early in the first lactation could be achieved through herd-level management interventions for pre- and peri-partum heifers; however, the cost-effectiveness of these interventions is unknown. A synthesis of multiple sources of evidence, accounting for variability and uncertainty in the available data, is invaluable to inform decision makers about the likely economic outcomes of investing in disease control measures. One analytical approach to this is Bayesian micro-simulation, where the trajectory of different individuals undergoing specific interventions is simulated. The classic micro-simulation framework was extended to encompass synthesis of evidence from 2 separate statistical models and previous research, with the outcome for an individual cow or herd assessed in terms of changes in lifetime milk yield, disposal risk, and likely financial returns conditional on the interventions being simultaneously applied. The 3 interventions tested were storage of bedding inside, decreasing transition yard stocking density, and spreading of bedding evenly in the calving area. Budgets for the interventions were determined based on the minimum expected return on investment, and the probability of the desired outcome. Budgets for interventions to control heifer mastitis were highly dependent on the decision maker's willingness to pay, and hence minimum expected return on investment. Understanding the requirements of decision makers and their rational spending limits would be useful for the development of specific interventions for particular farms to control heifer mastitis, and other

  19. Preliminary statistical assessment towards characterization of biobotic control.

    PubMed

    Latif, Tahmid; Meng Yang; Lobaton, Edgar; Bozkurt, Alper

    2016-08-01

    Biobotic research involving neurostimulation of instrumented insects to control their locomotion is finding potential as an alternative solution towards development of centimeter-scale distributed swarm robotics. To improve the reliability of biobotic agents, their control mechanism needs to be precisely characterized. To achieve this goal, this paper presents our initial efforts for statistical assessment of the angular response of roach biobots to the applied bioelectrical stimulus. Subsequent findings can help to understand the effect of each stimulation parameter individually or collectively and eventually reach reliable and consistent biobotic control suitable for real life scenarios.

  20. Statistical inference in behavior analysis: Experimental control is better

    PubMed Central

    Perone, Michael

    1999-01-01

    Statistical inference promises automatic, objective, reliable assessments of data, independent of the skills or biases of the investigator, whereas the single-subject methods favored by behavior analysts often are said to rely too much on the investigator's subjective impressions, particularly in the visual analysis of data. In fact, conventional statistical methods are difficult to apply correctly, even by experts, and the underlying logic of null-hypothesis testing has drawn criticism since its inception. By comparison, single-subject methods foster direct, continuous interaction between investigator and subject and development of strong forms of experimental control that obviate the need for statistical inference. Treatment effects are demonstrated in experimental designs that incorporate replication within and between subjects, and the visual analysis of data is adequate when integrated into such designs. Thus, single-subject methods are ideal for shaping—and maintaining—the kind of experimental practices that will ensure the continued success of behavior analysis. PMID:22478328

  1. Statistical physics of human beings in games: Controlled experiments

    NASA Astrophysics Data System (ADS)

    Liang, Yuan; Huang, Ji-Ping

    2014-07-01

    It is important to know whether the laws or phenomena in statistical physics for natural systems with non-adaptive agents still hold for social human systems with adaptive agents, because this implies whether it is possible to study or understand social human systems by using statistical physics originating from natural systems. For this purpose, we review the role of human adaptability in four kinds of specific human behaviors, namely, normal behavior, herd behavior, contrarian behavior, and hedge behavior. The approach is based on controlled experiments in the framework of market-directed resource-allocation games. The role of the controlled experiments could be at least two-fold: adopting the real human decision-making process so that the system under consideration could reflect the performance of genuine human beings; making it possible to obtain macroscopic physical properties of a human system by tuning a particular factor of the system, thus directly revealing cause and effect. As a result, both computer simulations and theoretical analyses help to show a few counterparts of some laws or phenomena in statistical physics for social human systems: two-phase phenomena or phase transitions, entropy-related phenomena, and a non-equilibrium steady state. This review highlights the role of human adaptability in these counterparts, and makes it possible to study or understand some particular social human systems by means of statistical physics coming from natural systems.

  2. CRN5EXP: Expert system for statistical quality control

    NASA Technical Reports Server (NTRS)

    Hentea, Mariana

    1991-01-01

    The purpose of the Expert System CRN5EXP is to assist in checking the quality of the coils at two very important mills: Hot Rolling and Cold Rolling in a steel plant. The system interprets the statistical quality control charts, diagnoses and predicts the quality of the steel. Measurements of process control variables are recorded in a database and sample statistics such as the mean and the range are computed and plotted on a control chart. The chart is analyzed through patterns using the C Language Integrated Production System (CLIPS) and a forward chaining technique to reach a conclusion about the causes of defects and to take management measures for the improvement of the quality control techniques. The Expert System combines the certainty factors associated with the process control variables to predict the quality of the steel. The paper presents the approach to extract data from the database, the reason to combine certainty factors, the architecture and the use of the Expert System. However, the interpretation of control chart patterns requires the human expert's knowledge and lends itself to expert system rules.

  3. Advanced statistics: applying statistical process control techniques to emergency medicine: a primer for providers.

    PubMed

    Callahan, Charles D; Griffen, David L

    2003-08-01

    Emergency medicine faces unique challenges in the effort to improve efficiency and effectiveness. Increased patient volumes, decreased emergency department (ED) supply, and an increased emphasis on the ED as a diagnostic center have contributed to poor customer satisfaction and process failures such as diversion/bypass. Statistical process control (SPC) techniques developed in industry offer an empirically based means to understand our work processes and manage by fact. Emphasizing that meaningful quality improvement can occur only when it is exercised by "front-line" providers, this primer presents robust yet accessible SPC concepts and techniques for use in today's ED.
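
    As one concrete example of the kind of SPC technique the primer advocates, the sketch below builds an individuals (XmR) control chart for a made-up series of daily door-to-provider times; the data and the chart constants follow textbook SPC practice and are not taken from the article.

      import numpy as np

      def xmr_limits(x):
          x = np.asarray(x, dtype=float)
          moving_range = np.abs(np.diff(x))          # ranges of consecutive observations
          centre = x.mean()
          sigma_hat = moving_range.mean() / 1.128    # d2 constant for subgroups of size 2
          return centre - 3 * sigma_hat, centre, centre + 3 * sigma_hat

      door_to_provider = [38, 42, 35, 40, 44, 39, 41, 37, 80, 40]   # minutes; day 9 is unusual
      lcl, cl, ucl = xmr_limits(door_to_provider)
      print([t for t in door_to_provider if t > ucl or t < lcl])    # -> [80], a special cause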

  4. A statistical combustion phase control approach of SI engines

    NASA Astrophysics Data System (ADS)

    Gao, Jinwu; Wu, Yuhu; Shen, Tielong

    2017-02-01

    In order to maximize the performance of an internal combustion engine, the combustion phase is usually controlled to track its desired reference. However, because of the cyclic variability of combustion, it is difficult but meaningful to control the mean of the combustion phase and to constrain its variance. As a combustion phase indicator, the location of peak pressure (LPP) is utilized for real-time combustion phase control in this research. The purpose of the proposed method is to ensure that the mean of LPP statistically tracks its reference and to constrain the standard deviation of the LPP distribution. To achieve this, LPP is first calculated from the cylinder pressure sensor and its characteristics are analyzed at a steady-state operating condition; the distribution of LPP is then examined online using a hypothesis-test criterion. On the basis of the presented statistical algorithm, the current mean of LPP is applied in the feedback channel to design the spark advance adjustment law, and the stability of the closed-loop system is theoretically ensured according to a steady statistical model. Finally, the proposed strategy is verified on a spark ignition gasoline engine.
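
    A hedged sketch of the statistical feedback idea, not the authors' algorithm: estimate the mean LPP over a window of cycles, test whether it has drifted from its reference, and apply a proportional correction to the spark timing. The reference value, gain, significance level, and sign convention are invented for illustration.

      import numpy as np
      from scipy import stats

      def spark_correction(lpp_window, lpp_ref=8.0, alpha=0.05, gain=0.5):
          """Return a spark-timing correction (degrees) if the mean LPP is statistically off target."""
          _, p_value = stats.ttest_1samp(lpp_window, popmean=lpp_ref)
          if p_value < alpha:                                # mean LPP has drifted significantly
              return gain * (lpp_ref - np.mean(lpp_window))  # proportional correction (illustrative sign)
          return 0.0

      cycles = np.random.default_rng(3).normal(9.5, 1.5, size=60)   # simulated LPP values
      print(spark_correction(cycles))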

  5. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  6. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  7. Statistical Quality Control of Moisture Data in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Rukhovets, L.; Todling, R.

    1999-01-01

    A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.
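
    A heavily simplified sketch of a buddy check: each observation is compared with the mean of its neighbours and rejected when it deviates by more than a tolerance scaled to the local spread. The search radius, tolerance, and floor on the spread are assumptions for illustration; the operational GEOS DAS algorithm is adaptive and works on analysis residuals.

      import numpy as np

      def buddy_check(values, positions, radius=100.0, n_sigma=4.0, obs_sigma=1.0):
          values, positions = np.asarray(values, dtype=float), np.asarray(positions, dtype=float)
          reject = np.zeros(len(values), dtype=bool)
          for i, (v, p) in enumerate(zip(values, positions)):
              d = np.linalg.norm(positions - p, axis=1)
              buddies = (d > 0) & (d < radius)
              if buddies.sum() < 2:
                  continue                                   # not enough neighbours to decide
              local_mean = values[buddies].mean()
              local_sd = max(values[buddies].std(ddof=1), obs_sigma)
              reject[i] = abs(v - local_mean) > n_sigma * local_sd
          return reject

      obs = [5.1, 4.9, 5.3, 9.8, 5.0]                      # one bad value among consistent buddies
      pts = [[0, 0], [10, 0], [0, 10], [5, 5], [10, 10]]   # station coordinates (same units as radius)
      print(buddy_check(obs, pts, radius=50.0))            # -> [False False False  True False]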

  8. Monte Carlo Bayesian Inference on a Statistical Model of Sub-gridcolumn Moisture Variability Using High-resolution Cloud Observations . Part II; Sensitivity Tests and Results

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo M.; Norris, Peter M.

    2013-01-01

    Part I presented a Monte Carlo Bayesian method for constraining a complex statistical model of GCM sub-gridcolumn moisture variability using high-resolution MODIS cloud data, thereby permitting large-scale model parameter estimation and cloud data assimilation. This part performs some basic testing of this new approach, verifying that it does indeed significantly reduce mean and standard deviation biases with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud top pressure, and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the OMI instrument. Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows finite jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. This paper also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in the cloud observables on cloud vertical structure, beyond cloud top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard (1998) provides some help in this respect, by better honoring inversion structures in the background state.

  9. Bayesian estimation of prevalence of paratuberculosis in dairy herds enrolled in a voluntary Johne's Disease Control Programme in Ireland.

    PubMed

    McAloon, Conor G; Doherty, Michael L; Whyte, Paul; O'Grady, Luke; More, Simon J; Messam, Locksley L McV; Good, Margaret; Mullowney, Peter; Strain, Sam; Green, Martin J

    2016-06-01

    Bovine paratuberculosis is a disease characterised by chronic granulomatous enteritis which manifests clinically as a protein-losing enteropathy causing diarrhoea, hypoproteinaemia, emaciation and, eventually death. Some evidence exists to suggest a possible zoonotic link and a national voluntary Johne's Disease Control Programme was initiated by Animal Health Ireland in 2013. The objective of this study was to estimate herd-level true prevalence (HTP) and animal-level true prevalence (ATP) of paratuberculosis in Irish herds enrolled in the national voluntary JD control programme during 2013-14. Two datasets were used in this study. The first dataset had been collected in Ireland during 2005 (5822 animals from 119 herds), and was used to construct model priors. Model priors were updated with a primary (2013-14) dataset which included test records from 99,101 animals in 1039 dairy herds and was generated as part of the national voluntary JD control programme. The posterior estimate of HTP from the final Bayesian model was 0.23-0.34 with a 95% probability. Across all herds, the median ATP was found to be 0.032 (0.009, 0.145). This study represents the first use of Bayesian methodology to estimate the prevalence of paratuberculosis in Irish dairy herds. The HTP estimate was higher than previous Irish estimates but still lower than estimates from other major dairy producing countries.
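
    A simplified sketch of animal-level true-prevalence estimation with an imperfect test, using a grid approximation rather than the hierarchical model of the paper; the sensitivity, specificity, and test counts are illustrative values, not those used by the authors.

      import numpy as np
      from scipy import stats

      def true_prevalence_posterior(positives, n, sensitivity=0.6, specificity=0.99, grid=2001):
          p = np.linspace(0, 1, grid)                         # candidate true prevalences
          apparent = p * sensitivity + (1 - p) * (1 - specificity)
          log_lik = stats.binom.logpmf(positives, n, apparent)
          post = np.exp(log_lik - log_lik.max())              # flat prior on p
          return p, post / post.sum()

      p, post = true_prevalence_posterior(positives=320, n=10_000)
      cdf = np.cumsum(post)
      print(p[np.argmax(post)],                               # posterior mode
            p[np.searchsorted(cdf, 0.025)],                   # 2.5% credible bound
            p[np.searchsorted(cdf, 0.975)])                   # 97.5% credible bound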

  10. The Bayesian Inventory Problem

    DTIC Science & Technology

    1984-05-01

    Bayesian Approach to Demand Estimation and Inventory Provisioning," Naval Research Logistics Quarterly. Vol 20, 1973, (p607-624). 4 DeGroot , Morris H...page is blank APPENDIX A SUFFICIENT STATISTICS A convenient reference for moat of this material is DeGroot (41. Su-pose that we are sampling from a

  11. Statistical process control of a Kalman filter model.

    PubMed

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A

    2014-09-26

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on input model parameters and the normal distribution of residuals is not always fulfilled, it is very important to perform all possible tests on output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of state transition matrix and observing the properties of the a posteriori system state covariance matrix and the properties of the Kalman gain matrix. The statistical tests include the convergence of standard deviations of the system state components and normal distribution beside standard tests. Especially, computing controllability and observability matrices and controlling the normal distribution of residuals are not the standard procedures in the implementation of KF. Practical implementation is done on geodetic kinematic observations.
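
    A hedged sketch of two of the checks mentioned above for a linear Kalman filter: the rank of the observability matrix of the (F, H) pair, and a test that standardized innovations stay within normal-theory bounds. The matrices and threshold are generic examples, not values from the paper.

      import numpy as np

      def observability_matrix(F, H):
          n = F.shape[0]
          blocks = [H @ np.linalg.matrix_power(F, k) for k in range(n)]
          return np.vstack(blocks)          # full rank (= n) means the state is observable

      def innovation_in_control(innovation, S, z_crit=3.0):
          """innovation: residual y - H x_pred; S: its predicted covariance."""
          std = innovation / np.sqrt(np.diag(S))
          return np.all(np.abs(std) < z_crit)

      F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity state transition
      H = np.array([[1.0, 0.0]])               # we observe position only
      print(np.linalg.matrix_rank(observability_matrix(F, H)))  # -> 2, so the state is observable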

  12. A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research

    ERIC Educational Resources Information Center

    van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A. G.

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are…

  13. Bayesian neural adjustment of inhibitory control predicts emergence of problem stimulant use

    PubMed Central

    Stewart, Jennifer L.; Zhang, Shunan; Tapert, Susan F.; Yu, Angela J.; Paulus, Martin P.

    2015-01-01

    Bayesian ideal observer models quantify individuals’ context- and experience-dependent beliefs and expectations about their environment, which provides a powerful approach (i) to link basic behavioural mechanisms to neural processing; and (ii) to generate clinical predictors for patient populations. Here, we focus on (ii) and determine whether individual differences in the neural representation of the need to stop in an inhibitory task can predict the development of problem use (i.e. abuse or dependence) in individuals experimenting with stimulants. One hundred and fifty-seven non-dependent occasional stimulant users, aged 18–24, completed a stop-signal task while undergoing functional magnetic resonance imaging. These individuals were prospectively followed for 3 years and evaluated for stimulant use and abuse/dependence symptoms. At follow-up, 38 occasional stimulant users met criteria for a stimulant use disorder (problem stimulant users), while 50 had discontinued use (desisted stimulant users). We found that those individuals who showed greater neural responses associated with Bayesian prediction errors, i.e. the difference between actual and expected need to stop on a given trial, in right medial prefrontal cortex/anterior cingulate cortex, caudate, anterior insula, and thalamus were more likely to exhibit problem use 3 years later. Importantly, these computationally based neural predictors outperformed clinical measures and non-model based neural variables in predicting clinical status. In conclusion, young adults who show exaggerated brain processing underlying whether to ‘stop’ or to ‘go’ are more likely to develop stimulant abuse. Thus, Bayesian cognitive models provide both a computational explanation and potential predictive biomarkers of belief processing deficits in individuals at risk for stimulant addiction. PMID:26336910

  14. A Bayesian Semiparametric Approach for Incorporating Longitudinal Information on Exposure History for Inference in Case-Control Studies

    PubMed Central

    Bhadra, Dhiman; Daniels, Michael J.; Kim, Sungduk; Ghosh, Malay; Mukherjee, Bhramar

    2014-01-01

    In a typical case-control study, exposure information is collected at a single time-point for the cases and controls. However, case-control studies are often embedded in existing cohort studies containing a wealth of longitudinal exposure history on the participants. Recent medical studies have indicated that incorporating past exposure history, or a constructed summary measure of cumulative exposure derived from the past exposure history, when available, may lead to more precise and clinically meaningful estimates of the disease risk. In this paper, we propose a flexible Bayesian semiparametric approach to model the longitudinal exposure profiles of the cases and controls and then use measures of cumulative exposure based on a weighted integral of this trajectory in the final disease risk model. The estimation is done via a joint likelihood. In the construction of the cumulative exposure summary, we introduce an influence function, a smooth function of time to characterize the association pattern of the exposure profile on the disease status with different time windows potentially having differential influence/weights. This enables us to analyze how the present disease status of a subject is influenced by his/her past exposure history conditional on the current ones. The joint likelihood formulation allows us to properly account for uncertainties associated with both stages of the estimation process in an integrated manner. Analysis is carried out in a hierarchical Bayesian framework using Reversible jump Markov chain Monte Carlo (RJMCMC) algorithms. The proposed methodology is motivated by, and applied to a case-control study of prostate cancer where longitudinal biomarker information is available for the cases and controls. PMID:22313248

  15. Bayesian demography 250 years after Bayes

    PubMed Central

    Bijak, Jakub; Bryant, John

    2016-01-01

    Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms. PMID:26902889

  16. BIE: Bayesian Inference Engine

    NASA Astrophysics Data System (ADS)

    Weinberg, Martin D.

    2013-12-01

    The Bayesian Inference Engine (BIE) is an object-oriented library of tools written in C++ designed explicitly to enable Bayesian update and model comparison for astronomical problems. To facilitate "what if" exploration, BIE provides a command line interface (written with Bison and Flex) to run input scripts. The output of the code is a simulation of the Bayesian posterior distribution, from which summary statistics (e.g., moments, confidence intervals, and so forth) can be determined. All of these quantities are fundamentally integrals, and the Markov Chain approach produces variates θ distributed according to P(θ|D), so moments are trivially obtained by summing over the ensemble of variates.
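
    A small sketch of the post-processing step described above: given draws of θ from P(θ|D), posterior summaries are ordinary sample statistics of the ensemble. The draws here are synthetic stand-ins, not actual BIE output.

      import numpy as np

      rng = np.random.default_rng(0)
      theta = rng.normal(loc=2.0, scale=0.5, size=20_000)   # stand-in for posterior draws

      posterior_mean = theta.mean()                          # first moment
      posterior_var = theta.var(ddof=1)                      # second central moment
      ci_95 = np.percentile(theta, [2.5, 97.5])              # equal-tailed credible interval
      print(posterior_mean, posterior_var, ci_95)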

  17. Statistically Controlling for Confounding Constructs Is Harder than You Think

    PubMed Central

    Westfall, Jacob; Yarkoni, Tal

    2016-01-01

    Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
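
    A Monte Carlo sketch of the phenomenon described above: when the covariate being "controlled for" is measured with error, a truly irrelevant second predictor turns out statistically significant far more often than the nominal 5%. The sample size, reliability, and construct correlation are illustrative choices, not the simulation settings used by the authors.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n, reliability, n_sims, false_pos = 500, 0.7, 2000, 0

      for _ in range(n_sims):
          true_a = rng.normal(size=n)
          true_b = 0.8 * true_a + np.sqrt(1 - 0.64) * rng.normal(size=n)  # correlated construct
          y = true_a + rng.normal(size=n)                                  # only A drives Y
          obs_a = np.sqrt(reliability) * true_a + np.sqrt(1 - reliability) * rng.normal(size=n)
          obs_b = np.sqrt(reliability) * true_b + np.sqrt(1 - reliability) * rng.normal(size=n)
          fit = sm.OLS(y, sm.add_constant(np.column_stack([obs_a, obs_b]))).fit()
          false_pos += fit.pvalues[2] < 0.05   # "incremental validity" of B is spurious here

      print(false_pos / n_sims)                # typically far above the nominal 0.05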

  18. LOWER LEVEL INFERENCE CONTROL IN STATISTICAL DATABASE SYSTEMS

    SciTech Connect

    Lipton, D.L.; Wong, H.K.T.

    1984-02-01

    An inference is the process of transforming unclassified data values into confidential data values. Most previous research in inference control has studied the use of statistical aggregates to deduce individual records. However, several other types of inference are also possible. Unknown functional dependencies may be apparent to users who have 'expert' knowledge about the characteristics of a population. Some correlations between attributes may be concluded from 'commonly-known' facts about the world. To counter these threats, security managers should use random sampling of databases of similar populations, as well as expert systems. 'Expert' users of the DATABASE SYSTEM may form inferences from the variable performance of the user interface. Users may observe on-line turn-around time, accounting statistics, the error messages received, and the point at which an interactive protocol sequence fails. One may obtain information about the frequency distributions of attribute values, and the validity of data object names from this information. At the back-end of a database system, improved software engineering practices will reduce opportunities to bypass functional units of the database system. The term 'DATA OBJECT' should be expanded to incorporate these data object types which generate new classes of threats. The security of DATABASES and DATABASE SYSTEMS must be recognized as separate but related problems. Thus, by increased awareness of lower level inferences, system security managers may effectively nullify the threat posed by lower level inferences.

  19. The influence of control group reproduction on the statistical ...

    EPA Pesticide Factsheets

    Because of various Congressional mandates to protect the environment from endocrine disrupting chemicals (EDCs), the United States Environmental Protection Agency (USEPA) initiated the Endocrine Disruptor Screening Program. In the context of this framework, the Office of Research and Development within the USEPA developed the Medaka Extended One Generation Reproduction Test (MEOGRT) to characterize the endocrine action of a suspected EDC. One important endpoint of the MEOGRT is fecundity of breeding pairs of medaka. Power analyses were conducted to determine the number of replicates needed in proposed test designs and to determine the effects that varying reproductive parameters (e.g. mean fecundity, variance, and days with no egg production) will have on the statistical power of the test. A software tool, the MEOGRT Reproduction Power Analysis Tool, was developed to expedite these power analyses by both calculating estimates of the needed reproductive parameters (e.g. population mean and variance) and performing the power analysis under user specified scenarios. The manuscript illustrates how the reproductive performance of the control medaka that are used in a MEOGRT influence statistical power, and therefore the successful implementation of the protocol. Example scenarios, based upon medaka reproduction data collected at MED, are discussed that bolster the recommendation that facilities planning to implement the MEOGRT should have a culture of medaka with hi

  20. Advances in Bayesian Modeling in Educational Research

    ERIC Educational Resources Information Center

    Levy, Roy

    2016-01-01

    In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…

  1. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  2. Bayesian Inference: with ecological applications

    USGS Publications Warehouse

    Link, William A.; Barker, Richard J.

    2010-01-01

    This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies as well as students in advanced undergraduate statistics. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.

  3. Impact angle control of interplanetary shock geoeffectiveness: A statistical study

    NASA Astrophysics Data System (ADS)

    Oliveira, Denny M.; Raeder, Joachim

    2015-06-01

    We present a survey of interplanetary (IP) shocks using Wind and ACE satellite data from January 1995 to December 2013 to study how IP shock geoeffectiveness is controlled by IP shock impact angles. A shock list covering one and a half solar cycle is compiled. The yearly number of IP shocks is found to correlate well with the monthly sunspot number. We use data from SuperMAG, a large chain with more than 300 geomagnetic stations, to study geoeffectiveness triggered by IP shocks. The SuperMAG SML index, an enhanced version of the familiar AL index, is used in our statistical analysis. The jumps of the SML index triggered by IP shock impacts on the Earth's magnetosphere are investigated in terms of IP shock orientation and speed. We find that, in general, strong (high speed) and almost frontal (small impact angle) shocks are more geoeffective than inclined shocks with low speed. The strongest correlation (correlation coefficient R = 0.78) occurs for fixed IP shock speed and for varied IP shock impact angle. We attribute this result, predicted previously with simulations, to the fact that frontal shocks compress the magnetosphere symmetrically from all sides, which is a favorable condition for the release of magnetic energy stored in the magnetotail, which in turn can produce moderate to strong auroral substorms, which are then observed by ground-based magnetometers.

  4. Propagation of Bayesian Belief for Near-Real Time Statistical Assessment of Geosynchronous Satellite Status Based on Non-Resolved Photometry Data

    DTIC Science & Technology

    2014-09-01

    The objective of Bayesian belief propagation in this paper is to perform an interactive status... This work was sponsored by the Air Force Research Laboratory, Space Vehicles Directorate (AFRL/RV), Kirtland AFB, Albuquerque, NM 87117. Its foundation is the support

  5. A Bayesian framework to assess the potential for controlling classical scrapie in sheep flocks using a live diagnostic test.

    PubMed

    Gryspeirt, Aiko; Gubbins, Simon

    2013-09-01

    Current strategies to control classical scrapie remove animals at risk of scrapie rather than those known to be infected with the scrapie agent. Advances in diagnostic tests, however, suggest that a more targeted approach involving the application of a rapid live test may be feasible in future. Here we consider the use of two diagnostic tests: recto-anal mucosa-associated lymphatic tissue (RAMALT) biopsies; and a blood-based assay. To assess their impact we developed a stochastic age- and prion protein (PrP) genotype-structured model for the dynamics of scrapie within a sheep flock. Parameters were estimated in a Bayesian framework to facilitate integration of a number of disparate datasets and to allow parameter uncertainty to be incorporated in model predictions. In small flocks a control strategy based on removal of clinical cases was sufficient to control disease and more stringent measures (including the use of a live diagnostic test) did not significantly reduce outbreak size or duration. In medium or large flocks strategies in which a large proportion of animals are tested with either live diagnostic test significantly reduced outbreak size, but not always duration, compared with removal of clinical cases. However, the current Compulsory Scrapie Flocks Scheme (CSFS) significantly reduced outbreak size and duration compared with both removal of clinical cases and all strategies using a live diagnostic test. Accordingly, under the assumptions made in the present study there is little benefit from implementing a control strategy which makes use of a live diagnostic test.

  6. Bayesian approach to non-inferiority trials for normal means.

    PubMed

    Gamalo, M Amper; Wu, Rui; Tiwari, Ram C

    2016-02-01

    The regulatory framework recommends that novel statistical methodology for analyzing trial results parallel the frequentist strategy; for example, the new method must protect type-I error and arrive at a similar conclusion. Keeping these requirements in mind, we construct a Bayesian approach for non-inferiority trials with a normal response. A non-informative prior is assumed for the mean response of the experimental treatment, and a Jeffreys prior for its corresponding variance when it is unknown. The posteriors of the mean response and variance of the treatment in historical trials are then assumed as priors for the corresponding parameters in the current trial, where that treatment serves as the active control. From these priors, a Bayesian decision criterion is derived to determine whether the experimental treatment is non-inferior to the active control. This criterion is evaluated and compared with the frequentist method using simulation studies. Results show that both Bayesian and frequentist approaches perform alike, but the Bayesian approach has higher power when the variances are unknown. Both methods also arrive at the same conclusion of non-inferiority when applied to two real datasets. A major advantage of the proposed Bayesian approach lies in its ability to provide posterior probabilities for varying effect sizes of the experimental treatment over the active control.
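
    A minimal sketch, under assumptions that differ from the authors' exact model, of a Bayesian non-inferiority check: draw from approximately t-distributed posteriors for the two means under vague priors and report the posterior probability that the experimental mean lies within a margin delta of the active control. The margin, data, and decision threshold are illustrative.

      import numpy as np

      def prob_non_inferior(y_exp, y_ctl, delta, n_draws=100_000, seed=0):
          rng = np.random.default_rng(seed)

          def posterior_draws(y):
              n, ybar, s = len(y), np.mean(y), np.std(y, ddof=1)
              # posterior of the mean under a vague prior: location-scale Student-t
              return rng.standard_t(n - 1, n_draws) * s / np.sqrt(n) + ybar

          diff = posterior_draws(y_exp) - posterior_draws(y_ctl)
          return np.mean(diff > -delta)        # posterior P(non-inferiority)

      rng = np.random.default_rng(2)
      print(prob_non_inferior(rng.normal(9.8, 2, 120), rng.normal(10.0, 2, 120), delta=1.0))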

  7. A Bayesian approach to strengthen inference for case-control studies with multiple error-prone exposure assessments.

    PubMed

    Zhang, Jing; Cole, Stephen R; Richardson, David B; Chu, Haitao

    2013-11-10

    In case-control studies, exposure assessments are almost always error-prone. In the absence of a gold standard, two or more assessment approaches are often used to classify people with respect to exposure. Each imperfect assessment tool may lead to misclassification of exposure assignment; the exposure misclassification may be differential with respect to case status or not; and, the errors in exposure classification under the different approaches may be independent (conditional upon the true exposure status) or not. Although methods have been proposed to study diagnostic accuracy in the absence of a gold standard, these methods are infrequently used in case-control studies to correct exposure misclassification that is simultaneously differential and dependent. In this paper, we proposed a Bayesian method to estimate the measurement-error corrected exposure-disease association, accounting for both differential and dependent misclassification. The performance of the proposed method is investigated using simulations, which show that the proposed approach works well, as well as an application to a case-control study assessing the association between asbestos exposure and mesothelioma.

  8. 77 FR 46096 - Statistical Process Controls for Blood Establishments; Public Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-02

    ...: Statistical process control is the application of statistical methods to the monitoring, or quality control... monitors manufacturing procedures, validation summaries, and quality control data prior to licensure and... at implementation and then monitor these processes on a regular basis, using quality control...

  9. A Total Quality-Control Plan with Right-Sized Statistical Quality-Control.

    PubMed

    Westgard, James O

    2017-03-01

    A new Clinical Laboratory Improvement Amendments option for risk-based quality-control (QC) plans became effective in January, 2016. Called an Individualized QC Plan, this option requires the laboratory to perform a risk assessment, develop a QC plan, and implement a QC program to monitor ongoing performance of the QC plan. Difficulties in performing a risk assessment may limit validity of an Individualized QC Plan. A better alternative is to develop a Total QC Plan including a right-sized statistical QC procedure to detect medically important errors. Westgard Sigma Rules provides a simple way to select the right control rules and the right number of control measurements.
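
    A small sketch of the sigma-metric calculation that underlies Westgard Sigma Rules: the allowable total error, less the observed bias, expressed in multiples of the method's imprecision. The example analyte and quality requirement are illustrative.

      def sigma_metric(tea_pct, bias_pct, cv_pct):
          """tea_pct: allowable total error; bias_pct: observed bias; cv_pct: imprecision (all in %)."""
          return (tea_pct - abs(bias_pct)) / cv_pct

      # Example: a method with 10% allowable total error, 1.5% bias, and 2% CV.
      print(sigma_metric(10.0, 1.5, 2.0))   # -> 4.25 sigma; higher sigma permits simpler QC rules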

  10. A Goal-Directed Bayesian Framework for Categorization

    PubMed Central

    Rigoli, Francesco; Pezzulo, Giovanni; Dolan, Raymond; Friston, Karl

    2017-01-01

    Categorization is a fundamental ability for efficient behavioral control. It allows organisms to remember the correct responses to categorical cues rather than to every stimulus encountered (thereby limiting computational cost and complexity), and to generalize appropriate responses to novel stimuli depending on their category assignment. Assuming the brain performs Bayesian inference, based on a generative model of the external world and future goals, we propose a computational model of categorization in which important properties emerge. These properties comprise the ability to infer latent causes of sensory experience, a hierarchical organization of latent causes, and an explicit inclusion of context and action representations. Crucially, these aspects derive from considering the environmental statistics that are relevant to achieve goals, and from the fundamental Bayesian principle that any generative model should be preferred over alternative models based on an accuracy-complexity trade-off. Our account is a step toward elucidating computational principles of categorization and its role within the Bayesian brain hypothesis. PMID:28382008

  11. Performance Monitoring and Assessment of Neuro-Adaptive Controllers for Aerospace Applications Using a Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Guenther, Kurt; Hodgkinson, John; Jacklin, Stephen; Richard, Michael; Schumann, Johann; Soares, Fola

    2005-01-01

    Modern exploration missions require modern control systems: control systems that can handle catastrophic changes in the system's behavior, compensate for slow deterioration in sustained operations, and support fast system identification. Adaptive controllers based upon neural networks have these capabilities, but they can only be used safely if proper verification and validation (V&V) can be done. In this paper we present our V&V approach and simulation results within NASA's Intelligent Flight Control Systems (IFCS).

  12. Wholesale Warehouse Inventory Control with Statistical Demand Information.

    DTIC Science & Technology

    1980-12-01

    Harvey M. Wagner, Principal Investigator; Richard Ehrhardt, Co-Principal Investigator. Cited works include MacCormick, A. (1974), Statistical... Analysis, University of North Carolina at Chapel Hill, 109 pp., and Schultz, C. R., R. Ehrhardt, and A. MacCormick (1977), Forecasting Operating... Previous forecasting studies [MacCormick (1974), Estey and Kaufman (1975), Ehrhardt (1976), and Kaufman (1977)] have...

  13. Continuous safety monitoring for randomized controlled clinical trials with blinded treatment information. Part 2: Statistical considerations.

    PubMed

    Ball, Greg; Piller, Linda B

    2011-09-01

    If the primary objective of a trial is to learn about the ability of a new treatment to help future patients without sacrificing the safe and effective treatment of the current patients, then a Bayesian design with frequent assessments of the accumulating data should be considered. Unfortunately, Bayesian analyses typically do not have standard approaches, and because of the subjectivity of prior probabilities and the possibility for introducing bias, statisticians have developed other methods for statistical inference that only depend on deductive probabilities. However, these frequentist probabilities are just theories about how certain relative frequencies will develop over time. They have no real meaning in a single experiment. Designed to work well in the long run, p-values become hard to explain for individual experiments. Fortunately, the controversy surrounding Bayes' theorem comes, not from the representation of evidence, but from the use of probabilities to measure belief. A prior distribution is not necessary. The likelihood function contains all of the information in a trial relevant for making inferences about the parameters. Monitoring clinical trials is a dynamic process which requires flexibility to respond to unforeseen developments. Likelihood ratios allow the data to speak for themselves, without regard for the probability of observing weak or misleading evidence, and decisions to stop, or continue, a trial can be made at any time, with all of the available information. A likelihood based method is needed.
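
    A minimal sketch of the kind of likelihood-based monitoring the authors advocate: the likelihood ratio for an elevated adverse-event rate is recomputed as each outcome accrues, and the trial is flagged when the ratio crosses a conventional evidence benchmark. The event rates, outcome data, and threshold below are hypothetical.

```python
import numpy as np

def binomial_lr(events, n, p_alt, p_null):
    """Likelihood ratio for an elevated adverse-event rate p_alt versus p_null
    after observing `events` events in `n` patients (computed on the log scale)."""
    log_lr = (events * (np.log(p_alt) - np.log(p_null))
              + (n - events) * (np.log1p(-p_alt) - np.log1p(-p_null)))
    return np.exp(log_lr)

# hypothetical accumulating safety outcomes, checked after every patient
outcomes = [0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
events = 0
for i, y in enumerate(outcomes, start=1):
    events += y
    lr = binomial_lr(events, i, p_alt=0.30, p_null=0.10)
    print(f"n={i:2d} events={events} LR={lr:8.2f}")
    if lr >= 32:  # a conventional 'strong evidence' benchmark
        print("strong evidence of excess risk -- pause and review")
        break
```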

  14. Prior approval: the growth of Bayesian methods in psychology.

    PubMed

    Andrews, Mark; Baguley, Thom

    2013-02-01

    Within the last few years, Bayesian methods of data analysis in psychology have proliferated. In this paper, we briefly review the history of the Bayesian approach to statistics, and consider the implications that Bayesian methods have for the theory and practice of data analysis in psychology.

  15. Performance Monitoring and Assessment of Neuro-Adaptive Controllers for Aerospace Applications Using a Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Jacklin, Stephen; Schumann, Johann; Guenther, Kurt; Richard, Michael; Soares, Fola

    2005-01-01

    Modern aircraft, UAVs, and robotic spacecraft pose substantial requirements on controllers in the light of ever increasing demands for reusability, affordability, and reliability. The individual systems (which are often nonlinear) must be controlled safely and reliably in environments where it is virtually impossible to analyze, ahead of time, all the important and possible scenarios and environmental factors. For example, system components (e.g., gyros, bearings of reaction wheels, valves) may deteriorate or break during autonomous UAV operation or long-lasting space missions, leading to a sudden, drastic change in vehicle performance. Manual repair or replacement is not an option in such cases. Instead, the system must be able to cope with equipment failure and deterioration. Controllability of the system must be retained as well as possible or re-established as fast as possible with a minimum of deactivation or shutdown of the system being controlled. In such situations the control engineer has to employ adaptive control systems that automatically sense and correct themselves whenever drastic disturbances and/or severe changes in the plant or environment occur.

  16. Bayesian computation via empirical likelihood

    PubMed Central

    Mengersen, Kerrie L.; Pudlo, Pierre; Robert, Christian P.

    2013-01-01

    Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the approximate Bayesian computation parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models. PMID:23297233

  17. Statistical process control (SPC) for coordinate measurement machines

    SciTech Connect

    Escher, R.N.

    2000-01-04

    The application of process capability analysis, using designed experiments, and gage capability studies as they apply to coordinate measurement machine (CMM) uncertainty analysis and control will be demonstrated. The use of control standards in designed experiments, and the use of range charts and moving range charts to separate measurement error into its discrete components, will be discussed. The method used to monitor and analyze the components of repeatability and reproducibility will be presented, with specific emphasis on how to use control charts to determine and monitor CMM performance and capability, and stay within your uncertainty assumptions.
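
    The individuals and moving-range charts mentioned above have standard control-limit formulas; the sketch below computes them for a hypothetical series of repeat measurements of a CMM check standard (the data and tolerances are illustrative only).

```python
import numpy as np

def imr_limits(x):
    """Control limits for an individuals (I) chart and a moving-range (MR) chart,
    using the standard constants 3/d2 ~= 2.66 and D4 = 3.267 for subgroups of 2."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))            # moving ranges of consecutive points
    mr_bar = mr.mean()
    center = x.mean()
    i_limits = (center - 2.66 * mr_bar, center + 2.66 * mr_bar)
    mr_limits = (0.0, 3.267 * mr_bar)
    return center, i_limits, mr_bar, mr_limits

# hypothetical repeat measurements of a CMM check standard (mm)
x = [25.001, 25.003, 24.999, 25.002, 25.000, 25.004, 24.998, 25.002]
center, i_lim, mr_bar, mr_lim = imr_limits(x)
print(f"I chart:  center={center:.4f}  LCL={i_lim[0]:.4f}  UCL={i_lim[1]:.4f}")
print(f"MR chart: MR-bar={mr_bar:.4f}  UCL={mr_lim[1]:.4f}")
```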

  18. Bayesian Analysis of Individual Level Personality Dynamics

    PubMed Central

    Cripps, Edward; Wood, Robert E.; Beckmann, Nadin; Lau, John; Beckmann, Jens F.; Cripps, Sally Ann

    2016-01-01

    A Bayesian technique with analyses of within-person processes at the level of the individual is presented. The approach is used to examine whether the patterns of within-person responses on a 12-trial simulation task are consistent with the predictions of ITA theory (Dweck, 1999). ITA theory states that the performance of an individual with an entity theory of ability is more likely to spiral down following a failure experience than the performance of an individual with an incremental theory of ability. This is because entity theorists interpret failure experiences as evidence of a lack of ability which they believe is largely innate and therefore relatively fixed; whilst incremental theorists believe in the malleability of abilities and interpret failure experiences as evidence of more controllable factors such as poor strategy or lack of effort. The results of our analyses support ITA theory at both the within- and between-person levels of analyses and demonstrate the benefits of Bayesian techniques for the analysis of within-person processes. These include more formal specification of the theory and the ability to draw inferences about each individual, which allows for more nuanced interpretations of individuals within a personality category, such as differences in the individual probabilities of spiraling. While Bayesian techniques have many potential advantages for the analyses of processes at the level of the individual, ease of use is not one of them for psychologists trained in traditional frequentist statistical techniques. PMID:27486415

  19. Bayesian Comparison of Two Regression Lines.

    ERIC Educational Resources Information Center

    Tsutakawa, Robert K.

    1978-01-01

    A Bayesian solution is presented for the Johnson-Neyman problem (whether or not the distance between two regression lines is statistically significant over a finite interval of the independent variable). (Author/CTM)

  20. Statistical methodologies for the control of dynamic remapping

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Nicol, D. M.

    1986-01-01

    Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually, the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.

  1. Statistical analysis of static shape control in space structures

    NASA Technical Reports Server (NTRS)

    Burdisso, Ricardo A.; Haftka, Raphael T.

    1990-01-01

    The article addresses the problem of efficient analysis of the statistics of initial and corrected shape distortions in space structures. Two approaches for improving efficiency are considered. One is an adjoint technique for calculating distortion shapes; the second is a modal expansion of distortion shapes in terms of pseudo-vibration modes. The two techniques are applied to the problem of optimizing actuator locations on a 55 m radiometer antenna. The adjoint analysis technique is used with a discrete-variable optimization method. The modal approximation technique is coupled with a standard conjugate-gradient continuous optimization method. The agreement between the two sets of results is good, validating both the approximate analysis and optimality of the results.

  2. A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research

    PubMed Central

    van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B; Neyer, Franz J; van Aken, Marcel AG

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided. PMID:24116396

  3. Bayesian stable isotope mixing models

    EPA Science Inventory

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...

  4. Statistical Process Control Charts for Measuring and Monitoring Temporal Consistency of Ratings

    ERIC Educational Resources Information Center

    Omar, M. Hafidz

    2010-01-01

    Methods of statistical process control were briefly investigated in the field of educational measurement as early as 1999. However, only the use of a cumulative sum chart was explored. In this article other methods of statistical quality control are introduced and explored. In particular, methods in the form of Shewhart mean and standard deviation…

  5. Bayesian Integrated Microbial Forensics

    SciTech Connect

    Jarman, Kristin H.; Kreuzer-Martin, Helen W.; Wunschel, David S.; Valentine, Nancy B.; Cliff, John B.; Petersen, Catherine E.; Colburn, Heather A.; Wahl, Karen L.

    2008-06-01

    In the aftermath of the 2001 anthrax letters, researchers have been exploring ways to predict the production environment of unknown source microorganisms. Different mass spectral techniques are being developed to characterize components of a microbe’s culture medium including water, carbon and nitrogen sources, metal ions added, and the presence of agar. Individually, each technique has the potential to identify one or two ingredients in a culture medium recipe. However, by integrating data from multiple mass spectral techniques, a more complete characterization is possible. We present a Bayesian statistical approach to integrated microbial forensics and illustrate its application on spores grown in different culture media.

  6. Application of Fragment Ion Information as Further Evidence in Probabilistic Compound Screening Using Bayesian Statistics and Machine Learning: A Leap Toward Automation.

    PubMed

    Woldegebriel, Michael; Zomer, Paul; Mol, Hans G J; Vivó-Truyols, Gabriel

    2016-08-02

    In this work, we introduce an automated, efficient, and elegant model to combine all pieces of evidence (e.g., expected retention times, peak shapes, isotope distributions, fragment-to-parent ratio) obtained from liquid chromatography-tandem mass spectrometry (LC-MS/MS) data for screening purposes. Combining all these pieces of evidence requires a careful assessment of the uncertainties in the analytical system as well as all possible outcomes. To date, the majority of the existing algorithms are highly dependent on user input parameters. Additionally, the screening process is tackled as a deterministic problem. In this work we present a Bayesian framework to deal with the combination of all these pieces of evidence. Contrary to conventional algorithms, the information is treated in a probabilistic way, and a final probability assessment of the presence/absence of a compound feature is computed. Additionally, all the necessary parameters except the chromatographic band broadening are learned from the data in the training and learning phase of the algorithm, avoiding the introduction of a large number of user-defined parameters. The proposed method was validated with a large data set and has shown improved sensitivity and specificity in comparison to a threshold-based commercial software package.
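
    One simple way to picture this kind of probabilistic evidence combination is as a product of likelihood ratios applied to prior odds, assuming the evidence sources are conditionally independent. The sketch below is not the authors' algorithm; the prior and likelihood ratios are made-up values for a single candidate feature.

```python
import numpy as np

def posterior_presence(prior, likelihood_ratios):
    """Combine independent evidence sources (retention time, peak shape,
    isotope pattern, fragment-to-parent ratio, ...) into
    P(compound present | all evidence) via posterior odds."""
    log_odds = np.log(prior / (1 - prior)) + np.sum(np.log(likelihood_ratios))
    return 1.0 / (1.0 + np.exp(-log_odds))

# hypothetical likelihood ratios P(evidence | present) / P(evidence | absent)
lrs = [4.0,   # retention time close to the expected value
       2.5,   # plausible peak shape
       6.0,   # isotope pattern matches
       0.8]   # fragment-to-parent ratio slightly off
print(f"P(present) = {posterior_presence(prior=0.05, likelihood_ratios=lrs):.3f}")
```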

  7. Active Control for Statistically Stationary Turbulent PremixedFlame Simulations

    SciTech Connect

    Bell, J.B.; Day, M.S.; Grcar, J.F.; Lijewski, M.J.

    2005-08-30

    The speed of propagation of a premixed turbulent flame correlates with the intensity of the turbulence encountered by the flame. One consequence of this property is that premixed flames in both laboratory experiments and practical combustors require some type of stabilization mechanism to prevent blow-off and flashback. The stabilization devices often introduce a level of geometric complexity that is prohibitive for detailed computational studies of turbulent flame dynamics. Furthermore, the stabilization introduces additional fluid mechanical complexity into the overall combustion process that can complicate the analysis of fundamental flame properties. To circumvent these difficulties we introduce a feedback control algorithm that allows us to computationally stabilize a turbulent premixed flame in a simple geometric configuration. For the simulations, we specify turbulent inflow conditions and dynamically adjust the integrated fueling rate to control the mean location of the flame in the domain. We outline the numerical procedure, and illustrate the behavior of the control algorithm on methane flames at various equivalence ratios in two dimensions. The simulation data are used to study the local variation in the speed of propagation due to flame surface curvature.
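
    As a toy picture of the feedback idea, the sketch below implements a generic proportional-integral adjustment of the fueling rate based on the error between the measured mean flame position and a target position. The gains, time step, and variable names are assumptions; the paper's actual control law is only described, not reproduced here.

```python
def fueling_adjustment(z_flame, z_target, state, kp=0.5, ki=0.05, dt=1.0e-4):
    """One step of a generic proportional-integral feedback law that nudges the
    integrated fueling rate toward holding the mean flame position at z_target.
    All gains and names are illustrative assumptions, not the paper's controller."""
    error = z_target - z_flame
    state["integral"] += error * dt
    return kp * error + ki * state["integral"]

state = {"integral": 0.0}
z_target = 0.012   # desired mean flame location (m), hypothetical
z_flame = 0.015    # current mean flame location reported by the simulation
delta_fuel = fueling_adjustment(z_flame, z_target, state)
print(f"relative fueling-rate adjustment: {delta_fuel:+.4e}")
```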

  8. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters so that, through Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE: for example, without further computations it provides the spectral index uncertainty, is computationally stable, and detects multimodality.
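
    A stripped-down illustration of the MCMC idea: a Metropolis sampler drawing from the joint posterior of a trend and a noise parameter for a synthetic position time series. The power-law noise component of the abstract is replaced by white noise to keep the sketch short, and all settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "position" time series: linear trend plus white noise
t = np.arange(200, dtype=float)
y = 0.03 * t + rng.normal(0.0, 1.0, t.size)

def log_post(theta):
    """Log posterior for (intercept, rate, log-sigma) with flat priors;
    white noise stands in for the power-law + white-noise model of the abstract."""
    a, b, log_sigma = theta
    sigma = np.exp(log_sigma)
    resid = y - (a + b * t)
    return -0.5 * np.sum(resid**2) / sigma**2 - y.size * log_sigma

def metropolis(log_target, theta0, step, n_samples=20000):
    theta = np.asarray(theta0, dtype=float)
    step = np.asarray(step, dtype=float)
    lp = log_target(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(log_post, theta0=[0.0, 0.0, 0.0], step=[0.05, 0.0005, 0.05])
burned = chain[5000:]                              # discard burn-in
print("posterior means (a, b, log-sigma):", burned.mean(axis=0).round(4))
print("posterior stds  (a, b, log-sigma):", burned.std(axis=0).round(4))
```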

  9. Bayesian survival analysis in clinical trials: What methods are used in practice?

    PubMed

    Brard, Caroline; Le Teuff, Gwénaël; Le Deley, Marie-Cécile; Hampson, Lisa V

    2017-02-01

    Background Bayesian statistics are an appealing alternative to the traditional frequentist approach to designing, analysing, and reporting of clinical trials, especially in rare diseases. Time-to-event endpoints are widely used in many medical fields. There are additional complexities to designing Bayesian survival trials which arise from the need to specify a model for the survival distribution. The objective of this article was to critically review the use and reporting of Bayesian methods in survival trials. Methods A systematic review of clinical trials using Bayesian survival analyses was performed through PubMed and Web of Science databases. This was complemented by a full text search of the online repositories of pre-selected journals. Cost-effectiveness, dose-finding studies, meta-analyses, and methodological papers using clinical trials were excluded. Results In total, 28 articles met the inclusion criteria, 25 were original reports of clinical trials and 3 were re-analyses of a clinical trial. Most trials were in oncology (n = 25), were randomised controlled (n = 21) phase III trials (n = 13), and half considered a rare disease (n = 13). Bayesian approaches were used for monitoring in 14 trials and for the final analysis only in 14 trials. In the latter case, Bayesian survival analyses were used for the primary analysis in four cases, for the secondary analysis in seven cases, and for the trial re-analysis in three cases. Overall, 12 articles reported fitting Bayesian regression models (semi-parametric, n = 3; parametric, n = 9). Prior distributions were often incompletely reported: 20 articles did not define the prior distribution used for the parameter of interest. Over half of the trials used only non-informative priors for monitoring and the final analysis (n = 12) when it was specified. Indeed, no articles fitting Bayesian regression models placed informative priors on the parameter of interest. The prior for the treatment

  10. Bayesian anatomy of galaxy structure

    NASA Astrophysics Data System (ADS)

    Yoon, Ilsang

    In this thesis I develop a Bayesian approach to model galaxy surface brightness and apply it to a bulge-disc decomposition analysis of galaxies in the near-infrared band, from the Two Micron All Sky Survey (2MASS). The thesis has three main parts. The first part is a technical development of the Bayesian galaxy image decomposition package GALPHAT, based on a Markov chain Monte Carlo algorithm. I implement a fast and accurate galaxy model image generation algorithm to reduce computation time and make the Bayesian approach feasible for real science analysis using a large ensemble of galaxies. I perform a benchmark test of GALPHAT and demonstrate significant improvement in parameter estimation with a correct statistical confidence. The second part is a performance test for full Bayesian application to galaxy bulge-disc decomposition analysis, including not only the parameter estimation but also the model comparison to classify different galaxy populations. The test demonstrates that GALPHAT has enough statistical power to make a reliable model inference using galaxy photometric survey data. Bayesian prior updating is also tested for parameter estimation and Bayes factor model comparison, and it shows that an informative prior significantly improves the model inference in every aspect. The last part is a Bayesian bulge-disc decomposition analysis using 2MASS Ks-band selected samples. I characterise the luminosity distributions in spheroids, bulges and discs separately in the local Universe and study the galaxy morphology correlation, by fully utilizing the ensemble parameter posterior of the entire galaxy sample. It shows that, to avoid a biased inference, the parameter covariance and model degeneracy have to be carefully characterized by the full probability distribution.

  11. What Is the Probability You Are a Bayesian?

    ERIC Educational Resources Information Center

    Wulff, Shaun S.; Robinson, Timothy J.

    2014-01-01

    Bayesian methodology continues to be widely used in statistical applications. As a result, it is increasingly important to introduce students to Bayesian thinking at early stages in their mathematics and statistics education. While many students in upper level probability courses can recite the differences in the Frequentist and Bayesian…

  12. SU-E-T-144: Bayesian Inference of Local Relapse Data Using a Poisson-Based Tumour Control Probability Model

    SciTech Connect

    La Russa, D

    2015-06-15

    Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk) were generated assuming α/β = 10 Gy and a fixed clonogen density of 10^7 cm^-3. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
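
    A much simpler stand-in for the model above: a pure Poisson TCP with a fixed clonogen number, a flat prior on α, and a grid posterior computed from hypothetical relapse counts per dose bin (no proliferation term and no integration over inter-patient heterogeneity in α). All numbers are illustrative.

```python
import numpy as np

def log_tcp(alpha, eqd2, n0=1e7):
    """Log of a simple Poisson TCP: log P(no clonogens survive dose eqd2),
    ignoring proliferation and inter-patient heterogeneity for clarity."""
    return -n0 * np.exp(-alpha * eqd2)

# hypothetical data: (EQD2 in Gy, patients, local relapses) per dose bin
data = [(50.0, 120, 114), (60.0, 200, 28), (70.0, 180, 2), (80.0, 123, 1)]

alphas = np.linspace(0.15, 0.45, 601)          # grid over alpha (1/Gy), flat prior
log_post = np.zeros_like(alphas)
for dose, n, relapses in data:
    lp = log_tcp(alphas, dose)                 # log probability of local control
    p_control = np.exp(lp)
    # binomial likelihood: `relapses` failures, `n - relapses` controlled patients
    log_post += relapses * np.log1p(-p_control) + (n - relapses) * lp

post = np.exp(log_post - log_post.max())
post /= post.sum()
mean_alpha = np.sum(alphas * post)
print(f"posterior mean alpha ~= {mean_alpha:.3f} per Gy")
```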

  13. Bayesian robust principal component analysis.

    PubMed

    Ding, Xinghao; He, Lihan; Carin, Lawrence

    2011-12-01

    A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.

  14. Quality Control of High-Dose-Rate Brachytherapy: Treatment Delivery Analysis Using Statistical Process Control

    SciTech Connect

    Able, Charles M.; Bright, Megan; Frizzell, Bart

    2013-03-01

    Purpose: Statistical process control (SPC) is a quality control method used to ensure that a process is well controlled and operates with little variation. This study determined whether SPC was a viable technique for evaluating the proper operation of a high-dose-rate (HDR) brachytherapy treatment delivery system. Methods and Materials: A surrogate prostate patient was developed using Vyse ordnance gelatin. A total of 10 metal oxide semiconductor field-effect transistors (MOSFETs) were placed from prostate base to apex. Computed tomography guidance was used to accurately position the first detector in each train at the base. The plan consisted of 12 needles with 129 dwell positions delivering a prescribed peripheral dose of 200 cGy. Sixteen accurate treatment trials were delivered as planned. Subsequently, a number of treatments were delivered with errors introduced, including wrong patient, wrong source calibration, wrong connection sequence, single needle displaced inferiorly 5 mm, and entire implant displaced 2 mm and 4 mm inferiorly. Two process behavior charts (PBC), an individual and a moving range chart, were developed for each dosimeter location. Results: There were 4 false positives resulting from 160 measurements from 16 accurately delivered treatments. For the inaccurately delivered treatments, the PBC indicated that measurements made at the periphery and apex (regions of high-dose gradient) were much more sensitive to treatment delivery errors. All errors introduced were correctly identified by either the individual or the moving range PBC in the apex region. Measurements at the urethra and base were less sensitive to errors. Conclusions: SPC is a viable method for assessing the quality of HDR treatment delivery. Further development is necessary to determine the most effective dose sampling, to ensure reproducible evaluation of treatment delivery accuracy.

  15. Bayesian superresolution

    NASA Astrophysics Data System (ADS)

    Isakson, Steve Wesley

    2001-12-01

    Well-known principles of physics explain why resolution restrictions occur in images produced by optical diffraction-limited systems. The limitations involved are present in all diffraction-limited imaging systems, including acoustical and microwave. In most circumstances, however, prior knowledge about the object and the imaging system can lead to resolution improvements. In this dissertation I outline a method to incorporate prior information into the process of reconstructing images to superresolve the object beyond the above limitations. This dissertation research develops the details of this methodology. The approach can provide the most-probable global solution employing a finite number of steps in both far-field and near-field images. In addition, in order to overcome the effects of noise present in any imaging system, this technique provides a weighted image that quantifies the likelihood of various imaging solutions. By utilizing Bayesian probability, the procedure is capable of incorporating prior information about both the object and the noise to overcome the resolution limitation present in many imaging systems. Finally I will present an imaging system capable of detecting the evanescent waves missing from far-field systems, thus improving the resolution further.

  16. Multileaf collimator performance monitoring and improvement using semiautomated quality control testing and statistical process control

    SciTech Connect

    Létourneau, Daniel; McNiven, Andrea; Keller, Harald; Wang, An; Amin, Md Nurul; Pearce, Jim; Norrlinger, Bernhard; Jaffray, David A.

    2014-12-15

    Purpose: High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. Methods: The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3–4 times/week over a period of 10–11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ±0.5 and ±1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. Results: The precision of the MLC performance monitoring QC test and the MLC itself was within ±0.22 mm for most MLC leaves

  17. Toward improved prediction of the bedrock depth underneath hillslopes: Bayesian inference of the bottom-up control hypothesis using high-resolution topographic data

    NASA Astrophysics Data System (ADS)

    Gomes, Guilherme J. C.; Vrugt, Jasper A.; Vargas, Eurípedes A.

    2016-04-01

    The depth to bedrock controls a myriad of processes by influencing subsurface flow paths, erosion rates, soil moisture, and water uptake by plant roots. As hillslope interiors are very difficult and costly to illuminate and access, the topography of the bedrock surface is largely unknown. This essay is concerned with the prediction of spatial patterns in the depth to bedrock (DTB) using high-resolution topographic data, numerical modeling, and Bayesian analysis. Our DTB model builds on the bottom-up control on fresh-bedrock topography hypothesis of Rempe and Dietrich (2014) and includes a mass movement and bedrock-valley morphology term to extend the usefulness and general applicability of the model. We reconcile the DTB model with field observations using Bayesian analysis with the DREAM algorithm. We investigate explicitly the benefits of using spatially distributed parameter values to account implicitly, and in a relatively simple way, for rock mass heterogeneities that are very difficult, if not impossible, to characterize adequately in the field. We illustrate our method using an artificial data set of bedrock depth observations and then evaluate our DTB model with real-world data collected at the Papagaio river basin in Rio de Janeiro, Brazil. Our results demonstrate that the DTB model predicts accurately the observed bedrock depth data. The posterior mean DTB simulation is shown to be in good agreement with the measured data. The posterior prediction uncertainty of the DTB model can be propagated forward through hydromechanical models to derive probabilistic estimates of factors of safety.

  18. Bayesian Logical Data Analysis for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Gregory, Phil

    2010-05-01

    Preface; Acknowledgements; 1. Role of probability theory in science; 2. Probability theory as extended logic; 3. The how-to of Bayesian inference; 4. Assigning probabilities; 5. Frequentist statistical inference; 6. What is a statistic?; 7. Frequentist hypothesis testing; 8. Maximum entropy probabilities; 9. Bayesian inference (Gaussian errors); 10. Linear model fitting (Gaussian errors); 11. Nonlinear model fitting; 12. Markov Chain Monte Carlo; 13. Bayesian spectral analysis; 14. Bayesian inference (Poisson sampling); Appendix A. Singular value decomposition; Appendix B. Discrete Fourier transforms; Appendix C. Difference in two samples; Appendix D. Poisson ON/OFF details; Appendix E. Multivariate Gaussian from maximum entropy; References; Index.

  19. Comparing energy sources for surgical ablation of atrial fibrillation: a Bayesian network meta-analysis of randomized, controlled trials.

    PubMed

    Phan, Kevin; Xie, Ashleigh; Kumar, Narendra; Wong, Sophia; Medi, Caroline; La Meir, Mark; Yan, Tristan D

    2015-08-01

    Simplified maze procedures involving radiofrequency, cryoenergy and microwave energy sources have been increasingly utilized for surgical treatment of atrial fibrillation as an alternative to the traditional cut-and-sew approach. In the absence of direct comparisons, a Bayesian network meta-analysis is another alternative to assess the relative effect of different treatments, using indirect evidence. A Bayesian meta-analysis of indirect evidence was performed using 16 published randomized trials identified from 6 databases. Rank probability analysis was used to rank each intervention in terms of its probability of having the best outcome. Sinus rhythm prevalence beyond the 12-month follow-up was similar between the cut-and-sew, microwave and radiofrequency approaches, which were all ranked better than cryoablation (39, 36, and 25 vs 1%, respectively). The cut-and-sew maze was ranked worst in terms of mortality outcomes compared with microwave, radiofrequency and cryoenergy (2 vs 19, 34, and 24%, respectively). The cut-and-sew maze procedure was associated with significantly lower stroke rates compared with microwave ablation [odds ratio <0.01; 95% confidence interval 0.00, 0.82], and ranked the best in terms of pacemaker requirements compared with microwave, radiofrequency and cryoenergy (81 vs 14, 1, and <0.01%, respectively). Bayesian rank probability analysis shows that the cut-and-sew approach is associated with the best outcomes in terms of sinus rhythm prevalence and stroke outcomes, and remains the gold standard approach for AF treatment. Given the limitations of indirect comparison analysis, these results should be viewed with caution and not over-interpreted.
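
    Rank probability analysis itself is straightforward once posterior samples of the treatment effects are available: rank the treatments within each posterior draw and tabulate how often each one comes out on top. The sketch below uses made-up posterior draws, not the trial data summarized above.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical posterior draws of a treatment-effect parameter (larger = better),
# one entry per energy source; these are not the data from the meta-analysis
draws = {
    "cut-and-sew":    rng.normal(0.90, 0.25, 10000),
    "radiofrequency": rng.normal(0.80, 0.20, 10000),
    "microwave":      rng.normal(0.85, 0.30, 10000),
    "cryoenergy":     rng.normal(0.55, 0.35, 10000),
}

names = list(draws)
samples = np.column_stack([draws[k] for k in names])
# rank 1 = best outcome within each posterior draw
ranks = (-samples).argsort(axis=1).argsort(axis=1) + 1
for j, name in enumerate(names):
    p_best = np.mean(ranks[:, j] == 1)
    print(f"{name:>15s}: P(ranked best) = {p_best:.2f}")
```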

  20. An Automated Statistical Process Control Study of Inline Mixing Using Spectrophotometric Detection

    ERIC Educational Resources Information Center

    Dickey, Michael D.; Stewart, Michael D.; Willson, C. Grant

    2006-01-01

    An experiment is described, which is designed for a junior-level chemical engineering "fundamentals of measurements and data analysis" course, where students are introduced to the concept of statistical process control (SPC) through a simple inline mixing experiment. The students learn how to create and analyze control charts in an effort to…

  1. Advanced statistical process control: controlling sub-0.18-μm lithography and other processes

    NASA Astrophysics Data System (ADS)

    Zeidler, Amit; Veenstra, Klaas-Jelle; Zavecz, Terrence E.

    2001-08-01

    ...access of the analysis to include the external variables involved in CMP, deposition, etc. We then applied yield analysis methods to identify the significant lithography-external process variables from the history of lots, subsequently adding the identified process variables to the signatures database and to the PPC calculations. With these improvements, the authors anticipate a 50% improvement of the process window. This improvement results in a significant reduction of rework and improved yield, depending on process demands and equipment configuration. A statistical theory that explains the PPC is then presented. This theory can be used to simulate a general PPC application. In conclusion, the PPC concept is not limited to lithography or to semiconductors: it is applicable to any production process that is signature biased (the chemical industry, the car industry, etc.). Requirements for the PPC are large-scale data collection, a controllable process that is not too expensive to tune for every lot, and the ability to employ feedback calculations. PPC is a major change in the process management approach and therefore will first be employed where the need is high and the return on investment is very fast. The best industry to start with is semiconductors, and the most likely process area to start with is lithography.

  2. [Application of multivariate statistical analysis and thinking in quality control of Chinese medicine].

    PubMed

    Liu, Na; Li, Jun; Li, Bao-Guo

    2014-11-01

    The study of quality control of Chinese medicine has always been a hot and difficult spot in the development of traditional Chinese medicine (TCM), and it is one of the key problems restricting the modernization and internationalization of Chinese medicine. Multivariate statistical analysis is an analytical method well suited to the characteristics of TCM, and it has been used widely in the study of quality control of TCM. Multivariate statistical analysis is applied to the many correlated indicators and variables that appear in quality control studies in order to uncover hidden laws or relationships in the data, which can then serve decision-making and enable effective quality evaluation of TCM. In this paper, the application of multivariate statistical analysis in the quality control of Chinese medicine is summarized, providing a basis for its further study.

  3. Bayesian Confirmation and Interpretation.

    ERIC Educational Resources Information Center

    Ellett, Frederick S., Jr.

    1984-01-01

    The author briefly characterizes two ways to confirm the empirical part of educational theories: the hypothetico-deductive method and the Bayesian method. It is argued that the Bayesian approach can be justified. (JMK)

  4. 75 FR 6209 - Guidance for Industry and Food and Drug Administration; Guidance for the Use of Bayesian...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-08

    ... thoughts on the appropriate use of Bayesian statistical methods in the design and analysis of medical... outlines FDA's current thinking on the use of Bayesian statistical methods in medical device clinical... guidance represents the agency's current thinking on ``Guidance for the Use of Bayesian Statistics...

  5. Bayesian analysis for kaon photoproduction

    SciTech Connect

    Marsainy, T.; Mart, T.

    2014-09-25

    We have investigated the contribution of nucleon resonances to the kaon photoproduction process by using an established statistical decision making method, i.e., the Bayesian method. This method not only evaluates the model over its entire parameter space, but also takes the prior information and experimental data into account. The result indicates that certain resonances have larger probabilities to contribute to the process.

  6. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  7. Decentralizing Statistical Accuracy Control Responsibility to the Ship Production Workforce (The National Shipbuilding Research Program)

    DTIC Science & Technology

    1987-01-01

    Paper No. 3, submitted by J. B. 'Hank' Gerlach, Productivity Manager, Mare Island Naval Shipyard, Vallejo, CA: Decentralization of Statistical Accuracy Control. Although variations of the Statistical Accuracy Control process employed... The authors and the National Steel and...

  8. Bayesian classification theory

    NASA Technical Reports Server (NTRS)

    Hanson, Robin; Stutz, John; Cheeseman, Peter

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters through a class hierarchy. We summarize the mathematical foundations of AutoClass.
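
    A rough modern analogue of this kind of unsupervised Bayesian classification is a variational Bayesian Gaussian mixture, which also infers an effective number of classes by shrinking the weights of unneeded components. The sketch below uses scikit-learn's BayesianGaussianMixture on synthetic data; it is not the AutoClass implementation.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
# synthetic data drawn from three latent classes
X = np.vstack([rng.normal([0, 0], 0.5, (200, 2)),
               rng.normal([4, 1], 0.7, (150, 2)),
               rng.normal([1, 5], 0.6, (180, 2))])

# Variational Bayesian mixture: components with negligible posterior weight are
# effectively switched off, so the number of classes is inferred from the data.
model = BayesianGaussianMixture(n_components=10,
                                weight_concentration_prior=1e-2,
                                max_iter=500,
                                random_state=0).fit(X)
labels = model.predict(X)
active = np.sum(model.weights_ > 0.01)
print(f"components retained: {active}")
print("points per inferred class:", np.bincount(labels, minlength=10))
```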

  9. Evaluation of a partial genome screening of two asthma susceptibility regions using bayesian network based bayesian multilevel analysis of relevance.

    PubMed

    Ungvári, Ildikó; Hullám, Gábor; Antal, Péter; Kiszel, Petra Sz; Gézsi, András; Hadadi, Éva; Virág, Viktor; Hajós, Gergely; Millinghoffer, András; Nagy, Adrienne; Kiss, András; Semsei, Ágnes F; Temesi, Gergely; Melegh, Béla; Kisfali, Péter; Széll, Márta; Bikov, András; Gálffy, Gabriella; Tamási, Lilla; Falus, András; Szalai, Csaba

    2012-01-01

    Genetic studies indicate a high number of potential factors related to asthma. Based on earlier linkage analyses we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods and we applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). This method uses Bayesian network representation to provide detailed characterization of the relevance of factors, such as joint significance, the type of dependency, and multi-target aspects. We estimated posteriors for these relations within the Bayesian statistical framework in order to assess whether a variable is directly relevant or whether its association is only mediated. With frequentist methods one SNP (rs3751464 in the FRMD6 gene) provided evidence for an association with asthma (OR = 1.43 (1.2-1.8); p = 3×10(-4)). The possible role of the FRMD6 gene in asthma was also confirmed in an animal model and in human asthmatics. In the BN-BMLA analysis altogether 5 SNPs in 4 genes were found relevant in connection with the asthma phenotype: PRPF19 on chromosome 11, and FRMD6, PTGER2 and PTGDR on chromosome 14. In a subsequent step a partial dataset containing rhinitis and further clinical parameters was used, which allowed the analysis of the relevance of SNPs for asthma and multiple targets. These analyses suggested that SNPs in the AHNAK and MS4A2 genes were indirectly associated with asthma. This paper indicates that BN-BMLA explores the relevant factors more comprehensively than traditional statistical methods and extends the scope of strong relevance based methods to include partial relevance, global characterization of relevance and multi-target relevance.

  10. Application of machine learning and expert systems to Statistical Process Control (SPC) chart interpretation

    NASA Technical Reports Server (NTRS)

    Shewhart, Mark

    1991-01-01

    Statistical Process Control (SPC) charts are one of several tools used in quality control. Other tools include flow charts, histograms, cause and effect diagrams, check sheets, Pareto diagrams, graphs, and scatter diagrams. A control chart is simply a graph which indicates process variation over time. The purpose of drawing a control chart is to detect any changes in the process signalled by abnormal points or patterns on the graph. The Artificial Intelligence Support Center (AISC) of the Acquisition Logistics Division has developed a hybrid machine learning expert system prototype which automates the process of constructing and interpreting control charts.
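
    The pattern-detection part of such a system can be reduced to checking classic run rules against the control limits. The sketch below flags two of the Western Electric/Nelson-style patterns (a point beyond three sigma and a run of eight points on one side of the center line) on hypothetical chart data; it is not the AISC prototype.

```python
import numpy as np

def shewhart_signals(x, center, sigma):
    """Flag two classic out-of-control patterns on an individuals chart:
    a point beyond 3-sigma, and a run of 8 consecutive points on one side
    of the center line (two of the Western Electric / Nelson rules)."""
    x = np.asarray(x, dtype=float)
    beyond_3sigma = np.where(np.abs(x - center) > 3 * sigma)[0]
    runs = []
    side = np.sign(x - center)
    count, start = 0, 0
    for i, s in enumerate(side):
        if i > 0 and s == side[i - 1] and s != 0:
            count += 1
        else:
            count, start = 1, i
        if count == 8:                       # report the run once, when it reaches 8
            runs.append((start, i))
    return beyond_3sigma, runs

data = [0.1, -0.2, 0.3, 0.2, 0.4, 0.1, 0.5, 0.2, 0.3, 0.4, 3.6, 0.0]
points, runs = shewhart_signals(data, center=0.0, sigma=1.0)
print("points beyond 3-sigma:", points)     # index 10 (value 3.6)
print("runs of 8 on one side:", runs)       # run starting at index 2
```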

  11. Matched case-control studies: a review of reported statistical methodology

    PubMed Central

    Niven, Daniel J; Berthiaume, Luc R; Fick, Gordon H; Laupland, Kevin B

    2012-01-01

    Background Case-control studies are a common and efficient means of studying rare diseases or illnesses with long latency periods. Matching of cases and controls is frequently employed to control the effects of known potential confounding variables. The analysis of matched data requires specific statistical methods. Methods The objective of this study was to determine the proportion of published, peer-reviewed matched case-control studies that used statistical methods appropriate for matched data. Using a comprehensive set of search criteria we identified 37 matched case-control studies for detailed analysis. Results Among these 37 articles, only 16 studies were analyzed with proper statistical techniques (43%). Studies that were properly analyzed were more likely to have included case patients with cancer and cardiovascular disease compared to those that did not use proper statistics (10/16 or 63%, versus 5/21 or 24%, P = 0.02). They were also more likely to have matched multiple controls for each case (14/16 or 88%, versus 13/21 or 62%, P = 0.08). In addition, studies with properly analyzed data were more likely to have been published in a journal with an impact factor listed in the top 100 according to the Journal Citation Reports index (12/16 or 69%, versus 1/21 or 5%, P ≤ 0.0001). Conclusion The findings of this study raise concern that the majority of matched case-control studies report results that are derived from improper statistical analyses. This may lead to errors in estimating the relationship between a disease and exposure, as well as the incorrect adaptation of emerging medical literature. PMID:22570570
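
    For 1:1 matched pairs, one of the appropriate techniques the review alludes to is McNemar's test (or, more generally, conditional logistic regression), which uses only the discordant pairs. The sketch below applies McNemar's exact test to a hypothetical exposure table and reports the matched odds ratio; the counts are invented for illustration.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 1:1 matched case-control pairs, cross-classified by exposure
# (rows: case exposed yes/no; columns: matched control exposed yes/no)
table = np.array([[18, 37],
                  [13, 82]])

result = mcnemar(table, exact=True)      # test uses only the discordant pairs
odds_ratio = table[0, 1] / table[1, 0]   # conditional (matched-pairs) OR = b / c
print(f"discordant pairs: {table[0, 1]} vs {table[1, 0]}")
print(f"matched OR = {odds_ratio:.2f}, McNemar exact p = {result.pvalue:.4f}")
```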

  12. Intraplate volcanism controlled by back-arc and continental structures in NE Asia inferred from transdimensional Bayesian ambient noise tomography

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Rhie, Junkee; Chen, Youlin

    2016-08-01

    Intraplate volcanism adjacent to active continental margins is not simply explained by plate tectonics or plume interaction. Recent volcanoes in northeast (NE) Asia, including NE China and the Korean Peninsula, are characterized by heterogeneous tectonic structures and geochemical compositions. Here we apply a transdimensional Bayesian tomography to estimate high-resolution images of group and phase velocity variations (with periods between 8 and 70 s). The method provides robust estimations of velocity maps, and the reliability of results is tested through carefully designed synthetic recovery experiments. Our maps reveal two sublithospheric low-velocity anomalies that connect back-arc regions (in Japan and Ryukyu Trench) with current margins of continental lithosphere where the volcanoes are distributed. Combined with evidences from previous geochemical and geophysical studies, we argue that the volcanoes are related to the low-velocity structures associated with back-arc processes and preexisting continental lithosphere.

  13. Detecting Exoplanets using Bayesian Object Detection

    NASA Astrophysics Data System (ADS)

    Feroz, Farhan

    2015-08-01

    Detecting objects from noisy data-sets is common practice in astrophysics. Object detection presents a particular challenge in terms of statistical inference, not only because of its multi-modal nature but also because it combines both the parameter estimation (for characterizing objects) and model selection problems (in order to quantify the detection). Bayesian inference provides a mathematically rigorous solution to this problem by calculating marginal posterior probabilities of models with different number of sources, but the use of this method in astrophysics has been hampered by the computational cost of evaluating the Bayesian evidence. Nonetheless, Bayesian model selection has the potential to improve the interpretation of existing observational data. I will discuss several Bayesian approaches to object detection problems, both in terms of their theoretical framework and also the practical details about carrying out the computation. I will also describe some recent applications of these methods in the detection of exoplanets.

  14. Human-centered sensor-based Bayesian control: Increased energy efficiency and user satisfaction in commercial lighting

    NASA Astrophysics Data System (ADS)

    Granderson, Jessica Ann

    2007-12-01

    The need for sustainable, efficient energy systems is the motivation that drove this research, which targeted the design of an intelligent commercial lighting system. Lighting in commercial buildings consumes approximately 13% of all the electricity generated in the US. Advanced lighting controls intended for use in commercial office spaces have proven to save up to 45% in electricity consumption. However, they currently comprise only a fraction of the market share, resulting in a missed opportunity to conserve energy. The research goals driving this dissertation relate directly to barriers hindering widespread adoption: increase user satisfaction, and provide increased energy savings through more sophisticated control. To satisfy these goals an influence diagram was developed to perform daylighting actuation. This algorithm was designed to balance the potentially conflicting lighting preferences of building occupants with the efficiency desires of building facilities management. A supervisory control policy was designed to implement load shedding under a demand response tariff. Such tariffs offer incentives for customers to reduce their consumption during periods of peak demand, through price reductions. In developing the value function, occupant user testing was conducted to determine that computer and paper tasks require different illuminance levels, and that user preferences are sufficiently consistent to attain statistical significance. Approximately ten facilities managers were also interviewed and surveyed to isolate their lighting preferences with respect to measures of lighting quality and energy savings. Results from both simulation and physical implementation and user testing indicate that the intelligent controller can increase occupant satisfaction, efficiency, cost savings, and management satisfaction, with respect to existing commercial daylighting systems. Several important contributions were realized by satisfying the research goals. A general

  15. Project T.E.A.M. (Technical Education Advancement Modules). Advanced Statistical Process Control.

    ERIC Educational Resources Information Center

    Dunlap, Dale

    This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 20-hour advanced statistical process control (SPC) and quality improvement course designed to develop the following competencies: (1) understanding quality systems; (2) knowing the process; (3) solving quality problems; and (4)…

  16. Mainstreaming Remedial Mathematics Students in Introductory Statistics: Results Using a Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Logue, Alexandra W.; Watanabe-Rose, Mari

    2014-01-01

    This study used a randomized controlled trial to determine whether students, assessed by their community colleges as needing an elementary algebra (remedial) mathematics course, could instead succeed at least as well in a college-level, credit-bearing introductory statistics course with extra support (a weekly workshop). Researchers randomly…

  17. Analyzing a Mature Software Inspection Process Using Statistical Process Control (SPC)

    NASA Technical Reports Server (NTRS)

    Barnard, Julie; Carleton, Anita; Stamper, Darrell E. (Technical Monitor)

    1999-01-01

    This paper presents a cooperative effort in which the Software Engineering Institute and the Space Shuttle Onboard Software Project experimented with applying Statistical Process Control (SPC) analysis to inspection activities. The topics include: 1) SPC Collaboration Overview; 2) SPC Collaboration Approach and Results; and 3) Lessons Learned.

  18. Total Quality Management: Statistics and Graphics II-Control Charts. AIR 1992 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Cherland, Ryan M.

    An examination was conducted of the control chart as a quality improvement statistical method often used by Total Quality Management (TQM) practitioners in higher education. The examination used an example based on actual requests for information gathered for the Director of Human Resources at a medical center at a midwestern university. The…

  19. Using Statistical Process Control Charts to Study Stuttering Frequency Variability during a Single Day

    ERIC Educational Resources Information Center

    Karimi, Hamid; O'Brian, Sue; Onslow, Mark; Jones, Mark; Menzies, Ross; Packman, Ann

    2013-01-01

    Purpose: Stuttering varies between and within speaking situations. In this study, the authors used statistical process control charts with 10 case studies to investigate variability of stuttering frequency. Method: Participants were 10 adults who stutter. The authors counted the percentage of syllables stuttered (%SS) for segments of their speech…

  20. Sexual Abuse, Family Environment, and Psychological Symptoms: On the Validity of Statistical Control.

    ERIC Educational Resources Information Center

    Briere, John; Elliott, Diana M.

    1993-01-01

    Responds to article in which Nash et al. reported on effects of controlling for family environment when studying sexual abuse sequelae. Considers findings in terms of theoretical and statistical constraints placed on analysis of covariance and other partializing procedures. Questions use of covariate techniques to test hypotheses about causal role…

  1. Implementation of Statistical Process Control for Proteomic Experiments via LC MS/MS

    PubMed Central

    Bereman, Michael S.; Johnson, Richard; Bollinger, James; Boss, Yuval; Shulman, Nick; MacLean, Brendan; Hoofnagle, Andrew N.; MacCoss, Michael J.

    2014-01-01

    Statistical process control (SPC) is a robust set of tools that aids in the visualization, detection, and identification of assignable causes of variation in any process that creates products, services, or information. A tool has been developed termed Statistical Process Control in Proteomics (SProCoP) which implements aspects of SPC (e.g., control charts and Pareto analysis) into the Skyline proteomics software. It monitors five quality control metrics in a shotgun or targeted proteomic workflow. None of these metrics require peptide identification. The source code, written in the R statistical language, runs directly from the Skyline interface which supports the use of raw data files from several of the mass spectrometry vendors. It provides real time evaluation of the chromatographic performance (e.g., retention time reproducibility, peak asymmetry, and resolution); and mass spectrometric performance (targeted peptide ion intensity and mass measurement accuracy for high resolving power instruments) via control charts. Thresholds are experiment- and instrument-specific and are determined empirically from user-defined quality control standards that enable the separation of random noise and systematic error. Finally, Pareto analysis provides a summary of performance metrics and guides the user to metrics with high variance. The utility of these charts to evaluate proteomic experiments is illustrated in two case studies. PMID:24496601
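
    The published tool runs in R from the Skyline interface; purely as an illustration of the underlying Shewhart-chart idea, the sketch below derives empirical thresholds (mean ± 3 SD) from user-defined quality control runs and flags subsequent runs falling outside them. The metric and the numbers are hypothetical.

    ```python
    import numpy as np

    def shewhart_limits(qc_values, k=3.0):
        """Empirical control limits (mean +/- k*SD) from QC-standard runs."""
        qc = np.asarray(qc_values, dtype=float)
        center, sd = qc.mean(), qc.std(ddof=1)
        return center - k * sd, center, center + k * sd

    def flag_out_of_control(new_values, limits):
        lcl, _, ucl = limits
        return [(x, x < lcl or x > ucl) for x in new_values]

    # Hypothetical retention times (min) of a QC peptide across system-suitability runs.
    baseline = [22.1, 22.3, 22.0, 22.2, 22.1, 22.4, 22.2, 22.0]
    limits = shewhart_limits(baseline)
    print(flag_out_of_control([22.2, 23.1, 21.9], limits))
    ```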

  2. Application of statistical quality control measures for near-surface geochemical petroleum exploration

    NASA Astrophysics Data System (ADS)

    Belt, John Q.; Rice, Gary K.

    2002-02-01

    There are four major quality control measures that can apply to geochemical petroleum exploration data: statistical quality control charts, data reproducibility (Juran approach), ethane composition index, and hydrocarbon cross plots. Statistical quality control, or SQC, charts reflect the quality performance of the analytical process composed of equipment, instrumentation, and operator technique. An unstable process is detected through assignable causes using SQC charts. Knowing data variability is paramount to tying geochemical data over time for in-fill samples and/or project extensions. The Juran approach is a statistical tool used to help determine the ability of a process to maintain itself within the limits of set specifications for reproducing geochemical data. Ethane composition index, or ECI, is a statistical calculation based on near-surface, light hydrocarbon measurements that helps differentiate thermogenic petroleum sources at depth. The ECI data are integrated with subsurface geological information and/or seismic survey data to determine lower-risk drilling locations. Hydrocarbon cross plots are visual correlation techniques that compare two hydrocarbons within a similar hydrocarbon suite (e.g., ethane versus propane, benzene versus toluene, or 2-ring aromatics versus 3-ring aromatics). Cross plots help determine contamination, multiple petroleum sources, and low-quality data versus high-quality data indigenous to different geochemical exploration tools. When integrated with geomorphology, subsurface geology, and seismic survey data, high-quality geochemical data provide beneficial information for developing a petroleum exploration model. High-quality data are the key to the successful application of geochemistry in petroleum exploration modeling. The ability to produce high-quality geochemical data requires the application of quality control measures reflective of a well managed ISO 9000 quality system. Statistical quality control charts, Juran

  3. Bayesian Inference on Proportional Elections

    PubMed Central

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
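
    A minimal sketch of the Monte Carlo idea, under stated simplifications: vote shares get a Dirichlet posterior from poll counts, and a party is counted as represented if its simulated share clears a naive 1/seats threshold. The threshold rule and the poll numbers are assumptions for illustration, not the Brazilian seat-distribution rule analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def prob_representation(poll_counts, n_seats, n_draws=100_000):
        """Monte Carlo posterior probability that each party wins at least one seat.

        Uses a flat Dirichlet(1,...,1) prior on vote shares and, purely for
        illustration, a simplified rule: a party is represented if its share of
        valid votes is at least 1/n_seats (NOT the actual seat-distribution rule).
        """
        counts = np.asarray(poll_counts, dtype=float)
        shares = rng.dirichlet(counts + 1.0, size=n_draws)   # posterior draws of vote shares
        return (shares >= 1.0 / n_seats).mean(axis=0)

    # Hypothetical poll: four parties competing for 10 proportional seats.
    print(prob_representation([420, 310, 150, 120], n_seats=10))
    ```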

  4. Statistical process control for referrals by general practitioner at Health Insurance Organization clinics in Alexandria.

    PubMed

    Abdel Wahab, Moataza M; Nofal, Laila M; Guirguis, Wafaa W; Mahdy, Nehad H

    2004-01-01

    Quality control is the application of statistical techniques to a process in an effort to identify and minimize both random and non-random sources of variation. The present study aimed at the application of Statistical Process Control (SPC) to analyze the referrals by General Practitioners (GPs) at Health Insurance Organization (HIO) clinics in Alexandria. Retrospective analysis of records and a cross-sectional interview of 180 GPs were done. Using control charts (p charts), the present study confirmed the presence of substantial variation in referral rates from GPs to specialists; more than 60% of the variation was due to special causes, which revealed that the process of referral in Alexandria (HIO) was completely out of statistical control. Control charts for referrals by GPs classified by different GP characteristics or organizational factors revealed much variation, which suggested that the variation was at the level of individual GPs. Furthermore, a p chart was constructed for each GP separately, which yielded fewer out-of-control points (outliers), with an average of 4 points per GP. For 26 GPs, there were no points out of control; those GPs were slightly older than those having points out of control, but otherwise there was no significant difference between them. The revised p chart for those 26 GPs together yielded a centerline of 9.7%, an upper control limit of 12.0% and a lower control limit of 7.4%. Those limits were in good agreement with the limits specified by HIO; they can be suggested to be the new specification limits after some training programs.
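
    For reference, the three-sigma p-chart limits quoted above follow from the standard formula pbar ± 3*sqrt(pbar*(1-pbar)/n). The subgroup size in the sketch is an assumption chosen to reproduce the reported limits, not a figure taken from the study.

    ```python
    import math

    def p_chart_limits(p_bar, n):
        """Three-sigma limits for a p chart with average subgroup size n."""
        half_width = 3.0 * math.sqrt(p_bar * (1.0 - p_bar) / n)
        return max(0.0, p_bar - half_width), p_bar, min(1.0, p_bar + half_width)

    # Reproducing the reported limits (centerline 9.7%, LCL 7.4%, UCL 12.0%)
    # requires a subgroup size of roughly n = 1500 consultations; that n is an
    # assumption, not a figure stated in the abstract.
    print(p_chart_limits(0.097, 1500))
    ```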

  5. A Bayesian Analysis of Finite Mixtures in the LISREL Model.

    ERIC Educational Resources Information Center

    Zhu, Hong-Tu; Lee, Sik-Yum

    2001-01-01

    Proposes a Bayesian framework for estimating finite mixtures of the LISREL model. The model augments the observed data of the manifest variables with the latent variables and allocation variables and uses the Gibbs sampler to obtain the Bayesian solution. Discusses other associated statistical inferences. (SLD)

  6. Using Alien Coins to Test Whether Simple Inference Is Bayesian

    ERIC Educational Resources Information Center

    Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.

    2016-01-01

    Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…

  7. Bayesian Methods in Cosmology

    NASA Astrophysics Data System (ADS)

    Hobson, Michael P.; Jaffe, Andrew H.; Liddle, Andrew R.; Mukherjee, Pia; Parkinson, David

    2009-12-01

    Preface; Part I. Methods: 1. Foundations and algorithms John Skilling; 2. Simple applications of Bayesian methods D. S. Sivia and Steve Rawlings; 3. Parameter estimation using Monte Carlo sampling Antony Lewis and Sarah Bridle; 4. Model selection and multi-model inference Andrew R. Liddle, Pia Mukherjee and David Parkinson; 5. Bayesian experimental design and model selection forecasting Roberto Trotta, Martin Kunz, Pia Mukherjee and David Parkinson; 6. Signal separation in cosmology M. P. Hobson, M. A. J. Ashdown and V. Stolyarov; Part II. Applications: 7. Bayesian source extraction M. P. Hobson, Graça Rocha and R. Savage; 8. Flux measurement Daniel Mortlock; 9. Gravitational wave astronomy Neil Cornish; 10. Bayesian analysis of cosmic microwave background data Andrew H. Jaffe; 11. Bayesian multilevel modelling of cosmological populations Thomas J. Loredo and Martin A. Hendry; 12. A Bayesian approach to galaxy evolution studies Stefano Andreon; 13. Photometric redshift estimation: methods and applications Ofer Lahav, Filipe B. Abdalla and Manda Banerji; Index.

  8. Bayesian Methods in Cosmology

    NASA Astrophysics Data System (ADS)

    Hobson, Michael P.; Jaffe, Andrew H.; Liddle, Andrew R.; Mukherjee, Pia; Parkinson, David

    2014-02-01

    Preface; Part I. Methods: 1. Foundations and algorithms John Skilling; 2. Simple applications of Bayesian methods D. S. Sivia and Steve Rawlings; 3. Parameter estimation using Monte Carlo sampling Antony Lewis and Sarah Bridle; 4. Model selection and multi-model inference Andrew R. Liddle, Pia Mukherjee and David Parkinson; 5. Bayesian experimental design and model selection forecasting Roberto Trotta, Martin Kunz, Pia Mukherjee and David Parkinson; 6. Signal separation in cosmology M. P. Hobson, M. A. J. Ashdown and V. Stolyarov; Part II. Applications: 7. Bayesian source extraction M. P. Hobson, Graça Rocha and R. Savage; 8. Flux measurement Daniel Mortlock; 9. Gravitational wave astronomy Neil Cornish; 10. Bayesian analysis of cosmic microwave background data Andrew H. Jaffe; 11. Bayesian multilevel modelling of cosmological populations Thomas J. Loredo and Martin A. Hendry; 12. A Bayesian approach to galaxy evolution studies Stefano Andreon; 13. Photometric redshift estimation: methods and applications Ofer Lahav, Filipe B. Abdalla and Manda Banerji; Index.

  9. Adaptive sampling rate control for networked systems based on statistical characteristics of packet disordering.

    PubMed

    Li, Jin-Na; Er, Meng-Joo; Tan, Yen-Kheng; Yu, Hai-Bin; Zeng, Peng

    2015-09-01

    This paper investigates an adaptive sampling rate control scheme for networked control systems (NCSs) subject to packet disordering. The main objectives of the proposed scheme are (a) to avoid heavy packet disordering existing in communication networks and (b) to stabilize NCSs with packet disordering, transmission delay and packet loss. First, a novel sampling rate control algorithm based on statistical characteristics of disordering entropy is proposed; secondly, an augmented closed-loop NCS that consists of a plant, a sampler and a state-feedback controller is transformed into an uncertain and stochastic system, which facilitates the controller design. Then, a sufficient condition for stochastic stability in terms of Linear Matrix Inequalities (LMIs) is given. Moreover, an adaptive tracking controller is designed such that the sampling period tracks a desired sampling period, which represents a significant contribution. Finally, experimental results are given to illustrate the effectiveness and advantages of the proposed scheme.

  10. An introduction to using Bayesian linear regression with clinical data.

    PubMed

    Baldwin, Scott A; Larson, Michael J

    2016-12-31

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses.
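
    The article's own examples use R and more general priors; the sketch below is only a minimal conjugate illustration of the same idea, regressing a simulated ERN-like outcome on a simulated anxiety score with a Gaussian prior on the weights and a known noise standard deviation (both assumptions).

    ```python
    import numpy as np

    def bayes_linreg(X, y, sigma=1.0, tau=10.0):
        """Posterior mean and covariance for weights in y = X w + noise.

        Assumes Gaussian noise with known sd `sigma` and an independent
        N(0, tau^2) prior on each weight (a simplification for illustration).
        """
        X, y = np.asarray(X, float), np.asarray(y, float)
        prior_prec = np.eye(X.shape[1]) / tau**2
        post_cov = np.linalg.inv(prior_prec + X.T @ X / sigma**2)
        post_mean = post_cov @ (X.T @ y) / sigma**2
        return post_mean, post_cov

    # Hypothetical data: does trait anxiety predict ERN amplitude?
    rng = np.random.default_rng(1)
    anxiety = rng.normal(size=50)
    ern = -0.5 * anxiety + rng.normal(scale=1.0, size=50)
    X = np.column_stack([np.ones(50), anxiety])        # intercept + slope
    mean, cov = bayes_linreg(X, ern)
    print("posterior mean:", mean, "posterior sd:", np.sqrt(np.diag(cov)))
    ```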

  11. Use of prior manufacturer specifications with Bayesian logic eludes preliminary phase issues in quality control: an example in a hemostasis laboratory.

    PubMed

    Tsiamyrtzis, Panagiotis; Sobas, Frédéric; Négrier, Claude

    2015-07-01

    The present study seeks to demonstrate the feasibility of avoiding the preliminary phase, which is mandatory in all conventional approaches for internal quality control (IQC) management. Apart from savings on the resources consumed by the preliminary phase, the alternative approach described here is able to detect any analytic problems during the startup and provide a foundation for subsequent conventional assessment. A new dynamically updated predictive control chart (PCC) is used. Being Bayesian in concept, it utilizes available prior information. The manufacturer's prior quality control target value, the manufacturer's maximum acceptable interassay coefficient of variation value and the interassay standard deviation value defined during method validation in each laboratory, allow online IQC management. An Excel template, downloadable from journal website, allows easy implementation of this alternative approach in any laboratory. In the practical case of prothrombin percentage measurement, PCC gave no false alarms with respect to the 1ks rule (with same 5% false-alarm probability on a single control sample) during an overlap phase between two IQC batches. Moreover, PCCs were as effective as the 1ks rule in detecting increases in both random and systematic error after the minimal preliminary phase required by medical biology guidelines. PCCs can improve efficiency in medical biology laboratories.
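
    The published PCC is more elaborate than this, but the core idea of starting control from manufacturer priors rather than a preliminary phase can be sketched with a conjugate normal model: the manufacturer target and its uncertainty act as the prior on the true level, the validated inter-assay SD is treated as known, and predictive limits for the next QC result tighten as data accumulate. All numbers below are hypothetical.

    ```python
    import numpy as np

    def predictive_limits(prior_mean, prior_sd, sigma, data, z=1.96):
        """Predictive interval for the next QC result under a conjugate normal model.

        prior_mean, prior_sd : manufacturer target and its uncertainty (prior on mu).
        sigma                : inter-assay SD from method validation (assumed known).
        data                 : QC results observed so far (may be empty at start-up).
        """
        data = np.asarray(data, dtype=float)
        prec = 1.0 / prior_sd**2 + len(data) / sigma**2
        post_var = 1.0 / prec
        post_mean = post_var * (prior_mean / prior_sd**2 + data.sum() / sigma**2)
        pred_sd = np.sqrt(post_var + sigma**2)          # predictive sd for the next result
        return post_mean - z * pred_sd, post_mean + z * pred_sd

    # Hypothetical prothrombin QC (% activity): manufacturer target 100, prior sd 3,
    # validated inter-assay sd 4; limits tighten as results accumulate.
    print(predictive_limits(100.0, 3.0, 4.0, []))
    print(predictive_limits(100.0, 3.0, 4.0, [97.0, 99.0, 102.0, 98.0]))
    ```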

  12. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of the optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics data base is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  13. Whose statistical reasoning is facilitated by a causal structure intervention?

    PubMed

    McNair, Simon; Feeney, Aidan

    2015-02-01

    People often struggle when making Bayesian probabilistic estimates on the basis of competing sources of statistical evidence. Recently, Krynski and Tenenbaum (Journal of Experimental Psychology: General, 136, 430-450, 2007) proposed that a causal Bayesian framework accounts for people's errors in Bayesian reasoning and showed that, by clarifying the causal relations among the pieces of evidence, judgments on a classic statistical reasoning problem could be significantly improved. We aimed to understand whose statistical reasoning is facilitated by the causal structure intervention. In Experiment 1, although we observed causal facilitation effects overall, the effect was confined to participants high in numeracy. We did not find an overall facilitation effect in Experiment 2 but did replicate the earlier interaction between numerical ability and the presence or absence of causal content. This effect held when we controlled for general cognitive ability and thinking disposition. Our results suggest that clarifying causal structure facilitates Bayesian judgments, but only for participants with sufficient understanding of basic concepts in probability and statistics.
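
    The classic problem referred to here is typically of the mammography type, where the required computation is a single application of Bayes' theorem; the numbers below are the textbook ones, not necessarily those used in the study.

    ```python
    def posterior_positive(prior, sensitivity, false_positive_rate):
        """P(disease | positive test) via Bayes' rule."""
        p_pos = sensitivity * prior + false_positive_rate * (1.0 - prior)
        return sensitivity * prior / p_pos

    # Textbook mammography numbers (illustrative, not taken from the study):
    # 1% prevalence, 80% sensitivity, 9.6% false-positive rate -> about 7.8%.
    print(posterior_positive(0.01, 0.80, 0.096))
    ```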

  14. The use of statistical process control to improve the accuracy of turning

    NASA Astrophysics Data System (ADS)

    Pisarciuc, Cristian

    2016-11-01

    The present work deals with improving the turning process using statistical process control. The improvement approach combines several methods in order to achieve the quality defined by technical specifications. The experimental data were collected during identical, successive turning operations on an electrical motor shaft. The initial process presented some difficulties because many machined parts were nonconforming as a consequence of the reduced precision of turning. The article uses data collected from the turning process, presented through histograms and control charts, to improve accuracy and thereby reduce scrap.

  15. Neural network classification - A Bayesian interpretation

    NASA Technical Reports Server (NTRS)

    Wan, Eric A.

    1990-01-01

    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.

  16. Automatic Thesaurus Construction Using Bayesian Networks.

    ERIC Educational Resources Information Center

    Park, Young C.; Choi, Key-Sun

    1996-01-01

    Discusses automatic thesaurus construction and characterizes the statistical behavior of terms by using an inference network. Highlights include low-frequency terms and data sparseness, Bayesian networks, collocation maps and term similarity, constructing a thesaurus from a collocation map, and experiments with test collections. (Author/LRW)

  17. Bayesian Inference of Galaxy Morphology

    NASA Astrophysics Data System (ADS)

    Yoon, Ilsang; Weinberg, M.; Katz, N.

    2011-01-01

    Reliable inference on galaxy morphology from quantitative analysis of ensembles of galaxy images is a challenging but essential ingredient in studying galaxy formation and evolution with current and forthcoming large-scale surveys. To put the galaxy image decomposition problem in the broader context of statistical inference and to derive rigorous statistical confidence levels for the inference, I developed a novel galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), that exploits recent developments in Bayesian computation to provide full posterior probability distributions and reliable confidence intervals for all parameters. I will highlight the significant improvements in galaxy image decomposition using GALPHAT over conventional model fitting algorithms and introduce GALPHAT's potential to infer the statistical distribution of galaxy morphological structures, using ensemble posteriors of morphological parameters over the entire galaxy population under study.

  18. Theoretical evaluation of the detectability of random lesions in bayesian emission reconstruction

    SciTech Connect

    Qi, Jinyi

    2003-05-01

    Detecting cancerous lesions is an important task in positron emission tomography (PET). Bayesian methods based on the maximum a posteriori principle (also called penalized maximum likelihood methods) have been developed to deal with the low signal to noise ratio in the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the prior parameters in Bayesian reconstruction control the resolution and noise trade-off and hence affect detectability of lesions in reconstructed images. Bayesian reconstructions are difficult to analyze because the resolution and noise properties are nonlinear and object-dependent. Most research has been based on Monte Carlo simulations, which are very time consuming. Building on the recent progress on the theoretical analysis of image properties of statistical reconstructions and the development of numerical observers, here we develop a theoretical approach for fast computation of lesion detectability in Bayesian reconstruction. The results can be used to choose the optimum hyperparameter for the maximum lesion detectability. New in this work is the use of theoretical expressions that explicitly model the statistical variation of the lesion and background without assuming that the object variation is (locally) stationary. The theoretical results are validated using Monte Carlo simulations. The comparisons show good agreement between the theoretical predictions and the Monte Carlo results.

  19. Statistical Inference: The Big Picture.

    PubMed

    Kass, Robert E

    2011-02-01

    Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I suggest that a philosophy compatible with statistical practice, labelled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mis-characterize the process of statistical inference and I propose an alternative "big picture" depiction.

  20. The role of control groups in mutagenicity studies: matching biological and statistical relevance.

    PubMed

    Hauschke, Dieter; Hothorn, Torsten; Schäfer, Juliane

    2003-06-01

    The statistical test of the conventional hypothesis of "no treatment effect" is commonly used in the evaluation of mutagenicity experiments. Failing to reject the hypothesis often leads to the conclusion in favour of safety. The major drawback of this indirect approach is that what is controlled by a prespecified level alpha is the probability of erroneously concluding hazard (producer risk). However, the primary concern of safety assessment is the control of the consumer risk, i.e. limiting the probability of erroneously concluding that a product is safe. In order to restrict this risk, safety has to be formulated as the alternative, and hazard, i.e. the opposite, has to be formulated as the hypothesis. The direct safety approach is examined for the case when the corresponding threshold value is expressed either as a fraction of the population mean for the negative control, or as a fraction of the difference between the positive and negative controls.

  1. Bayesian Mediation Analysis

    ERIC Educational Resources Information Center

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  2. Software For Multivariate Bayesian Classification

    NASA Technical Reports Server (NTRS)

    Saul, Ronald; Laird, Philip; Shelton, Robert

    1996-01-01

    PHD is a general-purpose classifier computer program. It uses Bayesian methods to classify vectors of real numbers, based on a combination of statistical techniques that include multivariate density estimation, Parzen density kernels, and the EM (Expectation Maximization) algorithm. By means of a simple graphical interface, the user trains the classifier to recognize two or more classes of data and then uses it to identify new data. Written in ANSI C for Unix systems and optimized for online classification applications, it can be embedded in another program or run by itself using the simple graphical user interface. Online help files make the program easy to use.
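
    PHD itself is written in ANSI C; as a rough Python analogue of the same idea, the sketch below classifies points by maximizing prior times class-conditional Parzen (kernel) density estimate, using SciPy's Gaussian KDE in place of PHD's estimator.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    class ParzenBayesClassifier:
        """Assigns x to the class maximizing prior * Parzen density estimate."""

        def fit(self, X, labels):
            X, labels = np.asarray(X, float), np.asarray(labels)
            self.classes_ = np.unique(labels)
            self.kdes_, self.priors_ = {}, {}
            for c in self.classes_:
                Xc = X[labels == c]
                self.kdes_[c] = gaussian_kde(Xc.T)       # class-conditional Parzen density
                self.priors_[c] = len(Xc) / len(X)
            return self

        def predict(self, X):
            X = np.atleast_2d(np.asarray(X, float))
            scores = np.array([self.priors_[c] * self.kdes_[c](X.T) for c in self.classes_])
            return self.classes_[np.argmax(scores, axis=0)]

    # Two hypothetical 2-D classes.
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
    y = np.array([0] * 40 + [1] * 40)
    clf = ParzenBayesClassifier().fit(X, y)
    print(clf.predict([[0.2, -0.1], [2.8, 3.1]]))
    ```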

  3. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  4. "APEC Blue" association with emission control and meteorological conditions detected by multi-scale statistics

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Dai, Xin-Gang

    2016-09-01

    The term "APEC Blue" has been created to describe the clear sky days since the Asia-Pacific Economic Cooperation (APEC) summit held in Beijing during November 5-11, 2014. The duration of the APEC Blue is detected from November 1 to November 14 (hereafter Blue Window) by moving t test in statistics. Observations show that APEC Blue corresponds to low air pollution with respect to PM2.5, PM10, SO2, and NO2 under strict emission-control measures (ECMs) implemented in Beijing and surrounding areas. Quantitative assessment shows that ECM is more effective on reducing aerosols than the chemical constituents. Statistical investigation has revealed that the window also resulted from intensified wind variability, as well as weakened static stability of atmosphere (SSA). The wind and ECMs played key roles in reducing air pollution during November 1-7 and 11-13, and strict ECMs and weak SSA become dominant during November 7-10 under weak wind environment. Moving correlation manifests that the emission reduction for aerosols can increase the apparent wind cleanup effect, leading to significant negative correlations of them, and the period-wise changes in emission rate can be well identified by multi-scale correlations basing on wavelet decomposition. In short, this case study manifests statistically how human interference modified air quality in the mega city through controlling local and surrounding emissions in association with meteorological condition.

  5. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.

  6. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2016-05-19

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.

  7. Distributed Estimation using Bayesian Consensus Filtering

    DTIC Science & Technology

    2014-06-06

    This paper focuses on developing a consensus framework for distributed Bayesian filters, drawing on the statistics literature and on Bayesian programming. In the notation of its preliminaries, N, R, and R^(m x n) denote the sets of natural numbers (positive integers), real numbers, and m-by-n matrices, and (∫_X |f(x)|^p dμ(x))^(1/p) denotes the bounded integral, where μ is a measure on X; the preliminaries also define the set of inclusive neighbors and state four assumptions used in the analysis.

  8. Current Challenges in Bayesian Model Choice

    NASA Astrophysics Data System (ADS)

    Clyde, M. A.; Berger, J. O.; Bullard, F.; Ford, E. B.; Jefferys, W. H.; Luo, R.; Paulo, R.; Loredo, T.

    2007-11-01

    Model selection (and the related issue of model uncertainty) arises in many astronomical problems, and, in particular, has been one of the focal areas of the Exoplanet working group under the SAMSI (Statistics and Applied Mathematical Sciences Institute) Astrostatistics Exoplanet program. We provide an overview of the Bayesian approach to model selection and highlight the challenges involved in implementing Bayesian model choice in four stylized problems. We review some of the current methods used by statisticians and astronomers and present recent developments in the area. We discuss the applicability, computational challenges, and performance of suggested methods and conclude with recommendations and open questions.

  9. Pedestrian dynamics via Bayesian networks

    NASA Astrophysics Data System (ADS)

    Venkat, Ibrahim; Khader, Ahamad Tajudin; Subramanian, K. G.

    2014-06-01

    Studies on pedestrian dynamics have vital applications in crowd control management relevant to organizing safer large-scale gatherings, including pilgrimages. Reasoning about pedestrian motion via computational intelligence techniques can be posed as a research problem within the realm of Artificial Intelligence. In this contribution, we propose a "Bayesian Network Model for Pedestrian Dynamics" (BNMPD) to reason about the vast uncertainty imposed by pedestrian motion. With reference to key findings from the literature, which include simulation studies, we systematically identify the various factors that could contribute to the prediction of crowd flow status. The proposed model unifies these factors in a cohesive manner using Bayesian Networks (BNs) and serves as a sophisticated probabilistic tool to simulate vital cause and effect relationships entailed in the pedestrian domain.

  10. A statistical methodology to improve accuracy in differentiating schizophrenia patients from healthy controls.

    PubMed

    Peters, Rosalind M; Gjini, Klevest; Templin, Thomas N; Boutros, Nash N

    2014-05-30

    We present a methodology to statistically discriminate among univariate and multivariate indices to improve accuracy in differentiating schizophrenia patients from healthy controls. Electroencephalogram data from 71 subjects (37 controls/34 patients) were analyzed. Data included P300 event-related response amplitudes and latencies as well as amplitudes and sensory gating indices derived from the P50, N100, and P200 auditory-evoked responses resulting in 20 indices analyzed. Receiver operator characteristic (ROC) curve analyses identified significant univariate indices; these underwent principal component analysis (PCA). Logistic regression of PCA components created a multivariate composite used in the final ROC. Eleven univariate ROCs were significant with area under the curve (AUC) >0.50. PCA of these indices resulted in a three-factor solution accounting for 76.96% of the variance. The first factor was defined primarily by P200 and P300 amplitudes, the second by P50 ratio and difference scores, and the third by P300 latency. ROC analysis using the logistic regression composite resulted in an AUC of 0.793 (0.06), p<0.001 (CI=0.685-0.901). A composite score of 0.456 had a sensitivity of 0.829 (correctly identifying schizophrenia patients) and a specificity of 0.703 (correctly identifying healthy controls). Results demonstrated the usefulness of combined statistical techniques in creating a multivariate composite that improves diagnostic accuracy.
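
    A hedged sketch of the reported pipeline (univariate ROC screening, PCA, logistic-regression composite, composite ROC) on synthetic data; the software choice (scikit-learn), the AUC screening threshold, and the simulated effect sizes are assumptions, not details from the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n_subjects, n_indices = 71, 20                  # 37 controls + 34 patients, 20 ERP indices
    y = np.array([0] * 37 + [1] * 34)               # 0 = healthy control, 1 = patient
    # Simulated indices: roughly half carry a group difference (hypothetical effect sizes).
    X = rng.normal(size=(n_subjects, n_indices)) + 0.6 * y[:, None] * (rng.random(n_indices) > 0.5)

    # 1. Univariate screening: keep indices whose ROC AUC exceeds 0.6
    #    (a stand-in for the paper's significance criterion).
    keep = [j for j in range(n_indices) if roc_auc_score(y, X[:, j]) > 0.6]

    # 2. PCA on the retained indices (up to three components, as in the reported solution).
    components = PCA(n_components=min(3, len(keep))).fit_transform(X[:, keep])

    # 3. Logistic regression of diagnosis on the components gives the composite score.
    composite = LogisticRegression().fit(components, y).predict_proba(components)[:, 1]

    # 4. ROC analysis of the multivariate composite.
    print("composite AUC:", roc_auc_score(y, composite))
    ```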

  11. Using statistical process control to make data-based clinical decisions.

    PubMed Central

    Pfadt, A; Wheeler, D J

    1995-01-01

    Applied behavior analysis is based on an investigation of variability due to interrelationships among antecedents, behavior, and consequences. This permits testable hypotheses about the causes of behavior as well as the course of treatment to be evaluated empirically. Such information provides corrective feedback for making data-based clinical decisions. This paper considers how a different approach to the analysis of variability, based on the writings of Walter Shewhart and W. Edwards Deming in the area of industrial quality control, helps to achieve similar objectives. Statistical process control (SPC) was developed to implement a process of continual product improvement while achieving compliance with production standards and other requirements for promoting customer satisfaction. SPC involves the use of simple statistical tools, such as histograms and control charts, as well as problem-solving techniques, such as flow charts, cause-and-effect diagrams, and Pareto charts, to implement Deming's management philosophy. These data-analytic procedures can be incorporated into a human service organization to help to achieve its stated objectives in a manner that leads to continuous improvement in the functioning of the clients who are its customers. Examples are provided to illustrate how SPC procedures can be used to analyze behavioral data. Issues related to the application of these tools for making data-based clinical decisions and for creating an organizational climate that promotes their routine use in applied settings are also considered.

  12. Bilingualism and inhibitory control influence statistical learning of novel word forms.

    PubMed

    Bartolotti, James; Marian, Viorica; Schroeder, Scott R; Shook, Anthony

    2011-01-01

    We examined the influence of bilingual experience and inhibitory control on the ability to learn a novel language. Using a statistical learning paradigm, participants learned words in two novel languages that were based on the International Morse Code. First, participants listened to a continuous stream of words in a Morse code language to test their ability to segment words from continuous speech. Since Morse code does not overlap in form with natural languages, interference from known languages was minimized. Next, participants listened to another Morse code language composed of new words that conflicted with the first Morse code language. Interference in this second language was high due to conflict between languages and due to the presence of two colliding cues (compressed pauses between words and statistical regularities) that competed to define word boundaries. Results suggest that bilingual experience can improve word learning when interference from other languages is low, while inhibitory control ability can improve word learning when interference from other languages is high. We conclude that the ability to extract novel words from continuous speech is a skill that is affected both by linguistic factors, such as bilingual experience, and by cognitive abilities, such as inhibitory control.

  13. Bayesian Estimation of Thermonuclear Reaction Rates

    NASA Astrophysics Data System (ADS)

    Iliadis, C.; Anderson, K. S.; Coc, A.; Timmes, F. X.; Starrfield, S.

    2016-11-01

    The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)3He, 3He(3He,2p)4He, and 3He(α,γ)7Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.

  14. Model Diagnostics for Bayesian Networks

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of works on assessing fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess fit of simple Bayesian networks. A…

  15. Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.

    PubMed

    Westgard, James O; Westgard, Sten A

    2017-03-01

    Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications.
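
    The sigma-metric referred to here is conventionally computed as (allowable total error - |bias|) / imprecision, with all terms in percent at the medical decision concentration; the example values below are hypothetical.

    ```python
    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma-metric = (allowable total error - |bias|) / imprecision, all in %."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical example: TEa 6%, bias 1%, CV 1.2% -> sigma of roughly 4.2,
    # which would call for a fairly stringent SQC design.
    print(sigma_metric(6.0, 1.0, 1.2))
    ```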

  16. Monitoring Actuarial Present Values of Term Life Insurance By a Statistical Process Control Chart

    NASA Astrophysics Data System (ADS)

    Hafidz Omar, M.

    2015-06-01

    Tracking the performance of life insurance or similar insurance policies using a standard statistical process control chart is complex because of many factors. In this work, we present the difficulties in doing so. However, with some modifications of the SPC charting framework, the difficulty becomes manageable for actuaries. We therefore propose monitoring a simpler but natural actuarial quantity that is typically found in recursion formulas for reserves, profit testing, and present values. We share some simulation results for the monitoring process. Additionally, some advantages of doing so are discussed.

  17. The Hybrid Synthetic Microdata Platform: A Method for Statistical Disclosure Control

    PubMed Central

    van den Heuvel, Edwin R.; Swertz, Morris A.

    2015-01-01

    Owners of biobanks are in an unfortunate position: on the one hand, they need to protect the privacy of their participants, whereas on the other, their usefulness relies on the disclosure of the data they hold. Existing methods for Statistical Disclosure Control attempt to find a balance between utility and confidentiality, but come at a cost for the analysts of the data. We outline an alternative perspective to the balance between confidentiality and utility. By combining the generation of synthetic data with the automated execution of data analyses, biobank owners can guarantee the privacy of their participants, yet allow the analysts to work in an unrestricted manner. PMID:26035007

  18. Preliminary Retrospective Analysis of Daily Tomotherapy Output Constancy Checks Using Statistical Process Control

    PubMed Central

    Menghi, Enrico; Marcocci, Francesco; Bianchini, David

    2016-01-01

    The purpose of this study was to retrospectively evaluate the results from a Helical TomoTherapy Hi-Art treatment system relating to quality controls based on daily static and dynamic output checks using statistical process control methods. Individual value X-charts, exponentially weighted moving average charts, and process capability and acceptability indices were used to monitor the treatment system performance. Daily output values measured from January 2014 to January 2015 were considered. The results obtained showed that, although the process was in control, there was an out-of-control situation in the principal maintenance intervention for the treatment system. In particular, process capability indices showed a decreasing percentage of points in control which was, however, acceptable according to AAPM TG148 guidelines. Our findings underline the importance of restricting the acceptable range of daily output checks and suggest a future line of investigation for a detailed process control of daily output checks for the Helical TomoTherapy Hi-Art treatment system. PMID:26848962
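
    A minimal sketch of the exponentially weighted moving average chart used in such analyses, applied to hypothetical daily output readings; the smoothing constant, limit width, target, and data are assumptions for illustration.

    ```python
    import numpy as np

    def ewma_chart(values, target, sigma, lam=0.2, L=3.0):
        """Exponentially weighted moving average chart.

        Returns (ewma, ucl, lcl) arrays for smoothing constant `lam` and
        control-limit width `L` (in sigma units).
        """
        values = np.asarray(values, dtype=float)
        z = np.empty_like(values)
        prev = target
        for i, x in enumerate(values):
            prev = lam * x + (1.0 - lam) * prev
            z[i] = prev
        i = np.arange(1, len(values) + 1)
        width = L * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * i)))
        return z, target + width, target - width

    # Hypothetical daily output readings, normalized to a target of 1.00 (sigma 0.01).
    outputs = [1.002, 0.998, 1.004, 1.010, 1.012, 1.015, 1.013, 1.018]
    z, ucl, lcl = ewma_chart(outputs, target=1.0, sigma=0.01)
    print(np.round(z, 4), (z > ucl) | (z < lcl))
    ```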

  19. A hybrid Bayesian hierarchical model combining cohort and case-control studies for meta-analysis of diagnostic tests: Accounting for partial verification bias.

    PubMed

    Ma, Xiaoye; Chen, Yong; Cole, Stephen R; Chu, Haitao

    2016-12-01

    To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model had been proposed. However, the proposed approach can only include cohort studies with information estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies only select a subset of samples to be verified by the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities, and specificities in a single study. However, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model combining cohort and case-control studies and correcting partial verification bias at the same time. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses are presented.

  20. IPUMS-International Statistical Disclosure Controls: 159 Census Microdata Samples in Dissemination, 100+ in Preparation

    PubMed Central

    McCaa, Robert; Ruggles, Steven; Sobek, Matt

    2016-01-01

    In the last decade, a revolution has occurred in access to census microdata for social and behavioral research. More than 325 million person records (55 countries, 159 samples) representing two-thirds of the world’s population are now readily available to bona fide researchers from the IPUMS-International website: www.ipums.org/international hosted by the Minnesota Population Center. Confidentialized extracts are disseminated on a restricted access basis at no cost to bona fide researchers. Over the next five years, from the microdata already entrusted by National Statistical Office-owners, the database will encompass more than 80 percent of the world’s population (85 countries, ~100 additional datasets) with priority given to samples from the 2010 round of censuses. A profile of the most frequently used samples and variables is described from 64,248 requests for microdata extracts. The development of privacy protection standards by National Statistical Offices, international organizations and academic experts is fundamental to eliciting world-wide cooperation and, thus, to the success of the IPUMS initiative. This paper summarizes the legal, administrative and technical underpinnings of the project, including statistical disclosure controls, as well as the conclusions of a lengthy on-site review by the former Australian Statistician, Mr. Dennis Trewin.

  1. Bayesian Spectroscopy and Target Tracking

    SciTech Connect

    Cunningham, C

    2001-05-01

    Statistical analysis gives a paradigm for detection and tracking of weak-signature sources that are moving among a network of detectors. The detector platforms compute and exchange information with near-neighbors in the form of Bayesian probabilities for possible sources. This can be shown to be an optimal scheme for the use of detector information and communication resources. Here, we apply that paradigm to the detection and discrimination of radiation sources using multi-channel gamma-ray spectra. We present algorithms for the reduction of detector data to probability estimates and the fusion of estimates among multiple detectors. A primary result is the development of a goodness-of-fit metric, similar to chi-squared, for template matching that is statistically valid for spectral channels with low expected counts. Discrimination of a target source from other false sources and detection of imprecisely known spectra are the main applications considered. We use simulated NaI spectral data to demonstrate the Bayesian algorithm and compare it to other techniques. Results of simulations of a network of spectrometers are presented, showing its capability to distinguish intended targets from nuisance sources.
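
    The abstract does not give the exact form of the goodness-of-fit metric; one standard choice that remains valid for low-count Poisson channels is the Cash-type Poisson deviance sketched below, with a hypothetical template and observed spectrum.

    ```python
    import numpy as np

    def poisson_deviance(counts, model):
        """Poisson goodness-of-fit (Cash-type) statistic for template matching.

        Unlike chi-squared, this remains valid when the expected counts per
        spectral channel are small. `model` holds the template expectations.
        """
        d = np.asarray(counts, dtype=float)
        m = np.asarray(model, dtype=float)
        with np.errstate(divide="ignore", invalid="ignore"):
            term = np.where(d > 0, d * np.log(d / m), 0.0)
        return 2.0 * np.sum(m - d + term)

    # Hypothetical low-count gamma-ray spectrum versus a source template.
    template = np.array([0.5, 1.2, 3.0, 6.5, 3.1, 1.0, 0.4])
    observed = np.array([1, 0, 4, 7, 2, 1, 0])
    print(poisson_deviance(observed, template))
    ```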

  2. Statistical Methods for Astronomy

    NASA Astrophysics Data System (ADS)

    Feigelson, Eric D.; Babu, G. Jogesh

    Statistical methodology, with deep roots in probability theory, provides quantitative procedures for extracting scientific knowledge from astronomical data and for testing astrophysical theory. In recent decades, statistics has enormously increased in scope and sophistication. After a historical perspective, this review outlines concepts of mathematical statistics, elements of probability theory, hypothesis tests, and point estimation. Least squares, maximum likelihood, and Bayesian approaches to statistical inference are outlined. Resampling methods, particularly the bootstrap, provide valuable procedures when distribution functions of statistics are not known. Several approaches to model selection and goodness of fit are considered.

  3. Statistical Inference

    NASA Astrophysics Data System (ADS)

    Khan, Shahjahan

    Often scientific information on various data generating processes is presented in the form of numerical and categorical data. Except for some very rare occasions, such data generally represent a small part of the population, or selected outcomes of a data generating process. Although valuable and useful information is lurking in the array of scientific data, it is generally unavailable to the users. Appropriate statistical methods are essential to reveal the hidden "jewels" in the mess of the raw data. Exploratory data analysis methods are used to uncover such valuable characteristics of the observed data. Statistical inference provides techniques to make valid conclusions about the unknown characteristics or parameters of the population from which scientifically drawn sample data are selected. Usually, statistical inference includes estimation of population parameters as well as tests of hypotheses on the parameters. However, prediction of future responses and determination of the prediction distributions are also part of statistical inference. Both Classical (or frequentist) and Bayesian approaches are used in statistical inference. The commonly used Classical approach is based on the sample data alone. In contrast, the increasingly popular Bayesian approach uses a prior distribution on the parameters along with the sample data to make inferences. Non-parametric and robust methods are also used in situations where commonly used model assumptions are unsupported. In this chapter, we cover the philosophical and methodological aspects of both the Classical and Bayesian approaches. Moreover, some aspects of predictive inference are also included. In the absence of any evidence to support assumptions regarding the distribution of the underlying population, or if the variable is measured only in an ordinal scale, non-parametric methods are used. Robust methods are employed to avoid any significant changes in the results due to deviations from the model

  4. Variable selection in Bayesian generalized linear-mixed models: an illustration using candidate gene case-control association studies.

    PubMed

    Tsai, Miao-Yu

    2015-03-01

    The problem of variable selection in the generalized linear-mixed models (GLMMs) is pervasive in statistical practice. For the purpose of variable selection, many methodologies for determining the best subset of explanatory variables currently exist according to the model complexity and differences between applications. In this paper, we develop a "higher posterior probability model with bootstrap" (HPMB) approach to select explanatory variables without fitting all possible GLMMs involving a small or moderate number of explanatory variables. Furthermore, to save computational load, we propose an efficient approximation approach with Laplace's method and Taylor's expansion to approximate intractable integrals in GLMMs. Simulation studies and an application of HapMap data provide evidence that this selection approach is computationally feasible and reliable for exploring true candidate genes and gene-gene associations, after adjusting for complex structures among clusters.

  6. The Bayesian t-test and beyond.

    PubMed

    Gönen, Mithat

    2010-01-01

    In this chapter we will explore Bayesian alternatives to the t-test. We saw in Chapter 1 how the t-test can be used to test whether the expected outcomes of the two groups are equal or not. In Chapter 3 we saw how to make inferences from a Bayesian perspective in principle. In this chapter we will put these together to develop a Bayesian procedure for a t-test. This procedure depends on the data only through the t-statistic. It requires prior inputs and we will discuss how to assign them. We will use an example from a microarray study to demonstrate the practical issues. The microarray study is an important application for the Bayesian t-test as it naturally brings up the question of simultaneous t-tests. It turns out that the Bayesian procedure can easily be extended to carry out several t-tests on the same data set, provided some attention is paid to the concept of the correlation between tests.
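
    The chapter's exact prior specification is not given in this abstract, so the sketch below is only a generic illustration of a Bayes factor for a one-sample t-test that, as described above, depends on the data only through the t-statistic. It assumes a hypothetical Normal(0, 1) prior on the standardized effect size and integrates a noncentral-t likelihood numerically; the gene-level t-values are also hypothetical.

        import numpy as np
        from scipy import stats, integrate

        def bayes_factor_10(t, n, prior_sd=1.0):
            """BF10 for a one-sample t-test with a Normal(0, prior_sd^2) prior
            on the standardized effect size delta (illustrative choice only)."""
            df = n - 1
            # Marginal likelihood under H1: average the noncentral-t density of the
            # observed t over the prior on delta (noncentrality = delta * sqrt(n)).
            def integrand(delta):
                return stats.nct.pdf(t, df, delta * np.sqrt(n)) * \
                       stats.norm.pdf(delta, loc=0.0, scale=prior_sd)
            m1, _ = integrate.quad(integrand, -10 * prior_sd, 10 * prior_sd)
            # Marginal likelihood under H0 (delta = 0): central t density.
            m0 = stats.t.pdf(t, df)
            return m1 / m0

        # One hypothetical gene: t = 2.8 from n = 20 arrays.
        print(bayes_factor_10(t=2.8, n=20))

        # A crude simultaneous-testing extension: evaluate each gene separately
        # (ignoring, unlike the chapter, the correlation between tests).
        for t_g in [0.4, 2.8, 4.1]:
            print(bayes_factor_10(t_g, n=20))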

  7. Computationally efficient Bayesian inference for inverse problems.

    SciTech Connect

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.

  8. Using a statistical process control chart during the quality assessment of cancer registry data.

    PubMed

    Myles, Zachary M; German, Robert R; Wilson, Reda J; Wu, Manxia

    2011-01-01

    Statistical process control (SPC) charts may be used to detect acute variations in the data while simultaneously evaluating unforeseen aberrations that may warrant further investigation by the data user. Using cancer stage data captured by the Summary Stage 2000 (SS2000) variable, we sought to present a brief report highlighting the utility of the SPC chart during the quality assessment of cancer registry data. Using a county-level caseload for the diagnosis period of 2001-2004 (n=25,648), we found that the overall variation of the SS2000 variable was in control during the diagnosis years 2001 and 2002, exceeded the lower control limit (LCL) in 2003, and exceeded the upper control limit (UCL) in 2004; in situ/localized stages were in control throughout the diagnosis period, regional stage exceeded the UCL in 2004, and distant stage exceeded the LCL in 2001 and the UCL in 2004. Our application of the SPC chart with cancer registry data illustrates that the SPC chart may serve as a readily available and timely tool for identifying areas of concern during the data collection and quality assessment of central cancer registry data.
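
    The report's chart parameters are not given here, so the following is only a schematic p-chart of the kind described: yearly proportions of one stage category plotted against 3-sigma control limits around the overall proportion. All counts are hypothetical.

        import numpy as np

        # Hypothetical yearly counts of distant-stage cases and total caseloads.
        years = [2001, 2002, 2003, 2004]
        distant = np.array([380, 395, 360, 470])
        total = np.array([6300, 6400, 6350, 6598])

        p = distant / total
        p_bar = distant.sum() / total.sum()          # center line

        for yr, pi, ni in zip(years, p, total):
            sigma = np.sqrt(p_bar * (1 - p_bar) / ni)
            ucl, lcl = p_bar + 3 * sigma, max(p_bar - 3 * sigma, 0.0)
            flag = "out of control" if (pi > ucl or pi < lcl) else "in control"
            print(f"{yr}: p={pi:.4f}  LCL={lcl:.4f}  UCL={ucl:.4f}  -> {flag}")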

  9. Register Control of Roll-to-Roll Printing System Based on Statistical Approach

    NASA Astrophysics Data System (ADS)

    Kim, Chung Hwan; You, Ha-Il; Jo, Jeongdai

    2013-05-01

    One of the most important requirements when using roll-to-roll printing equipment for multilayer printing is register control. Because multilayer printing requires a printing accuracy of several microns to several tens of microns, depending on the devices and their sizes, precise register control is required. In general, the register errors vary with time, even for one revolution of the plate cylinder. Therefore, more information about the register errors in one revolution of the plate cylinder is required for more precise register control, which is achieved by using multiple register marks in a single revolution of the plate cylinder. By using a larger number of register marks, we can define the value of the register error as a statistical value rather than a single one. The register errors measured from an actual roll-to-roll printing system consist of a linearly varying term, a static offset term, and small fluctuations. The register errors resulting from the linearly varying term and the offset term are compensated for by the velocity and phase control of the plate cylinders, based on the calculated slope and offset of the register errors, which are obtained by the curve-fitting of the data set of register errors. We show that even with the slope and offset compensation of the register errors, a register control performance of within 20 µm can be achieved.
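
    As a rough sketch of the compensation step described above (all values hypothetical), the register errors measured at several marks within one revolution are fitted with a straight line, and the slope and offset are then removed, leaving only the small fluctuations.

        import numpy as np

        # Hypothetical register errors (in micrometres) at 8 marks per revolution.
        mark_position = np.linspace(0.0, 1.0, 8, endpoint=False)   # fraction of a revolution
        register_error = np.array([12.0, 13.5, 15.2, 16.8, 18.1, 19.9, 21.4, 23.0])

        # Curve-fit the linearly varying term and the static offset term.
        slope, offset = np.polyfit(mark_position, register_error, deg=1)

        # Compensation: velocity control cancels the slope, phase control the offset;
        # what remains are the small fluctuations about zero.
        residual = register_error - (slope * mark_position + offset)
        print(f"slope = {slope:.2f} um/rev, offset = {offset:.2f} um")
        print("residual fluctuations:", np.round(residual, 2))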

  10. Statistical process control of cocrystallization processes: A comparison between OPLS and PLS.

    PubMed

    Silva, Ana F T; Sarraguça, Mafalda Cruz; Ribeiro, Paulo R; Santos, Adenilson O; De Beer, Thomas; Lopes, João Almeida

    2017-03-30

    Orthogonal partial least squares regression (OPLS) is being increasingly adopted as an alternative to partial least squares (PLS) regression due to the better generalization that can be achieved. Particularly in multivariate batch statistical process control (BSPC), the use of OPLS for estimating nominal trajectories is advantageous. In OPLS, the nominal process trajectories are expected to be captured in a single predictive principal component while uncorrelated variations are filtered out to orthogonal principal components. In theory, OPLS will yield a better estimation of the Hotelling's T² statistic and corresponding control limits, thus lowering the number of false positives and false negatives when assessing the process disturbances. Although OPLS advantages have been demonstrated in the context of regression, its use in BSPC has seldom been reported. This study proposes an OPLS-based approach for BSPC of a cocrystallization process between hydrochlorothiazide and p-aminobenzoic acid monitored on-line with near infrared spectroscopy and compares the fault detection performance with the same approach based on PLS. A series of cocrystallization batches with imposed disturbances were used to test the ability to detect abnormal situations by OPLS- and PLS-based BSPC methods. Results demonstrated that OPLS was generally superior in terms of sensitivity and specificity in most situations. In some abnormal batches, it was found that the imposed disturbances were only detected with OPLS.
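
    The OPLS machinery itself cannot be reproduced from the abstract alone; the sketch below only illustrates how a Hotelling's T² statistic and a control limit might be computed from the latent-variable scores of nominal batches, with new batches flagged when they exceed the limit. All data are simulated and the F-based limit is one common convention, assumed rather than taken from the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Hypothetical scores on k latent variables for n nominal (in-control) batches.
        n, k = 30, 2
        nominal_scores = rng.normal(size=(n, k))

        mean = nominal_scores.mean(axis=0)
        cov = np.cov(nominal_scores, rowvar=False)
        cov_inv = np.linalg.inv(cov)

        def hotelling_t2(score):
            d = score - mean
            return float(d @ cov_inv @ d)

        # A common control limit for future observations, based on the F distribution.
        alpha = 0.05
        limit = k * (n - 1) * (n + 1) / (n * (n - k)) * stats.f.ppf(1 - alpha, k, n - k)

        new_batch = rng.normal(loc=2.5, size=k)       # a disturbed batch
        t2 = hotelling_t2(new_batch)
        print(f"T2 = {t2:.2f}, limit = {limit:.2f}, fault detected = {t2 > limit}")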

  11. Implementation of Statistical Process Control: Evaluating the Mechanical Performance of a Candidate Silicone Elastomer Docking Seal

    NASA Technical Reports Server (NTRS)

    Oravec, Heather Ann; Daniels, Christopher C.

    2014-01-01

    The National Aeronautics and Space Administration has been developing a novel docking system to meet the requirements of future exploration missions to low-Earth orbit and beyond. A dynamic gas pressure seal is located at the main interface between the active and passive mating components of the new docking system. This seal is designed to operate in the harsh space environment, but is also to perform within strict loading requirements while maintaining an acceptable level of leak rate. In this study, a candidate silicone elastomer seal was designed, and multiple subscale test articles were manufactured for evaluation purposes. The force required to fully compress each test article at room temperature was quantified and found to be below the maximum allowable load for the docking system. However, a significant amount of scatter was observed in the test results. Due to the stochastic nature of the mechanical performance of this candidate docking seal, a statistical process control technique was implemented to isolate unusual compression behavior from typical mechanical performance. The results of this statistical analysis indicated a lack of process control, suggesting a variation in the manufacturing phase of the process. Further investigation revealed that changes in the manufacturing molding process had occurred which may have influenced the mechanical performance of the seal. This knowledge improves the chance of this and future space seals to satisfy or exceed design specifications.

  12. Space Shuttle RTOS Bayesian Network

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Beling, Peter A.

    2001-01-01

    With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network are extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores

  13. Bayesian microsaccade detection

    PubMed Central

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483

  14. Characterization of a Saccharomyces cerevisiae fermentation process for production of a therapeutic recombinant protein using a multivariate Bayesian approach.

    PubMed

    Fu, Zhibiao; Baker, Daniel; Cheng, Aili; Leighton, Julie; Appelbaum, Edward; Aon, Juan

    2016-05-01

    The principle of quality by design (QbD) has been widely applied to biopharmaceutical manufacturing processes. Process characterization is an essential step to implement the QbD concept to establish the design space and to define the proven acceptable ranges (PAR) for critical process parameters (CPPs). In this study, we present characterization of a Saccharomyces cerevisiae fermentation process using risk assessment analysis, statistical design of experiments (DoE), and the multivariate Bayesian predictive approach. The critical quality attributes (CQAs) and CPPs were identified with a risk assessment. The statistical model for each attribute was established using the results from the DoE study with consideration given to interactions between CPPs. Both the conventional overlapping contour plot and the multivariate Bayesian predictive approaches were used to establish the region of process operating conditions where all attributes met their specifications simultaneously. The quantitative Bayesian predictive approach was chosen to define the PARs for the CPPs, which apply to the manufacturing control strategy. Experience from the 10,000 L manufacturing scale process validation, including 64 continued process verification batches, indicates that the CPPs remain under a state of control and within the established PARs. The end product quality attributes were within their drug substance specifications. The probability generated with the Bayesian approach was also used as a tool to assess CPP deviations. This approach can be extended to characterize other production processes and to quantify a reliable operating region. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:799-812, 2016.

  15. An evaluation of Bayesian techniques for controlling model complexity and selecting inputs in a neural network for short-term load forecasting.

    PubMed

    Hippert, Henrique S; Taylor, James W

    2010-04-01

    Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation.

  16. RESEARCH PAPERS : Statistical inversion of controlled-source seismic data using parabolic wave scattering theory

    NASA Astrophysics Data System (ADS)

    Line, C. E. R.; Hobbs, R. W.; Hudson, J. A.; Snyder, D. B.

    1998-01-01

    Statistical parameters describing heterogeneity in the Proterozoic basement of the Baltic Shield were estimated from controlled-source seismic data, using a statistical inversion based on the theory of wave propagation through random media (WPRM), derived from the parabolic wave approximation. Synthetic plane-wave seismograms generated from models of random media show consistency with WPRM theory for forward propagation in the weak-scattering regime, whilst for two-way propagation a discrepancy exists that is due to contamination of the primary wave by backscattered energy. Inverse modelling of the real seismic data suggests that the upper crust to depths of ~ 15 km can be characterized, subject to the range of spatial resolution of the method, by a medium with an exponential spatial autocorrelation function, an rms velocity fluctuation of 1.5 +/- 0.5 per cent and a correlation length of 150 +/- 50 m. Further inversions show that scattering is predominantly occurring in the uppermost ~ 2 km of crust, where rms velocity fluctuation is 3 - 6 per cent. Although values of correlation distance are well constrained by these inversions, there is a trade-off between thickness of scattering layer and rms velocity perturbation estimates, with both being relatively poorly resolved. The higher near-surface heterogeneity is interpreted to arise from fractures in the basement rocks that close under lithostatic pressure for depths greater than 2 - 3 km.

  17. IMPORTANCE OF MATERIAL BALANCES AND THEIR STATISTICAL EVALUATION IN RUSSIAN MATERIAL, PROTECTION, CONTROL AND ACCOUNTING

    SciTech Connect

    Fishbone, L. G.

    1999-07-25

    While substantial work has been performed in the Russian MPC&A Program, much more needs to be done at Russian nuclear facilities to complete four necessary steps. These are (1) periodically measuring the physical inventory of nuclear material, (2) continuously measuring the flows of nuclear material, (3) using the results to close the material balance, particularly at bulk processing facilities, and (4) statistically evaluating any apparent loss of nuclear material. The periodic closing of material balances provides an objective test of the facility's system of nuclear material protection, control and accounting. The statistical evaluation using the uncertainties associated with individual measurement systems involved in the calculation of the material balance provides a fair standard for concluding whether the apparent loss of nuclear material means a diversion or whether the facility's accounting system needs improvement. In particular, if unattractive flow material at a facility is not measured well, the accounting system cannot readily detect the loss of attractive material if the latter substantially derives from the former.
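
    A minimal numerical sketch of the balance-closing and evaluation steps (all quantities hypothetical): the material unaccounted for (MUF) is computed from the period's balance, and the apparent loss is compared with the combined measurement uncertainty before drawing any conclusion.

        import math

        # Hypothetical nuclear material balance for one period, in kilograms.
        beginning_inventory = 120.40
        receipts            =  35.10
        shipments           =  33.90
        ending_inventory    = 121.35

        muf = beginning_inventory + receipts - shipments - ending_inventory

        # Combined standard uncertainty of the balance, propagated from the
        # individual measurement systems (values assumed for illustration).
        sigma_components = [0.12, 0.05, 0.05, 0.12]
        sigma_muf = math.sqrt(sum(s * s for s in sigma_components))

        # A simple 3-sigma test of the apparent loss.
        if abs(muf) > 3 * sigma_muf:
            print(f"MUF = {muf:.3f} kg exceeds 3*sigma ({3 * sigma_muf:.3f} kg): investigate")
        else:
            print(f"MUF = {muf:.3f} kg is consistent with measurement uncertainty")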

  18. Bayesian analysis on meta-analysis of case-control studies accounting for within-study correlation.

    PubMed

    Chen, Yong; Chu, Haitao; Luo, Sheng; Nie, Lei; Chen, Sining

    2015-12-01

    In retrospective studies, odds ratio is often used as the measure of association. Under independent beta prior assumption, the exact posterior distribution of odds ratio given a single 2 × 2 table has been derived in the literature. However, independence between risks within the same study may be an oversimplified assumption because cases and controls in the same study are likely to share some common factors and thus to be correlated. Furthermore, in a meta-analysis of case-control studies, investigators usually have multiple 2 × 2 tables. In this article, we first extend the published results on a single 2 × 2 table to allow within study prior correlation while retaining the advantage of closed-form posterior formula, and then extend the results to multiple 2 × 2 tables and regression setting. The hyperparameters, including within study correlation, are estimated via an empirical Bayes approach. The overall odds ratio and the exact posterior distribution of the study-specific odds ratio are inferred based on the estimated hyperparameters. We conduct simulation studies to verify our exact posterior distribution formulas and investigate the finite sample properties of the inference for the overall odds ratio. The results are illustrated through a twin study for genetic heritability and a meta-analysis for the association between the N-acetyltransferase 2 (NAT2) acetylation status and colorectal cancer.
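
    The closed-form posterior with correlated priors derived in the article is not reproduced here; the sketch below only shows the simpler independent-Beta special case for a single hypothetical 2 × 2 table, approximating the posterior of the odds ratio by Monte Carlo.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical 2x2 table: exposed/unexposed counts among cases and controls.
        cases    = {"exposed": 30, "unexposed": 70}
        controls = {"exposed": 18, "unexposed": 82}

        # Independent Beta(1, 1) priors on the exposure probabilities
        # (the article's correlated-prior extension is not shown here).
        p_case = rng.beta(1 + cases["exposed"], 1 + cases["unexposed"], size=100_000)
        p_ctrl = rng.beta(1 + controls["exposed"], 1 + controls["unexposed"], size=100_000)

        odds_ratio = (p_case / (1 - p_case)) / (p_ctrl / (1 - p_ctrl))

        print("posterior median OR:", np.median(odds_ratio))
        print("95% credible interval:", np.percentile(odds_ratio, [2.5, 97.5]))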

  19. Statistical Analysis of Crossed Undulator for Polarization Control in a SASE FEL

    SciTech Connect

    Ding, Yuantao; Huang, Zhirong

    2008-02-01

    There is a growing interest in producing intense, coherent x-ray radiation with an adjustable and arbitrary polarization state. In this paper, we study the crossed undulator scheme (K.-J. Kim, Nucl. Instrum. Methods A 445, 329 (2000)) for rapid polarization control in a self-amplified spontaneous emission (SASE) free electron laser (FEL). Because a SASE source is temporally chaotic light, we perform a statistical analysis on the state of polarization using FEL theory and simulations. We show that by adding a small phase shifter and a short (about 1.3 times the FEL power gain length), 90° rotated planar undulator after the main SASE planar undulator, one can obtain circularly polarized light, with over 80% polarization, near the FEL saturation.

  20. A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests.

    PubMed

    Sassenhagen, Jona; Alday, Phillip M

    2016-11-01

    Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: interpreting significance as referring not to inference about a particular population parameter, but to (1) the sample in question, or (2) the practical relevance of a sample difference (so that a nonsignificant test is taken as evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables.

  1. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control.

    PubMed

    Adeleke, Jude Adekunle; Moodley, Deshendran; Rens, Gavin; Adewumi, Aderemi Oluyinka

    2017-04-09

    Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 are achieved over half hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of the Semantic Sensor Web.
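
    A rough sketch of the sliding-window prediction step described above, assuming a univariate series of PM2.5 readings and scikit-learn's MLPRegressor; the study's actual features, window length, and prediction horizon are not given in the abstract, so the values below are illustrative only.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        # Hypothetical PM2.5 readings, one value every 5 minutes.
        series = 20 + 5 * np.sin(np.arange(600) / 30.0) + rng.normal(0, 1, 600)

        window, horizon = 12, 6          # use 1 h of history to predict 30 min ahead
        X = np.array([series[i:i + window]
                      for i in range(len(series) - window - horizon)])
        y = series[window + horizon:]

        split = int(0.8 * len(X))
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X[:split], y[:split])

        # Warn when the predicted concentration exceeds a hypothetical threshold.
        predictions = model.predict(X[split:])
        alerts = predictions > 25.0
        print(f"{alerts.sum()} of {len(alerts)} predicted half-hour situations exceed the threshold")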

  2. Assessing language dominance with functional MRI: the role of control tasks and statistical analysis.

    PubMed

    Dodoo-Schittko, Frank; Rosengarth, Katharina; Doenitz, Christian; Greenlee, Mark W

    2012-09-01

    There is a discrepancy between the brain regions revealed by functional neuroimaging techniques and those brain regions where a loss of function, either by lesion or by electrocortical stimulation, induces language disorders. To differentiate between essential and non-essential language-related processes, we investigated the effects of linguistic control tasks and different analysis methods for functional MRI data. Twelve subjects solved two linguistic generation tasks: (1) a verb generation task and (2) an antonym generation task (each with a linguistic control task on the phonological level) as well as two decision tasks of semantic congruency (each with a cognitive high-level control task). Differential contrasts and conjunction analyses were carried out on the single-subject level and an individual lateralization index (LI) was computed. On the group level we determined the percent signal change in the left inferior frontal gyrus (IFG: BA 44 and BA 45). The conjunction analysis of multiple language tasks led to significantly greater absolute LIs than the LIs based on the single task versus fixation contrasts. A further significant increase in the magnitude of the LIs could be achieved by using the phonological control conditions. Although the decision tasks appear to be more robust to changes in the statistical threshold, the combined generation tasks had an advantage over the decision tasks both for assessing language dominance and locating Broca's area. These results underline the need for conjunction analysis based on several language tasks to suppress highly task-specific processes. They also point to the need for high-level cognitive control tasks to partial out general, language-supporting but not language-critical processes. Higher absolute LIs, which unambiguously reflect hemispheric language dominance, can thus be obtained.
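
    For reference, a lateralization index of the kind referred to above is commonly computed from left- and right-hemispheric activation (for example, suprathreshold voxel counts or signal change within IFG masks). A minimal sketch follows; the specific formula used in the study is an assumption here, not quoted from it.

        def lateralization_index(left_activation: float, right_activation: float) -> float:
            """LI in [-1, 1]; positive values indicate left-hemispheric dominance.

            Commonly defined as (L - R) / (L + R); the exact definition used in
            the study above is assumed, not quoted.
            """
            total = left_activation + right_activation
            if total == 0:
                raise ValueError("no suprathreshold activation in either hemisphere")
            return (left_activation - right_activation) / total

        # Hypothetical suprathreshold voxel counts in left and right BA 44/45.
        print(lateralization_index(820, 190))   # strongly left-lateralized
        print(lateralization_index(450, 430))   # nearly bilateral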

  3. Statistical process control analysis for patient-specific IMRT and VMAT QA.

    PubMed

    Sanghangthum, Taweap; Suriyapee, Sivalee; Srisatit, Somyot; Pawlicki, Todd

    2013-05-01

    This work applied statistical process control to establish the control limits of the % gamma pass of patient-specific intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) quality assurance (QA), and to evaluate the efficiency of the QA process by using the process capability index (Cpml). A total of 278 IMRT QA plans in nasopharyngeal carcinoma were measured with MapCHECK, while 159 VMAT QA plans were undertaken with ArcCHECK. Six-megavolt beams with nine fields were used for the IMRT plans and 2.5 arcs were used to generate the VMAT plans. The gamma (3%/3 mm) criteria were used to evaluate the QA plans. The % gamma passes were plotted on a control chart. The first 50 data points were employed to calculate the control limits. The Cpml was calculated to evaluate the capability of the IMRT/VMAT QA process. The results showed higher systematic errors in IMRT QA than VMAT QA due to the more complicated setup used in IMRT QA. The variation of random errors was also larger in IMRT QA than VMAT QA because the VMAT plan has more continuity of dose distribution. The average % gamma pass was 93.7% ± 3.7% for IMRT and 96.7% ± 2.2% for VMAT. The Cpml value of IMRT QA was 1.60 and that of VMAT QA was 1.99, which implied that the VMAT QA process was more accurate than the IMRT QA process. Our lower control limit for % gamma pass of IMRT is 85.0%, while the limit for VMAT is 90%. Both the IMRT and VMAT QA processes are of good quality because their Cpml values are higher than 1.0.
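
    The sketch below illustrates, with simulated % gamma pass data, the two computations described above: individuals-chart control limits estimated from the first 50 plans, and a one-sided capability index of the Cpml type against the lower specification. The exact index definition and specification values used in the paper are assumptions here.

        import numpy as np

        rng = np.random.default_rng(3)
        gamma_pass = rng.normal(96.7, 2.2, 159)        # simulated VMAT % gamma pass values

        baseline = gamma_pass[:50]                      # first 50 plans set the limits
        center = baseline.mean()
        # Short-term sigma from the average moving range (individuals chart).
        sigma = np.abs(np.diff(baseline)).mean() / 1.128
        ucl, lcl = center + 3 * sigma, center - 3 * sigma
        print(f"center = {center:.1f}%  LCL = {lcl:.1f}%  UCL = {ucl:.1f}%")

        # One-sided capability index toward the lower specification limit,
        # penalized for deviation from the target (assumed form of Cpml).
        lsl, target = 90.0, 100.0
        mu, s = gamma_pass.mean(), gamma_pass.std(ddof=1)
        cpml = (mu - lsl) / (3 * np.sqrt(s**2 + (mu - target)**2))
        print(f"Cpml = {cpml:.2f}  ({'capable' if cpml > 1.0 else 'not capable'})")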

  4. Comparative efficacy and tolerability of duloxetine, pregabalin, and milnacipran for the treatment of fibromyalgia: a Bayesian network meta-analysis of randomized controlled trials.

    PubMed

    Lee, Young Ho; Song, Gwan Gyu

    2016-05-01

    The aim of this study was to assess the relative efficacy and tolerability of duloxetine, pregabalin, and milnacipran at the recommended doses in patients with fibromyalgia. Randomized controlled trials (RCTs) examining the efficacy and safety of duloxetine 60 mg, pregabalin 300 mg, pregabalin 150 mg, milnacipran 200 mg, and milnacipran 100 mg compared to placebo in patients with fibromyalgia were included in this Bayesian network meta-analysis. Nine RCTs including 5140 patients met the inclusion criteria. The proportion of patients with >30 % improvement from baseline in pain was significantly higher in the duloxetine 60 mg, pregabalin 300 mg, milnacipran 100 mg, and milnacipran 200 mg groups than in the placebo group [pairwise odds ratio (OR) 2.33, 95 % credible interval (CrI) 1.50-3.67; OR 1.68, 95 % CrI 1.25-2.28; OR 1.62, 95 % CrI 1.16-2.25; and OR 1.61; 95 % CrI 1.15-2.24, respectively]. Ranking probability based on the surface under the cumulative ranking curve (SUCRA) indicated that duloxetine 60 mg had the highest probability of being the best treatment for achieving the response level (SUCRA = 0.9431), followed by pregabalin 300 mg (SUCRA = 0.6300), milnacipran 100 mg (SUCRA = 0.5680), milnacipran 200 mg (SUCRA = 0.5617), pregabalin 150 mg (SUCRA = 0.2392), and placebo (SUCRA = 0.0580). The risk of withdrawal due to adverse events was lower in the placebo group than in the pregabalin 300 mg, duloxetine 60 mg, milnacipran 100 mg, and milnacipran 200 mg groups. However, there was no significant difference in the efficacy and tolerability between the medications at the recommended doses. Duloxetine 60 mg, pregabalin 300 mg, milnacipran 100 mg, and milnacipran 200 mg were more efficacious than placebo. However, there was no significant difference in the efficacy and tolerability between the medications at the recommended doses.
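
    As a brief illustration of the SUCRA ranking used above, the sketch below computes SUCRA values from posterior draws of a hypothetical treatment effect (a larger effect taken as better). The actual posterior samples of the network meta-analysis are not available here, so the draws are fabricated for demonstration only.

        import numpy as np

        rng = np.random.default_rng(4)
        treatments = ["treatment A", "treatment B", "treatment C", "placebo"]

        # Hypothetical posterior draws of effect sizes versus a common reference.
        draws = np.column_stack([
            rng.normal(0.85, 0.15, 10_000),
            rng.normal(0.52, 0.15, 10_000),
            rng.normal(0.48, 0.15, 10_000),
            rng.normal(0.00, 0.10, 10_000),
        ])

        # Rank 1 = best (largest effect) within each posterior draw.
        ranks = (-draws).argsort(axis=1).argsort(axis=1) + 1
        mean_rank = ranks.mean(axis=0)
        n_treat = draws.shape[1]
        sucra = (n_treat - mean_rank) / (n_treat - 1)

        for name, s in sorted(zip(treatments, sucra), key=lambda x: -x[1]):
            print(f"{name:12s} SUCRA = {s:.3f}")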

  5. Statistical quality control for volumetric modulated arc therapy (VMAT) delivery by using the machine's log data

    NASA Astrophysics Data System (ADS)

    Cheong, Kwang-Ho; Lee, Me-Yeon; Kang, Sei-Kwon; Yoon, Jai-Woong; Park, Soah; Hwang, Taejin; Kim, Haeyoung; Kim, Kyoung Ju; Han, Tae Jin; Bae, Hoonsik

    2015-07-01

    The aim of this study is to set up statistical quality control for monitoring the volumetric modulated arc therapy (VMAT) delivery error by using the machine's log data. Eclipse and a Clinac iX linac with the RapidArc system (Varian Medical Systems, Palo Alto, USA) are used for delivery of the VMAT plan. During the delivery of the RapidArc fields, the machine determines the delivered monitor units (MUs) and the gantry angle's position accuracy, and the standard deviations of the MU (σMU: dosimetric error) and the gantry angle (σGA: geometric error) are displayed on the console monitor after completion of the RapidArc delivery. In the present study, first, the log data were analyzed to confirm their validity and usability; then, statistical process control (SPC) was applied to monitor the σMU and the σGA in a timely manner for all RapidArc fields: a total of 195 arc fields for 99 patients. The MU and the GA were determined twice for all fields, that is, first during the patient-specific plan QA and then again during the first treatment. The σMU and the σGA time series were quite stable irrespective of the treatment site; however, the σGA strongly depended on the gantry's rotation speed. The σGA of the RapidArc delivery for stereotactic body radiation therapy (SBRT) was smaller than that for typical VMAT. Therefore, SPC was applied separately to SBRT cases and general cases. Moreover, the accuracy of the potentiometer for the gantry rotation is important because the σGA can change dramatically depending on its condition. By applying SPC to the σMU and the σGA, we could monitor the delivery error efficiently. However, the upper and the lower limits of SPC need to be determined carefully with full knowledge of the machine and log data.

  6. Asymptotic analysis of Bayesian generalization error with Newton diagram.

    PubMed

    Yamazaki, Keisuke; Aoyagi, Miki; Watanabe, Sumio

    2010-01-01

    Statistical learning machines that have singularities in the parameter space, such as hidden Markov models, Bayesian networks, and neural networks, are widely used in the field of information engineering. Singularities in the parameter space determine the accuracy of estimation in the Bayesian scenario. The Newton diagram in algebraic geometry is recognized as an effective method by which to investigate a singularity. The present paper proposes a new technique to plug the diagram in the Bayesian analysis. The proposed technique allows the generalization error to be clarified and provides a foundation for an efficient model selection. We apply the proposed technique to mixtures of binomial distributions.

  7. Particle identification in ALICE: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Almaraz, J. R. M.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Balasubramanian, S.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Benacek, P.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Cai, X.; Caines, H.; Calero Diaz, L.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Dénes, E.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Germain, M.; Gheata, A.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Grachov, O. A.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Horak, D.; Hosokawa, R.; Hristov, P.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Incani, E.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Mohisin Khan, M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. S.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kostarakis, P.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Koyithatta Meethaleveedu, G.; Králik, I.; Kravčáková, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Ladron de Guevara, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Lehas, F.; Lemmon, R. 
C.; Lenti, V.; Leogrande, E.; León Monzón, I.; León Vargas, H.; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montaño Zetina, L.; Montes, E.; Moreira De Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, D.; Pagano, P.; Paić, G.; Pal, S. K.; Pan, J.; Pandey, A. K.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Pereira Da Costa, H.; Peresunko, D.; Pérez Lara, C. E.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Read, K. F.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. 
A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rocco, E.; Rodríguez Cahuantzi, M.; Rodriguez Manso, A.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Šándor, L.; Sandoval, A.; Sano, M.; Sarkar, D.; Sarkar, N.; Sarma, P.; Scapparone, E.; Scarlassara, F.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shahzad, M. I.; Shangaraev, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Souza, R. D. de; Sozzi, F.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thakur, D.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Valencia Palomo, L.; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yang, H.; Yang, P.; Yano, S.; Yasin, Z.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. 
S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.

    2016-05-01

    We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian PID approach for charged pions, kaons and protons in the central barrel of ALICE is studied. PID is performed via measurements of specific energy loss (dE/dx) and time of flight. PID efficiencies and misidentification probabilities are extracted and compared with Monte Carlo simulations using high-purity samples of identified particles in the decay channels K0S → π-π+, φ → K-K+, and Λ → p π- in p-Pb collisions at √s_NN = 5.02 TeV. In order to thoroughly assess the validity of the Bayesian approach, this methodology was used to obtain corrected pT spectra of pions, kaons, protons, and D0 mesons in pp collisions at √s = 7 TeV. In all cases, the results using Bayesian PID were found to be consistent with previous measurements performed by ALICE using a standard PID approach. For the measurement of D0 → K-π+, it was found that a Bayesian PID approach gave a higher signal-to-background ratio and a similar or larger statistical significance when compared with standard PID selections, despite a reduced identification efficiency. Finally, we present an exploratory study of the measurement of Λc+ → p K-π+ in pp collisions at √s = 7 TeV, using the Bayesian approach for the identification of its decay products.
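
    A toy sketch of the combination step described above, assuming Gaussian detector responses: per-species priors are multiplied by the likelihoods of the measured dE/dx and time-of-flight signals and normalized into posterior probabilities. The ALICE response parametrizations, priors, and numerical values used here are placeholders, not taken from the paper.

        import numpy as np
        from scipy.stats import norm

        species = ["pion", "kaon", "proton"]
        priors = np.array([0.75, 0.15, 0.10])          # assumed abundances at this momentum

        # Hypothetical expected signals and resolutions for a given track momentum.
        expected = {
            "dEdx": np.array([52.0, 68.0, 95.0]),      # arbitrary units
            "tof":  np.array([3.60, 3.75, 4.05]),      # nanoseconds
        }
        resolution = {"dEdx": 5.0, "tof": 0.08}

        measured = {"dEdx": 70.5, "tof": 3.78}          # one hypothetical track

        # Likelihood per species = product over detectors of the Gaussian responses.
        likelihood = np.ones(len(species))
        for det in expected:
            likelihood *= norm.pdf(measured[det], loc=expected[det], scale=resolution[det])

        posterior = priors * likelihood
        posterior /= posterior.sum()
        for name, p in zip(species, posterior):
            print(f"P({name} | signals) = {p:.3f}")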

  8. Bayesian homeopathy: talking normal again.

    PubMed

    Rutten, A L B

    2007-04-01

    Homeopathy has a communication problem: important homeopathic concepts are not understood by conventional colleagues. Homeopathic terminology seems to be comprehensible only after practical experience of homeopathy. The main problem lies in different handling of diagnosis. In conventional medicine diagnosis is the starting point for randomised controlled trials to determine the effect of treatment. In homeopathy diagnosis is combined with other symptoms and personal traits of the patient to guide treatment and predict response. Broadening our scope to include diagnostic as well as treatment research opens the possibility of multi factorial reasoning. Adopting Bayesian methodology opens the possibility of investigating homeopathy in everyday practice and of describing some aspects of homeopathy in conventional terms.

  9. Quantification Of Margins And Uncertainties: A Bayesian Approach (full Paper)

    SciTech Connect

    Wallstrom, Timothy C

    2008-01-01

    Quantification of Margins and Uncertainties (QMU) is 'a formalism for dealing with the reliability of complex technical systems, and the confidence which can be placed in estimates of that reliability' (Eardley et al., 2005). In this paper, we show how QMU may be interpreted in the framework of Bayesian statistical inference, using a probabilistic network. The Bayesian approach clarifies the probabilistic underpinnings of the formalism, and shows how the formalism can be used for decision-making.

  10. Bayesian Face Sketch Synthesis.

    PubMed

    Wang, Nannan; Gao, Xinbo; Sun, Leiyu; Li, Jie

    2017-03-01

    Exemplar-based face sketch synthesis has been widely applied to both digital entertainment and law enforcement. In this paper, we propose a Bayesian framework for face sketch synthesis, which provides a systematic interpretation for understanding the common properties and intrinsic difference in different methods from the perspective of probabilistic graphical models. The proposed Bayesian framework consists of two parts: the neighbor selection model and the weight computation model. Within the proposed framework, we further propose a Bayesian face sketch synthesis method. The essential rationale behind the proposed Bayesian method is that we take the spatial neighboring constraint between adjacent image patches into consideration for both aforementioned models, while the state-of-the-art methods neglect the constraint either in the neighbor selection model or in the weight computation model. Extensive experiments on the Chinese University of Hong Kong face sketch database demonstrate that the proposed Bayesian method could achieve superior performance compared with the state-of-the-art methods in terms of both subjective perceptions and objective evaluations.

  11. Childhood autism in India: A case-control study using tract-based spatial statistics analysis

    PubMed Central

    Assis, Zarina Abdul; Bagepally, Bhavani Shankara; Saini, Jitender; Srinath, Shoba; Bharath, Rose Dawn; Naidu, Purushotham R.; Gupta, Arun Kumar

    2015-01-01

    Context: Autism is a serious behavioral disorder among young children that now occurs at epidemic rates in developing countries like India. We have used tract-based spatial statistics (TBSS) of diffusion tensor imaging (DTI) measures to investigate the microstructure of primary neurocircuitry involved in autistic spectral disorders as compared to the typically developed children. Objective: To evaluate the various white matter tracts in Indian autistic children as compared to the controls using TBSS. Materials and Methods: Prospective, case-control, voxel-based, whole-brain DTI analysis using TBSS was performed. The study included 19 autistic children (mean age 8.7 years ± 3.84, 16 males and 3 females) and 34 controls (mean age 12.38 ± 3.76, all males). Fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), and axial diffusivity (AD) values were used as outcome variables. Results: Compared to the control group, TBSS demonstrated multiple areas of markedly reduced FA involving multiple long white matter tracts, entire corpus callosum, bilateral posterior thalami, and bilateral optic tracts (OTs). Notably, there were no voxels where FA was significantly increased in the autism group. Increased RD was also noted in these regions, suggesting underlying myelination defect. The MD was elevated in many of the projections and association fibers and notably in the OTs. There were no significant changes in the AD in these regions, indicating no significant axonal injury. There was no significant correlation between the FA values and Childhood Autism Rating Scale. Conclusion: This is a first of a kind study evaluating DTI findings in autistic children in India. In our study, DTI has shown a significant fault with the underlying intricate brain wiring system in autism. OT abnormality is a novel finding and needs further research. PMID:26600581

  12. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
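
    A one-dimensional toy version of the scheme described above, under assumptions not taken from the paper: a uniform prior on [-1, 1], a Gaussian likelihood, and a Legendre basis orthonormal with respect to that prior. The likelihood is fitted by linear least squares, and the evidence and posterior mean then follow semi-analytically from the first two expansion coefficients.

        import numpy as np
        from numpy.polynomial import legendre
        from scipy.stats import norm

        rng = np.random.default_rng(5)

        # Prior: theta ~ Uniform(-1, 1).  Likelihood of one hypothetical observation.
        y_obs, noise = 0.3, 0.2
        likelihood = lambda theta: norm.pdf(y_obs, loc=theta, scale=noise)

        # Orthonormal Legendre basis w.r.t. the uniform prior: psi_k = sqrt(2k+1) * P_k.
        degree = 20
        def basis_matrix(theta):
            cols = []
            for k in range(degree + 1):
                coef = np.zeros(k + 1)
                coef[k] = np.sqrt(2 * k + 1)
                cols.append(legendre.legval(theta, coef))
            return np.column_stack(cols)

        # Fit the expansion coefficients by linear least squares on prior samples.
        theta_s = rng.uniform(-1, 1, 2000)
        a, *_ = np.linalg.lstsq(basis_matrix(theta_s), likelihood(theta_s), rcond=None)

        # Post-processing: evidence = a_0 and posterior mean = a_1 / (sqrt(3) * a_0),
        # because psi_0 = 1 and theta = psi_1 / sqrt(3) under this basis.
        evidence = a[0]
        post_mean = a[1] / (np.sqrt(3) * evidence)
        print(f"evidence ~ {evidence:.4f}, posterior mean ~ {post_mean:.4f}")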

  13. A Bayesian Belief Network Approach to Explore Alternative Decisions for Sediment Control and water Storage Capacity at Lago Lucchetti, Puerto Rico

    EPA Science Inventory

    A Bayesian belief network (BBN) was developed to characterize the effects of sediment accumulation on the water storage capacity of Lago Lucchetti (located in southwest Puerto Rico) and to forecast the life expectancy (usefulness) of the reservoir under different management scena...

  14. Controlling Time-Dependent Confounding by Health Status and Frailty: Restriction Versus Statistical Adjustment.

    PubMed

    McGrath, Leah J; Ellis, Alan R; Brookhart, M Alan

    2015-07-01

    Nonexperimental studies of preventive interventions are often biased because of the healthy-user effect and, in frail populations, because of confounding by functional status. Bias is evident when estimating influenza vaccine effectiveness, even after adjustment for claims-based indicators of illness. We explored bias reduction methods while estimating vaccine effectiveness in a cohort of adult hemodialysis patients. Using the United States Renal Data System and linked data from a commercial dialysis provider, we estimated vaccine effectiveness using a Cox proportional hazards marginal structural model of all-cause mortality before and during 3 influenza seasons in 2005/2006 through 2007/2008. To improve confounding control, we added frailty indicators to the model, measured time-varying confounders at different time intervals, and restricted the sample in multiple ways. Crude and baseline-adjusted marginal structural models remained strongly biased. Restricting to a healthier population removed some unmeasured confounding; however, this reduced the sample size, resulting in wide confidence intervals. We estimated an influenza vaccine effectiveness of 9% (hazard ratio = 0.91, 95% confidence interval: 0.72, 1.15) when bias was minimized through cohort restriction. In this study, the healthy-user bias could not be controlled through statistical adjustment; however, sample restriction reduced much of the bias.

  15. Statistical diagnostics emerging from external quality control of real-time PCR.

    PubMed

    Marubini, E; Verderio, P; Raggi, Casini C; Pazzagli, M; Orlando, C

    2004-01-01

    Besides the application of conventional qualitative PCR as a valuable tool to enrich or identify specific sequences of nucleic acids, a new revolutionary technique for quantitative PCR determination has been introduced recently. It is based on real-time detection of PCR products revealed as a homogeneous accumulating signal generated by specific dyes. However, as far as we know, the influence of the variability of this technique on the reliability of the quantitative assay has not been thoroughly investigated. A national program of external quality assurance (EQA) for real-time PCR determination involving 42 Italian laboratories has been developed to assess the analytical performance of real-time PCR procedures. Participants were asked to perform a conventional experiment based on the use of an external reference curve (standard curve) for real-time detection of three cDNA samples with different concentrations of a specific target. In this paper the main analytical features of the standard curve have been investigated in an attempt to produce statistical diagnostics emerging from external quality control. Specific control charts were drawn to help biochemists take technical decisions aimed at improving the performance of their laboratories. Overall, our results indicated a subset of seven laboratories whose performance appeared to be markedly outside the limits for at least one of the standard curve features investigated. Our findings suggest the usefulness of the approach presented here for monitoring the heterogeneity of results produced by different laboratories and for selecting those laboratories that need technical advice on their performance.
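
    As a generic illustration of the control-chart idea (not the specific diagnostics developed in this EQA program), the sketch below places hypothetical standard-curve slopes reported by participating laboratories on a Shewhart-style chart with three-sigma limits; all numerical values are invented.

    ```python
    import numpy as np

    # Hypothetical standard-curve slopes from participating laboratories
    # (slopes near -3.32 correspond to ~100% amplification efficiency).
    slopes = np.array([-3.32, -3.45, -3.30, -3.60, -3.38, -3.10, -3.95, -3.41,
                       -3.36, -3.50, -3.29, -3.33])

    center = slopes.mean()
    sigma = slopes.std(ddof=1)
    lcl, ucl = center - 3 * sigma, center + 3 * sigma    # Shewhart-style 3-sigma control limits

    flagged = np.where((slopes < lcl) | (slopes > ucl))[0]
    print(f"limits: [{lcl:.2f}, {ucl:.2f}]; laboratories outside limits: {flagged}")
    ```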

  16. Reanalysis of morphine consumption from two randomized controlled trials of gabapentin using longitudinal statistical methods

    PubMed Central

    Zhang, Shiyuan; Paul, James; Nantha-Aree, Manyat; Buckley, Norman; Shahzad, Uswa; Cheng, Ji; DeBeer, Justin; Winemaker, Mitchell; Wismer, David; Punthakee, Dinshaw; Avram, Victoria; Thabane, Lehana

    2015-01-01

    Background Postoperative pain management in total joint replacement surgery remains ineffective in up to 50% of patients and has an overwhelming impact in terms of patient well-being and health care burden. We present here an empirical analysis of two randomized controlled trials assessing whether addition of gabapentin to a multimodal perioperative analgesia regimen can reduce morphine consumption or improve analgesia for patients following total joint arthroplasty (the MOBILE trials). Methods Morphine consumption, measured for four time periods in patients undergoing total hip or total knee arthroplasty, was analyzed using a linear mixed-effects model to provide a longitudinal estimate of the treatment effect. Repeated-measures analysis of variance and generalized estimating equations were used in a sensitivity analysis to compare the robustness of the methods. Results There was no statistically significant difference in morphine consumption between the treatment group and a control group (mean effect size estimate 1.0, 95% confidence interval −4.7, 6.7, P=0.73). The results remained robust across different longitudinal methods. Conclusion The results of the current reanalysis of morphine consumption align with those of the MOBILE trials. Gabapentin did not significantly reduce morphine consumption in patients undergoing major replacement surgeries. The results remain consistent across longitudinal methods. More work in the area of postoperative pain is required to provide adequate management for this patient population. PMID:25709496
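
    A minimal sketch of such a longitudinal analysis, using the statsmodels mixed-model API on synthetic data with a random intercept per patient, is shown below; the variable names, effect sizes, and four-period structure are assumptions for illustration and do not reproduce the MOBILE data or analysis.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for repeated morphine consumption over 4 postoperative periods.
    rng = np.random.default_rng(0)
    n_patients, n_periods = 60, 4
    df = pd.DataFrame({
        "patient": np.repeat(np.arange(n_patients), n_periods),
        "period": np.tile(np.arange(n_periods), n_patients),
        "gabapentin": np.repeat(rng.integers(0, 2, n_patients), n_periods),
    })
    # Random patient intercepts plus noise; the treatment effect is null by construction.
    df["morphine"] = (30 + 5 * rng.standard_normal(n_patients)[df["patient"]]
                      - 2 * df["period"] + 3 * rng.standard_normal(len(df)))

    # Linear mixed-effects model: fixed effects for treatment and time,
    # random intercept per patient.
    result = smf.mixedlm("morphine ~ gabapentin + period", df, groups=df["patient"]).fit()
    print(result.summary())
    ```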

  17. Bayesian Visual Odometry

    NASA Astrophysics Data System (ADS)

    Center, Julian L.; Knuth, Kevin H.

    2011-03-01

    Visual odometry refers to tracking the motion of a body using an onboard vision system. Practical visual odometry systems combine the complementary accuracy characteristics of vision and inertial measurement units. The Mars Exploration Rovers, Spirit and Opportunity, used this type of visual odometry. The visual odometry algorithms in Spirit and Opportunity were based on Bayesian methods, but a number of simplifying approximations were needed to deal with onboard computer limitations. Furthermore, the allowable motion of the rover had to be severely limited so that computations could keep up. Recent advances in computer technology make it feasible to implement a fully Bayesian approach to visual odometry. This approach combines dense stereo vision, dense optical flow, and inertial measurements. As with all true Bayesian methods, it also determines error bars for all estimates. This approach also offers the possibility of using Micro-Electro Mechanical Systems (MEMS) inertial components, which are more economical, weigh less, and consume less power than conventional inertial components.

  18. Bayesian least squares deconvolution

    NASA Astrophysics Data System (ADS)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  19. Exploring the use of statistical process control methods to assess course changes

    NASA Astrophysics Data System (ADS)

    Vollstedt, Ann-Marie

    This dissertation pertains to the field of Engineering Education. The Department of Mechanical Engineering at the University of Nevada, Reno (UNR) is hosting this dissertation under a special agreement. This study was motivated by the desire to find an improved, quantitative measure of student quality that is both convenient to use and easy to evaluate. While traditional statistical analysis tools such as ANOVA (analysis of variance) are useful, they are somewhat time-consuming and are subject to error because they are based on grades, which are influenced by numerous variables independent of student ability and effort (e.g. inflation and curving). Additionally, grades are currently the only measure of quality in most engineering courses even though most faculty agree that grades do not accurately reflect student quality. Based on a literature search, quality was defined in this study as content knowledge, cognitive level, self-efficacy, and critical thinking. Nineteen treatments were applied to a pair of freshman classes in an effort to increase these qualities. The qualities were measured via quiz grades, essays, surveys, and online critical thinking tests. Results from the quality tests were adjusted and filtered prior to analysis. All test results were subjected to Chauvenet's criterion in order to detect and remove outlying data. In addition to removing outliers from data sets, individual course grades needed adjustment to account for the large portion of the grade that was defined by group work. A new method was developed to adjust grades within each group based on the residual of the individual grades within the group and the portion of the course grade defined by group work. It was found that the grade adjustment method agreed 78% of the time with the manual grade changes instructors made in 2009, and also increased the correlation between group grades and individual grades. Using these adjusted grades, Statistical Process Control
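
    For reference, Chauvenet's criterion as used for the outlier screening above can be stated compactly: under a normal assumption, a point is rejected when the expected number of equally extreme observations, N * P(|Z| >= |z_i|), falls below 0.5. The sketch below applies it to invented quiz scores, not the dissertation's data.

    ```python
    import numpy as np
    from scipy import stats

    def chauvenet_mask(values):
        """Boolean mask of points retained by Chauvenet's criterion: a point is
        rejected when N * P(|Z| >= |z_i|) < 0.5 under a normal assumption."""
        values = np.asarray(values, dtype=float)
        n = values.size
        z = np.abs(values - values.mean()) / values.std(ddof=1)
        expected = n * 2 * stats.norm.sf(z)     # expected count of equally extreme points
        return expected >= 0.5

    # Hypothetical quiz scores with one gross outlier.
    scores = np.array([78, 82, 85, 80, 79, 83, 15, 81])
    print(scores[chauvenet_mask(scores)])       # the score of 15 is rejected here
    ```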

  20. Bayesian Networks for Social Modeling

    SciTech Connect

    Whitney, Paul D.; White, Amanda M.; Walsh, Stephen J.; Dalton, Angela C.; Brothers, Alan J.

    2011-03-28

    This paper describes a body of work developed over the past five years. The work addresses the use of Bayesian network (BN) models for representing and predicting social/organizational behaviors. The topics covered include model construction, validation, and use. These topics span the bulk of the lifetime of such a model, beginning with construction, moving to validation and other aspects of model ‘critiquing’, and finally demonstrating how the modeling approach might be used to inform policy analysis. To conclude, we discuss limitations of using BNs for this activity and suggest remedies to address those limitations. The primary benefits of using a well-developed computational, mathematical, and statistical modeling structure, such as BNs, are 1) there are significant computational, theoretical, and capability bases on which to build, and 2) the ability to empirically critique the model and potentially evaluate competing models for a social/behavioral phenomenon.

  1. Bayesian Exploratory Factor Analysis

    PubMed Central

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517

  2. Decision generation tools and Bayesian inference

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Wang, Wenjian; Forrester, Thomas; Kostrzewski, Andrew; Veeris, Christian; Nielsen, Thomas

    2014-05-01

    Digital Decision Generation (DDG) tools are important software sub-systems of Command and Control (C2) systems and technologies. In this paper, we present a special type of DDG tool based on Bayesian inference, applied to adverse (hostile) networks, including such important applications as terrorism-related and organized-crime networks.

  3. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is the statistical approach used to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties that provide remarkable results, and it also shows a consistency characteristic, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion; identifying the number of components correctly is important because a misspecified number may lead to invalid results. The Bayesian method is then used to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia. The results showed a negative relationship between rubber prices and stock market prices for all selected countries.
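
    The component-selection step can be illustrated with the Bayesian Information Criterion as sketched below. Note that the sketch fits the candidate mixtures by maximum-likelihood EM (scikit-learn's GaussianMixture) purely to show BIC-based selection of k; the abstract's actual model fitting is Bayesian, and the two-regime synthetic "returns" are an assumption.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical two-regime series standing in for the financial data.
    rng = np.random.default_rng(1)
    returns = np.concatenate([rng.normal(0.0, 0.5, 400),
                              rng.normal(1.5, 0.3, 200)]).reshape(-1, 1)

    # Choose the number of mixture components by the Bayesian Information Criterion.
    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(returns).bic(returns)
            for k in range(1, 6)}
    best_k = min(bics, key=bics.get)
    print(bics, "-> selected k =", best_k)
    ```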

  4. Bayesian networks in neuroscience: a survey

    PubMed Central

    Bielza, Concha; Larrañaga, Pedro

    2014-01-01

    Bayesian networks are a type of probabilistic graphical model that lies at the intersection of statistics and machine learning. They have been shown to be powerful tools for encoding dependence relationships among the variables of a domain under uncertainty. Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. We also examine how a user can take advantage of these networks for reasoning with exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, they have seen little use in neuroscience, where work has focused on specific problems such as functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: to discover associations between variables, to perform probabilistic reasoning over the model, and to classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, -omics, and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive, and medical) of the brain aspects to be studied. PMID:25360109

  5. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.

  6. Bayesian networks in neuroscience: a survey.

    PubMed

    Bielza, Concha; Larrañaga, Pedro

    2014-01-01

    Bayesian networks are a type of probabilistic graphical model that lies at the intersection of statistics and machine learning. They have been shown to be powerful tools for encoding dependence relationships among the variables of a domain under uncertainty. Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. We also examine how a user can take advantage of these networks for reasoning with exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, they have seen little use in neuroscience, where work has focused on specific problems such as functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: to discover associations between variables, to perform probabilistic reasoning over the model, and to classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, -omics, and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive, and medical) of the brain aspects to be studied.

  7. Predicting coastal cliff erosion using a Bayesian probabilistic model

    USGS Publications Warehouse

    Hapke, C.; Plant, N.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70-90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale. © 2010.

  8. Statistical methods for launch vehicle guidance, navigation, and control (GN&C) system design and analysis

    NASA Astrophysics Data System (ADS)

    Rose, Michael Benjamin

    A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical

  9. Statistical Results on Filtering and Epi-convergence for Learning-Based Model Predictive Control

    DTIC Science & Technology

    2011-12-17

    The first part concerns filtering and statistical identification (or learning) of unmodeled dynamics for dynamical systems that can be described by ordinary differential equations (ODEs). The second part, found in Section 4, provides proofs concerning the epi-convergence of different statistical...

  10. Quality control of herbal medicines by using spectroscopic techniques and multivariate statistical analysis.

    PubMed

    Singh, Sunil Kumar; Jha, Sunil Kumar; Chaudhary, Anand; Yadava, R D S; Rai, S B

    2010-02-01

    Herbal medicines play an important role in modern human life and have significant effects on treating diseases; however, the quality and safety of these herbal products has now become a serious issue due to increasing pollution in air, water, soil, etc. The present study proposes Fourier transform infrared spectroscopy (FTIR) along with the statistical method principal component analysis (PCA) to identify and discriminate herbal medicines for quality control. Herbal plants have been characterized using FTIR spectroscopy. Characteristic peaks (strong and weak) have been marked for each herbal sample in the fingerprint region (400-2000 cm(-1)). The ratio of the areas of any two marked characteristic peaks was found to be nearly consistent for the same plant from different regions, and thus the present idea suggests an additional discrimination method for herbal medicines. PCA clusters herbal medicines into different groups, clearly showing that this method can adequately discriminate different herbal medicines using FTIR data. Toxic metal contents (Cd, Pb, Cr, and As) have been determined and the results compared with the higher permissible daily intake limit of heavy metals proposed by the World Health Organization (WHO).
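
    A minimal sketch of the PCA discrimination step on a synthetic absorbance matrix (two invented plant "templates" plus noise standing in for real FTIR spectra) is given below; it illustrates the workflow rather than the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical FTIR absorbance matrix: rows are herbal samples, columns are
    # wavenumbers across the 400-2000 cm^-1 fingerprint region (synthetic values).
    rng = np.random.default_rng(2)
    template_a, template_b = rng.random(800), rng.random(800)
    spectra = np.vstack([template_a + 0.05 * rng.standard_normal((15, 800)),
                         template_b + 0.05 * rng.standard_normal((15, 800))])

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
    # Plotting the two score columns against each other would show the two plants
    # separating into distinct clusters, which is the discrimination step described above.
    print(scores[:3], scores[-3:], sep="\n")
    ```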

  11. Feasibility study of using statistical process control to customized quality assurance in proton therapy

    SciTech Connect

    Rah, Jeong-Eun; Oh, Do Hoon; Shin, Dongho; Kim, Tae Hyun; Kim, Gwe-Ya

    2014-09-15

    Purpose: To evaluate and improve the reliability of proton quality assurance (QA) processes and, to provide an optimal customized tolerance level using the statistical process control (SPC) methodology. Methods: The authors investigated the consistency check of dose per monitor unit (D/MU) and range in proton beams to see whether it was within the tolerance level of the daily QA process. This study analyzed the difference between the measured and calculated ranges along the central axis to improve the patient-specific QA process in proton beams by using process capability indices. Results: The authors established a customized tolerance level of ±2% for D/MU and ±0.5 mm for beam range in the daily proton QA process. In the authors’ analysis of the process capability indices, the patient-specific range measurements were capable of a specification limit of ±2% in clinical plans. Conclusions: SPC methodology is a useful tool for customizing the optimal QA tolerance levels and improving the quality of proton machine maintenance, treatment delivery, and ultimately patient safety.
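
    The process capability indices mentioned above can be computed directly from the QA measurements and the tolerance limits, as in the sketch below; the daily D/MU deviations are invented and the ±2% limits are taken from the abstract. Cp compares the process spread to the specification width, while Cpk additionally penalizes an off-center process.

    ```python
    import numpy as np

    def process_capability(measurements, lsl, usl):
        """Cp and Cpk of a QA quantity against lower/upper specification limits."""
        m, s = np.mean(measurements), np.std(measurements, ddof=1)
        cp = (usl - lsl) / (6 * s)
        cpk = min(usl - m, m - lsl) / (3 * s)
        return cp, cpk

    # Hypothetical daily D/MU consistency checks, expressed as percent deviation
    # from baseline, evaluated against the +/-2% tolerance described above.
    dmu_deviation = np.array([0.3, -0.5, 0.1, 0.8, -0.2, 0.4, -0.6, 0.2, 0.0, 0.5])
    print(process_capability(dmu_deviation, lsl=-2.0, usl=2.0))
    ```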

  12. Statistical optimization of gastric floating system for oral controlled delivery of calcium.

    PubMed

    Li, S; Lin, S; Chien, Y W; Daggy, B P; Mirchandani, H L

    2001-01-13

    The development of an optimized gastric floating drug delivery system is described. Statistical experimental design and data analysis using response surface methodology are also illustrated. A central composite Box-Wilson design for the controlled release of calcium was used with 3 formulation variables: X1 (hydroxypropyl methylcellulose [HPMC] loading), X2 (citric acid loading), and X3 (magnesium stearate loading). Twenty formulations were prepared, and dissolution studies and floating kinetics were performed on these formulations. The dissolution data obtained were then fitted to the Power Law, and the floating profiles were analyzed. Diffusion exponents obtained from the Power Law were used as targeted response variables, and constraints were placed on the other response variables. All 3 formulation variables were found to be significant for the release properties (P <.05), while only HPMC loading was found to be significant for the floating properties. Optimization of the formulations was achieved by applying constrained optimization. The optimized formulation delivered calcium at a release rate of 40 mg/hr, with predicted n and T50% values of 0.93 and 3.29 hours, respectively. Experimentally, calcium was observed to release from the optimized formulation with n and T50% values of 0.89 (+/- 0.10) and 3.20 (+/- 0.21) hours, showing excellent agreement. The quadratic mathematical model developed could be used to further predict formulations with desirable release and floating properties.

  13. Structural damage detection using extended Kalman filter combined with statistical process control

    NASA Astrophysics Data System (ADS)

    Jin, Chenhao; Jang, Shinae; Sun, Xiaorong

    2015-04-01

    Traditional modal-based methods, which identify damage based upon changes in vibration characteristics of the structure on a global basis, have received considerable attention in the past decades. However, the effectiveness of modal-based methods depends on the type of damage and the accuracy of the structural model, and these methods may also have difficulties when applied to complex structures. The extended Kalman filter (EKF) algorithm, which has the capability to estimate parameters and catch abrupt changes, is currently used in continuous and automatic structural damage detection to overcome the disadvantages of traditional methods. Structural parameters are typically slow-changing variables under the effects of operational and environmental conditions, so it would be difficult to observe structural damage and quantify it in real time with the EKF alone. In this paper, Statistical Process Control (SPC) is combined with the EKF method to overcome this difficulty. Based on historical measurements of damage-sensitive features involved in the state-space dynamic models, the EKF algorithm is used to produce real-time estimates of these features as well as their standard deviations, which can then be used to form control ranges for SPC to detect any abnormality in the selected features. Moreover, the confidence level of the detection can be adjusted by choosing different multiples of sigma and different numbers of adjacent out-of-range points. The proposed method is tested using simulated data of a three-story linear building in different damage scenarios, and numerical results demonstrate the high damage detection accuracy and light computational load of the presented method.
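
    A much-simplified stand-in for the combined scheme is sketched below: filtered parameter estimates (invented stiffness values, as if produced by an EKF) are checked against baseline control limits, and an alarm is raised when several consecutive points fall outside the range. The baseline length, sigma multiple, and run length are all assumptions.

    ```python
    import numpy as np

    def spc_alarm(estimates, n_baseline=20, n_sigma=3, run_length=3):
        """Raise an alarm when `run_length` consecutive estimates fall outside the
        baseline mean +/- n_sigma standard deviations (simplified SPC check)."""
        baseline = estimates[:n_baseline]        # assume the first samples are healthy
        lo = baseline.mean() - n_sigma * baseline.std(ddof=1)
        hi = baseline.mean() + n_sigma * baseline.std(ddof=1)
        out = (estimates < lo) | (estimates > hi)
        return any(out[i:i + run_length].all() for i in range(len(out) - run_length + 1))

    # Hypothetical stiffness estimates from an extended Kalman filter: a stable
    # period followed by a 10% stiffness drop representing structural damage.
    rng = np.random.default_rng(3)
    k_est = np.concatenate([1.00 + 0.005 * rng.standard_normal(50),
                            0.90 + 0.005 * rng.standard_normal(30)])
    print(spc_alarm(k_est))                      # expected to print True
    ```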

  14. Statistical estimate of mercury removal efficiencies for air pollution control devices of municipal solid waste incinerators.

    PubMed

    Takahashi, Fumitake; Kida, Akiko; Shimaoka, Takayuki

    2010-10-15

    Although representative removal efficiencies of gaseous mercury for air pollution control devices (APCDs) are important for preparing more reliable atmospheric emission inventories of mercury, they remain uncertain because they depend sensitively on many factors such as the type of APCD, gas temperature, and mercury speciation. In this study, representative removal efficiencies of gaseous mercury for several types of APCDs used in municipal solid waste incineration (MSWI) were derived using a statistical method. A total of 534 measurements of mercury removal efficiency for APCDs used in MSWI were collected. APCDs were categorized as fixed-bed absorber (FA), wet scrubber (WS), electrostatic precipitator (ESP), and fabric filter (FF), and their hybrid systems. The data series for all APCD types were log-normally distributed. The average removal efficiency with a 95% confidence interval for each APCD was estimated. The FA, WS, and FF with carbon and/or dry sorbent injection systems had average removal efficiencies of 75% to 82%. On the other hand, the ESP with/without dry sorbent injection had lower removal efficiencies of up to 22%. The type of dry sorbent injection in the FF system, dry or semi-dry, made less than a 1% difference to the removal efficiency, and the injection of activated carbon versus carbon-containing fly ash in the FF system made less than a 3% difference. Estimation errors of removal efficiency were especially high for the ESP. The national average removal efficiency of APCDs in Japanese MSWI plants was estimated on the basis of incineration capacity. Owing to the replacement of old APCDs for dioxin control, the national average removal efficiency increased from 34.5% in 1991 to 92.5% in 2003, corresponding to an additional emission reduction of about 0.86 Mg in 2003. Applying the methodology of this study to other important emission sources, such as coal-fired power plants, will contribute to better emission inventories.

  15. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  16. Spectral Bayesian Knowledge Tracing

    ERIC Educational Resources Information Center

    Falakmasir, Mohammad; Yudelson, Michael; Ritter, Steve; Koedinger, Ken

    2015-01-01

    Bayesian Knowledge Tracing (BKT) has been in wide use for modeling student skill acquisition in Intelligent Tutoring Systems (ITS). BKT tracks and updates student's latent mastery of a skill as a probability distribution of a binary variable. BKT does so by accounting for observed student successes in applying the skill correctly, where success is…
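
    The standard BKT step can be written in a few lines: a Bayesian update of the latent mastery probability from the observed response, followed by the learning transition. The guess, slip, and learn parameters and the response sequence below are assumptions for illustration.

    ```python
    def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
        """One standard BKT step: condition mastery on the observed response,
        then apply the learning transition (parameter values are assumptions)."""
        if correct:
            cond = p_mastery * (1 - p_slip) / (
                p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
        else:
            cond = p_mastery * p_slip / (
                p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
        return cond + (1 - cond) * p_learn

    p = 0.3                                   # prior probability that the skill is mastered
    for obs in [True, True, False, True]:     # hypothetical sequence of student responses
        p = bkt_update(p, obs)
        print(round(p, 3))
    ```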

  17. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  18. Bayesian natural selection and the evolution of perceptual systems.

    PubMed Central

    Geisler, Wilson S; Diehl, Randy L

    2002-01-01

    In recent years, there has been much interest in characterizing statistical properties of natural stimuli in order to better understand the design of perceptual systems. A fruitful approach has been to compare the processing of natural stimuli in real perceptual systems with that of ideal observers derived within the framework of Bayesian statistical decision theory. While this form of optimization theory has provided a deeper understanding of the information contained in natural stimuli as well as of the computational principles employed in perceptual systems, it does not directly consider the process of natural selection, which is ultimately responsible for design. Here we propose a formal framework for analysing how the statistics of natural stimuli and the process of natural selection interact to determine the design of perceptual systems. The framework consists of two complementary components. The first is a maximum fitness ideal observer, a standard Bayesian ideal observer with a utility function appropriate for natural selection. The second component is a formal version of natural selection based upon Bayesian statistical decision theory. Maximum fitness ideal observers and Bayesian natural selection are demonstrated in several examples. We suggest that the Bayesian approach is appropriate not only for the study of perceptual systems but also for the study of many other systems in biology. PMID:12028784

  19. Statistical examination of laser therapy effects in controlled double-blind clinical trial

    NASA Astrophysics Data System (ADS)

    Boerner, Ewa; Podbielska, Halina

    2001-10-01

    For the evaluation of the therapy effects, a double-blind clinical trial followed by statistical analysis was performed. The statistical calculations showed that laser therapy with IR radiation has a significant influence on the decrease of the level of pain in the examined group of patients suffering from various locomotor diseases: the level of pain in patients undergoing laser therapy was statistically lower than in patients undergoing placebo therapy. The same tests were performed to evaluate the range of movement. Although placebo therapy also contributed to an increase in the range of movement, a statistically significant influence was found only in the therapeutic group treated with laser.

  20. A Bayesian multi-planet Kepler periodogram for exoplanet detection

    NASA Astrophysics Data System (ADS)

    Gregory, P. C.

    2005-12-01

    A Bayesian multi-planet Kepler periodogram has been developed for the analysis of precision radial velocity data (Gregory, Ap. J., 631, 1198, 2005). The periodogram employs a parallel tempering Markov chain Monte Carlo algorithm with a novel statistical control system. Examples of its use will be presented, including a re-analysis of data for HD 208487 (Gregory, 2005b, astro-ph/0509412) for which we find strong evidence for a second planet with a period of 998 (+57/-62) days, an eccentricity of 0.19 (+0.05/-0.18), and an M sin i = 0.46 (+0.05/-0.13) MJ. This research was supported in part by a grant from the Canadian Natural Sciences and Engineering Research Council of Canada at the University of British Columbia.

  1. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    SciTech Connect

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  2. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    DOE PAGES

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.

  3. Bayesian Case-deletion Model Complexity and Information Criterion

    PubMed Central

    Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Qingxia

    2015-01-01

    We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example. PMID:26180578

  4. Bayesian multimodel inference for dose-response studies

    USGS Publications Warehouse

    Link, W.A.; Albers, P.H.

    2007-01-01

    Statistical inference in dose-response studies is model-based: the analyst posits a mathematical model of the relation between exposure and response, estimates parameters of the model, and reports conclusions conditional on the model. Such analyses rarely include any accounting for the uncertainties associated with model selection. The Bayesian inferential system provides a convenient framework for model selection and multimodel inference. In this paper we briefly describe the Bayesian paradigm and Bayesian multimodel inference. We then present a family of models for multinomial dose-response data and apply Bayesian multimodel inferential methods to the analysis of data on the reproductive success of American kestrels (Falco sparverius) exposed to various sublethal dietary concentrations of methylmercury.

  5. Advances in Bayesian Multiple QTL Mapping in Experimental Crosses

    PubMed Central

    Yi, Nengjun; Shriner, Daniel

    2016-01-01

    Many complex human diseases and traits of biological and/or economic importance are determined by interacting networks of multiple quantitative trait loci (QTL) and environmental factors. Mapping QTL is critical for understanding the genetic basis of complex traits, and for ultimate identification of responsible genes. A variety of sophisticated statistical methods for QTL mapping have been developed. Among these developments, the evolution of Bayesian approaches for multiple QTL mapping over the past decade has been remarkable. Bayesian methods can jointly infer the number of QTL, their genomic positions, and their genetic effects. Here, we review recently developed and still developing Bayesian methods and associated computer software for mapping multiple QTL in experimental crosses. We compare and contrast these methods to clearly describe the relationships among different Bayesian methods. We conclude this review by highlighting some areas of future research. PMID:17987056

  6. Quantum-Like Representation of Non-Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.

    2013-01-01

    This research is related to the problem of "irrational decision making or inference" that has been discussed in cognitive psychology. There are experimental studies whose statistical data cannot be described by classical probability theory, and the process of decision making generating these data cannot be reduced to classical Bayesian inference. For this problem, a number of quantum-like cognitive models of decision making have been proposed. Our previous work represented the classical Bayesian inference in a natural way within the framework of quantum mechanics. By using this representation, in this paper we discuss the non-Bayesian (irrational) inference that is biased by effects like quantum interference. Further, we describe the "psychological factor" disturbing "rationality" as an "environment" correlating with the "main system" of the usual Bayesian inference.

  7. Validating an automated outcomes surveillance application using data from a terminated randomized, controlled trial (OPUS [TIMI-16]).

    PubMed

    Matheny, Michael E; Morrow, David A; Ohno-Machado, Lucila; Cannon, Christopher P; Resnic, Frederic S

    2007-10-11

    We sought to validate an automated outcomes surveillance system (DELTA) using OPUS (TIMI-16), a multi-center randomized, controlled trial that was stopped early due to elevated mortality in one of the two intervention arms. Methodologies that were incorporated into the application (Statistical Process Control [SPC] and Bayesian Updating Statistics [BUS]) were compared with standard Data Safety Monitoring Board (DSMB) protocols.

  8. Bayesian Correlation Analysis for Sequence Count Data

    PubMed Central

    Lau, Nelson; Perkins, Theodore J.

    2016-01-01

    Evaluating the similarity of different measured variables is a fundamental task of statistics, and a key part of many bioinformatics algorithms. Here we propose a Bayesian scheme for estimating the correlation between different entities’ measurements based on high-throughput sequencing data. These entities could be different genes or miRNAs whose expression is measured by RNA-seq, different transcription factors or histone marks whose expression is measured by ChIP-seq, or even combinations of different types of entities. Our Bayesian formulation accounts for both measured signal levels and uncertainty in those levels, due to varying sequencing depth in different experiments and to varying absolute levels of individual entities, both of which affect the precision of the measurements. In comparison with a traditional Pearson correlation analysis, we show that our Bayesian correlation analysis retains high correlations when measurement confidence is high, but suppresses correlations when measurement confidence is low—especially for entities with low signal levels. In addition, we consider the influence of priors on the Bayesian correlation estimate. Perhaps surprisingly, we show that naive, uniform priors on entities’ signal levels can lead to highly biased correlation estimates, particularly when different experiments have widely varying sequencing depths. However, we propose two alternative priors that provably mitigate this problem. We also prove that, like traditional Pearson correlation, our Bayesian correlation calculation constitutes a kernel in the machine learning sense, and thus can be used as a similarity measure in any kernel-based machine learning algorithm. We demonstrate our approach on two RNA-seq datasets and one miRNA-seq dataset. PMID:27701449

  9. Bayesian Correlation Analysis for Sequence Count Data.

    PubMed

    Sánchez-Taltavull, Daniel; Ramachandran, Parameswaran; Lau, Nelson; Perkins, Theodore J

    2016-01-01

    Evaluating the similarity of different measured variables is a fundamental task of statistics, and a key part of many bioinformatics algorithms. Here we propose a Bayesian scheme for estimating the correlation between different entities' measurements based on high-throughput sequencing data. These entities could be different genes or miRNAs whose expression is measured by RNA-seq, different transcription factors or histone marks whose expression is measured by ChIP-seq, or even combinations of different types of entities. Our Bayesian formulation accounts for both measured signal levels and uncertainty in those levels, due to varying sequencing depth in different experiments and to varying absolute levels of individual entities, both of which affect the precision of the measurements. In comparison with a traditional Pearson correlation analysis, we show that our Bayesian correlation analysis retains high correlations when measurement confidence is high, but suppresses correlations when measurement confidence is low-especially for entities with low signal levels. In addition, we consider the influence of priors on the Bayesian correlation estimate. Perhaps surprisingly, we show that naive, uniform priors on entities' signal levels can lead to highly biased correlation estimates, particularly when different experiments have widely varying sequencing depths. However, we propose two alternative priors that provably mitigate this problem. We also prove that, like traditional Pearson correlation, our Bayesian correlation calculation constitutes a kernel in the machine learning sense, and thus can be used as a similarity measure in any kernel-based machine learning algorithm. We demonstrate our approach on two RNA-seq datasets and one miRNA-seq dataset.

  10. Statistical analysis of site factors controlling preferential flow and transport in soils

    NASA Astrophysics Data System (ADS)

    Koestel, John; Moeys, Julien; Jarvis, Nick

    2010-05-01

    Knowledge of the solute transport characteristics of soils at field and catchment scales is important for sustainable management of environmental and agricultural resources. It is known that preferential flow and solute transport through macropores often contribute substantially to the transport of diffuse pollutants to groundwater or surface water via drains. As direct measurement of preferential flow parameters is expensive and time consuming, more easily obtainable soil properties must instead be used as surrogates to predict preferential flow, using so-called pedotransfer functions (PTFs). However, there is evidence that the soil properties used in classical PTFs (soil texture, bulk density, fraction of organic matter) are not sufficient to infer the flow and transport characteristics of near-saturated and saturated soils (e.g. Weynants, M. et al., 2009. Revisiting Vereecken Pedotransfer Functions: Introducing a Closed-Form Hydraulic Model. Vadose Zone Journal 8: 86-95). Rather, the incorporation of additional data such as land use, soil-biota, and soil-type information appears to be necessary (e.g. Jarvis, N. J. et al., 2009. A conceptual model of soil susceptibility to macropore flow. Vadose Zone Journal 8: 902-910). In this study, we make use of a database comprising results of breakthrough curve experiments and corresponding site (parent material, climate, land use, etc.) and soil properties (texture, bulk density, etc.) published in the peer-reviewed literature to identify through statistical analyses (e.g. factorial and cluster analysis) the key soil properties and site attributes that control susceptibility to preferential flow.

  11. Statistical quality control charts for liver transplant process indicators: evaluation of a single-center experience.

    PubMed

    Varona, M A; Soriano, A; Aguirre-Jaime, A; Barrera, M A; Medina, M L; Bañon, N; Mendez, S; Lopez, E; Portero, J; Dominguez, D; Gonzalez, A

    2012-01-01

    Liver transplantation, the best option for many end-stage liver diseases, is indicated for more candidates than donor availability allows. In this situation, this demanding treatment must achieve excellence, accessibility, and patient satisfaction to be ethical, scientific, and efficient. The current consensus on quality measurements promoted by the Sociedad Española de Trasplante Hepático (SETH) seeks to define criteria, indicators, and standards for liver transplantation in Spain. Following this recommendation, the Canary Islands liver program has reviewed its experience. We separated the 411 cadaveric transplants performed in the last 15 years into 2 groups: the first 100 and the subsequent 311. The 8 criteria of SETH 2010 were correctly fulfilled. For most indicators, the outcomes were favorable, with actuarial survival rates at 1, 3, 5, and 10 years of 84%, 79%, 76%, and 65%, respectively; excellent results in retransplant rates (early 0.56% and long-term 5.9%), primary nonfunction rate (0.43%), waiting list mortality (13.34%), and patient satisfaction (91.5%). On the other hand, some indicators were worse, namely perioperative, postoperative, and early mortality with normal graft function, and the reoperation rate. After analyzing the series with statistical quality control charts, we observed an improvement in all indicators, even in the apparently worst one, early mortality with normal graft function, within a stable program. These results helped us to identify specific areas in which to improve the program. The application of quality measurement, as the SETH consensus recommends, has shown in our study that, despite being a time-consuming process, it is a useful tool.

  12. Using Statistical Process Control for detecting anomalies in multivariate spatiotemporal Earth Observations

    NASA Astrophysics Data System (ADS)

    Flach, Milan; Mahecha, Miguel; Gans, Fabian; Rodner, Erik; Bodesheim, Paul; Guanche-Garcia, Yanira; Brenning, Alexander; Denzler, Joachim; Reichstein, Markus

    2016-04-01

    The number of available Earth observations (EOs) is currently substantially increasing. Detecting anomalous patterns in these multivariate time series is an important step in identifying changes in the underlying dynamical system. Likewise, data quality issues might result in anomalous multivariate data constellations and have to be identified before corrupting subsequent analyses. In industrial application a common strategy is to monitor production chains with several sensors coupled to some statistical process control (SPC) algorithm. The basic idea is to raise an alarm when these sensor data depict some anomalous pattern according to the SPC, i.e. the production chain is considered 'out of control'. In fact, the industrial applications are conceptually similar to the on-line monitoring of EOs. However, algorithms used in the context of SPC or process monitoring are rarely considered for supervising multivariate spatio-temporal Earth observations. The objective of this study is to exploit the potential and transferability of SPC concepts to Earth system applications. We compare a range of different algorithms typically applied by SPC systems and evaluate their capability to detect e.g. known extreme events in land surface processes. Specifically two main issues are addressed: (1) identifying the most suitable combination of data pre-processing and detection algorithm for a specific type of event and (2) analyzing the limits of the individual approaches with respect to the magnitude, spatio-temporal size of the event as well as the data's signal to noise ratio. Extensive artificial data sets that represent the typical properties of Earth observations are used in this study. Our results show that the majority of the algorithms used can be considered for the detection of multivariate spatiotemporal events and directly transferred to real Earth observation data as currently assembled in different projects at the European scale, e.g. http://baci-h2020.eu
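
    One classical multivariate SPC statistic of the kind alluded to above is Hotelling's T², which measures how far each multivariate observation lies from a baseline period. The sketch below applies it to a synthetic three-variable series with one injected anomaly; using a chi-squared quantile as the control limit is a simplifying assumption.

    ```python
    import numpy as np
    from scipy import stats

    def hotelling_t2(window, baseline):
        """Hotelling's T^2 of each observation in `window` relative to `baseline`."""
        mu = baseline.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
        diff = window - mu
        return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

    # Synthetic 3-variable "Earth observation" series with an injected anomaly.
    rng = np.random.default_rng(4)
    baseline = rng.multivariate_normal(np.zeros(3), np.eye(3), size=500)
    window = rng.multivariate_normal(np.zeros(3), np.eye(3), size=50)
    window[25] += 6.0                                  # anomalous data constellation
    t2 = hotelling_t2(window, baseline)
    threshold = stats.chi2.ppf(0.999, df=3)            # approximate control limit
    print(np.where(t2 > threshold)[0])                 # typically flags only index 25
    ```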

  13. A critique of statistical hypothesis testing in clinical research

    PubMed Central

    Raha, Somik

    2011-01-01

    Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are that of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability on an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. As a big reason for the prevalence of RCTs in academia is legislation requiring it, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152

  14. Bayesian truthing and experimental validation in homeland security and defense

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Forrester, Thomas; Wang, Wenjian; Kostrzewski, Andrew; Pradhan, Ranjit

    2014-05-01

    In this paper we discuss relations between Bayesian Truthing (experimental validation), Bayesian statistics, and Binary Sensing in the context of selected Homeland Security and Intelligence, Surveillance, Reconnaissance (ISR) optical and nonoptical application scenarios. The basic Figure of Merit (FoM) is Positive Predictive Value (PPV), as well as false positives and false negatives. By using these simple binary statistics, we can analyze, classify, and evaluate a broad variety of events including: ISR; natural disasters; QC; and terrorism-related, GIS-related, law enforcement-related, and other C3I events.
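
    The core figure of merit is Bayes' theorem applied to a binary detector. The sketch below computes the PPV for an invented screening scenario and shows the familiar effect that, for rare events, even an accurate detector produces mostly false alarms.

    ```python
    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Bayes' theorem for a binary detector: PPV = P(event | alarm)."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Hypothetical screening scenario: a rare threat (0.1% prevalence) and a
    # detector with 95% sensitivity and 99% specificity.
    print(positive_predictive_value(0.95, 0.99, 0.001))   # ~0.087: most alarms are false
    ```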

  15. Bayesian parameter estimation for effective field theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.

    2016-07-01

    We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.

  16. Advanced Bayesian Method for Planetary Surface Navigation

    NASA Technical Reports Server (NTRS)

    Center, Julian

    2015-01-01

    Autonomous Exploration, Inc., has developed an advanced Bayesian statistical inference method that leverages current computing technology to produce a highly accurate surface navigation system. The method combines dense stereo vision and high-speed optical flow to implement visual odometry (VO) to track faster rover movements. The Bayesian VO technique improves performance by using all image information rather than corner features only. The method determines what can be learned from each image pixel and weighs the information accordingly. This capability improves performance in shadowed areas that yield only low-contrast images. The error characteristics of the visual processing are complementary to those of a low-cost inertial measurement unit (IMU), so the combination of the two capabilities provides highly accurate navigation. The method increases NASA mission productivity by enabling faster rover speed and accuracy. On Earth, the technology will permit operation of robots and autonomous vehicles in areas where the Global Positioning System (GPS) is degraded or unavailable.

  17. Quantum-like Representation of Bayesian Updating

    NASA Astrophysics Data System (ADS)

    Asano, Masanari; Ohya, Masanori; Tanaka, Yoshiharu; Khrennikov, Andrei; Basieva, Irina

    2011-03-01

Recently, applications of quantum mechanics to cognitive psychology have been discussed, see [1]-[11]. It has been shown that statistical data obtained in some experiments of cognitive psychology cannot be described by a classical probability model (Kolmogorov's model) [12]-[15]. Quantum probability is one of the most advanced mathematical models for non-classical probability. In [11], we proposed a quantum-like model describing the decision-making process in a two-player game, where we used the generalized quantum formalism based on lifting of density operators [16]. In this paper, we discuss the quantum-like representation of Bayesian inference, which has been used to calculate probabilities for decision making under uncertainty. The uncertainty is described in the form of quantum superposition, and Bayesian updating is explained as a reduction of the state by quantum measurement.
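
    For contrast with the quantum-like representation, classical Bayesian updating over a discrete set of hypotheses is a direct application of Bayes' theorem. A minimal sketch with hypothetical prior and likelihood values (not taken from the paper):

      import numpy as np

      # hypothetical prior beliefs over two hypotheses about the other player's strategy
      prior = np.array([0.6, 0.4])          # P(H1), P(H2)
      likelihood = np.array([0.2, 0.7])     # P(observed action | H1), P(observed action | H2)

      posterior = prior * likelihood        # numerator of Bayes' theorem
      posterior /= posterior.sum()          # normalize over the hypotheses
      print(np.round(posterior, 3))         # updated beliefs after observing the action

    In the quantum-like representation, the same update is instead modeled as the reduction of a superposition state by measurement, which is what allows it to accommodate data that violate the classical (Kolmogorov) rules.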

  18. Capability index--a statistical process control tool to aid in udder health control in dairy herds.

    PubMed

    Niza-Ribeiro, J; Noordhuizen, J P T M; Menezes, J C

    2004-08-01

Bulk milk somatic cell count (BMSCC) averages have been used to evaluate udder health at both the individual and the herd level, as well as milk quality and hygiene. The authors show that the BMSCC average is not the best tool for udder health control programs and that it can be advantageously replaced by the capability index (Cpk). The Cpk is a statistical process control tool traditionally used by engineers to validate, monitor, and predict the expected behavior of processes or machines. The BMSCC data of 13 consecutive months of production from 414 dairy herds, as well as SCC from all cows in the DHI program from 264 herds in the same period, were collected. The Cpk and the annual BMSCC average (AAVG) of all the herds were calculated. When the herd performance described by the Cpk and the AAVG was compared against the European Union (EU) official limit for BMSCC of 400,000 cells/mL, the Cpk accurately classified the compliance of the 414 farms, whereas the AAVG misclassified 166 (40%) of the 414 selected farms. The annual prevalence of subclinical mastitis (SMP) of each herd was calculated with individual SCC data from the same 13-mo period. Cows with more than 200,000 SCC/mL were considered as having subclinical mastitis. A logistic regression model relating the Cpk to the herd's subclinical mastitis prevalence was calculated. The model is: SMPe = 0.475 · e^(−0.5286 × Cpk). The validation of the model was carried out by evaluating the relation between the observed SMP and the predicted SMPe, in terms of the linear correlation coefficient (R²) and the mean difference between SMP and SMPe (i.e., mean square error of prediction). The validation suggests that our model can be used to estimate the herd's SMP from the herd's Cpk. The Cpk equation relates the herd's BMSCC to the EU official SCC limit; thus, the logistic regression model enables the adoption of critical limits for subclinical mastitis that take the legal standard for SCC into consideration.
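
    As an illustration of the quantities involved, a one-sided capability index against the EU upper limit and the fitted prevalence model can be computed as below. The monthly BMSCC values are hypothetical, and the paper may compute Cpk with different conventions (e.g., on transformed data); only the EU limit of 400,000 cells/mL and the model SMPe = 0.475 · e^(−0.5286 × Cpk) come from the abstract:

      import numpy as np

      # 13 monthly bulk-milk SCC values for one herd (hypothetical data, cells/mL)
      bmscc = 1000.0 * np.array([310, 280, 350, 295, 330, 305, 290, 360, 340, 300, 315, 325, 345])
      usl = 400_000.0                          # EU official limit for BMSCC

      mean, sd = bmscc.mean(), bmscc.std(ddof=1)
      cpk = (usl - mean) / (3.0 * sd)          # one-sided capability index against the upper limit
      smp_e = 0.475 * np.exp(-0.5286 * cpk)    # predicted subclinical mastitis prevalence

      print(f"Cpk = {cpk:.2f}, predicted SMP = {smp_e:.1%}")

    A herd whose BMSCC mean sits well below the limit with little month-to-month variation gets a high Cpk and, through the model, a low predicted prevalence of subclinical mastitis.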

  19. Bayesian Methods for Analyzing Structural Equation Models with Covariates, Interaction, and Quadratic Latent Variables

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Tang, Nian-Sheng

    2007-01-01

    The analysis of interaction among latent variables has received much attention. This article introduces a Bayesian approach to analyze a general structural equation model that accommodates the general nonlinear terms of latent variables and covariates. This approach produces a Bayesian estimate that has the same statistical optimal properties as a…

  20. Statistical Characteristics of Experimental Geysers: Factors Controlling Mass and Style of Eruption

    NASA Astrophysics Data System (ADS)

    Toramaru, A.; Maeda, K.

    2011-12-01

to the volume of empty space at the conduit top. Once the eruption is triggered, a mass proportional to the number fraction φ_D of parcels with temperature higher than the decompression boiling point in the absence of conduit water is erupted. Assuming that the rectangular area from the top down to the depth n(φ_S + φ_D) of the square flask is erupted, the erupted mass is calculated as n × n(φ_S + φ_D). The sum of φ_S and φ_D over the enclosed area is defined as the explosive mass, which contributes to the explosivity of the eruption, and the explosivity index (EI) is calculated as the mass ratio (explosive mass)/(erupted mass). Presuming a specific system, we carried out Monte Carlo simulations, varying the average and variance of the temperature PDF as parameters, to obtain the statistical properties of the erupted mass and EI. As a result, we find that a system with a higher average temperature and smaller variance produces mostly explosive eruptions with larger mass and a Gaussian-type frequency distribution. Thus, from the results of laboratory experiments and simulations, we conclude that the spatial heterogeneity of the supersaturated state or temperature in the chamber is a key factor controlling the eruption style and erupted mass.

  1. Hierarchical Approximate Bayesian Computation

    PubMed Central

    Turner, Brandon M.; Van Zandt, Trisha

    2013-01-01

    Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
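
    The core ABC idea, stripped of the hierarchical and Gibbs machinery discussed in the paper, is rejection sampling: draw parameters from the prior, simulate data from the model, and keep the draws whose simulated summary statistics fall within a tolerance of the observed ones. A minimal non-hierarchical sketch for a binomial model (all numbers hypothetical):

      import numpy as np

      rng = np.random.default_rng(1)
      x_obs, n = 63, 100                        # observed number of successes out of 100 trials

      n_draws, eps = 200_000, 2                 # prior draws and tolerance on the summary statistic
      theta = rng.uniform(0.0, 1.0, n_draws)    # draw parameters from a Uniform(0, 1) prior
      x_sim = rng.binomial(n, theta)            # simulate one data set per parameter draw
      accepted = theta[np.abs(x_sim - x_obs) <= eps]   # keep draws whose simulations match the data

      print(f"accepted {accepted.size} draws; approximate posterior mean = {accepted.mean():.3f}")

    The hierarchical and Gibbs ABC approaches discussed in the paper build on this idea while avoiding the poor acceptance rates that a single high-dimensional rejection step would suffer.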

  2. Quantum Bayesian implementation

    NASA Astrophysics Data System (ADS)

    Wu, Haoyang

    2013-02-01

Mechanism design is a reverse problem of game theory. Nash implementation and Bayesian implementation are two important parts of mechanism design theory. The former corresponds to a setting with complete information, whereas the latter corresponds to a setting with incomplete information. A recent work by Wu (Int J Quantum Inf 9:615-623, 2011) shows that when an additional condition is satisfied, the traditional sufficient conditions for Nash implementation fail in a quantum domain. Inspired by this work, in this paper we propose that the traditional sufficient conditions for Bayesian implementation will also fail if agents use quantum strategies to send messages to the designer through channels (e.g., the Internet, cable, etc.) and two additional conditions are satisfied.

  3. Using Statistical Process Control Charts to Identify the Steroids Era in Major League Baseball: An Educational Exercise

    ERIC Educational Resources Information Center

    Hill, Stephen E.; Schvaneveldt, Shane J.

    2011-01-01

    This article presents an educational exercise in which statistical process control charts are constructed and used to identify the Steroids Era in American professional baseball. During this period (roughly 1993 until the present), numerous baseball players were alleged or proven to have used banned, performance-enhancing drugs. Also observed…

  4. Economic Statistical Design of Integrated X-bar-S Control Chart with Preventive Maintenance and General Failure Distribution

    PubMed Central

    Caballero Morales, Santiago Omar

    2013-01-01

Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices for achieving high product quality, a low frequency of failures, and cost reduction in a production process. However, some points about their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider only the economic aspect, whereas statistical restrictions must also be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, and reductions in the sampling frequency of units for testing under SPC. PMID:23527082

  5. Economic Statistical Design of integrated X-bar-S control chart with Preventive Maintenance and general failure distribution.

    PubMed

    Caballero Morales, Santiago Omar

    2013-01-01

Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices for achieving high product quality, a low frequency of failures, and cost reduction in a production process. However, some points about their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider only the economic aspect, whereas statistical restrictions must also be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, and reductions in the sampling frequency of units for testing under SPC.
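
    For reference, the basic construction of joint X-bar and S charts, before any economic-statistical optimization of sample size, sampling interval, or limit widths, looks as follows. This sketch uses simulated in-control data and the standard chart constants for subgroups of size 5; it does not reproduce the paper's ESD cost model or its preventive-maintenance integration:

      import numpy as np

      rng = np.random.default_rng(2)
      x = rng.normal(loc=10.0, scale=0.2, size=(25, 5))   # 25 subgroups of 5 simulated measurements

      xbar = x.mean(axis=1)            # subgroup means
      s = x.std(axis=1, ddof=1)        # subgroup standard deviations
      xbarbar, sbar = xbar.mean(), s.mean()

      A3, B3, B4 = 1.427, 0.0, 2.089   # control-chart constants for subgroup size n = 5
      xbar_lcl, xbar_ucl = xbarbar - A3 * sbar, xbarbar + A3 * sbar
      s_lcl, s_ucl = B3 * sbar, B4 * sbar

      print(f"X-bar chart: CL={xbarbar:.3f}, LCL={xbar_lcl:.3f}, UCL={xbar_ucl:.3f}")
      print(f"S chart:     CL={sbar:.3f}, LCL={s_lcl:.3f}, UCL={s_ucl:.3f}")
      print("subgroups signalling on the X-bar chart:",
            np.flatnonzero((xbar > xbar_ucl) | (xbar < xbar_lcl)))

    An economic-statistical design would then choose the subgroup size, sampling interval, and limit widths to minimize expected cost subject to constraints on the false-alarm probability, which is the optimization the paper formulates.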

  6. Treatment of control data in lunar phototriangulation. [application of statistical procedures and development of mathematical and computer techniques

    NASA Technical Reports Server (NTRS)

    Wong, K. W.

    1974-01-01

In lunar phototriangulation, there is a complete lack of accurate ground control points. The accuracy analysis of the results of lunar phototriangulation must, therefore, be completely dependent on statistical procedures. It was the objective of this investigation to examine the validity of the commonly used statistical procedures, and to develop both mathematical techniques and computer software for evaluating (1) the accuracy of lunar phototriangulation; (2) the contribution of the different types of photo support data to the accuracy of lunar phototriangulation; (3) the accuracy of absolute orientation as a function of the accuracy and distribution of both the ground and model points; and (4) the relative slope accuracy between any triangulated pass points.

  7. The frequentist implications of optional stopping on Bayesian hypothesis tests.

    PubMed

    Sanborn, Adam N; Hills, Thomas T

    2014-04-01

Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme as or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
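
    The basic phenomenon is easy to reproduce by simulation: generate data under a point null, compute a Bayes factor after every observation, and stop as soon as it exceeds a threshold. The sketch below uses a normal model with known variance and a normal prior on the mean under H1; this is the simple point-null setting where the martingale bound applies, not the composite or heterogeneous settings the paper shows to be more problematic:

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      n_sims, n_max, threshold, prior_sd = 5_000, 500, 3.0, 1.0

      def reaches_threshold(x):
          """True if BF10 ever reaches the threshold under optional stopping (a look after each observation)."""
          n = np.arange(1, x.size + 1)
          xbar = np.cumsum(x) / n
          # BF10 for H1: mu ~ N(0, prior_sd^2) versus H0: mu = 0, with known sigma = 1
          bf10 = (norm.pdf(xbar, 0.0, np.sqrt(prior_sd**2 + 1.0 / n))
                  / norm.pdf(xbar, 0.0, np.sqrt(1.0 / n)))
          return np.any((bf10 >= threshold) & (n >= 10))   # require at least 10 observations

      data = rng.normal(0.0, 1.0, (n_sims, n_max))          # every data set is generated under H0
      rate = np.mean([reaches_threshold(x) for x in data])
      print(f"proportion of null simulations ever reaching BF10 >= {threshold}: {rate:.3f}")

    Under this well-specified point-null prior the crossing probability stays below 1/threshold; the paper's point is that this guarantee can fail once the hypotheses are composite or the population is heterogeneous.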

  8. Bayesian Quantitative Electrophysiology and Its Multiple Applications in Bioengineering

    PubMed Central

    Barr, Roger C.; Nolte, Loren W.; Pollard, Andrew E.

    2014-01-01

    Bayesian interpretation of observations began in the early 1700s, and scientific electrophysiology began in the late 1700s. For two centuries these two fields developed mostly separately. In part that was because quantitative Bayesian interpretation, in principle a powerful method of relating measurements to their underlying sources, often required too many steps to be feasible with hand calculation in real applications. As computer power became widespread in the later 1900s, Bayesian models and interpretation moved rapidly but unevenly from the domain of mathematical statistics into applications. Use of Bayesian models now is growing rapidly in electrophysiology. Bayesian models are well suited to the electrophysiological environment, allowing a direct and natural way to express what is known (and unknown) and to evaluate which one of many alternatives is most likely the source of the observations, and the closely related receiver operating characteristic (ROC) curve is a powerful tool in making decisions. Yet, in general, many people would ask what such models are for, in electrophysiology, and what particular advantages such models provide. So to examine this question in particular, this review identifies a number of electrophysiological papers in bio-engineering arising from questions in several organ systems to see where Bayesian electrophysiological models or ROC curves were important to the results that were achieved. PMID:22275206

  9. Bayesian approaches in medical device clinical trials: a discussion with examples in the regulatory setting.

    PubMed

    Bonangelino, Pablo; Irony, Telba; Liang, Shengde; Li, Xuefeng; Mukhi, Vandana; Ruan, Shiling; Xu, Yunling; Yang, Xiting; Wang, Chenguang

    2011-09-01

    Challenging statistical issues often arise in the design and analysis of clinical trials to assess safety and effectiveness of medical devices in the regulatory setting. The use of Bayesian methods in the design and analysis of medical device clinical trials has been increasing significantly in the past decade, not only due to the availability of prior information, but mainly due to the appealing nature of Bayesian clinical trial designs. The Center for Devices and Radiological Health at the Food and Drug Administration (FDA) has gained extensive experience with the use of Bayesian statistical methods and has identified some important issues that need further exploration. In this article, we discuss several topics relating to the use of Bayesian statistical methods in medical device trials, based on our experience and real applications. We illustrate the benefits and challenges of Bayesian approaches when incorporating prior information to evaluate the effectiveness and safety of a medical device. We further present an example of a Bayesian adaptive clinical trial and compare it to a traditional frequentist design. Finally, we discuss the use of Bayesian hierarchical models for multiregional trials and highlight the advantages of the Bayesian approach when specifying clinically relevant study hypotheses.

  10. What's the best statistic for a simple test of genetic association in a case-control study?

    PubMed

    Kuo, Chia-Ling; Feingold, Eleanor

    2010-04-01

Genome-wide genetic association studies typically start with univariate statistical tests of each marker. In principle, this single-SNP scanning is statistically straightforward: the testing is done with standard methods (e.g., chi-square tests, regression) that have been well studied for decades. However, a number of different tests and testing procedures can be used. In a case-control study, one can use a 1 df allele-based test, a 1 or 2 df genotype-based test, or a compound procedure that combines two or more of these statistics. Additionally, most of the tests can be performed with or without covariates included in the model. While there are a number of statistical papers that make power comparisons among subsets of these methods, none has comprehensively tackled the question of which of the methods in common use is best suited to univariate scanning in a genome-wide association study. In this paper, we consider a wide variety of realistic test procedures, and first compare the power of the different procedures to detect a single locus under different genetic models. We then address the question of whether or when it is a good idea to include covariates in the analysis. We conclude that the most commonly used approach to handling covariates (modeling covariate main effects but not interactions) is almost never a good idea. Finally, we consider the performance of the statistics in a genome scan context.
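
    The 1 df allele-based and 2 df genotype-based tests mentioned above can both be run as chi-square tests on contingency tables. A sketch with hypothetical genotype counts; note that the allele-based test counts two alleles per person and leans on Hardy-Weinberg-type assumptions:

      import numpy as np
      from scipy.stats import chi2_contingency

      # hypothetical genotype counts (AA, Aa, aa) for cases and controls
      cases = np.array([60, 110, 30])
      controls = np.array([90, 95, 15])

      # 2 df genotype-based test on the 2 x 3 genotype table
      chi2_g, p_g, df_g, _ = chi2_contingency(np.vstack([cases, controls]))

      # 1 df allele-based test: each person contributes two alleles
      def allele_counts(g):
          return np.array([2 * g[0] + g[1], g[1] + 2 * g[2]])   # counts of the A and a alleles

      chi2_a, p_a, df_a, _ = chi2_contingency(np.vstack([allele_counts(cases),
                                                         allele_counts(controls)]))

      print(f"genotype test: chi2={chi2_g:.2f}, df={df_g}, p={p_g:.4f}")
      print(f"allele test:   chi2={chi2_a:.2f}, df={df_a}, p={p_a:.4f}")

    Covariates enter through logistic regression rather than contingency tables, which is where the paper's warning about modeling main effects without interactions becomes relevant.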

  11. Trends in epidemiology in the 21st century: time to adopt Bayesian methods.

    PubMed

    Martinez, Edson Zangiacomi; Achcar, Jorge Alberto

    2014-04-01

2013 marked the 250th anniversary of the presentation of Bayes' theorem by the philosopher Richard Price. Thomas Bayes was a figure little known in his own time, but in the 20th century the theorem that bears his name became widely used in many fields of research. Bayes' theorem is the basis of the so-called Bayesian methods, an approach to statistical inference that allows studies to incorporate prior knowledge about relevant data characteristics into statistical analysis. Nowadays, Bayesian methods are widely used in many different areas such as astronomy, economics, marketing, genetics, bioinformatics and social sciences. This study observed that a number of authors have discussed recent advances in techniques and the advantages of Bayesian methods for the analysis of epidemiological data. This article presents an overview of Bayesian methods, their application to epidemiological research and the main areas of epidemiology that should benefit from the use of Bayesian methods in the coming years.

  12. Personalized Multi-Student Improvement Based on Bayesian Cybernetics

    ERIC Educational Resources Information Center

    Kaburlasos, Vassilis G.; Marinagi, Catherine C.; Tsoukalas, Vassilis Th.

    2008-01-01

    This work presents innovative cybernetics (feedback) techniques based on Bayesian statistics for drawing questions from an Item Bank towards personalized multi-student improvement. A novel software tool, namely "Module for Adaptive Assessment of Students" (or, "MAAS" for short), implements the proposed (feedback) techniques. In conclusion, a pilot…

  13. Model Criticism of Bayesian Networks with Latent Variables.

    ERIC Educational Resources Information Center

    Williamson, David M.; Mislevy, Robert J.; Almond, Russell G.

This study investigated statistical methods for identifying errors in Bayesian networks (BNs) with latent variables, as found in intelligent cognitive assessments. BNs, commonly used in artificial intelligence systems, are promising mechanisms for scoring constructed-response examinations. The success of an intelligent assessment or tutoring system…

  14. Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum

    2011-01-01

Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the L_ν-measure, for both semiparametric and parametric structural equation models. For illustration purposes, we consider…

  15. Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization

    ERIC Educational Resources Information Center

    Gelman, Andrew; Lee, Daniel; Guo, Jiqiang

    2015-01-01

Stan is a free and open-source C++ program that performs Bayesian inference or optimization for arbitrary user-specified models. It can be called from the command line, R, Python, Matlab, or Julia, and has great promise for fitting large and complex statistical models in many areas of application. We discuss Stan from users' and developers'…

  16. A Comparison of Imputation Methods for Bayesian Factor Analysis Models

    ERIC Educational Resources Information Center

    Merkle, Edgar C.

    2011-01-01

Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…

  17. [Logistic regression against a divergent Bayesian network].

    PubMed

    Sánchez Trujillo, Noel Antonio

    2015-02-03

This article is a discussion of two statistical tools used for prediction and causality assessment: logistic regression and Bayesian networks. Using data from a simulated example from a study assessing factors that might predict pulmonary emphysema (where fingertip pigmentation and smoking are considered), we posed the following questions. Is pigmentation a confounding, causal or predictive factor? Is there perhaps another factor, like smoking, that confounds? Is there a synergy between pigmentation and smoking? The results, in terms of prediction, are similar for the two techniques; regarding causation, differences arise. We conclude that, in decision-making, the combination of a statistical tool used with common sense and prior evidence, which may take years or even centuries to accumulate, is better than the automatic and exclusive use of statistical resources.

  18. Using statistical process control to demonstrate the effect of operational interventions on quality indicators in the emergency department.

    PubMed

    Schwab, R A; DelSorbo, S M; Cunningham, M R; Craven, K; Watson, W A

    1999-01-01

When our emergency department (ED) initiated a continuous quality improvement (CQI) program, we selected as a quality indicator the percentage of patients leaving without being seen (LWBS) by a physician. Because the primary reason for LWBS patients was determined to be dissatisfaction with waiting time, we devised four interventions in clinical operations to decrease delays in patient flow through the ED. Statistical process control (SPC) methodology was then used to assess the effect of these interventions. Because baseline data were available, we constructed control charts plotting the percentage of LWBS patients against consecutive months, beginning in January 1990, with the mean percentage of LWBS patients and upper and lower control limits. Postintervention data, plotted using control statistics from the baseline period, demonstrated sustained special-cause variation, indicating a fundamental change in the overall system. A new control chart was then constructed using postintervention data. A significantly lower mean percentage LWBS and a narrowed control limit range were observed, leading to the conclusion that the interventions improved the quality of care as measured by a reduction in the percentage LWBS.
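
    Because the LWBS indicator is a proportion and the monthly patient volume varies, a p-chart with volume-dependent limits is a natural way to reproduce this kind of analysis; the abstract does not state the exact chart type used, so the sketch below, with hypothetical monthly counts, is only illustrative:

      import numpy as np

      # hypothetical baseline year: monthly ED visits and LWBS counts
      visits = np.array([3100, 2950, 3200, 3050, 3300, 3150, 3250, 3400, 3100, 3000, 2900, 3050])
      lwbs = np.array([155, 142, 170, 160, 172, 158, 165, 180, 150, 140, 138, 152])

      p = lwbs / visits
      pbar = lwbs.sum() / visits.sum()             # centre line: overall LWBS proportion
      sigma = np.sqrt(pbar * (1 - pbar) / visits)  # binomial standard error, varies with volume
      ucl = pbar + 3 * sigma
      lcl = np.clip(pbar - 3 * sigma, 0.0, None)

      for month, (pi, lo, hi) in enumerate(zip(p, lcl, ucl), start=1):
          flag = "  <-- special-cause signal" if (pi > hi or pi < lo) else ""
          print(f"month {month:2d}: p = {pi:.3f}, limits = ({lo:.3f}, {hi:.3f}){flag}")

    Post-intervention months plotted against these baseline limits would show a sustained run below the centre line if the interventions worked, which is the special-cause pattern the authors report.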

  19. Bayesian Approach for Inconsistent Information

    PubMed Central

    Stein, M.; Beer, M.; Kreinovich, V.

    2013-01-01

In engineering situations, we usually have a large amount of prior knowledge that needs to be taken into account when processing data. Traditionally, the Bayesian approach is used to process data in the presence of prior knowledge. Sometimes, when we apply traditional Bayesian techniques to engineering data, we get inconsistencies between the data and the prior knowledge. These inconsistencies are usually caused by the fact that in the traditional approach, we assume that we know the exact sample values, that the prior distribution is exactly known, etc. In reality, the data are imprecise due to measurement errors, the prior knowledge is only approximately known, etc. So, a natural way to deal with the seemingly inconsistent information is to take this imprecision into account in the Bayesian approach, e.g., by using fuzzy techniques. In this paper, we describe several possible scenarios for fuzzifying the Bayesian approach. Particular attention is paid to the interaction between the estimated imprecise parameters. In this paper, to implement the corresponding fuzzy versions of the Bayesian formulas, we use straightforward computations of the related expressions, which makes our computations rather time-consuming. Computations in the traditional (non-fuzzy) Bayesian approach are much faster because they use algorithmically efficient reformulations of the Bayesian formulas. We expect that similar reformulations of the fuzzy Bayesian formulas will also drastically decrease the computation time and thus enhance the practical use of the proposed methods. PMID:24089579

  20. Searching Algorithm Using Bayesian Updates

    ERIC Educational Resources Information Center

    Caudle, Kyle

    2010-01-01

In late October 1967, the USS Scorpion was lost at sea, somewhere between the Azores and Norfolk, Virginia. Dr. Craven of the U.S. Navy's Special Projects Division is credited with using Bayesian Search Theory to locate the submarine. Bayesian Search Theory is a straightforward and interesting application of Bayes' theorem which involves searching…
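
    The core update in Bayesian search theory is simple: after an unsuccessful search of a cell, multiply that cell's prior probability by the probability of missing the target had it been there, then renormalize. A toy grid with hypothetical numbers (not the actual Scorpion search data):

      import numpy as np

      # hypothetical prior probability that the wreck lies in each cell of a small search grid
      prior = np.array([[0.05, 0.10, 0.05],
                        [0.10, 0.30, 0.15],
                        [0.05, 0.15, 0.05]])
      prior = prior / prior.sum()
      p_detect = 0.8       # chance of finding the wreck if the correct cell is searched

      def update_after_failed_search(belief, cell, p_detect):
          """Posterior over cells after searching `cell` and finding nothing (Bayes' theorem)."""
          post = belief.copy()
          post[cell] *= (1.0 - p_detect)   # likelihood of a miss given the wreck is there
          return post / post.sum()         # renormalize over all cells

      cell = np.unravel_index(np.argmax(prior), prior.shape)   # search the most probable cell first
      posterior = update_after_failed_search(prior, cell, p_detect)
      print("searched cell:", cell)
      print(np.round(posterior, 3))

    Repeating the search-and-update loop concentrates probability in the unsearched cells and in cells searched with imperfect detection, which is how such a search progressively narrows its area.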

  1. A Bayesian network approach to determine environmental factors controlling Karenia selliformis occurrences and blooms in the Gulf of Gabès, Tunisia.

    PubMed

    Feki-Sahnoun, Wafa; Hamza, Asma; Njah, Hasna; Barraj, Nouha; Mahfoudi, Mabrouka; Rebai, Ahmed; Hassen, Malika Bel

    2017-03-01

A Bayesian network modeling framework is introduced to explore the effect of physical and meteorological factors on the red-tide-forming dinoflagellate Karenia selliformis at various sampling sites of the national phytoplankton monitoring program. The proposed models took into account the effects of the physical environment (salinity, temperature and tide amplitude), meteorological constraints (evaporation, air temperature, insolation, rainfall, atmospheric pressure and humidity), and sampling months and sites on both Karenia selliformis occurrences and blooms. The models produced plausible results and enabled the identification of the factors that directly impacted the species' occurrences and concentration levels. Sampling site was the dominant factor for species occurrences. The models show that the relationship between salinity and Karenia selliformis is more apparent when the focus is on species concentrations, and that bloom occurrences can be predicted based on salinity. Concentrations up to 10^5 cells L^-1 were recorded when salinity exceeded 42.5, predominantly in shallow areas with weak water renewal.

  2. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting

    PubMed Central

    2010-01-01

Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In the application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM), which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power, though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions The GAM permutation testing methods

  3. Adaptive Dynamic Bayesian Networks

    SciTech Connect

    Ng, B M

    2007-10-26

    A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.

  4. The Bayesian Covariance Lasso.

    PubMed

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data.

  5. The Bayesian Covariance Lasso

    PubMed Central

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G.

    2012-01-01

Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data. PMID:24551316

  6. Elements of Statistics

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2016-05-01

This chapter is devoted to two objectives. The first is to answer the request expressed by attendees of the first Astrostatistics School (Annecy, October 2013) to be provided with an elementary vademecum of statistics that would facilitate understanding of the courses given. In this spirit we recall very basic notions, that is, definitions and properties that we think are sufficient to benefit from the courses given in the Astrostatistics School. Thus we briefly give definitions and elementary properties of random variables and vectors, distributions, estimation and tests, and maximum likelihood methodology. We intend to present basic ideas in a hopefully comprehensible way. We do not try to give a rigorous presentation and, given the space devoted to this chapter, can cover only a rather limited field of statistics. The second aim is to focus on some statistical tools that are useful in classification: a basic introduction to Bayesian statistics, maximum likelihood methodology, Gaussian vectors and Gaussian mixture models.

  7. A statistical learning strategy for closed-loop control of fluid flows

    NASA Astrophysics Data System (ADS)

    Guéniat, Florimond; Mathelin, Lionel; Hussaini, M. Yousuff

    2016-12-01

This work discusses a closed-loop control strategy for complex systems utilizing scarce and streaming data. A discrete embedding space is first built using hash functions applied to the sensor measurements, from which a Markov process model is derived, approximating the complex system's dynamics. A control strategy is then learned using reinforcement learning once rewards relevant to the control objective are identified. This method is designed for experimental configurations, requiring neither computations nor prior knowledge of the system, and enjoys intrinsic robustness. It is illustrated on two systems: the control of the transitions of a Lorenz '63 dynamical system, and the control of the drag of a cylinder flow. The method is shown to perform well.
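
    Once the sensor stream has been hashed into a small set of discrete states, learning a control policy reduces to tabular reinforcement learning on the induced Markov process. The sketch below uses plain Q-learning on a toy controlled Markov chain that stands in for the discretized dynamics; it is not the paper's algorithm, reward definition, or flow configuration:

      import numpy as np

      rng = np.random.default_rng(4)
      n_states, n_actions = 5, 2

      def step(s, a):
          """Toy dynamics: action 0 tends to push the state right, action 1 left (hypothetical)."""
          intended = 1 if a == 0 else -1
          move = intended if rng.random() < 0.8 else -intended
          s_next = int(np.clip(s + move, 0, n_states - 1))
          reward = 1.0 if s_next == n_states - 1 else 0.0   # reward for reaching the target state
          return s_next, reward

      Q = np.zeros((n_states, n_actions))
      alpha, gamma, eps = 0.1, 0.95, 0.1
      s = 0
      for _ in range(20_000):
          a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
          s_next, r = step(s, a)
          Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])   # Q-learning update
          s = s_next

      print("greedy action per state:", Q.argmax(axis=1))

    In the paper's setting the states come from hashed sensor measurements and the reward encodes the control objective (e.g., suppressing chaotic transitions or reducing drag), but the learning loop has the same structure.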

  8. A Bayesian Approach to Identifying New Risk Factors for Dementia

    PubMed Central

    Wen, Yen-Hsia; Wu, Shihn-Sheng; Lin, Chun-Hung Richard; Tsai, Jui-Hsiu; Yang, Pinchen; Chang, Yang-Pei; Tseng, Kuan-Hua

    2016-01-01

Dementia is one of the most disabling and burdensome health conditions worldwide. In this study, we identified new potential risk factors for dementia from nationwide longitudinal population-based data by using Bayesian statistics. We first tested the consistency of the results obtained using Bayesian statistics with those obtained using classical frequentist probability for 4 recognized risk factors for dementia, namely severe head injury, depression, diabetes mellitus (DM), and vascular diseases. Then, we used Bayesian statistics to verify 2 new potential risk factors for dementia, namely hearing loss and senile cataract, determined from Taiwan's National Health Insurance Research Database. We included a total of 6546 (6.0%) patients diagnosed with dementia. We observed older age, female sex, and lower income as independent risk factors for dementia. Moreover, we verified the 4 recognized risk factors for dementia in the older Taiwanese population; their odds ratios (ORs) ranged from 1.207 to 3.469. Furthermore, we observed that hearing loss (OR = 1.577) and senile cataract (OR = 1.549) were associated with an increased risk of dementia. We found that the results obtained using Bayesian statistics for assessing risk factors for dementia, such as head injury, depression, DM, and vascular diseases, were consistent with those obtained using classical frequentist probability. Moreover, hearing loss and senile cataract were found to be potential risk factors for dementia in the older Taiwanese population. Bayesian statistics could help clinicians explore other potential risk factors for dementia and develop appropriate treatment strategies for these patients. PMID:27227925

  9. Comparisons of neurodegeneration over time between healthy ageing and Alzheimer's disease cohorts via Bayesian inference

    PubMed Central

    Mengersen, Kerrie

    2017-01-01

    Objectives In recent years, large-scale longitudinal neuroimaging studies have improved our understanding of healthy ageing and pathologies including Alzheimer's disease (AD). A particular focus of these studies is group differences and identification of participants at risk of deteriorating to a worse diagnosis. For this, statistical analysis using linear mixed-effects (LME) models are used to account for correlated observations from individuals measured over time. A Bayesian framework for LME models in AD is introduced in this paper to provide additional insight often not found in current LME volumetric analyses. Setting and participants Longitudinal neuroimaging case study of ageing was analysed in this research on 260 participants diagnosed as either healthy controls (HC), mild cognitive impaired (MCI) or AD. Bayesian LME models for the ventricle and hippocampus regions were used to: (1) estimate how the volumes of these regions change over time by diagnosis, (2) identify high-risk non-AD individuals with AD like degeneration and (3) determine probabilistic trajectories of diagnosis groups over age. Results We observed (1) large differences in the average rate of change of volume for the ventricle and hippocampus regions between diagnosis groups, (2) high-risk individuals who had progressed from HC to MCI and displayed similar rates of deterioration as AD counterparts, and (3) critical time points which indicate where deterioration of regions begins to diverge between the diagnosis groups. Conclusions To the best of our knowledge, this is the first application of Bayesian LME models to neuroimaging data which provides inference on a population and individual level in the AD field. The application of a Bayesian LME framework allows for additional information to be extracted from longitudinal studies. This provides health professionals with valuable information of neurodegeneration stages, and a potential to provide a better understanding of disease pathology

  10. Human Balance out of Equilibrium: Nonequilibrium Statistical Mechanics in Posture Control

    NASA Astrophysics Data System (ADS)

    Lauk, Michael; Chow, Carson C.; Pavlik, Ann E.; Collins, James J.

    1998-01-01

    During quiet standing, the human body sways in a stochastic manner. Here we show that the fluctuation-dissipation theorem can be applied to the human postural control system. That is, the dynamic response of the postural system to a weak mechanical perturbation can be predicted from the fluctuations exhibited by the system under quasistatic conditions. We also show that the estimated correlation and response functions can be described by a simple stochastic model consisting of a pinned polymer. These findings suggest that the postural control system utilizes the same control mechanisms under quiet-standing and dynamic conditions.

  11. Active control on high-order coherence and statistic characterization on random phase fluctuation of two classical point sources

    PubMed Central

    Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan

    2016-01-01

Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between the two beams plays the key role in first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we showed that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources. PMID:27021589

  12. Active control on high-order coherence and statistic characterization on random phase fluctuation of two classical point sources.

    PubMed

    Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan

    2016-03-29

Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between the two beams plays the key role in first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we showed that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.

  13. Embedding the results of focussed Bayesian fusion into a global context

    NASA Astrophysics Data System (ADS)

    Sander, Jennifer; Heizmann, Michael

    2014-05-01

    Bayesian statistics offers a well-founded and powerful fusion methodology also for the fusion of heterogeneous information sources. However, except in special cases, the needed posterior distribution is not analytically derivable. As consequence, Bayesian fusion may cause unacceptably high computational and storage costs in practice. Local Bayesian fusion approaches aim at reducing the complexity of the Bayesian fusion methodology significantly. This is done by concentrating the actual Bayesian fusion on the potentially most task relevant parts of the domain of the Properties of Interest. Our research on these approaches is motivated by an analogy to criminal investigations where criminalists pursue clues also only locally. This publication follows previous publications on a special local Bayesian fusion technique called focussed Bayesian fusion. Here, the actual calculation of the posterior distribution gets completely restricted to a suitably chosen local context. By this, the global posterior distribution is not completely determined. Strategies for using the results of a focussed Bayesian analysis appropriately are needed. In this publication, we primarily contrast different ways of embedding the results of focussed Bayesian fusion explicitly into a global context. To obtain a unique global posterior distribution, we analyze the application of the Maximum Entropy Principle that has been shown to be successfully applicable in metrology and in different other areas. To address the special need for making further decisions subsequently to the actual fusion task, we further analyze criteria for decision making under partial information.

  14. The utilization of six sigma and statistical process control techniques in surgical quality improvement.

    PubMed

    Sedlack, Jeffrey D

    2010-01-01

Surgeons have been slow to incorporate industrial reliability techniques. Process control methods were applied to surgeon waiting time between cases and to length of stay (LOS) after colon surgery. Waiting times between surgeries were evaluated by auditing the operating room records of a single hospital over a 1-month period. The medical records of 628 patients undergoing colon surgery over a 5-year period were reviewed. The average surgeon wait time between cases was 53 min, and the busiest surgeon spent 29.5 hours in one month waiting between surgeries. Process control charting demonstrated poor overall control of the room turnover process. Average LOS after colon resection also demonstrated very poor control. Mean LOS was 10 days. Weibull conditional analysis revealed a conditional LOS of 9.83 days. Serious process management problems were identified in both analyses. These process issues are both expensive and adversely affect the quality of service offered by the institution. Process control mechanisms were suggested or implemented to improve these surgical processes. Industrial reliability and quality management tools can easily and effectively identify process control problems that occur on surgical services.

  15. NOTE FROM THE EDITOR: Bayesian and Maximum Entropy Methods Bayesian and Maximum Entropy Methods

    NASA Astrophysics Data System (ADS)

    Dobrzynski, L.

    2008-10-01

The Bayesian and Maximum Entropy Methods are now standard routines in various data analyses, irrespective of one's own preference for the more conventional approach based on the so-called frequentist understanding of the notion of probability. It is not the purpose of the Editor to show all achievements of these methods in the various branches of science, technology and medicine. In the case of condensed matter physics, most of the oldest examples of Bayesian analysis can be found in the excellent tutorial textbooks by Sivia and Skilling [1] and Bretthorst [2], while applications of the Maximum Entropy Methods were described in 'Maximum Entropy in Action' [3]. On the list of questions addressed one finds such problems as deconvolution and reconstruction of complicated spectra, e.g. counting the number of lines hidden within a spectrum observed with necessarily finite resolution, reconstruction of charge, spin and momentum density distributions from incomplete sets of data, etc. On the theoretical side one finds problems like the estimation of interatomic potentials [4], the application of the MEM to quantum Monte Carlo data [5], the Bayesian approach to inverse quantum statistics [6] and, quite generally, to statistical mechanics [7], etc. Obviously, in spite of the power of the Bayesian and Maximum Entropy Methods, not everything can be solved in a unique way by applying these particular methods of analysis, and one of the problems often raised concerns not only the uniqueness of a reconstruction of a given distribution (map) but also its accuracy (error maps). In this 'Comments' section we present a few papers showing more recent advances and views, and highlighting some of the aforementioned problems. References [1] Sivia D S and Skilling J 2006 Data Analysis: A Bayesian Tutorial 2nd edn (Oxford: Oxford University Press) [2] Bretthorst G L 1988 Bayesian Spectrum Analysis and Parameter Estimation (Berlin: Springer) [3] Buck B and

  16. [Prudent use price controls in Chinese medicines market: based on statistical data analysis].

    PubMed

    Yang, Guang; Wang, Nuo; Huang, Lu-Qi; Qiu, Hong-Yan; Guo, Lan-Ping

    2014-01-01

A dispute about the decreasing-price problem of traditional Chinese medicine (TCM) has recently arisen. This article analyzes Chinese statistical data from 1995-2011; the results show that the main driver of expensive health care is not directly related to drug prices. The price index of TCM rose significantly more slowly than medicine prices overall, the production margins of TCM have been shrinking since 1995 as raw material prices have risen, and continuous price reduction will further depress the profits of the TCM industry. Considering that raw materials of widely differing quality also vary greatly in price, forced price reductions would push enterprises to use inferior materials in order to maintain corporate profits. The results offer guidance for medicine price management.

  17. A Bayesian sequential design with binary outcome.

    PubMed

    Zhu, Han; Yu, Qingzhao; Mercante, Donald E

    2017-03-02

Several researchers have proposed solutions to control the type I error rate in sequential designs. The use of Bayesian sequential designs is becoming more common; however, these designs are subject to inflation of the type I error rate. We propose a Bayesian sequential design for a binary outcome that uses an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. A sensitivity analysis is implemented to assess the effects of varying the parameters of the prior distribution and the maximum total sample size on the critical values. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when the total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new stopping rule for futility results in greater power and can stop earlier with a smaller actual sample size, compared with the traditional stopping rule for futility when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
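
    One way to see how an alpha-spending function constrains a sequential design is to calibrate stopping boundaries by Monte Carlo so that the cumulative probability of crossing under the null matches the alpha spent at each look. The sketch below uses a normal-outcome approximation, one-sided boundaries, and an arbitrary spending function alpha * t^2; none of these choices is the paper's binary-outcome Bayesian construction:

      import numpy as np

      def spending(t, alpha=0.05):
          return alpha * t ** 2            # cumulative alpha to be spent by information fraction t

      rng = np.random.default_rng(6)
      n_looks, n_per_look, n_sims = 4, 25, 100_000
      data = rng.normal(size=(n_sims, n_looks, n_per_look))       # trials simulated under the null
      info = np.arange(1, n_looks + 1) * n_per_look
      z = np.cumsum(data.sum(axis=2), axis=1) / np.sqrt(info)     # cumulative z statistic at each look

      crossed = np.zeros(n_sims, dtype=bool)
      bounds = []
      for k in range(n_looks):
          target = spending((k + 1) / n_looks)                    # total alpha allowed so far
          zk = np.where(crossed, np.inf, z[:, k])                 # earlier rejections always count
          b = np.quantile(zk, 1.0 - target)                       # boundary that spends exactly `target`
          bounds.append(b)
          crossed |= z[:, k] >= b

      print("one-sided boundaries:", np.round(bounds, 3))
      print("overall type I error:", crossed.mean())

    The same calibration idea carries over to a Bayesian sequential design: posterior-probability stopping thresholds at each interim analysis can be tuned so that the frequentist crossing probabilities track the chosen spending function.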

  18. Evaluating traditional Chinese medicine using modern clinical trial design and statistical methodology: application to a randomized controlled acupuncture trial.

    PubMed

    Lao, Lixing; Huang, Yi; Feng, Chiguang; Berman, Brian M; Tan, Ming T

    2012-03-30

Traditional Chinese medicine (TCM), used in China and other Asian countries for thousands of years, is increasingly utilized in Western countries. However, due to inherent differences in how Western medicine and this ancient modality are practiced, employing the so-called Western-medicine-based gold standard research methods to evaluate TCM is challenging. This paper is a discussion of the obstacles inherent in the design and statistical analysis of clinical trials of TCM. It is based on our experience in designing and conducting a randomized controlled clinical trial of acupuncture for post-operative dental pain control, in which acupuncture was shown to be statistically significantly better than placebo in lengthening the median survival time to rescue drug. We demonstrate here that the proportional hazards (PH) assumptions of the common Cox model did not hold in that trial and that TCM trials warrant more thoughtful modeling and more sophisticated models of statistical analysis. TCM study design entails all the challenges encountered in trials of drugs, devices, and surgical procedures in Western medicine. We present possible solutions to some but leave many issues unresolved.

  19. Development of Statistical Process Control Methodology for an Environmentally Compliant Surface Cleaning Process in a Bonding Laboratory

    NASA Technical Reports Server (NTRS)

    Hutchens, Dale E.; Doan, Patrick A.; Boothe, Richard E.

    1997-01-01

    Bonding labs at both MSFC and the northern Utah production plant prepare bond test specimens which simulate or witness the production of NASA's Reusable Solid Rocket Motor (RSRM). The current process for preparing the bonding surfaces employs 1,1,1-trichloroethane vapor degreasing, which simulates the current RSRM process. Government regulations (e.g., the 1990 Amendments to the Clean Air Act) have mandated a production phase-out of a number of ozone depleting compounds (ODC) including 1,1,1-trichloroethane. In order to comply with these regulations, the RSRM Program is qualifying a spray-in-air (SIA) precision cleaning process using Brulin 1990, an aqueous blend of surfactants. Accordingly, surface preparation prior to bonding process simulation test specimens must reflect the new production cleaning process. The Bonding Lab Statistical Process Control (SPC) program monitors the progress of the lab and its capabilities, as well as certifies the bonding technicians, by periodically preparing D6AC steel tensile adhesion panels with EA-913NA epoxy adhesive using a standardized process. SPC methods are then used to ensure the process is statistically in control, thus producing reliable data for bonding studies, and to identify any problems which might develop. Since the specimen cleaning process is being changed, new SPC limits must be established. This report summarizes side-by-side testing of D6AC steel tensile adhesion witness panels and tapered double cantilevered beams (TDCBs) using both the current baseline vapor degreasing process and a lab-scale spray-in-air process. A Proceco 26-inch Typhoon dishwasher cleaned both tensile adhesion witness panels and TDCBs in a process which simulates the new production process. The tests were performed six times during 1995, and subsequent statistical analysis of the data established new upper control limits (UCL) and lower control limits (LCL). The data also demonstrated that the new process was equivalent to the vapor
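
    The report's actual control limits are not reproduced in this abstract; the snippet below is only a hedged sketch of how limits for an individuals chart might be computed from a short series of witness-panel results. The tensile values are invented, and the lab's real SPC procedure (e.g. subgrouped X-bar/R charts) may differ.

      # Individuals / moving-range control limits from a small series of test results.
      import numpy as np

      tensile_strength = np.array([4150.0, 4230.0, 4080.0, 4190.0, 4260.0, 4120.0])  # psi, hypothetical
      moving_range = np.abs(np.diff(tensile_strength))
      sigma_est = moving_range.mean() / 1.128      # d2 constant for moving ranges of size 2
      center = tensile_strength.mean()
      ucl, lcl = center + 3 * sigma_est, center - 3 * sigma_est
      print(f"center = {center:.0f} psi, UCL = {ucl:.0f} psi, LCL = {lcl:.0f} psi")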

  20. A Novel Hybrid Statistical Particle Swarm Optimization for Multimodal Functions and Frequency Control of Hybrid Wind-Solar System

    NASA Astrophysics Data System (ADS)

    Verma, Harish Kumar; Jain, Cheshta

    2016-09-01

    In this article, a hybrid algorithm of particle swarm optimization (PSO) with a statistical parameter (HSPSO) is proposed. Basic PSO has low search precision on shifted multimodal problems because it falls into local minima. The proposed approach uses statistical characteristics to update the velocity of each particle, avoiding local minima and helping particles search for the global optimum with improved convergence. The performance of the newly developed algorithm is verified using various standard multimodal, multivariable, shifted hybrid composition benchmark problems. Further, a comparative analysis of HSPSO with variants of PSO is carried out on the problem of controlling the frequency of a hybrid renewable energy system comprising a solar system, wind system, diesel generator, aqua electrolyzer and ultracapacitor. A significant improvement in the convergence characteristic of the HSPSO algorithm over other variants of PSO is observed in solving both the benchmark optimization and the renewable hybrid system problems.

  1. Estimating size and scope economies in the Portuguese water sector using the Bayesian stochastic frontier analysis.

    PubMed

    Carvalho, Pedro; Marques, Rui Cunha

    2016-02-15

    This study searches for economies of size and scope in the Portuguese water sector, applying both Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). The study demonstrates the usefulness and advantages of Bayesian inference in SFA over traditional SFA, which relies solely on classical statistics. The Bayesian methods allow overcoming some problems that arise in the application of traditional SFA, such as bias in small samples and skewness of residuals. In the present case study of the water sector in Portugal, these Bayesian methods provide more plausible and acceptable results. Based on the results obtained, we find that there are important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing to substantial advantages in undertaking mergers by joining the retail and wholesale components and by joining the drinking water and wastewater services.

  2. Measurement of Work Processes Using Statistical Process Control: Instructor’s Manual

    DTIC Science & Technology

    1987-03-01

    quality improvement strategy. Journal of Business Strategy, 5(3), 21-32. Garvin, D. A. (1983). Quality on the line. Harvard Business Review, 61(5), 64... Feigenbaum, A. V. (1957). The challenge of total quality control. Industrial Quality Control, 13(11), 17-23. Feigenbaum, A. V. (1984). The hard road... 75. Garvin, D. A. (1984). What does "product quality" really mean? Sloan Management Review, 26(1), 25-43. Gitlow, H. S., & Hertz, P.

  3. Bayesian supervised dimensionality reduction.

    PubMed

    Gönen, Mehmet

    2013-12-01

    Dimensionality reduction is commonly used as a preprocessing step before training a supervised learner. However, coupled training of dimensionality reduction and supervised learning steps may improve the prediction performance. In this paper, we introduce a simple and novel Bayesian supervised dimensionality reduction method that combines linear dimensionality reduction and linear supervised learning in a principled way. We present both Gibbs sampling and variational approximation approaches to learn the proposed probabilistic model for multiclass classification. We also extend our formulation toward model selection using automatic relevance determination in order to find the intrinsic dimensionality. Classification experiments on three benchmark data sets show that the new model significantly outperforms seven baseline linear dimensionality reduction algorithms on very low dimensions in terms of generalization performance on test data. The proposed model also obtains the best results on an image recognition task in terms of classification and retrieval performances.

  4. Bayesian Cherry Picking Revisited

    NASA Astrophysics Data System (ADS)

    Garrett, Anthony J. M.; Prozesky, Victor M.; Padayachee, J.

    2004-04-01

    Tins are marketed as containing nine cherries. To fill the tins, cherries are fed into a drum containing twelve holes through which air is sucked; either zero, one or two cherries stick in each hole. Dielectric measurements are then made on each hole. Three outcomes are distinguished: empty hole (which is reliable); one cherry (which indicates one cherry with high probability, or two cherries with a complementary low probability known from calibration); or an uncertain number (which also indicates one cherry or two, with known probabilities that are quite similar). A choice can then be made as to which holes should simultaneously discharge their contents into the tin. The sum and product rules of probability are applied in a Bayesian manner to find the distribution of the number of cherries in the tin. Based on this distribution, ways of steering that number toward the target of nine cherries are discussed.
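
    As a small illustration of the sum- and product-rule bookkeeping described above, the sketch below convolves per-hole distributions over one or two cherries to obtain the distribution of the total number discharged into the tin. The probabilities are hypothetical, not the paper's calibration values.

      import numpy as np

      def tin_count_distribution(hole_probs):
          """hole_probs: one dict per selected hole, mapping cherry count -> probability."""
          dist = np.array([1.0])                      # before any hole, P(total = 0) = 1
          for p in hole_probs:
              new = np.zeros(len(dist) + max(p))
              for k, pk in p.items():                 # product rule within a hole,
                  new[k:k + len(dist)] += pk * dist   # sum rule over ways of reaching each total
              dist = new
          return dist

      holes = [{1: 0.9, 2: 0.1}] * 7 + [{1: 0.6, 2: 0.4}] * 2   # nine selected holes (hypothetical)
      dist = tin_count_distribution(holes)
      print({n: round(p, 3) for n, p in enumerate(dist) if p > 1e-3})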

  5. Bayesian inference in geomagnetism

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the Earth's core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.

  6. Identifying the controls of wildfire activity in Namibia using multivariate statistics

    NASA Astrophysics Data System (ADS)

    Mayr, Manuel; Le Roux, Johan; Samimi, Cyrus

    2015-04-01

    data mining techniques to select a conceivable set of variables by their explanatory value and to remove redundancy. We will then apply two multivariate statistical methods suitable to a large variety of data types and frequently used for (non-linear) causative factor identification: Non-metric Multidimensional Scaling (NMDS) and Regression Trees. The assumed value of these analyses is i) to determine the most important predictor variables of fire activity in Namibia, ii) to decipher their complex interactions in driving fire variability in Namibia, and iii) to compare the performance of two state-of-the-art statistical methods. References: Le Roux, J. (2011): The effect of land use practices on the spatial and temporal characteristics of savanna fires in Namibia. Doctoral thesis at the University of Erlangen-Nuremberg/Germany - 155 pages.

  7. Estimating hazardous concentrations by an informative Bayesian approach.

    PubMed

    Ciffroy, Philippe; Keller, Merlin; Pasanisi, Alberto

    2013-03-01

    The species sensitivity distribution (SSD) approach is recommended for assessing chemical risk. In practice, however, it can be used only for the few substances for which large-scale ecotoxicological results are available. Indeed, the statistical frequentist approaches used for building SSDs and for deriving hazardous concentrations (HC5) inherently require extensive data to guarantee goodness-of-fit. An alternative Bayesian approach to estimating HC5 from small data sets was developed. In contrast to the noninformative Bayesian approaches that have been tested to date, the authors' method used informative priors related to the expected species sensitivity variance. This method was tested on actual ecotoxicological data for 21 well-informed substances. A cross-validation compared the HC5 values calculated using frequentist approaches with the results of our Bayesian approach, using both complete and truncated data samples. The authors' informative Bayesian approach was compared with noninformative Bayesian methods published in the past, including those incorporating loss functions. The authors found that even for the truncated sample the HC5 values derived from the informative Bayesian approach were generally close to those obtained using the frequentist approach, which requires more data. In addition, the probability of overestimating an HC5 is rather limited. More robust HC5 estimates can be practically obtained from additional data without impairing regulatory protection levels, which will encourage collecting new ecotoxicological data. In conclusion, the Bayesian informative approach was shown to be relatively robust and could be a good surrogate approach for deriving HC5 values from small data sets.
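
    The authors' exact model is not given in the abstract; the sketch below only illustrates the general shape of an informative-prior HC5 calculation: log-transformed toxicity values are treated as normal (a log-normal SSD), a conjugate informative prior is placed on the variance (standing in for the "expected species sensitivity variance"), and the HC5 is read off as the 5th percentile of the posterior predictive distribution. All data and prior settings are hypothetical.

      import numpy as np
      from scipy.stats import t

      log_tox = np.log10([12.0, 35.0, 8.0, 50.0, 20.0])   # hypothetical species EC50s (ug/L)
      n, xbar, s2 = len(log_tox), log_tox.mean(), log_tox.var(ddof=1)

      # conjugate prior: mu | sigma^2 ~ N(m0, sigma^2 / k0), sigma^2 ~ Inv-Gamma(a0, b0)
      m0, k0, a0, b0 = 1.0, 0.1, 4.0, 1.0                  # informative prior on the variance (hypothetical)

      kn = k0 + n
      mn = (k0 * m0 + n * xbar) / kn
      an = a0 + n / 2.0
      bn = b0 + 0.5 * ((n - 1) * s2 + k0 * n * (xbar - m0) ** 2 / kn)

      # posterior predictive for a new species' log10 sensitivity is a Student-t distribution
      scale = np.sqrt(bn * (kn + 1) / (an * kn))
      hc5 = 10 ** t.ppf(0.05, df=2 * an, loc=mn, scale=scale)
      print("HC5 estimate (ug/L):", hc5)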

  8. Analyzing Data from a Pretest-Posttest Control Group Design: The Importance of Statistical Assumptions

    ERIC Educational Resources Information Center

    Zientek, Linda; Nimon, Kim; Hammack-Brown, Bryn

    2016-01-01

    Purpose: Among the gold standards in human resource development (HRD) research are studies that test theoretically developed hypotheses and use experimental designs. A somewhat typical experimental design would involve collecting pretest and posttest data on individuals assigned to a control or experimental group. Data from such a design that…

  9. Using Statistical Control Charts to Analyze Data from Student Evaluations of Teaching

    ERIC Educational Resources Information Center

    Marks, Neil B.; O'Connell, Richard T.

    2003-01-01

    In this paper, a method for analyzing data from student evaluations of teaching is presented. The first step of the process requires development of a regression model for a teacher's summary rating as a function of students' expected grades. Then, two-sigma control charts for individual evaluation scores (section averages) and residuals from the…
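
    A minimal sketch of the two-step procedure described above, with invented numbers: regress section-average ratings on students' expected grades, then place two-sigma control limits on the residuals to flag unusual sections.

      import numpy as np

      expected_grade = np.array([3.2, 3.5, 3.0, 3.8, 3.4, 3.6, 3.1, 3.7])  # section means (hypothetical)
      rating = np.array([4.0, 4.3, 3.7, 4.6, 4.1, 4.5, 3.6, 4.2])          # summary ratings (hypothetical)

      slope, intercept = np.polyfit(expected_grade, rating, 1)
      residuals = rating - (intercept + slope * expected_grade)
      ucl = 2 * residuals.std(ddof=1)
      lcl = -ucl
      flagged = np.where((residuals > ucl) | (residuals < lcl))[0]
      print("control limits:", (lcl, ucl), "flagged sections:", flagged)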

  10. Fast Bayesian inference of optical trap stiffness and particle diffusion

    PubMed Central

    Bera, Sudipta; Paul, Shuvojit; Singh, Rajesh; Ghosh, Dipanjan; Kundu, Avijit; Banerjee, Ayan; Adhikari, R.

    2017-01-01

    Bayesian inference provides a principled way of estimating the parameters of a stochastic process that is observed discretely in time. The overdamped Brownian motion of a particle confined in an optical trap is generally modelled by the Ornstein-Uhlenbeck process and can be observed directly in experiment. Here we present Bayesian methods for inferring the parameters of this process, the trap stiffness and the particle diffusion coefficient, that use exact likelihoods and sufficient statistics to arrive at simple expressions for the maximum a posteriori estimates. This obviates the need for Monte Carlo sampling and yields methods that are both fast and accurate. We apply these to experimental data and demonstrate their advantage over commonly used non-Bayesian fitting methods. PMID:28139705
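
    The paper's closed-form maximum a posteriori estimators are not reproduced in the abstract. As a hedged stand-in, the sketch below estimates the relaxation rate (trap stiffness over friction) and diffusion coefficient of an Ornstein-Uhlenbeck process from the same kind of sufficient statistics (lag-one products and squares of the sampled positions), and checks the result on a synthetic trajectory; all parameter values are invented.

      # Convention here: dx = -lambda*x dt + sqrt(2*D) dW, so x_{n+1} = a*x_n + eta with
      # a = exp(-lambda*dt) and Var(eta) = (D/lambda)*(1 - a^2).
      import numpy as np

      def ou_estimates(x, dt):
          x0, x1 = x[:-1], x[1:]
          a_hat = np.dot(x0, x1) / np.dot(x0, x0)            # lag-one regression coefficient
          lam_hat = -np.log(a_hat) / dt                      # relaxation rate (stiffness / friction)
          resid_var = np.mean((x1 - a_hat * x0) ** 2)
          d_hat = resid_var * lam_hat / (1.0 - a_hat ** 2)   # diffusion coefficient
          return lam_hat, d_hat

      # synthetic trajectory for a quick self-check
      rng = np.random.default_rng(0)
      dt, lam, diff, n = 1e-3, 50.0, 0.1, 200_000
      a = np.exp(-lam * dt)
      s = np.sqrt(diff / lam * (1.0 - a ** 2))
      noise = s * rng.standard_normal(n)
      x = np.zeros(n)
      for i in range(1, n):
          x[i] = a * x[i - 1] + noise[i]
      print(ou_estimates(x, dt))   # should be close to (50.0, 0.1)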

  11. Using alien coins to test whether simple inference is Bayesian.

    PubMed

    Cassey, Peter; Hawkins, Guy E; Donkin, Chris; Brown, Scott D

    2016-03-01

    Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we asked people for prior and posterior inferences about the probability that 1 of 2 coins would generate certain outcomes. Most participants' inferences were inconsistent with Bayes' rule. Only in the simplest version of the task did the majority of participants adhere to Bayes' rule, but even in that case, there was a significant proportion that failed to do so. The current results highlight the importance of close quantitative comparisons between Bayesian inference and human data at the individual-subject level when evaluating models of cognition.
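
    The normative benchmark against which participants' responses are compared in tasks of this kind is a direct application of Bayes' rule; the sketch below shows that computation for two coins with known biases. The biases, prior and data here are hypothetical, not the experiment's actual values.

      def posterior_coin1(prior_coin1, p1, p2, heads, flips):
          """Posterior probability that coin 1 (bias p1) generated `heads` out of `flips`."""
          lik1 = p1 ** heads * (1 - p1) ** (flips - heads)
          lik2 = p2 ** heads * (1 - p2) ** (flips - heads)
          return prior_coin1 * lik1 / (prior_coin1 * lik1 + (1 - prior_coin1) * lik2)

      print(posterior_coin1(prior_coin1=0.5, p1=0.7, p2=0.3, heads=8, flips=10))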

  12. Fast Bayesian inference of optical trap stiffness and particle diffusion

    NASA Astrophysics Data System (ADS)

    Bera, Sudipta; Paul, Shuvojit; Singh, Rajesh; Ghosh, Dipanjan; Kundu, Avijit; Banerjee, Ayan; Adhikari, R.

    2017-01-01

    Bayesian inference provides a principled way of estimating the parameters of a stochastic process that is observed discretely in time. The overdamped Brownian motion of a particle confined in an optical trap is generally modelled by the Ornstein-Uhlenbeck process and can be observed directly in experiment. Here we present Bayesian methods for inferring the parameters of this process, the trap stiffness and the particle diffusion coefficient, that use exact likelihoods and sufficient statistics to arrive at simple expressions for the maximum a posteriori estimates. This obviates the need for Monte Carlo sampling and yields methods that are both fast and accurate. We apply these to experimental data and demonstrate their advantage over commonly used non-Bayesian fitting methods.

  13. Bayesian failure probability model sensitivity study. Final report

    SciTech Connect

    Not Available

    1986-05-30

    The Office of the Manager, National Communications System (OMNCS) has developed a system-level approach for estimating the effects of High-Altitude Electromagnetic Pulse (HEMP) on the connectivity of telecommunications networks. This approach incorporates a Bayesian statistical model which estimates the HEMP-induced failure probabilities of telecommunications switches and transmission facilities. The purpose of this analysis is to address the sensitivity of the Bayesian model. This is done by systematically varying two model input parameters--the number of observations, and the equipment failure rates. Throughout the study, a non-informative prior distribution is used. The sensitivity of the Bayesian model to the noninformative prior distribution is investigated from a theoretical mathematical perspective.
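
    The OMNCS model itself is not described in detail here; the sketch below only illustrates the generic conjugate calculation that underlies such a sensitivity study: with a non-informative Beta(1, 1) prior on a failure probability and binomial observations, vary the number of observations and the underlying failure rate and watch how the posterior responds. All numbers are hypothetical.

      from scipy.stats import beta

      for n_obs in (5, 20, 100):                        # number of observed trials
          for rate in (0.1, 0.3):                       # assumed underlying failure rate
              failures = round(rate * n_obs)
              post = beta(1 + failures, 1 + n_obs - failures)
              lo, hi = post.ppf(0.025), post.ppf(0.975)
              print(f"n={n_obs:3d}, rate={rate}: posterior mean={post.mean():.3f}, "
                    f"95% interval=({lo:.3f}, {hi:.3f})")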

  14. Population Forecasts for Bangladesh, Using a Bayesian Methodology

    PubMed Central

    Hossain, Syed Shahadat

    2012-01-01

    Population projection for many developing countries can be quite a challenging task for demographers, mostly due to the lack of sufficient reliable data. The objective of this paper is to present an overview of the existing methods for population forecasting and to propose an alternative based on Bayesian statistics, which combines formal statistical inference with expert judgement. The analysis was carried out using the Markov chain Monte Carlo (MCMC) technique for Bayesian methodology available with the software WinBUGS. Convergence diagnostic techniques available with the WinBUGS software have been applied to ensure the convergence of the chains necessary for the implementation of MCMC. The Bayesian approach allows for the use of observed data and expert judgements by means of appropriate priors, making more realistic population forecasts, along with their associated uncertainty, possible. PMID:23304912

  15. Bayesian learning of visual chunks by human observers.

    PubMed

    Orbán, Gergo; Fiser, József; Aslin, Richard N; Lengyel, Máté

    2008-02-19

    Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input.

  16. Bayesian estimation of isotopic age differences

    SciTech Connect

    Curl, R.L.

    1988-08-01

    Isotopic dating is subject to uncertainties arising from counting statistics and experimental errors. These uncertainties are additive when an isotopic age difference is calculated. If large, they can lead to no significant age difference by classical statistics. In many cases, relative ages are known because of stratigraphic order or other clues. Such information can be used to establish a Bayes estimate of age difference which will include prior knowledge of age order. Age measurement errors are assumed to be log-normal and a noninformative but constrained bivariate prior for two true ages in known order is adopted. True-age ratio is distributed as a truncated log-normal variate. Its expected value gives an age-ratio estimate, and its variance provides credible intervals. Bayesian estimates of ages are different and in correct order even if measured ages are identical or reversed in order. For example, age measurements on two samples might both yield 100 ka with coefficients of variation of 0.2. Bayesian estimates are 22.7 ka for age difference with a 75% credible interval of (4.4, 43.7) ka.
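
    The paper derives its estimates analytically; the Monte Carlo sketch below captures the same idea with hypothetical settings: log-normal measurement errors and a prior that constrains the true ages to a known order, here imposed by rejection. The numerical output should be broadly comparable to, but is not guaranteed to reproduce, the figures quoted above.

      import numpy as np

      rng = np.random.default_rng(0)
      m1 = m2 = 100.0                       # measured ages (ka), both 100 ka as in the example above
      cv = 0.2                              # coefficient of variation of each measurement
      sigma = np.sqrt(np.log(1.0 + cv ** 2))

      # candidate true ages drawn from the log-normal likelihoods (flat prior on log age)
      t1 = m1 * np.exp(sigma * rng.standard_normal(1_000_000))
      t2 = m2 * np.exp(sigma * rng.standard_normal(1_000_000))
      keep = t2 >= t1                       # prior knowledge: sample 2 is older (stratigraphic order)
      diff = (t2 - t1)[keep]
      print("posterior mean age difference (ka):", round(diff.mean(), 1))
      print("75% credible interval (ka):", np.round(np.percentile(diff, [12.5, 87.5]), 1))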

  17. Statistical Modelling for Controlled Drug Delivery Systems and its Applications in HPMC based Hydrogels

    NASA Astrophysics Data System (ADS)

    Ghosal, Kajal; Chandra, Aniruddha

    2010-10-01

    Different concentrations of hydrophobically modified hydroxypropyl methylcellulose (HPMC, 60 M grade) and conventional hydrophilic hydroxypropyl methylcellulose (50 cPs) were used to prepare four topical hydrogel formulations using a model non-steroidal anti-inflammatory drug (NSAID), diclofenac potassium (DP). For all the formulations, the suitability of different common empirical (zero-order, first-order, and Higuchi), semi-empirical (Ritger-Peppas and Peppas-Sahlin), and newer statistical (logistic, log-logistic, Weibull, Gumbel, and generalized extreme value distribution) models for describing the drug release profile was tested through non-linear least-squares curve fitting. The general-purpose mathematical analysis tool MATLAB was used for this purpose. Further, instead of the widely used transformed linear fit method, direct fitting was used to avoid truncation and transformation errors. The results revealed that the log-logistic distribution, among all the models investigated, was the best fit for the hydrophobic formulations. For the hydrophilic cases, the semi-empirical models and the Weibull distribution worked best, although the log-logistic model also showed a close fit.
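
    The study performed its fits in MATLAB; the Python sketch below shows the same style of direct (untransformed) non-linear least-squares fitting with a Weibull release model as the example. The data points and starting values are invented, not taken from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_release(t, f_max, scale, shape):
          """Cumulative percent released at time t under a Weibull distribution model."""
          return f_max * (1.0 - np.exp(-(t / scale) ** shape))

      t = np.array([0.5, 1, 2, 4, 6, 8, 10, 12])             # hours (hypothetical)
      released = np.array([8, 15, 28, 48, 62, 72, 79, 84])   # cumulative % released (hypothetical)

      params, _ = curve_fit(weibull_release, t, released, p0=[90.0, 5.0, 1.0])
      print("f_max, scale, shape:", params)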

  18. Statistical tools and control of internal lubricant content of inhalation grade HPMC capsules during manufacture.

    PubMed

    Ayala, Guillermo; Díez, Fernando; Gassó, María T; Jones, Brian E; Martín-Portugués, Rafael; Ramiro-Aparicio, Juan

    2016-04-30

    The internal lubricant content (ILC) of inhalation grade HPMC capsules is a key factor in ensuring good powder release when the patient inhales a medicine from a dry powder inhaler (DPI). Powder release from capsules has been shown to be influenced by the ILC; the characteristics used to measure this are the emitted dose, fine particle fraction and mass median aerodynamic diameter. In addition, the ILC level is critical for capsule shell manufacture because the process cannot work without it. A designed experiment was applied to the manufacture of inhalation capsules with the required ILC. A full factorial model was used to identify the controlling factors, and from this a linear model has been proposed to improve control of the process.

  19. The Application of Statistical Process Control in Non-Manufacturing Activities

    DTIC Science & Technology

    1988-01-01

    difficult to establish a satisfactory performance level. Consider timeliness. When a customer brings his car in for a brake job and it is... process variability or centering on target. This illustrates the detection/reaction mode of management that is so prevalent in service industries. Its... This type of response is known as the prevention/control mode of management. When operating in this mode, effort is directed toward continuing

  20. Bayesian networks as a tool for epidemiological systems analysis

    NASA Astrophysics Data System (ADS)

    Lewis, F. I.

    2012-11-01

    Bayesian network analysis is a form of probabilistic modeling which derives from empirical data a directed acyclic graph (DAG) describing the dependency structure between random variables. Bayesian networks are increasingly finding application in areas such as computational and systems biology, and more recently in epidemiological analyses. The key distinction between standard empirical modeling approaches, such as generalised linear modeling, and Bayesian network analyses is that the latter attempts not only to identify statistically associated variables, but to additionally, and empirically, separate these into those directly and indirectly dependent with one or more outcome variables. Such discrimination is vastly more ambitious but has the potential to reveal far more about key features of complex disease systems. Applying Bayesian network modeling to biological and medical data has considerable computational demands, combined with the need to ensure robust model selection given the vast model space of possible DAGs. These challenges require the use of approximation techniques, such as the Laplace approximation, Markov chain Monte Carlo simulation and parametric bootstrapping, along with computational parallelization. A case study in structure discovery - identification of an optimal DAG for given data - is presented which uses additive Bayesian networks to explore veterinary disease data of industrial and medical relevance.

  1. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors

    PubMed Central

    Woldegerima, Woldegebriel Assefa

    2017-01-01

    TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes a step further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian statistical approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to set up the priors for the 2014 model. PMID:28257437

  2. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.

    PubMed

    Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa

    2017-01-01

    TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes a step further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian statistical approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to set up the priors for the 2014 model.

  3. A bayesian approach to classification criteria for spectacled eiders

    USGS Publications Warehouse

    Taylor, B.L.; Wade, P.R.; Stehn, R.A.; Cochrane, J.F.

    1996-01-01

    To facilitate decisions to classify species according to risk of extinction, we used Bayesian methods to analyze trend data for the Spectacled Eider, an arctic sea duck. Trend data from three independent surveys of the Yukon-Kuskokwim Delta were analyzed individually and in combination to yield posterior distributions for population growth rates. We used classification criteria developed by the recovery team for Spectacled Eiders that seek to equalize errors of under- or overprotecting the species. We conducted both a Bayesian decision analysis and a frequentist (classical statistical inference) decision analysis. Bayesian decision analyses are computationally easier, yield basically the same results, and yield results that are easier to explain to nonscientists. With the exception of the aerial survey analysis of the 10 most recent years, both Bayesian and frequentist methods indicated that an endangered classification is warranted. The discrepancy between surveys warrants further research. Although the trend data are abundance indices, we used a preliminary estimate of absolute abundance to demonstrate how to calculate extinction distributions using the joint probability distributions for population growth rate and variance in growth rate generated by the Bayesian analysis. Recent apparent increases in abundance highlight the need for models that apply to declining and then recovering species.

  4. Cortical hierarchies perform Bayesian causal inference in multisensory perception.

    PubMed

    Rohe, Tim; Noppeney, Uta

    2015-02-01

    To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the "causal inference problem." Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world.
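
    As a very small illustration of the final model-averaging step of Bayesian Causal Inference (the full computation of the posterior probability of a common cause from the generative model is omitted here), the sketch below combines the fused and segregated location estimates for an auditory cue. All numbers, including the assumed posterior probability of a common cause, are hypothetical.

      def model_averaged_location(x_a, x_v, var_a, var_v, p_common):
          """Auditory location estimate under Bayesian Causal Inference model averaging."""
          fused = (x_a / var_a + x_v / var_v) / (1.0 / var_a + 1.0 / var_v)  # common-cause (forced fusion) estimate
          segregated = x_a                                                    # independent-cause estimate for audition
          return p_common * fused + (1.0 - p_common) * segregated

      # e.g. auditory cue at 5 deg (variance 4), visual cue at 7 deg (variance 1),
      # with posterior probability 0.7 that the two signals share a cause
      print(model_averaged_location(x_a=5.0, x_v=7.0, var_a=4.0, var_v=1.0, p_common=0.7))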

  5. Sparsity and the Bayesian perspective

    NASA Astrophysics Data System (ADS)

    Starck, J.-L.; Donoho, D. L.; Fadili, M. J.; Rassat, A.

    2013-04-01

    Sparsity has recently been introduced in cosmology for weak-lensing and cosmic microwave background (CMB) data analysis for different applications such as denoising, component separation, or inpainting (i.e., filling the missing data or the mask). Although it gives very nice numerical results, CMB sparse inpainting has been severely criticized by top researchers in cosmology using arguments derived from a Bayesian perspective. In an attempt to understand their point of view, we realize that interpreting a regularization penalty term as a prior in a Bayesian framework can lead to erroneous conclusions. This paper is by no means against the Bayesian approach, which has proven to be very useful for many applications, but warns against a Bayesian-only interpretation in data analysis, which can be misleading in some cases.

  6. Evaluation of Various Radar Data Quality Control Algorithms Based on Accumulated Radar Rainfall Statistics

    NASA Technical Reports Server (NTRS)

    Robinson, Michael; Steiner, Matthias; Wolff, David B.; Ferrier, Brad S.; Kessinger, Cathy; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions and reductions in rainfall accumulation, caused by spurious echo contamination and true precipitation removal, respectively. Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine whether a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.

  7. Use of Computer Statistical Packages to Generate Quality Control Reports on Training

    DTIC Science & Technology

    1980-01-01

    Keywords: Training; Data Processing; Management; Attitudes (Psychology); Computers; Computer Applications; Automation; Computer Programs. ...processing and produces graphic displays very similar to quality control charts. David W. Bessemer and Brian L. Kottas; submitted by Donald F. Haggard, Chief, Fort Knox Field Unit.

  8. Statistical optimization of controlled release microspheres containing cetirizine hydrochloride as a model for water soluble drugs.

    PubMed

    El-Say, Khalid M; El-Helw, Abdel-Rahim M; Ahmed, Osama A A; Hosny, Khaled M; Ahmed, Tarek A; Kharshoum, Rasha M; Fahmy, Usama A; Alsawahli, Majed

    2015-01-01

    The purpose was to improve the encapsulation efficiency of cetirizine hydrochloride (CTZ) microspheres, as a model for water-soluble drugs, and to control its release by applying response surface methodology. A three-factor, three-level Box-Behnken design was used to determine the effect of drug/polymer ratio (X1), surfactant concentration (X2) and stirring speed (X3) on the mean particle size (Y1), percentage encapsulation efficiency (Y2) and cumulative percent drug released over 12 h (Y3). The emulsion solvent evaporation (ESE) technique was applied, utilizing Eudragit RS100 as the coating polymer and Span 80 as the surfactant. All formulations were evaluated for micromeritic properties and morphologically characterized by scanning electron microscopy (SEM). The relative bioavailability of the optimized microspheres was compared with a CTZ marketed product after oral administration to healthy human volunteers using a double-blind, randomized, cross-over design. The results revealed that the mean particle sizes of the microspheres ranged from 62 to 348 µm and the efficiency of entrapment ranged from 36.3% to 70.1%. The optimized CTZ microspheres exhibited slow and controlled release over 12 h. The pharmacokinetic data of the optimized CTZ microspheres showed prolonged tmax, decreased Cmax and an AUC0-∞ value of 3309 ± 211 ng h/ml, indicating improved relative bioavailability of 169.4% compared with the marketed tablets.

  9. Bayesian Calibration of Microsimulation Models.

    PubMed

    Rutter, Carolyn M; Miglioretti, Diana L; Savarino, James E

    2009-12-01

    Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results though the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models.
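
    The colorectal cancer model itself is far too complex to reproduce here; the sketch below only illustrates the calibration idea on a toy simulator: propose parameters, run the simulation, and accept or reject with a random-walk Metropolis step based on how well the simulated output reproduces a calibration target under a Gaussian pseudo-likelihood and a flat prior. The simulator, target and tuning constants are all hypothetical, and the noisy simulated likelihood makes this only an approximate sampler.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate(theta, n=5_000):
          """Toy 'microsimulation': fraction of simulated individuals with an event by age 60."""
          onset_age = rng.exponential(1.0 / theta, size=n) * 100.0
          return np.mean(onset_age < 60.0)

      target, target_se = 0.30, 0.02                    # hypothetical calibration target

      def log_post(theta):
          if not 0.01 < theta < 2.0:                    # flat prior over a plausible range
              return -np.inf
          return -0.5 * ((simulate(theta) - target) / target_se) ** 2

      theta = 0.5
      lp = log_post(theta)
      samples = []
      for _ in range(2_000):                            # random-walk Metropolis
          prop = theta + 0.05 * rng.standard_normal()
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta)
      print("approximate posterior mean of theta:", round(float(np.mean(samples[500:])), 3))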

  10. BayGO: Bayesian analysis of ontology term enrichment in microarray data

    PubMed Central

    Vêncio, Ricardo ZN; Koide, Tie; Gomes, Suely L; de B Pereira, Carlos A

    2006-01-01

    Background The search for enriched (aka over-represented or enhanced) ontology terms in a list of genes obtained from microarray experiments is becoming a standard procedure for a system-level analysis. This procedure tries to summarize the information focussing on classification designs such as Gene Ontology, KEGG pathways, and so on, instead of focussing on individual genes. Although it is well known in statistics that association and significance are distinct concepts, only the former approach has been used to deal with the ontology term enrichment problem. Results BayGO implements a Bayesian approach to search for enriched terms from microarray data. The R source-code is freely available at in three versions: Linux, which can be easily incorporated into pre-existent pipelines; Windows, to be controlled interactively; and as a web-tool. The software was validated using a bacterial heat shock response dataset, since this stress triggers known system-level responses. Conclusion The Bayesian model accounts for the fact that, eventually, not all the genes from a given category are observable in microarray data due to low intensity signal, quality filters, genes that were not spotted and so on. Moreover, BayGO allows one to measure the statistical association between generic ontology terms and differential expression, instead of working only with the common significance analysis. PMID:16504085
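
    BayGO's own model (which also accounts for genes lost to low signal or quality filters) is not reproduced here; the sketch below only illustrates the general Bayesian flavour of enrichment analysis by comparing Beta posteriors for a term's frequency among differentially expressed genes and in the background. The counts are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      de_term, de_total = 12, 200        # term members among differentially expressed genes (hypothetical)
      bg_term, bg_total = 80, 4000       # term members in the background (hypothetical)

      p_de = rng.beta(1 + de_term, 1 + de_total - de_term, size=100_000)
      p_bg = rng.beta(1 + bg_term, 1 + bg_total - bg_term, size=100_000)
      print("P(term is enriched among DE genes):", np.mean(p_de > p_bg))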

  11. Integrative Bayesian Analysis of Neuroimaging-Genetic Data with Application to Cocaine Dependence

    PubMed Central

    Azadeh, Shabnam; Hobbs, Brian P.; Ma, Liangsuo; Nielsen, David A.; Moeller, F. Gerard; Baladandayuthapani, Veerabhadran

    2016-01-01

    Neuroimaging and genetic studies provide distinct and complementary information about the structural and biological aspects of a disease. Integrating the two sources of data facilitates the investigation of the links between genetic variability and brain mechanisms among different individuals for various medical disorders. This article presents a general statistical framework for integrative Bayesian analysis of neuroimaging-genetic (iBANG) data, which is motivated by a neuroimaging-genetic study in cocaine dependence. Statistical inference necessitated the integration of spatially dependent voxel-level measurements with various patient-level genetic and demographic characteristics under an appropriate probability model to account for the multiple inherent sources of variation. Our framework uses Bayesian model averaging to integrate genetic information into the analysis of voxel-wise neuroimaging data, accounting for spatial correlations in the voxels. Using multiplicity controls based on the false discovery rate, we delineate voxels associated with genetic and demographic features that may impact diffusion as measured by fractional anisotropy (FA) obtained from DTI images. We demonstrate the benefits of accounting for model uncertainties in both model fit and prediction. Our results suggest that cocaine consumption is associated with FA reduction in most white matter regions of interest in the brain. Additionally, gene polymorphisms associated with GABAergic, serotonergic and dopaminergic neurotransmitters and receptors were associated with FA. PMID:26484829

  12. Single-agent maintenance therapy for advanced non-small cell lung cancer (NSCLC): a systematic review and Bayesian network meta-analysis of 26 randomized controlled trials

    PubMed Central

    Zeng, Xiaoning; Ma, Yuan

    2016-01-01

    Background The benefit of maintenance therapy has been confirmed in patients with non-progressing non-small cell lung cancer (NSCLC) after first-line therapy by many trials and meta-analyses. However, since few head-to-head trials between different regimens have been reported, clinicians still have little guidance on how to select the most efficacious single-agent regimen. Hence, we present a network meta-analysis to assess the comparative treatment efficacy of several single-agent maintenance therapy regimens for stage III/IV NSCLC. Methods A comprehensive literature search of public databases and conference proceedings was performed. Randomized clinical trials (RCTs) meeting the eligible criteria were integrated into a Bayesian network meta-analysis. The primary outcome was overall survival (OS) and the secondary outcome was progression free survival (PFS). Results A total of 26 trials covering 7,839 patients were identified, of which 24 trials were included in the OS analysis, while 23 trials were included in the PFS analysis. Switch-racotumomab-alum vaccine and switch-pemetrexed were identified as the most efficacious regimens based on OS (HR, 0.64; 95% CrI, 0.45–0.92) and PFS (HR, 0.54; 95% CrI, 0.26–1.04) separately. According to the rank order based on OS, switch-racotumomab-alum vaccine had the highest probability as the most effective regimen (52%), while switch-pemetrexed ranked first (34%) based on PFS. Conclusions Several single-agent maintenance therapy regimens can prolong OS and PFS for stage III/IV NSCLC. Switch-racotumomab-alum vaccine maintenance therapy may be the most optimal regimen, but should be confirmed by additional evidence. PMID:27781159

  13. Statistical Colocalization of Genetic Risk Variants for Related Autoimmune Diseases in the Context of Common Controls

    PubMed Central

    Fortune, Mary D.; Guo, Hui; Burren, Oliver; Schofield, Ellen; Walker, Neil M.; Ban, Maria; Sawcer, Stephen J.; Bowes, John; Worthington, Jane; Barton, Ann; Eyre, Steve; Todd, John A.; Wallace, Chris

    2015-01-01

    Identifying whether potential causal variants for related diseases are shared can identify overlapping etiologies of multifactorial disorders. Colocalization methods disentangle shared and distinct causal variants. However, existing approaches require independent datasets. Here we extend two colocalization methods to allow for the shared control design commonly used in comparison of genome-wide association study results across diseases. Our analysis of four autoimmune diseases, type 1 diabetes (T1D), rheumatoid arthritis, celiac disease and multiple sclerosis, revealed 90 regions that were associated with at least one disease, 33 (37%) of which with two or more disorders. Nevertheless, for 14 of these 33 shared regions there was evidence that causal variants differed. We identified novel disease associations in 11 regions previously associated with one or more of the other three disorders. Four of eight T1D-specific regions contained known type 2 diabetes candidate genes: COBL, GLIS3, RNLS and BCAR1, suggesting a shared cellular etiology. PMID:26053495

  14. Modeling controlled nutrient release from a population of polymer coated fertilizers: statistically based model for diffusion release.

    PubMed

    Shaviv, Avi; Raban, Smadar; Zaidel, Elina

    2003-05-15

    A statistically based model for describing the release from a population of polymer coated controlled release fertilizer (CRF) granules by the diffusion mechanism was constructed. The model is based on a mathematical-mechanistic description of the release from a single granule of a coated CRF accounting for its complex and nonlinear nature. The large variation within populations of coated CRFs poses the need for a statistically based approach to integrate over the release from the individual granules within a given population for which the distribution and range of granule radii and coating thickness are known. The model was constructed and verified using experimentally determined parameters and release curves of polymer-coated CRFs. A sensitivity analysis indicated the importance of water permeability in controlling the lag period and that of solute permeability in governing the rate of linear release and the total duration of the release. Increasing the mean values of normally distributed granule radii or coating thickness, increases the lag period and the period of linear release. The variation of radii and coating thickness, within realistic ranges, affects the release only when the standard deviation is very large or when water permeability is reduced without affecting solute permeability. The model provides an effective tool for designing and improving agronomic and environmental effectiveness of polymer-coated CRFs.
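
    The paper's mechanistic single-granule model is not reproduced in the abstract; the sketch below only illustrates the population-averaging step it describes, using a toy lag-then-linear single-granule profile and normally distributed radii and coating thicknesses. All parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(0)

      def single_granule_release(t, radius, coating, k_water=0.005, k_solute=0.002):
          """Toy single-granule profile: lag set by water entry, then linear solute release."""
          lag = coating / k_water                       # lag period (days)
          rate = k_solute / (radius * coating)          # linear release rate (1/day)
          return np.clip(rate * (t - lag), 0.0, 1.0)

      t = np.linspace(0.0, 60.0, 121)                   # days
      radii = rng.normal(1.5, 0.15, size=5_000)         # granule radii, mm (hypothetical)
      coatings = rng.normal(0.05, 0.005, size=5_000)    # coating thickness, mm (hypothetical)
      population_release = np.mean(
          [single_granule_release(t, r, c) for r, c in zip(radii, coatings)], axis=0)
      print(np.round(population_release[::20], 3))      # population-average release every 10 days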

  15. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  16. Flood quantile estimation at ungauged sites by Bayesian networks

    NASA Astrophysics Data System (ADS)

    Mediero, L.; Santillán, D.; Garrote, L.

    2012-04-01

    stochastic generator of synthetic data was developed. Synthetic basin characteristics were randomised, keeping the statistical properties of the observed physical and climatic variables in the homogeneous region. The synthetic flood quantiles were stochastically generated using the regression equation as a basis. The learnt Bayesian network was validated by the reliability diagram, the Brier score and the ROC diagram, which are common measures used in the validation of probabilistic forecasts. In summary, flood quantile estimation through Bayesian networks supplies information about prediction uncertainty, since a probability distribution function of discharges is given as the result. The Bayesian network model therefore has application as a decision-support tool for water resources planning and management.

  17. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    PubMed Central

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come. PMID:27034708

  18. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    PubMed

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.

  19. Organism-level models: When mechanisms and statistics fail us

    NASA Astrophysics Data System (ADS)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that has the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate how clinical trials can be modelled in order to arrive at a trial design with meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.

  20. Developmental Control of Stress Stimulons in Streptomyces coelicolor Revealed by Statistical Analyses of Global Gene Expression Patterns

    PubMed Central

    Vohradsky, J.; Li, X.-M.; Dale, G.; Folcher, M.; Nguyen, L.; Viollier, P. H.; Thompson, C. J.

    2000-01-01

    Stress-induced regulatory networks coordinated with a procaryotic developmental program were revealed by two-dimensional gel analyses of global gene expression. Four developmental stages were identified by their distinctive protein synthesis patterns using principal component analysis. Statistical analyses focused on five stress stimulons (induced by heat, cold, salt, ethanol, or antibiotic shock) and their synthesis during development. Unlike other bacteria, for which various stresses induce expression of similar sets of protein spots, in Streptomyces coelicolor heat, salt, and ethanol stimulons were composed of independent sets of proteins. This suggested independent control by different physiological stress signals and their corresponding regulatory systems. These stress proteins were also under developmental control. Cluster analysis of stress protein synthesis profiles identified 10 different developmental patterns or “synexpression groups.” Proteins induced by cold, heat, or salt shock were enriched in three developmental synexpression groups. In addition, certain proteins belonging to the heat and salt shock stimulons were coregulated during development. Thus, stress regulatory systems controlling these stimulons were implicated as integral parts of the developmental program. This correlation suggested that thermal shock and salt shock stress response regulatory systems either allow the cell to adapt to stresses associated with development or directly control the developmental program. PMID:10940043

  1. Bayesian design strategies for synthetic biology

    PubMed Central

    Barnes, Chris P.; Silk, Daniel; Stumpf, Michael P. H.

    2011-01-01

    We discuss how statistical inference techniques can be applied in the context of designing novel biological systems. Bayesian techniques have found widespread application and acceptance in the systems biology community, where they are used for both parameter estimation and model selection. Here we show that the same approaches can also be used in order to engineer synthetic biological systems by inferring the structure and parameters that are most likely to give rise to the dynamics that we require a system to exhibit. Problems that are shared between applications in systems and synthetic biology include the vast potential spaces that need to be searched for suitable models and model parameters; the complex forms of likelihood functions; and the interplay between noise at the molecular level and nonlinearity in the dynamics owing to often complex feedback structures. In order to meet these challenges, we have to develop suitable inferential tools and here, in particular, we illustrate the use of approximate Bayesian computation and unscented Kalman filtering-based approaches. These partly complementary methods allow us to tackle a number of recurring problems in the design of biological systems. After a brief exposition of these two methodologies, we focus on their application to oscillatory systems. PMID:23226588
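
    As a toy illustration of the approximate Bayesian computation idea mentioned above (not one of the paper's biological examples), the rejection sketch below samples candidate parameters from a prior, simulates a simple damped oscillator standing in for a biochemical model, and keeps the parameter sets whose simulated behaviour lies close to the target dynamics. The model, priors and tolerance are all hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 200)

      def simulate(omega, zeta):
          """Toy system output: damped oscillation with frequency omega and damping zeta."""
          return np.exp(-zeta * t) * np.cos(omega * t)

      target = simulate(1.0, 0.1)                       # the dynamics we want the design to exhibit
      accepted = []
      for _ in range(20_000):
          omega, zeta = rng.uniform(0.5, 2.0), rng.uniform(0.0, 0.3)
          distance = np.sqrt(np.mean((simulate(omega, zeta) - target) ** 2))
          if distance < 0.1:                            # ABC tolerance (hypothetical)
              accepted.append((omega, zeta))
      if accepted:
          print(len(accepted), "accepted designs; posterior mean (omega, zeta):", np.mean(accepted, axis=0))
      else:
          print("no designs accepted; loosen the tolerance or widen the prior")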

  2. Bayesian Cosmic Web Reconstruction: BARCODE for Clusters

    NASA Astrophysics Data System (ADS)

    Bos, E. G. Patrick; van de Weygaert, Rien; Kitaura, Francisco; Cautun, Marius

    2016-10-01

    We describe the Bayesian BARCODE formalism that has been designed for the reconstruction of the Cosmic Web in a given volume on the basis of the sampled galaxy cluster distribution. It is based on the realization that the massive compact clusters are responsible for the major share of the large-scale tidal force field shaping the anisotropic and in particular filamentary features in the Cosmic Web. Given the nonlinearity of the constraints imposed by the cluster configurations, we resort to a state-of-the-art constrained reconstruction technique to find a proper, statistically sampled realization of the original initial density and velocity field in the same cosmic region. Ultimately, the subsequent gravitational evolution of these initial conditions towards the implied Cosmic Web configuration can be followed on the basis of a proper analytical model or an N-body computer simulation. The BARCODE formalism includes an implicit treatment of redshift space distortions. This enables a direct reconstruction on the basis of observational data, without the need for a correction of redshift space artifacts. In this contribution we provide a general overview of the Cosmic Web connection with clusters and a description of the Bayesian BARCODE formalism. We conclude with a presentation of its successful workings with respect to test runs based on a simulated large-scale matter distribution, in physical space as well as in redshift space.

  3. Bayesian analysis of multiple direct detection experiments

    NASA Astrophysics Data System (ADS)

    Arina, Chiara

    2014-12-01

    Bayesian methods offer a coherent and efficient framework for incorporating uncertainties into induction problems. In this article, we review how this approach applies to the analysis of dark matter direct detection experiments. In particular we discuss the exclusion limit of XENON100 and the debated hints of detection under the hypothesis of a WIMP signal. Within parameter inference, marginalizing consistently over uncertainties to extract robust posterior probability distributions, we find that the claimed tension between XENON100 and the other experiments can be partially alleviated in an isospin-violating scenario, while the elastic scattering model appears to be compatible with the frequentist statistical approach. We then move to model comparison, for which Bayesian methods are particularly well suited. Firstly, we investigate the annual modulation seen in CoGeNT data, finding that there is weak evidence for a modulation. Modulation models due to other physics compare unfavorably with the WIMP models, paying the price for their excessive complexity. Secondly, we confront several coherent scattering models to determine the current best physical scenario compatible with the experimental hints. We find that exothermic and inelastic dark matter are moderately disfavored against the elastic scenario, while the isospin-violating model has a similar evidence. Lastly, the Bayes factor gives inconclusive evidence for an incompatibility between the data sets of XENON100 and the hints of detection, whereas the same question assessed with goodness of fit would indicate a 2σ discrepancy. More data are therefore needed to settle this question.
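
    The model-comparison machinery referred to here rests on marginal likelihoods (evidences) and Bayes factors. A toy one-dimensional sketch, unrelated to the actual WIMP likelihoods, of computing a Bayes factor by numerically integrating likelihood times prior:

      # Bayes factor between a one-parameter "signal" model and a fixed "null" model.
      import numpy as np
      from scipy import stats

      data = np.array([1.8, 2.3, 1.9, 2.6, 2.1])      # hypothetical observations

      def evidence(prior_lo, prior_hi, sigma=0.5, n_grid=2001):
          """Z = integral of L(mu) p(mu) dmu with a flat prior on [prior_lo, prior_hi]."""
          mu = np.linspace(prior_lo, prior_hi, n_grid)
          like = np.prod(stats.norm.pdf(data[:, None], loc=mu, scale=sigma), axis=0)
          return np.trapz(like / (prior_hi - prior_lo), mu)

      z_signal = evidence(0.0, 5.0)                                  # free mean, wide prior
      z_null = np.prod(stats.norm.pdf(data, loc=0.0, scale=0.5))     # fixed mean at zero
      print(f"Bayes factor (signal/null) = {z_signal / z_null:.2f}")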

  4. Statistical methods for establishing quality control ranges for antibacterial agents in Clinical and Laboratory Standards Institute susceptibility testing.

    PubMed

    Turnidge, John; Bordash, Gerry

    2007-07-01

    Quality control (QC) ranges for antimicrobial agents against QC strains for both dilution and disk diffusion testing are currently set by the Clinical and Laboratory Standards Institute (CLSI), using data gathered in predefined structured multilaboratory studies, so-called tier 2 studies. The ranges are finally selected by the relevant CLSI subcommittee, based largely on visual inspection and a few simple rules. We have developed statistical methods for analyzing the data from tier 2 studies and applied them to QC strain-antimicrobial agent combinations from 178 dilution testing data sets and 48 disk diffusion data sets, including a method for identifying possible outlier data from individual laboratories. The methods are based on the observation that dilution testing MIC data are log-normally distributed and disk diffusion zone diameter data are normally distributed. For dilution testing, compared to QC ranges actually set by CLSI, calculated ranges were identical in 68% of cases, narrower in 7% of cases, and wider in 14% of cases. For disk diffusion testing, calculated ranges were identical to CLSI ranges in 33% of cases, narrower in 8% of cases, and 1 to 2 mm wider in 58% of cases. Possible outliers were detected in 8% of the dilution test data sets but in none of the disk diffusion data sets. Application of statistical techniques to the analysis of QC tier 2 data and the setting of QC ranges is relatively simple to perform on spreadsheets, and the output enhances the current CLSI methods for setting QC ranges.
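
    The distributional observations mentioned here (log-normal MICs, normal zone diameters) suggest the following minimal sketch of reading candidate QC ranges off fitted distributions; the data, the 99% coverage factor and the absence of rounding to doubling dilutions are illustrative assumptions, not the authors' or CLSI's exact rules:

      # Candidate QC ranges from fitted normal / log-normal distributions (synthetic data).
      import numpy as np

      zone_mm = np.array([24, 25, 26, 25, 27, 24, 26, 25, 25, 26], dtype=float)
      mu, sd = zone_mm.mean(), zone_mm.std(ddof=1)
      lo, hi = mu - 2.58 * sd, mu + 2.58 * sd          # ~99% coverage under a normal model
      print(f"disk diffusion QC range: {round(lo)}-{round(hi)} mm")

      mic = np.array([0.25, 0.5, 0.5, 0.25, 1.0, 0.5, 0.5, 0.25])   # MICs in mg/L
      m, s = np.log2(mic).mean(), np.log2(mic).std(ddof=1)          # log2(MIC) treated as normal
      print(f"MIC QC range: {2**(m - 2.58*s):.3g}-{2**(m + 2.58*s):.3g} mg/L")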

  5. Maximum margin Bayesian network classifiers.

    PubMed

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian

    2012-03-01

    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  6. Bayesian Model Selection for Group Studies

    PubMed Central

    Stephan, Klaas Enno; Penny, Will D.; Daunizeau, Jean; Moran, Rosalyn J.; Friston, Karl J.

    2009-01-01

    Bayesian model selection (BMS) is a powerful method for determining the most likely among a set of competing hypotheses about the mechanisms that generated observed data. BMS has recently found widespread application in neuroimaging, particularly in the context of dynamic causal modelling (DCM). However, so far, combining BMS results from several subjects has relied on simple (fixed effects) metrics, e.g. the group Bayes factor (GBF), that do not account for group heterogeneity or outliers. In this paper, we compare the GBF with two random effects methods for BMS at the between-subject or group level. These methods provide inference on model-space using a classical and Bayesian perspective respectively. First, a classical (frequentist) approach uses the log model evidence as a subject-specific summary statistic. This enables one to use analysis of variance to test for differences in log-evidences over models, relative to inter-subject differences. We then consider the same problem in Bayesian terms and describe a novel hierarchical model, which is optimised to furnish a probability density on the models themselves. This new variational Bayes method rests on treating the model as a random variable and estimating the parameters of a Dirichlet distribution which describes the probabilities for all models considered. These probabilities then define a multinomial distribution over model space, allowing one to compute how likely it is that a specific model generated the data of a randomly chosen subject as well as the exceedance probability of one model being more likely than any other model. Using empirical and synthetic data, we show that optimising a conditional density of the model probabilities, given the log-evidences for each model over subjects, is more informative and appropriate than both the GBF and frequentist tests of the log-evidences. In particular, we found that the hierarchical Bayesian approach is considerably more robust than either of the other
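
    The exceedance probabilities described for the hierarchical (random-effects) approach can be estimated by Monte Carlo once the posterior Dirichlet parameters are in hand. A minimal sketch with hypothetical Dirichlet parameters for three competing models:

      # Expected model probabilities and exceedance probabilities from a Dirichlet posterior.
      import numpy as np

      rng = np.random.default_rng(0)
      alpha = np.array([8.0, 3.0, 2.0])             # hypothetical posterior Dirichlet parameters
      draws = rng.dirichlet(alpha, size=100_000)    # samples of the model probabilities r

      expected_r = alpha / alpha.sum()
      exceedance = np.bincount(draws.argmax(axis=1), minlength=len(alpha)) / len(draws)

      print("expected model probabilities:", np.round(expected_r, 3))
      print("exceedance probabilities:    ", np.round(exceedance, 3))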

  7. Application of statistical process control and process capability analysis procedures in orbiter processing activities at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.

    1994-01-01

    Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.
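
    A minimal sketch of the basic statistical process control computation that such procedures build on, here an individuals/moving-range chart with 3-sigma limits on synthetic measurements (the data and limits are illustrative, not from the orbiter processing study):

      # Individuals/moving-range control chart: flag points outside the 3-sigma limits.
      import numpy as np

      x = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 11.9, 10.0, 9.7])  # measurements
      mr = np.abs(np.diff(x))                 # moving ranges between consecutive points
      sigma_hat = mr.mean() / 1.128           # d2 constant for subgroups of size 2

      center = x.mean()
      ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat
      out_of_control = np.where((x > ucl) | (x < lcl))[0]

      print(f"CL={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, flagged points: {out_of_control}")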

  8. Understanding medical group financial and operational performance: the synergistic effect of linking statistical process control and profit and loss.

    PubMed

    Smolko, J R; Greisler, D S

    2001-01-01

    There is ongoing pressure for medical groups owned by not-for-profit health care systems or for-profit entrepreneurs to generate profit. The fading promise of superior strategy through health care integration has boards of directors clamoring for bottom-line performance. While prudent, a sole focus on the bottom line through the lens of the profit-and-loss (P&L) statement provides incomplete information upon which to base executive decisions. The purpose of this paper is to suggest that placing statistical process control (SPC) charts in tandem with the P&L statement provides a more complete picture of medical group performance, thereby optimizing decision making as executives deal with the whitewater issues surrounding physician practice ownership.

  9. Statistical evaluation of essential/toxic metal levels in the blood of valvular heart disease patients in comparison with controls.

    PubMed

    Ilyas, Asim; Shah, Munir H

    2017-02-28

    The present study was designed to investigate the role of selected essential and toxic metals in the onset/prognosis of valvular heart disease (VHD). A nitric acid-perchloric acid based wet digestion procedure was used for the quantification of the metals by flame atomic absorption spectrophotometry. Comparative appraisal of the data revealed that average levels of Cd, Co, Cr, Fe, K, Li, Mn and Zn were significantly higher in the blood of VHD patients, while the average concentration of Ca was found at an elevated level in the controls (P < 0.05). However, Cu, Mg, Na, Sr and Pb showed almost comparable levels in the blood of both donor groups. The correlation study revealed significantly different mutual associations among the metals in the blood of VHD patients compared with the controls. Multivariate statistical methods showed substantially divergent groupings of the metals for the patients and controls. Some significant differences in the metal concentrations were also observed with gender, abode, dietary/smoking habits and occupations of both donor groups. Overall, the study demonstrated that imbalances in the concentrations of essential/toxic metals in the blood are involved in the pathogenesis of the disease.

  10. A scan statistic for identifying optimal risk windows in vaccine safety studies using self-controlled case series design.

    PubMed

    Xu, Stanley; Hambidge, Simon J; McClure, David L; Daley, Matthew F; Glanz, Jason M

    2013-08-30

    In the examination of the association between vaccines and rare adverse events after vaccination in postlicensure observational studies, it is challenging to define appropriate risk windows because prelicensure RCTs provide little insight on the timing of specific adverse events. Past vaccine safety studies have often used prespecified risk windows based on prior publications, biological understanding of the vaccine, and expert opinion. Recently, a data-driven approach was developed to identify appropriate risk windows for vaccine safety studies that use the self-controlled case series design. This approach employs both the maximum incidence rate ratio and the linear relation between the estimated incidence rate ratio and the inverse of average person time at risk, given a specified risk window. In this paper, we present a scan statistic that can identify appropriate risk windows in vaccine safety studies using the self-controlled case series design while taking into account the dependence of time intervals within an individual and while adjusting for time-varying covariates such as age and seasonality. This approach uses the maximum likelihood ratio test based on fixed-effects models, which has been used for analyzing data from self-controlled case series design in addition to conditional Poisson models.

  11. Bayesian shared frailty models for regional inference about wildlife survival

    USGS Publications Warehouse

    Heisey, D.M.

    2012-01-01

    One can joke that 'exciting statistics' is an oxymoron, but it is neither a joke nor an exaggeration to say that these are exciting times to be involved in statistical ecology. As Halstead et al.'s (2012) paper nicely exemplifies, recently developed Bayesian analyses can now be used to extract insights from data using techniques that would have been unavailable to the ecological researcher just a decade ago. Some object to this, implying that the subjective priors of the Bayesian approach are the pathway to perdition (e.g. Lele & Dennis, 2009). It is reasonable to ask whether these new approaches are really giving us anything that we could not obtain with traditional tried-and-true frequentist approaches. I believe the answer is a clear yes.

  12. Bayesian methods for the design and interpretation of clinical trials in very rare diseases

    PubMed Central

    Hampson, Lisa V; Whitehead, John; Eleftheriou, Despina; Brogan, Paul

    2014-01-01

    This paper considers the design and interpretation of clinical trials comparing treatments for conditions so rare that worldwide recruitment efforts are likely to yield total sample sizes of 50 or fewer, even when patients are recruited over several years. For such studies, the sample size needed to meet a conventional frequentist power requirement is clearly infeasible. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose a Bayesian approach for the conduct of rare-disease trials comparing an experimental treatment with a control where patient responses are classified as a success or failure. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy is used to establish Bayesian priors for unknown model parameters. The process of determining the prior is described, including the possibility of formally considering results from related trials. As sample sizes are small, it is possible to compute all possible posterior distributions of the two success rates. A number of allocation ratios between the two treatment groups can be considered with a view to maximising the prior probability that the trial concludes recommending the new treatment when in fact it is non-inferior to control. Consideration of the extent to which opinion can be changed, even by data from the best feasible design, can help to determine whether such a trial is worthwhile. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24957522
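
    With success/failure outcomes and elicited Beta priors, each arm's posterior is available in closed form, so quantities such as the probability that the experimental treatment is better than control follow directly. A minimal sketch with hypothetical priors and data (not the elicited priors of the paper):

      # Beta-Binomial posteriors for a tiny two-arm trial and P(experimental > control).
      import numpy as np
      from scipy import stats

      prior_exp, prior_ctl = (3, 2), (2, 3)          # hypothetical elicited Beta(a, b) priors
      succ_exp, n_exp = 9, 12                        # successes / patients, experimental arm
      succ_ctl, n_ctl = 5, 11                        # successes / patients, control arm

      post_exp = stats.beta(prior_exp[0] + succ_exp, prior_exp[1] + n_exp - succ_exp)
      post_ctl = stats.beta(prior_ctl[0] + succ_ctl, prior_ctl[1] + n_ctl - succ_ctl)

      rng = np.random.default_rng(0)
      better = post_exp.rvs(100_000, random_state=rng) > post_ctl.rvs(100_000, random_state=rng)
      print(f"P(experimental > control | data, priors) = {better.mean():.3f}")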

  13. Pain: A Statistical Account

    PubMed Central

    Thacker, Michael A.; Moseley, G. Lorimer

    2017-01-01

    Perception is seen as a process that utilises partial and noisy information to construct a coherent understanding of the world. Here we argue that the experience of pain is no different; it is based on incomplete, multimodal information, which is used to estimate potential bodily threat. We outline a Bayesian inference model, incorporating the key components of cue combination, causal inference, and temporal integration, which highlights the statistical problems in everyday perception. It is from this platform that we are able to review the pain literature, providing evidence from experimental, acute, and persistent phenomena to demonstrate the advantages of adopting a statistical account in pain. Our probabilistic conceptualisation suggests a principles-based view of pain, explaining a broad range of experimental and clinical findings and making testable predictions. PMID:28081134
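
    In the simplest Gaussian case, the cue-combination component of such Bayesian accounts reduces to precision-weighted averaging of the available cues. A minimal sketch (the cue values and variances are hypothetical):

      # Precision-weighted fusion of two independent Gaussian cues (flat prior).
      def combine(mu1, var1, mu2, var2):
          w1, w2 = 1.0 / var1, 1.0 / var2
          return (w1 * mu1 + w2 * mu2) / (w1 + w2), 1.0 / (w1 + w2)

      # hypothetical cues: a noisy nociceptive estimate vs. contextual information about threat
      mu, var = combine(mu1=6.0, var1=4.0, mu2=3.0, var2=1.0)
      print(f"combined estimate = {mu:.2f}, combined variance = {var:.2f}")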

  14. SU-C-BRD-01: A Statistical Modeling Method for Quality Control of Intensity- Modulated Radiation Therapy Planning

    SciTech Connect

    Gao, S; Meyer, R; Shi, L; D'Souza, W; Zhang, H

    2014-06-15

    Purpose: To apply a statistical modeling approach, threshold modeling (TM), for quality control of intensity-modulated radiation therapy (IMRT) treatment plans. Methods: A quantitative measure, the weighted sum of violations of dose/dose-volume constraints, was first developed to represent the quality of each IMRT plan. The threshold modeling approach, an extension of extreme value theory in statistics and an effective way to model extreme values, was then applied to analyze the quality of the plans as summarized by our quantitative measure. Our approach modeled the plans generated by planners as a series of independent and identically distributed random variables and described their behavior when plan quality was controlled below a certain threshold. We tested our approach retrospectively with five locally advanced head and neck cancer patients. Two statistics were incorporated for numerical analysis: the probability of quality improvement (PQI) of the plans and the expected amount of improvement on the quantitative measure (EQI). Results: After clinical planners generated 15 plans for each patient, we applied our approach to obtain the PQI and EQI as if planners were to generate an additional 15 plans. For two of the patients, the PQI was significantly higher than for the other three (0.17 and 0.18 compared to 0.08, 0.01 and 0.01). The actual percentage of the additional 15 plans that outperformed the best of the initial 15 plans was 20% and 27%, compared to 11%, 0% and 0%. The EQI for the two patients with improvement potential was 34.5 and 32.9, while for the remaining three patients it was 9.9, 1.4 and 6.6. The actual improvements obtained were 28.3 and 20.5, compared to 6.2, 0 and 0. Conclusion: TM is capable of reliably identifying the potential quality improvement of IMRT plans. It provides clinicians an effective tool to assess the trade-off between extra planning effort and achievable plan quality. This work was supported in part by NIH/NCI grant CA130814.
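
    Threshold modelling in the extreme-value sense fits a generalized Pareto distribution to exceedances of a quality measure over a high threshold. The sketch below shows that machinery on synthetic data; the threshold choice and the tail-probability summary are assumptions for illustration and are not the paper's exact PQI/EQI definitions:

      # Peaks-over-threshold sketch: fit a generalized Pareto distribution to exceedances.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      quality = rng.gamma(shape=2.0, scale=1.0, size=500)   # hypothetical plan-quality measures

      u = np.quantile(quality, 0.90)                        # assumed threshold choice
      excess = quality[quality > u] - u

      shape, _, scale = stats.genpareto.fit(excess, floc=0.0)
      # tail probability of a future exceedance lying beyond the most extreme value seen so far
      p_tail = stats.genpareto.sf(quality.max() - u, shape, loc=0.0, scale=scale)
      print(f"threshold={u:.2f}, GPD shape={shape:.2f}, scale={scale:.2f}, tail prob={p_tail:.4f}")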

  15. A Bayesian Statistics Year at the Ohio State University.

    DTIC Science & Technology

    1987-02-01

    ...Carnegie-Mellon University; Prof. James Berger, Purdue University; Prof. Katherine Chaloner, University of Minnesota; Prof. Morris H. DeGroot, Carnegie-Mellon... City; John Deely, University of Canterbury and Purdue University; Dipak Dey, University of Connecticut; Morris H. DeGroot, Carnegie-Mellon University & The... University of California at Riverside, "Asymptotics for the Ratio of Multiple t-densities"; 10:30-11:00 Coffee; 11:00-12:30 Utility and Likelihood, Morris H...

  16. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. To solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features that make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes prohibitively large and computationally infeasible. In this paper, a novel approach for accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert

  17. Bayesian Inference for Nonnegative Matrix Factorisation Models

    PubMed Central

    Cemgil, Ali Taylan

    2009-01-01

    We describe nonnegative matrix factorisation (NMF) with a Kullback-Leibler (KL) error measure in a statistical framework, with a hierarchical generative model consisting of an observation and a prior component. Omitting the prior leads to the standard KL-NMF algorithms as special cases, where maximum likelihood parameter estimation is carried out via the Expectation-Maximisation (EM) algorithm. Starting from this view, we develop full Bayesian inference via variational Bayes or Monte Carlo. Our construction retains conjugacy and enables us to develop more powerful models while retaining attractive features of standard NMF such as monotonic convergence and easy implementation. We illustrate our approach on model order selection and image reconstruction. PMID:19536273
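
    The maximum-likelihood special case mentioned here (standard KL-NMF, obtained when the prior is omitted) is commonly fitted with multiplicative EM-type updates, which decrease the KL error monotonically. A minimal sketch on synthetic nonnegative data:

      # KL-NMF via the standard multiplicative updates (maximum-likelihood special case).
      import numpy as np

      rng = np.random.default_rng(0)
      V = rng.poisson(5.0, size=(30, 40)).astype(float) + 1e-9   # nonnegative data matrix
      K = 4                                                      # model order

      W = rng.random((V.shape[0], K)) + 0.1
      H = rng.random((K, V.shape[1])) + 0.1

      for _ in range(200):
          H *= (W.T @ (V / (W @ H))) / W.sum(axis=0, keepdims=True).T
          W *= ((V / (W @ H)) @ H.T) / H.sum(axis=1, keepdims=True).T

      WH = W @ H
      kl = np.sum(V * np.log(V / WH) - V + WH)
      print(f"KL divergence after updates: {kl:.2f}")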

  18. Bayesian classification criterion for forensic multivariate data.

    PubMed

    Bozza, S; Broséus, J; Esseiva, P; Taroni, F

    2014-11-01

    This study presents a classification criterion for two-class Cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine a cannabis plant's chemotype from seized material in order to ascertain whether the plantation is legal or not. In this study, the classification analysis is based on data obtained from the relative proportions of three major leaf compounds measured by gas chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug type (illegal) and fiber type (legal) cannabis at an early stage of growth. A Bayesian procedure is proposed: a Bayes factor is computed and classification is performed on the basis of the decision maker's specifications (i.e. prior probability distributions on cannabis type and the consequences of classification, measured by losses). Classification rates are computed with two statistical models and the results are compared. A sensitivity analysis is then performed to analyze the robustness of the classification criteria.

  19. Bayesian population finding with biomarkers in a randomized clinical trial.

    PubMed

    Morita, Satoshi; Müller, Peter

    2017-03-03

    The identification of good predictive biomarkers allows investigators to optimize the target population for a new treatment. We propose a novel utility-based Bayesian population finding (BaPoFi) method to analyze data from a randomized clinical trial with the aim of finding a sensitive patient population. Our approach is based on casting the population finding process as a formal decision problem together with a flexible probability model, Bayesian additive regression trees (BART), to summarize observed data. The proposed method evaluates enhanced treatment effects in patient subpopulations based on counter-factual modeling of responses to new treatment and control for each patient. In extensive simulation studies, we examine the operating characteristics of the proposed method. We compare with a Bayesian regression-based method that implements shrinkage estimates of subgroup-specific treatment effects. For illustration, we apply the proposed method to data from a randomized clinical trial.

  20. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  1. A Bayesian Nonparametric Approach to Test Equating

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2009-01-01

    A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…

  2. Bayesian phylogenetic estimation of fossil ages

    PubMed Central

    Drummond, Alexei J.; Stadler, Tanja

    2016-01-01

    Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth–death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high with credible intervals seldom excluding the true age and median relative error in the two datasets of 5.7% and 13.2%, respectively. The median relative standard error (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the ‘morphological clock', and for uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses. This article is part of the themed issue ‘Dating species divergences

  3. A Variational Bayesian Approach to Multiframe Image Restoration.

    PubMed

    Sonogashira, Motoharu; Funatomi, Takuya; Iiyama, Masaaki; Minoh, Michihiko

    2017-03-06

    Image restoration is a fundamental problem in the field of image processing. The key objective of image restoration is to recover clean images from images degraded by noise and blur. Recently, a family of new statistical techniques called variational Bayes (VB) has been introduced to image restoration, which enables us to automatically tune parameters that control restoration. While information from one image is often insufficient for high-quality restoration, current state-of-the-art methods of image restoration via VB approaches use only a single degraded image to recover a clean image. In this paper, we propose a novel method of multiframe image restoration via a VB approach, which can achieve higher image quality while tuning parameters automatically. Given multiple degraded images, this method jointly estimates a clean image and other parameters, including an image warping parameter introduced for the use of multiple images, through Bayesian inference that we enable by making full use of VB techniques. Through various experiments, we demonstrate the effectiveness of our multiframe method by comparing it with a single-frame one, and also show the advantages of our VB approach over non-VB approaches.

  4. Bayesian analysis of a reduced-form air quality model.

    PubMed

    Foley, Kristen M; Reich, Brian J; Napelenok, Sergey L

    2012-07-17

    Numerical air quality models are being used for assessing emission control strategies for improving ambient pollution levels across the globe. This paper applies probabilistic modeling to evaluate the effectiveness of emission reduction scenarios aimed at lowering ground-level ozone concentrations. A Bayesian hierarchical model is used to combine air quality model output and monitoring data in order to characterize the impact of emissions reductions while accounting for different degrees of uncertainty in the modeled emissions inputs. The probabilistic model predictions are weighted based on population density in order to better quantify the societal benefits/disbenefits of four hypothetical emission reduction scenarios in which domain-wide NO(x) emissions from various sectors are reduced individually and then simultaneously. Cross validation analysis shows the statistical model performs well compared to observed ozone levels. Accounting for the variability and uncertainty in the emissions and atmospheric systems being modeled is shown to impact how emission reduction scenarios would be ranked, compared to standard methodology.

  5. SU-E-CAMPUS-T-04: Statistical Process Control for Patient-Specific QA in Proton Beams

    SciTech Connect

    LAH, J; SHIN, D; Kim, G

    2014-06-15

    Purpose: To evaluate and improve the reliability of the proton QA process and to provide an optimal customized tolerance level using the statistical process control (SPC) methodology, with the aim of suggesting suitable guidelines for the patient-specific QA process. Methods: We investigated the constancy of the dose output and range to see whether they were within the tolerance level of the daily QA process. This study analyzed the difference between the measured and calculated ranges along the central axis to suggest suitable guidelines for patient-specific QA in proton beams by using process capability indices. In this study, patient QA plans were classified into 6 treatment sites: head and neck (41 cases), spinal cord (29 cases), lung (28 cases), liver (30 cases), pancreas (26 cases), and prostate (24 cases). Results: The deviations for the dose output and range of the daily QA process were ±0.84% and ±0.19%, respectively. Our results show that the patient-specific range measurements are capable at a specification limit of ±2% in all treatment sites except the spinal cord cases. In spinal cord cases, comparison of process capability indices (Cp, Cpm, Cpk ≥ 1, but Cpmk ≤ 1) indicated that the process is capable but not centered; the process mean deviates from its target value. The UCL (upper control limit), CL (center line) and LCL (lower control limit) for spinal cord cases were 1.37%, −0.27% and −1.89%, respectively. On the other hand, the range differences in prostate cases showed good agreement between calculated and measured values. The UCL, CL and LCL for prostate cases were 0.57%, −0.11% and −0.78%, respectively. Conclusion: The SPC methodology has potential as a useful tool to customize optimal tolerance levels and to suggest suitable guidelines for patient-specific QA in clinical proton beams.
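
    The process capability indices quoted (Cp, Cpm, Cpk) are simple functions of the process mean, its standard deviation and the specification limits. A minimal sketch with hypothetical range differences and the ±2% specification limit mentioned above:

      # Process capability indices for range differences against a +/-2% specification.
      import numpy as np

      diff = np.array([-0.3, 0.1, -0.5, 0.2, -0.1, -0.4, 0.0, -0.6, 0.3, -0.2])  # in %
      usl, lsl, target = 2.0, -2.0, 0.0

      mu, sigma = diff.mean(), diff.std(ddof=1)
      cp = (usl - lsl) / (6 * sigma)
      cpk = min(usl - mu, mu - lsl) / (3 * sigma)
      cpm = (usl - lsl) / (6 * np.sqrt(sigma**2 + (mu - target) ** 2))
      print(f"Cp={cp:.2f}, Cpk={cpk:.2f}, Cpm={cpm:.2f}")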

  6. Evidence cross-validation and Bayesian inference of MAST plasma equilibria

    NASA Astrophysics Data System (ADS)

    von Nessi, G. T.; Hole, M. J.; Svensson, J.; Appel, L.

    2012-01-01

    In this paper, current profiles for plasma discharges on the mega-ampere spherical tokamak are directly calculated from pickup coil, flux loop, and motional-Stark effect observations via methods based in the statistical theory of Bayesian analysis. By representing toroidal plasma current as a series of axisymmetric current beams with rectangular cross-section and inferring the current for each one of these beams, flux-surface geometry and q-profiles are subsequently calculated by elementary application of Biot-Savart's law. The use of this plasma model in the context of Bayesian analysis was pioneered by Svensson and Werner on the joint-European tokamak [Svensson and Werner, Plasma Phys. Controlled Fusion 50(8), 085002 (2008)]. In this framework, linear forward models are used to generate diagnostic predictions, and the probability distribution for the currents in the collection of plasma beams was subsequently calculated directly via application of Bayes' formula. In this work, we introduce a new diagnostic technique to identify and remove outlier observations associated with diagnostics falling out of calibration or suffering from an unidentified malfunction. These modifications enable a good agreement between Bayesian inference of the last-closed flux-surface with other corroborating data, such as that from force balance considerations using EFIT++ [Appel et al., "A unified approach to equilibrium reconstruction," Proceedings of the 33rd EPS Conference on Plasma Physics (Rome, Italy, 2006)]. In addition, this analysis also yields errors on the plasma current profile and flux-surface geometry as well as directly predicting the Shafranov shift of the plasma core.

  7. Evidence cross-validation and Bayesian inference of MAST plasma equilibria

    SciTech Connect

    Nessi, G. T. von; Hole, M. J.; Svensson, J.; Appel, L.

    2012-01-15

    In this paper, current profiles for plasma discharges on the mega-ampere spherical tokamak are directly calculated from pickup coil, flux loop, and motional-Stark effect observations via methods based in the statistical theory of Bayesian analysis. By representing toroidal plasma current as a series of axisymmetric current beams with rectangular cross-section and inferring the current for each one of these beams, flux-surface geometry and q-profiles are subsequently calculated by elementary application of Biot-Savart's law. The use of this plasma model in the context of Bayesian analysis was pioneered by Svensson and Werner on the joint-European tokamak [Svensson and Werner, Plasma Phys. Controlled Fusion 50(8), 085002 (2008)]. In this framework, linear forward models are used to generate diagnostic predictions, and the probability distribution for the currents in the collection of plasma beams was subsequently calculated directly via application of Bayes' formula. In this work, we introduce a new diagnostic technique to identify and remove outlier observations associated with diagnostics falling out of calibration or suffering from an unidentified malfunction. These modifications enable a good agreement between Bayesian inference of the last-closed flux-surface with other corroborating data, such as that from force balance considerations using EFIT++ [Appel et al., "A unified approach to equilibrium reconstruction," Proceedings of the 33rd EPS Conference on Plasma Physics (Rome, Italy, 2006)]. In addition, this analysis also yields errors on the plasma current profile and flux-surface geometry as well as directly predicting the Shafranov shift of the plasma core.

  8. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Astrophysics Data System (ADS)

    Wheeler, K.; Knuth, K.; Castle, P.

    2005-12-01

    Typical estimates of standing wood derived from remote sensing sources take advantage of aggregate measurements of canopy heights (e.g. LIDAR) and canopy diameters (segmentation of IKONOS imagery) to obtain a wood volume estimate by assuming homogeneous species and a fixed function that returns volume. The validation of such techniques uses manually measured diameter-at-breast-height (DBH) records. Our goal is to improve the accuracy and applicability of biomass estimation methods for heterogeneous forests and transitional areas. We are developing estimates with quantifiable uncertainty using a new form of estimation function, active sampling, and volumetric reconstruction image rendering for species-specific mass truth. Initially we are developing a Bayesian adaptive sampling method for BRDF associated with the MISR Rahman model with respect to categorical biomes. This involves characterizing the probability distributions of the 3 free parameters of the Rahman model for the 6 categories of biomes used by MISR. Subsequently, these distributions can be used to determine the optimal sampling methodology to distinguish biomes during acquisition. We have a remotely controlled semi-autonomous helicopter that has stereo imaging, lidar, differential GPS, and spectrometers covering wavelengths from visible to NIR. We intend to automatically vary the waypoints of the flight path via the Bayesian adaptive sampling method. The second critical part of this work is in automating the validation of biomass estimates via machine vision techniques. This involves taking 2-D pictures of trees of known species and then, via Bayesian techniques, reconstructing 3-D models of the trees to estimate the distribution moments associated with wood volume. Similar techniques have been developed by the medical imaging community. This then provides probability distributions conditional upon species. The final part of this work is in relating the BRDF actively sampled measurements to species

  9. Using Bayesian analysis in repeated preclinical in vivo studies for a more effective use of animals.

    PubMed

    Walley, Rosalind; Sherington, John; Rastrick, Joe; Detrait, Eric; Hanon, Etienne; Watt, Gillian

    2016-05-01

    Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies where there is a considerable amount of data from historical control groups, which has potential value. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta-analytic predictive approach. This leads naturally either to an informative prior for a control group as part of a full Bayesian analysis of the next study or using a predictive distribution to replace a control group entirely. We use quality control charts to illustrate study-to-study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide-induced cytokine release model used in inflammation and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the "3Rs initiative" to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.
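
    One way to convey the weight of such an informative prior, as described above, is an approximate effective number of animals. The sketch below is a simplified stand-in for the authors' meta-analytic predictive approach: it summarises hypothetical historical control-group means by a normal prior and compares the prior variance with an assumed animal-to-animal variance:

      # Informative prior from historical control means, expressed as an effective sample size.
      import numpy as np

      hist_means = np.array([102.0, 97.5, 110.2, 95.8, 104.1, 99.3])  # hypothetical control means
      within_sd = 12.0                                                # assumed animal-to-animal SD

      prior_mean = hist_means.mean()
      prior_var = hist_means.var(ddof=1)      # crude predictive variance (ignores within-study error)
      n_eff = within_sd**2 / prior_var        # prior worth roughly this many animals

      print(f"prior: N({prior_mean:.1f}, {prior_var:.1f}); effective number of animals ~ {n_eff:.1f}")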

  10. Word Learning as Bayesian Inference

    ERIC Educational Resources Information Center

    Xu, Fei; Tenenbaum, Joshua B.

    2007-01-01

    The authors present a Bayesian framework for understanding how adults and children learn the meanings of words. The theory explains how learners can generalize meaningfully from just one or a few positive examples of a novel word's referents, by making rational inductive inferences that integrate prior knowledge about plausible word meanings with…

  11. A method for evaluating treatment quality using in vivo EPID dosimetry and statistical process control in radiation therapy.

    PubMed

    Fuangrod, Todsaporn; Greer, Peter B; Simpson, John; Zwan, Benjamin J; Middleton, Richard H

    2017-03-13

    Purpose: Due to increasing complexity, modern radiotherapy techniques require comprehensive quality assurance (QA) programmes that to date generally focus on the pre-treatment stage. The purpose of this paper is to provide a method for individual patient treatment QA evaluation and identification of a "quality gap" for continuous quality improvement. Design/methodology/approach: Statistical process control (SPC) was applied to evaluate treatment delivery using in vivo electronic portal imaging device (EPID) dosimetry. A moving range control chart was constructed to monitor individual patient treatment performance based on a control limit generated from initial data of 90 intensity-modulated radiotherapy (IMRT) and ten volumetric-modulated arc therapy (VMAT) patient deliveries. A process capability index was used to evaluate the continuing treatment quality based on three quality classes: treatment type-specific, treatment linac-specific, and body site-specific. Findings: The determined control limits were 62.5 and 70.0 per cent of the χ pass-rate for IMRT and VMAT deliveries, respectively. In total, 14 patients were selected for a pilot study, the results of which showed that about 1 per cent of all treatments contained errors relating to unexpected anatomical changes between treatment fractions. Both rectum and pelvis cancer treatments demonstrated process capability indices less than 1, indicating the potential for quality improvement, and hence may benefit from further assessment. Research limitations/implications: The study relied on the application of in vivo EPID dosimetry for patients treated at the specific centre. Sampling of patients for generating the control limits was limited to 100 patients. Whilst the quantitative results are specific to the clinical techniques and equipment used, the described method is generally applicable to IMRT and VMAT treatment QA. Whilst more work is required to determine the level of clinical significance, the

  12. Computer program for prediction of fuel consumption statistical data for an upper stage three-axes stabilized on-off control system

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A FORTRAN-coded computer program and method to predict the reaction control fuel consumption statistics for a three-axis stabilized rocket vehicle upper stage are described. A Monte Carlo approach is used, made more efficient by using closed-form estimates of impulses. The effects of rocket motor thrust misalignment, static unbalance, aerodynamic disturbances, and deviations in trajectory, mass properties and control system characteristics are included. This routine can be applied to many types of on-off reaction controlled vehicles. The pseudorandom number generation and statistical analysis subroutines, including the output histograms, can be used for other Monte Carlo analysis problems.

  13. Bayesian Alternation during Tactile Augmentation

    PubMed Central

    Goeke, Caspar M.; Planera, Serena; Finger, Holger; König, Peter

    2016-01-01

    A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports about cue combination between a native and an augmented sense. In particular, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For the purpose of this study, we built a tactile augmentation device and compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced choice (2IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information on whole body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose which of two consecutive rotations had the higher angular rotation. For the analysis, we fitted the participants' responses with a probit model and calculated the just noticeable difference (JND). Then, we compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (χred2 = 1.67) than the Bayesian integration model (χred2 = 4.34). A non-Bayesian winner-takes-all (WTA) model, which used either only native or only augmented values per subject for prediction, showed slightly higher accuracy (χred2 = 1.64). However, the performance of the Bayesian alternation model could be substantially improved (χred2 = 1.09) by utilizing subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models. These results suggest that information from augmented and existing sensory modalities in

  14. Bayesian analysis of rare events

    NASA Astrophysics Data System (ADS)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
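
    The rejection-sampling reinterpretation underlying BUS can be illustrated in one dimension: draw from the prior, accept each draw with probability proportional to its likelihood, and re-estimate the rare-event probability from the accepted (posterior) sample. Everything in the sketch below (data, prior, failure criterion) is hypothetical:

      # Bayesian updating by rejection sampling, then Monte Carlo rare-event estimation.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      obs = np.array([4.4, 5.1, 4.8])              # hypothetical monitoring data on a capacity
      sigma_obs = 0.5

      theta = rng.normal(6.0, 1.5, size=100_000)   # draws from the prior on the capacity
      like = np.prod(stats.norm.pdf(obs[None, :], loc=theta[:, None], scale=sigma_obs), axis=1)
      c = np.prod(stats.norm.pdf(obs, loc=obs.mean(), scale=sigma_obs))   # likelihood upper bound

      theta_post = theta[rng.random(theta.size) < like / c]   # rejection step = Bayesian updating

      demand = 3.0                                 # "failure" means capacity below demand
      print(f"P(failure): prior={np.mean(theta < demand):.4f}, "
            f"posterior={np.mean(theta_post < demand):.4f}")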

  15. Theory-based Bayesian models of inductive learning and reasoning.

    PubMed

    Tenenbaum, Joshua B; Griffiths, Thomas L; Kemp, Charles

    2006-07-01

    Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.

  16. Bayesian analysis of truncation errors in chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Melendez, J.; Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.

    2016-09-01

    In the Bayesian approach to effective field theory (EFT) expansions, truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. By encoding expectations about the naturalness of EFT expansion coefficients for observables, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. We extend and test previous calculations of DOB intervals for chiral EFT observables, examine correlations between contributions at different orders and energies, and explore methods to validate the statistical consistency of the EFT expansion parameter. Supported in part by the NSF and the DOE.

  17. Bayesian analysis of a multivariate null intercept errors-in-variables regression model.

    PubMed

    Aoki, Reiko; Bolfarine, Heleno; Achcar, Jorge A; Dorival, Leão P Júnior

    2003-11-01

    Longitudinal data are of great interest in the analysis of clinical trials. In many practical situations the covariate cannot be measured precisely, and a natural alternative is the errors-in-variables regression model. In this paper we study a null intercept errors-in-variables regression model with a structure of dependency between the response variables within the same group. We apply the model to real data presented in Hadgu and Koch (Hadgu, A., Koch, G. (1999). Application of generalized estimating equations to a dental randomized clinical trial. J. Biopharmaceutical Statistics 9(1):161-178). In that study, volunteers with preexisting dental plaque were randomized to two experimental mouth rinses (A and B) or a control mouth rinse with double blinding. The dental plaque index was measured for each subject at the beginning of the study and at two follow-up times, which leads to the presence of an interclass correlation. We propose the use of a Bayesian approach to fit a multivariate null intercept errors-in-variables regression model to the longitudinal data. The proposed Bayesian approach accommodates the correlated measurements and incorporates the restriction that the slopes must lie in the (0, 1) interval. A Gibbs sampler is used to perform the computations.

  18. BAYESIAN SEMIPARAMETRIC ANALYSIS FOR TWO-PHASE STUDIES OF GENE-ENVIRONMENT INTERACTION

    PubMed Central

    Ahn, Jaeil; Mukherjee, Bhramar; Gruber, Stephen B.; Ghosh, Malay

    2013-01-01

    The two-phase sampling design is a cost-efficient way of collecting expensive covariate information on a judiciously selected sub-sample. It is natural to apply such a strategy for collecting genetic data in a sub-sample enriched for exposure to environmental factors for gene-environment interaction (G × E) analysis. In this paper, we consider two-phase studies of G × E interaction where phase I data are available on exposure, covariates and disease status. Stratified sampling is done to prioritize individuals for genotyping at phase II conditional on disease and exposure. We consider a Bayesian analysis based on the joint retrospective likelihood of phase I and phase II data. We address several important statistical issues: (i) we consider a model with multiple genes, environmental factors and their pairwise interactions. We employ a Bayesian variable selection algorithm to reduce the dimensionality of this potentially high-dimensional model; (ii) we use the assumption of gene-gene and gene-environment independence to trade-off between bias and efficiency for estimating the interaction parameters through use of hierarchical priors reflecting this assumption; (iii) we posit a flexible model for the joint distribution of the phase I categorical variables using the non-parametric Bayes construction of Dunson and Xing (2009). We carry out a small-scale simulation study to compare the proposed Bayesian method with weighted likelihood and pseudo likelihood methods that are standard choices for analyzing two-phase data. The motivating example originates from an ongoing case-control study of colorectal cancer, where the goal is to explore the interaction between the use of statins (a drug used for lowering lipid levels) and 294 genetic markers in the lipid metabolism/cholesterol synthesis pathway. The sub-sample of cases and controls on which these genetic markers were measured is enriched in terms of statin users. The example and simulation results illustrate that the

  19. Mediator effect of statistical process control between Total Quality Management (TQM) and business performance in Malaysian Automotive Industry

    NASA Astrophysics Data System (ADS)

    Ahmad, M. F.; Rasi, R. Z.; Zakuan, N.; Hisyamudin, M. N. N.

    2015-12-01

    In today's highly competitive market, Total Quality Management (TQM) is a vital management tool for ensuring that a company can succeed in its business. In order to survive in the global market with intense competition amongst regions and enterprises, the adoption of tools and techniques is essential in improving business performance. Previous studies have found consistent relationships between TQM and business performance. However, only a few previous studies have examined the mediating effect of statistical process control (SPC) between TQM and business performance. A mediator is a third variable that changes the association between an independent variable and an outcome variable. This study proposes a TQM performance model with the mediating effect of SPC, using structural equation modelling, which is a more comprehensive model for developing countries, specifically for Malaysia. A questionnaire was prepared and sent to 1500 companies from the automotive industry and related vendors in Malaysia, giving a 21.8 per cent response rate. The findings show a significant mediating effect of SPC between TQM practices and business performance, indicating that SPC is an important tool and technique in TQM implementation. The results conclude that SPC partially mediates the relationship between TQM and business performance, with an indirect effect (IE) of 0.25, which can be categorised as a high mediating effect.

  20. A Bayesian approach to probabilistic sensitivity analysis in structured benefit-risk assessment.

    PubMed

    Waddingham, Ed; Mt-Isa, Shahrul; Nixon, Richard; Ashby, Deborah

    2016-01-01

    Quantitative decision models such as multiple criteria decision analysis (MCDA) can be used in benefit-risk assessment to formalize trade-offs between benefits and risks, providing transparency to the assessment process. There is however no well-established method for propagating uncertainty of treatment effects data through such models to provide a sense of the variability of the benefit-risk balance. Here, we present a Bayesian statistical method that directly models the outcomes observed in randomized placebo-controlled trials and uses this to infer indirect comparisons between competing active treatments. The resulting treatment effects estimates are suitable for use within the MCDA setting, and it is possible to derive the distribution of the overall benefit-risk balance through Markov Chain Monte Carlo simulation. The method is illustrated using a case study of natalizumab for relapsing-remitting multiple sclerosis.
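
    The propagation step can be sketched in a few lines: draw treatment-effect parameters from their posterior, push each draw through the MCDA value function, and summarize the resulting distribution of the benefit-risk score. The outcomes, weights and trial counts below are illustrative assumptions, not the natalizumab case study or the authors' model.

```python
# Sketch: propagating posterior uncertainty in treatment effects through a
# linear MCDA value function (illustrative outcomes and weights only).
import numpy as np

rng = np.random.default_rng(0)
n_draws = 10_000

# Hypothetical posterior draws for two outcomes (probability scale):
# a benefit (relapse-free at 2 years) and a risk (serious adverse event).
p_benefit = rng.beta(120 + 1, 80 + 1, n_draws)   # posterior after 120/200 successes
p_risk    = rng.beta(6 + 1, 194 + 1, n_draws)    # posterior after 6/200 events

# MCDA step: map each outcome onto a 0-1 value scale and apply swing weights.
w_benefit, w_risk = 0.7, 0.3
value = w_benefit * p_benefit + w_risk * (1.0 - p_risk)

print("posterior mean benefit-risk score:", value.mean().round(3))
print("95% interval:", np.quantile(value, [0.025, 0.975]).round(3))
```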

  1. Statistical Modeling Efforts for Headspace Gas

    SciTech Connect

    Weaver, Brian Phillip

    2016-03-17

    The purpose of this document is to describe the statistical modeling effort for gas concentrations in WIPP storage containers. The concentration (in ppm) of CO2 in the headspace volume of standard waste box (SWB) 68685 is shown. A Bayesian approach and an adaptive Metropolis-Hastings algorithm were used.
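
    As a rough illustration of the sampling machinery mentioned above (not the report's actual model), the sketch below runs a one-parameter adaptive Metropolis-Hastings chain in which the Gaussian proposal scale is tuned toward a target acceptance rate during burn-in; the log-posterior is a placeholder.

```python
# Sketch of an adaptive Metropolis-Hastings sampler with a placeholder target.
import numpy as np

def log_post(theta):
    # Placeholder log-posterior (e.g. a log-concentration parameter); the
    # values are assumptions for illustration, not the WIPP model.
    return -0.5 * ((theta - 7.0) / 0.5) ** 2

def adaptive_mh(log_post, theta0, n_iter=20_000, burn=5_000, target=0.44):
    rng = np.random.default_rng(1)
    theta, lp = theta0, log_post(theta0)
    scale, chain, accepted = 1.0, [], 0
    for i in range(n_iter):
        prop = theta + scale * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
            accepted += 1
        if i < burn and (i + 1) % 100 == 0:          # adapt only during burn-in
            rate = accepted / (i + 1)
            scale *= np.exp(0.5 * (rate - target))   # nudge scale toward target
        if i >= burn:
            chain.append(theta)
    return np.array(chain)

samples = adaptive_mh(log_post, theta0=5.0)
print(samples.mean().round(3), samples.std().round(3))
```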

  2. Modeling Statistical Insensitivity: Sources of Suboptimal Behavior

    ERIC Educational Resources Information Center

    Gagliardi, Annie; Feldman, Naomi H.; Lidz, Jeffrey

    2017-01-01

    Children acquiring languages with noun classes (grammatical gender) have ample statistical information available that characterizes the distribution of nouns into these classes, but their use of this information to classify novel nouns differs from the predictions made by an optimal Bayesian classifier. We use rational analysis to investigate the…

  3. Bayesian networks for evaluation of evidence from forensic entomology.

    PubMed

    Andersson, M Gunnar; Sundström, Anders; Lindström, Anders

    2013-09-01

    In the aftermath of a CBRN incident, there is an urgent need to reconstruct events in order to bring the perpetrators to court and to take preventive actions for the future. The challenge is to discriminate, based on available information, between alternative scenarios. Forensic interpretation is used to evaluate to what extent results from the forensic investigation favor the prosecutors' or the defendants' arguments, using the framework of Bayesian hypothesis testing. Recently, several new scientific disciplines have been used in a forensic context. In the AniBioThreat project, the framework was applied to veterinary forensic pathology, tracing of pathogenic microorganisms, and forensic entomology. Forensic entomology is an important tool for estimating the postmortem interval in, for example, homicide investigations as a complement to more traditional methods. In this article we demonstrate the applicability of the Bayesian framework for evaluating entomological evidence in a forensic investigation through the analysis of a hypothetical scenario involving suspect movement of carcasses from a clandestine laboratory. Probabilities of different findings under the alternative hypotheses were estimated using a combination of statistical analysis of data, expert knowledge, and simulation, and entomological findings are used to update the beliefs about the prosecutors' and defendants' hypotheses and to calculate the value of evidence. The Bayesian framework proved useful for evaluating complex hypotheses using findings from several insect species, accounting for uncertainty about development rate, temperature, and precolonization. The applicability of the forensic statistic approach to evaluating forensic results from a CBRN incident is discussed.
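
    The core update is Bayes' theorem in odds form: the value of evidence is the likelihood ratio of the findings under the prosecution and defence hypotheses, which multiplies the prior odds. A minimal numerical sketch, with purely illustrative probabilities:

```python
# Value of evidence as a likelihood ratio updating prior odds (toy numbers).
p_findings_given_Hp = 0.30   # P(entomological findings | prosecution hypothesis)
p_findings_given_Hd = 0.03   # P(entomological findings | defence hypothesis)

likelihood_ratio = p_findings_given_Hp / p_findings_given_Hd   # value of evidence

prior_odds = 1.0                                   # e.g. even prior odds
posterior_odds = prior_odds * likelihood_ratio
posterior_prob_Hp = posterior_odds / (1.0 + posterior_odds)

print(f"LR = {likelihood_ratio:.1f}, P(Hp | findings) = {posterior_prob_Hp:.2f}")
```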

  4. Bayesian analysis of the flutter margin method in aeroelasticity

    NASA Astrophysics Data System (ADS)

    Khalil, Mohammad; Poirel, Dominique; Sarkar, Abhijit

    2016-12-01

    A Bayesian statistical framework is presented for the Zimmerman and Weissenburger flutter margin method, which considers the uncertainties in aeroelastic modal parameters. The proposed methodology overcomes the limitations of the previously developed least-square based estimation technique which relies on the Gaussian approximation of the flutter margin probability density function (pdf). Using the measured free-decay responses at subcritical (preflutter) airspeeds, the joint non-Gaussian posterior pdf of the modal parameters is sampled using the Metropolis-Hastings (MH) Markov chain Monte Carlo (MCMC) algorithm. The posterior MCMC samples of the modal parameters are then used to obtain the flutter margin pdfs and finally the flutter speed pdf. The usefulness of the Bayesian flutter margin method is demonstrated using synthetic data generated from a two-degree-of-freedom pitch-plunge aeroelastic model. The robustness of the statistical framework is demonstrated using different sets of measurement data. It will be shown that the probabilistic (Bayesian) approach reduces the number of test points required in providing a flutter speed estimate for a given accuracy and precision.

  5. Bayesian analysis of the flutter margin method in aeroelasticity

    SciTech Connect

    Khalil, Mohammad; Poirel, Dominique; Sarkar, Abhijit

    2016-08-27

    A Bayesian statistical framework is presented for the Zimmerman and Weissenburger flutter margin method, which considers the uncertainties in aeroelastic modal parameters. The proposed methodology overcomes the limitations of the previously developed least-square based estimation technique which relies on the Gaussian approximation of the flutter margin probability density function (pdf). Using the measured free-decay responses at subcritical (preflutter) airspeeds, the joint non-Gaussian posterior pdf of the modal parameters is sampled using the Metropolis–Hastings (MH) Markov chain Monte Carlo (MCMC) algorithm. The posterior MCMC samples of the modal parameters are then used to obtain the flutter margin pdfs and finally the flutter speed pdf. The usefulness of the Bayesian flutter margin method is demonstrated using synthetic data generated from a two-degree-of-freedom pitch-plunge aeroelastic model. The robustness of the statistical framework is demonstrated using different sets of measurement data. In conclusion, it will be shown that the probabilistic (Bayesian) approach reduces the number of test points required in providing a flutter speed estimate for a given accuracy and precision.

  6. Bayesian analysis of the flutter margin method in aeroelasticity

    DOE PAGES

    Khalil, Mohammad; Poirel, Dominique; Sarkar, Abhijit

    2016-08-27

    A Bayesian statistical framework is presented for the Zimmerman and Weissenburger flutter margin method, which considers the uncertainties in aeroelastic modal parameters. The proposed methodology overcomes the limitations of the previously developed least-square based estimation technique which relies on the Gaussian approximation of the flutter margin probability density function (pdf). Using the measured free-decay responses at subcritical (preflutter) airspeeds, the joint non-Gaussian posterior pdf of the modal parameters is sampled using the Metropolis–Hastings (MH) Markov chain Monte Carlo (MCMC) algorithm. The posterior MCMC samples of the modal parameters are then used to obtain the flutter margin pdfs and finally the flutter speed pdf. The usefulness of the Bayesian flutter margin method is demonstrated using synthetic data generated from a two-degree-of-freedom pitch-plunge aeroelastic model. The robustness of the statistical framework is demonstrated using different sets of measurement data. In conclusion, it will be shown that the probabilistic (Bayesian) approach reduces the number of test points required in providing a flutter speed estimate for a given accuracy and precision.

  7. Bayesian parameter inference and model selection by population annealing in systems biology.

    PubMed

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can also be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed running the simulations with a parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating the posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor.
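
    A much-simplified sketch of the idea, assuming a toy one-parameter simulator and a Gaussian ABC kernel whose tolerance is annealed (an analogue of the paper's algorithm, not its implementation): the particle population is reweighted at each tolerance change, resampled, and refreshed with a Metropolis move.

```python
# Simplified population-annealing-style ABC on a toy simulator.
import numpy as np

rng = np.random.default_rng(2)
y_obs = 3.0                                   # observed summary statistic

def simulate(theta):                          # toy simulator: y ~ N(theta, 1)
    return theta + rng.normal()

def distance(theta):
    return abs(simulate(theta) - y_obs)

n_particles = 2000
eps_schedule = [5.0, 2.0, 1.0, 0.5, 0.25]     # annealed ABC tolerances

theta = rng.uniform(-10, 10, n_particles)     # prior: Uniform(-10, 10)
d = np.array([distance(t) for t in theta])

for eps_prev, eps in zip(eps_schedule[:-1], eps_schedule[1:]):
    # Reweight: ratio of Gaussian ABC kernels at the new and old tolerance.
    log_w = -0.5 * d**2 * (1 / eps**2 - 1 / eps_prev**2)
    w = np.exp(log_w - log_w.max())
    idx = rng.choice(n_particles, n_particles, p=w / w.sum())   # resample
    theta, d = theta[idx], d[idx]
    # Metropolis move targeting the kernel at the current tolerance.
    prop = theta + 0.5 * rng.normal(size=n_particles)
    d_prop = np.array([distance(t) for t in prop])
    accept = np.log(rng.uniform(size=n_particles)) < -0.5 * (d_prop**2 - d**2) / eps**2
    accept &= np.abs(prop) <= 10              # stay inside the uniform prior
    theta[accept], d[accept] = prop[accept], d_prop[accept]

print("posterior mean/std of theta:", theta.mean().round(2), theta.std().round(2))
```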

  8. Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology

    PubMed Central

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can also be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed running the simulations with a parameter ensemble sampled from the posterior distribution, named the “posterior parameter ensemble”. We showed that population annealing is an efficient and convenient algorithm for generating the posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor. PMID:25089832

  9. Bayesian Modeling of Time Trends in Component Reliability Data via Markov Chain Monte Carlo Simulation

    SciTech Connect

    D. L. Kelly

    2007-06-01

    Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked subject of model validation is discussed, and summary statistics for judging the model’s ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.
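
    The flavour of such parametric time-trend modelling, including a posterior-predictive draw of the kind used for model validation, can be sketched without WinBUGS; the loglinear Poisson rate, grid posterior, and failure counts below are illustrative stand-ins for the report's models, not its actual data or code.

```python
# Sketch: loglinear time trend in Poisson failure counts, grid posterior,
# and a posterior-predictive replicate for a simple model check.
import numpy as np

years = np.arange(10)
counts = np.array([5, 4, 6, 3, 4, 2, 3, 1, 2, 1])       # hypothetical failures/year

a_grid = np.linspace(0.0, 3.0, 200)
b_grid = np.linspace(-0.5, 0.2, 200)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

# Log-likelihood of Poisson counts with rate lambda_t = exp(a + b * t),
# combined with a flat prior on (a, b).
lam = np.exp(A[..., None] + B[..., None] * years)        # shape (200, 200, 10)
loglik = (counts * np.log(lam) - lam).sum(axis=-1)
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Posterior-predictive draw of counts, to compare against the observed series.
rng = np.random.default_rng(3)
idx = rng.choice(post.size, p=post.ravel())
ai, bi = np.unravel_index(idx, post.shape)
rep = rng.poisson(np.exp(a_grid[ai] + b_grid[bi] * years))
print("observed:  ", counts)
print("replicated:", rep)
```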

  10. On becoming a Bayesian: early correspondences between J. Cornfield and L. J. Savage.

    PubMed

    Greenhouse, Joel B

    2012-10-30

    Jerome Cornfield was arguably the leading proponent for the use of Bayesian methods in biostatistics during the 1960s. Prior to 1963, however, Cornfield had no publications in the area of Bayesian statistics. At a time when frequentist methods were the dominant influence on statistical practice, Cornfield went against the mainstream and embraced Bayes. The goals of this paper are as follows: (i) to explore how and why this transformation came about and (ii) to provide some sense as to who Cornfield was and the context in which he worked.

  11. On Becoming a Bayesian: Early Correspondences between J Cornfield and LJ Savage

    PubMed Central

    Greenhouse, Joel B.

    2012-01-01

    Jerome Cornfield was arguably the leading proponent for the use of Bayesian methods in biostatistics during the 1960s. Prior to 1963, however, Cornfield had no publications in the area of Bayesian statistics. At a time when frequentist methods were the dominant influence on statistical practice, Cornfield went against the mainstream and embraced Bayes. The goal of this paper is (i) to explore how and why this transformation came about and (ii) to provide some sense as to who Cornfield was and the context in which he worked. PMID:22941781

  12. Merging Digital Surface Models Implementing Bayesian Approaches

    NASA Astrophysics Data System (ADS)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and many measurements are difficult or very costly to obtain, since the lack of data can then be compensated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied to the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, including characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
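
    At the level of a single cell, the merging reduces to a conjugate Gaussian update in which precisions add; the sketch below fuses two noisy DSMs with a smoothed prior surface standing in for the roof-smoothness assumption. All grid sizes and noise levels are invented for illustration and are not the paper's entropy-based prior.

```python
# Per-cell Gaussian fusion of two DSMs with a smoothed prior surface.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(4)
truth = np.full((50, 50), 20.0)                          # flat roof at 20 m
dsm_a = truth + rng.normal(0, 0.8, truth.shape)          # e.g. one satellite DSM
dsm_b = truth + rng.normal(0, 1.2, truth.shape)          # e.g. another DSM
var_a, var_b = 0.8**2, 1.2**2

# Prior: a smoothed surface (local mean of the first DSM) with a broad variance,
# standing in for the "smooth roof" assumption described in the abstract.
prior_mean, prior_var = uniform_filter(dsm_a, size=5), 2.0**2

# Conjugate Gaussian update: posterior precision is the sum of precisions.
post_prec = 1 / prior_var + 1 / var_a + 1 / var_b
merged = (prior_mean / prior_var + dsm_a / var_a + dsm_b / var_b) / post_prec

print("RMSE A:", np.sqrt(((dsm_a - truth) ** 2).mean()).round(3),
      "RMSE merged:", np.sqrt(((merged - truth) ** 2).mean()).round(3))
```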

  13. A Bayesian approach to estimate evoked potentials.

    PubMed

    Sparacino, Giovanni; Milani, Stefano; Arslan, Edoardo; Cobelli, Claudio

    2002-06-01

    Several approaches, based on different assumptions and with various degrees of theoretical sophistication and implementation complexity, have been developed for improving the measurement of evoked potentials (EP) performed by conventional averaging (CA). In many of these methods, one of the major challenges is the exploitation of a priori knowledge. In this paper, we present a new method where the 2nd-order statistical information on the background EEG and on the unknown EP, necessary for the optimal filtering of each sweep in a Bayesian estimation framework, is, respectively, estimated from pre-stimulus data and obtained through a multiple integration of a white noise process model. The latter model is flexible (i.e. it can be employed for a large class of EP) and simple enough to be easily identifiable from the post-stimulus data thanks to a smoothing criterion. The mean EP is determined as the weighted average of the filtered sweeps, where each weight is inversely proportional to the expected value of the norm of the corresponding filter error, a quantity determinable thanks to the employment of the Bayesian approach. The performance of the new approach is shown on both simulated and real auditory EP. A signal-to-noise ratio enhancement is obtained that can allow the (possibly automatic) identification of peak latencies and amplitudes with fewer sweeps than those required by CA. For cochlear EP, the method also allows the audiology investigator to gather new and clinically important information. The possibility of handling single-sweep analysis with further development of the method is also addressed.

  14. Bayesian Model Averaging for Propensity Score Analysis.

    PubMed

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
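
    A rough approximation of model-averaged propensity scores can be sketched with BIC-weighted posterior model probabilities rather than the R package BMA or full MCMC used in the article; the covariates and data-generating model below are invented for illustration.

```python
# Sketch: BIC-weighted model averaging of logistic propensity-score models.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 500
X = rng.normal(size=(n, 3))                               # candidate covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1]))))

models, bics, scores = [], [], []
for k in range(1, 4):
    for cols in combinations(range(3), k):
        m = LogisticRegression(C=1e6).fit(X[:, list(cols)], treat)  # ~unpenalized
        p = np.clip(m.predict_proba(X[:, list(cols)])[:, 1], 1e-12, 1 - 1e-12)
        llf = np.sum(treat * np.log(p) + (1 - treat) * np.log(1 - p))
        bics.append((len(cols) + 1) * np.log(n) - 2 * llf)
        scores.append(p)
        models.append(cols)

# Posterior model probabilities approximated as proportional to exp(-BIC/2).
bics = np.array(bics)
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()
avg_propensity = np.average(np.vstack(scores), axis=0, weights=w)
print("model weights:", dict(zip(models, w.round(3))))
```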

  15. Bayesian analysis of structural equation models with dichotomous variables.

    PubMed

    Lee, Sik-Yum; Song, Xin-Yuan

    2003-10-15

    Structural equation modelling has been used extensively in the behavioural and social sciences for studying interrelationships among manifest and latent variables. Recently, its uses have been well recognized in medical research. This paper introduces a Bayesian approach to analysing general structural equation models with dichotomous variables. In the posterior analysis, the observed dichotomous data are augmented with the hypothetical missing values, which involve the latent variables in the model and the unobserved continuous measurements underlying the dichotomous data. An algorithm based on the Gibbs sampler is developed for drawing the parameters values and the hypothetical missing values from the joint posterior distributions. Useful statistics, such as the Bayesian estimates and their standard error estimates, and the highest posterior density intervals, can be obtained from the simulated observations. A posterior predictive p-value is used to test the goodness-of-fit of the posited model. The methodology is applied to a study of hypertensive patient non-adherence to medication.
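
    The data-augmentation idea is easiest to see in a probit regression, where the Gibbs sampler alternates between imputing the latent continuous variables underlying the 0/1 responses and drawing the regression coefficients. The sketch below (an Albert-Chib style sampler on simulated data, with a flat prior) is far simpler than the structural equation models treated in the paper.

```python
# Gibbs sampler for probit regression via latent-variable data augmentation.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(6)
n, beta_true = 400, np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

V = np.linalg.inv(X.T @ X)            # posterior covariance under a flat prior
beta = np.zeros(2)
draws = []
for it in range(3000):
    # 1) Impute latent z_i ~ N(x_i'beta, 1), truncated by the observed y_i.
    mu = X @ beta
    lower = np.where(y == 1, -mu, -np.inf)
    upper = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
    # 2) Draw beta from its Gaussian full conditional given z.
    mean = V @ X.T @ z
    beta = rng.multivariate_normal(mean, V)
    if it >= 1000:
        draws.append(beta)

print("posterior mean of beta:", np.mean(draws, axis=0).round(2))
```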

  16. Bayesian restoration of ion channel records using hidden Markov models.

    PubMed

    Rosales, R; Stark, J A; Fitzgerald, W J; Hladky, S B

    2001-03-01

    Hidden Markov models have been used to restore recorded signals of single ion channels buried in background noise. Parameter estimation and signal restoration are usually carried out through likelihood maximization by using variants of the Baum-Welch forward-backward procedures. This paper presents an alternative approach for dealing with this inferential task. The inferences are made by using a combination of the framework provided by Bayesian statistics and numerical methods based on Markov chain Monte Carlo stochastic simulation. The reliability of this approach is tested by using synthetic signals of known characteristics. The expectations of the model parameters estimated here are close to those calculated using the Baum-Welch algorithm, but the present methods also yield estimates of their errors. Comparisons of the results of the Bayesian Markov Chain Monte Carlo approach with those obtained by filtering and thresholding demonstrate clearly the superiority of the new methods.

  17. An Overview of Bayesian Methods for Neural Spike Train Analysis

    PubMed Central

    2013-01-01

    Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed. PMID:24348527

  18. Bayesian and maximum likelihood estimation of hierarchical response time models

    PubMed Central

    Farrell, Simon; Ludwig, Casimir

    2008-01-01

    Hierarchical (or multilevel) statistical models have become increasingly popular in psychology in the last few years. We consider the application of multilevel modeling to the ex-Gaussian, a popular model of response times. Single-level estimation is compared with hierarchical estimation of parameters of the ex-Gaussian distribution. Additionally, for each approach maximum likelihood (ML) estimation is compared with Bayesian estimation. A set of simulations and analyses of parameter recovery show that although all methods perform adequately well, hierarchical methods are better able to recover the parameters of the ex-Gaussian by reducing the variability in recovered parameters. At each level, little overall difference was observed between the ML and Bayesian methods. PMID:19001592
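
    At the single level, a quick way to see the ex-Gaussian model in action is to simulate response times as the sum of a Gaussian and an exponential component and recover the parameters by maximum likelihood; scipy parameterizes this distribution as exponnorm with shape K = tau/sigma. The parameter values below are illustrative.

```python
# Simulate ex-Gaussian response times and fit them by maximum likelihood.
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(7)
mu, sigma, tau = 0.40, 0.05, 0.15                       # seconds (illustrative)
rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

K_hat, loc_hat, scale_hat = exponnorm.fit(rt)           # K = tau / sigma
print(f"mu ~ {loc_hat:.3f}, sigma ~ {scale_hat:.3f}, tau ~ {K_hat * scale_hat:.3f}")
```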

  19. Bayesian inference from count data using discrete uniform priors.

    PubMed

    Comoglio, Federico; Fracchia, Letizia; Rinaldi, Maurizio

    2013-01-01

    We consider a set of sample counts obtained by sampling arbitrary fractions of a finite volume containing an homogeneously dispersed population of identical objects. We report a Bayesian derivation of the posterior probability distribution of the population size using a binomial likelihood and non-conjugate, discrete uniform priors under sampling with or without replacement. Our derivation yields a computationally feasible formula that can prove useful in a variety of statistical problems involving absolute quantification under uncertainty. We implemented our algorithm in the R package dupiR and compared it with a previously proposed Bayesian method based on a Gamma prior. As a showcase, we demonstrate that our inference framework can be used to estimate bacterial survival curves from measurements characterized by extremely low or zero counts and rather high sampling fractions. All in all, we provide a versatile, general purpose algorithm to infer population sizes from count data, which can find application in a broad spectrum of biological and physical problems.
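
    For a single count, the computation reduces to a binomial likelihood evaluated over a discrete uniform prior on the population size; the sketch below (with an invented count, sampling fraction, and prior support, under sampling with replacement) illustrates the posterior summaries, while the dupiR package implements the full method.

```python
# Posterior over a population size N from one sampled count, discrete uniform prior.
import numpy as np
from scipy.stats import binom

k, f = 7, 0.25                          # observed count, sampled fraction
N = np.arange(k, 201)                   # discrete uniform prior support
posterior = binom.pmf(k, N, f)          # binomial likelihood, flat prior
posterior /= posterior.sum()

post_mean = np.sum(N * posterior)
cdf = np.cumsum(posterior)
ci = (N[np.searchsorted(cdf, 0.025)], N[np.searchsorted(cdf, 0.975)])
print(f"posterior mean N ~ {post_mean:.1f}, 95% interval {ci}")
```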

  20. Bayesian Blocks: A New Method to Analyze Photon Counting Data

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.; Bloom, Elliott D.; Young, Richard E. (Technical Monitor)

    1997-01-01

    A Bayesian analysis of photon-counting data leads to a new time-domain algorithm for detecting localized structures (bursts), revealing pulse shapes, and generally characterizing intensity variations. The raw counting data -- time-tag events (TTE), time-to-spill (TTS) data, or binned counts -- is converted to a maximum likelihood segmentation of the observation into time intervals during which the photon arrival rate is perceptibly constant -- i.e. has a fixed intensity without statistically significant variations. The resulting structures, Bayesian Blocks, can be thought of as bins with arbitrary spacing determined by the data. The method itself sets no lower limit to the time scale on which variability can be detected. We have applied the method to RXTE data on Cyg X-1, yielding information on this source's short-time-scale variability.
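
    An implementation of Bayesian Blocks is available in astropy as astropy.stats.bayesian_blocks; the usage sketch below applies it to simulated time-tag events containing a burst on a constant background (the event times and the p0 false-alarm setting are illustrative assumptions).

```python
# Usage sketch of Bayesian Blocks on simulated time-tag events (TTE).
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(8)
background = rng.uniform(0, 100, 400)              # constant-rate photons
burst = rng.uniform(40, 45, 200)                   # a short, bright burst
t = np.sort(np.concatenate([background, burst]))   # event arrival times

edges = bayesian_blocks(t, fitness="events", p0=0.01)
print("change points (block edges):", np.round(edges, 1))
```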

  1. Bayesian non parametric modelling of Higgs pair production

    NASA Astrophysics Data System (ADS)

    Scarpa, Bruno; Dorigo, Tommaso

    2017-03-01

    Statistical classification models are commonly used to separate a signal from a background. In this talk we face the problem of isolating the signal of Higgs pair production using the decay channel in which each boson decays into a pair of b-quarks. Typically in this context non-parametric methods are used, such as Random Forests or different types of boosting tools. We remain in the same non-parametric framework, but we propose to approach the problem from a Bayesian perspective. A Dirichlet process is used as the prior for the random effects in a logit model, which is fitted by leveraging the Polya-Gamma data augmentation. Refinements of the model include the insertion of P-splines into the simple model to relate explanatory variables to the response, and the use of Bayesian trees (BART) to describe the atoms in the Dirichlet process.

  2. A Bayesian sequential processor approach to spectroscopic portal system decisions

    SciTech Connect

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper, the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed, along with the first results of our research.
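
    The sequential logic can be sketched as a per-event update of the posterior odds that a source is present, with a decision declared as soon as the odds cross a threshold. The energy models, mixture weight, and threshold below are invented placeholders, not the authors' physics or signal-processing models.

```python
# Sketch: sequential Bayesian detection from a stream of photon energies.
import numpy as np
from scipy.stats import norm, uniform

rng = np.random.default_rng(9)

# Hypothetical energy models: background flat in 0-3 MeV, a source adds a
# gamma line near 0.662 MeV (illustrative values only).
bkg = uniform(0, 3)
line = norm(0.662, 0.02)
def src_pdf(e):                      # mixture: 30% line photons, 70% background
    return 0.3 * line.pdf(e) + 0.7 * bkg.pdf(e)

# Simulate a stream of energies from a weak source plus background.
energies = np.where(rng.uniform(size=200) < 0.3,
                    line.rvs(200, random_state=rng),
                    bkg.rvs(200, random_state=rng))

log_odds, threshold = 0.0, np.log(1000)          # prior odds 1:1, decide at 1000:1
for i, e in enumerate(energies, 1):
    log_odds += np.log(src_pdf(e) / bkg.pdf(e))  # per-event Bayesian update
    if log_odds > threshold:
        print(f"source declared after {i} events")
        break
```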

  3. CRAFT (complete reduction to amplitude frequency table)--robust and time-efficient Bayesian approach for quantitative mixture analysis by NMR.

    PubMed

    Krishnamurthy, Krish

    2013-12-01

    The intrinsic quantitative nature of NMR is increasingly exploited in areas ranging from complex mixture analysis (as in metabolomics and reaction monitoring) to quality assurance/control. Complex NMR spectra are more common than not, and therefore, extraction of quantitative information generally involves significant prior knowledge and/or operator interaction to characterize resonances of interest. Moreover, in most NMR-based metabolomic experiments, the signals from metabolites are normally present as a mixture of overlapping resonances, making quantification difficult. Time-domain Bayesian approaches have been reported to be better than conventional frequency-domain analysis at identifying subtle changes in signal amplitude. We discuss an approach that exploits Bayesian analysis to achieve a complete reduction to amplitude frequency table (CRAFT) in an automated and time-efficient fashion - thus converting the time-domain FID to a frequency-amplitude table. CRAFT uses a two-step approach to FID analysis. First, the FID is digitally filtered and downsampled to several sub FIDs, and secondly, these sub FIDs are then modeled as sums of decaying sinusoids using the Bayesian approach. CRAFT tables can be used for further data mining of quantitative information using fingerprint chemical shifts of compounds of interest and/or statistical analysis of modulation of chemical quantity in a biological study (metabolomics) or process study (reaction monitoring) or quality assurance/control. The basic principles behind this approach as well as results to evaluate the effectiveness of this approach in mixture analysis are presented.

  4. A Preliminary Bayesian Analysis of Incomplete Longitudinal Data from a Small Sample: Methodological Advances in an International Comparative Study of Educational Inequality

    ERIC Educational Resources Information Center

    Hsieh, Chueh-An; Maier, Kimberly S.

    2009-01-01

    The capacity of Bayesian methods in estimating complex statistical models is undeniable. Bayesian data analysis is seen as having a range of advantages, such as an intuitive probabilistic interpretation of the parameters of interest, the efficient incorporation of prior information to empirical data analysis, model averaging and model selection.…

  5. Bayesian bias adjustments of the lung cancer SMR in a cohort of German carbon black production workers

    PubMed Central

    2010-01-01

    Background A German cohort study on 1,528 carbon black production workers estimated an elevated lung cancer SMR ranging from 1.8-2.2 depending on the reference population. No positive trends with carbon black exposures were noted in the analyses. A nested case control study, however, identified smoking and previous exposures to known carcinogens, such as crystalline silica, received prior to work in the carbon black industry as important risk factors. We used a Bayesian procedure to adjust the SMR, based on a prior of seven independent parameter distributions describing smoking behaviour and crystalline silica dust exposure (as indicator of a group of correlated carcinogen exposures received previously) in the cohort and population as well as the strength of the relationship of these factors with lung cancer mortality. We implemented the approach by Markov Chain Monte Carlo Methods (MCMC) programmed in R, a statistical computing system freely available on the internet, and we provide the program code. Results When putting a flat prior to the SMR a Markov chain of length 1,000,000 returned a median posterior SMR estimate (that is, the adjusted SMR) in the range between 1.32 (95% posterior interval: 0.7, 2.1) and 1.00 (0.2, 3.3) depending on the method of assessing previous exposures. Conclusions Bayesian bias adjustment is an excellent tool to effectively combine data about confounders from different sources. The usually calculated lung cancer SMR statistic in a cohort of carbon black workers overestimated effect and precision when compared with the Bayesian results. Quantitative bias adjustment should become a regular tool in occupational epidemiology to address narrative discussions of potential distortions. PMID:20701747
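
    A heavily simplified version of the idea, assuming a single confounder (smoking) and the standard external-adjustment formula rather than the seven-parameter MCMC model of the paper, can be sketched as a Monte Carlo bias analysis; all prior distributions below are illustrative.

```python
# Sketch: Monte Carlo bias adjustment of a crude SMR for one confounder.
import numpy as np

rng = np.random.default_rng(10)
n_draws = 100_000
crude_smr = 1.8

# Illustrative priors for the smoking relative risk and for smoking
# prevalence in the cohort and in the reference population.
rr_smoke = rng.lognormal(np.log(10), 0.2, n_draws)
p_cohort = rng.beta(60, 40, n_draws)          # ~60% smokers in the cohort
p_ref    = rng.beta(35, 65, n_draws)          # ~35% in the reference population

# Expected confounding bias (external adjustment), then divide it out.
bias = (1 + p_cohort * (rr_smoke - 1)) / (1 + p_ref * (rr_smoke - 1))
adjusted_smr = crude_smr / bias

print("median adjusted SMR:", np.round(np.median(adjusted_smr), 2),
      "95% interval:", np.round(np.quantile(adjusted_smr, [0.025, 0.975]), 2))
```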

  6. The effect of calcium ions on the binomial statistic parameters that control acetylcholine release at preganglionic nerve terminals.

    PubMed Central

    Bennett, M R; Florin, T; Pettigrew, A G

    1976-01-01

    1. A study has been made of the effects of changing [Ca]o and [Mg]o on the binomial statistic parameters p and n that control the average quantal content (m) of the excitatory post-synaptic potential (e.p.s.p.) due to acetylcholine release at preganglionic nerve terminals. 2. When [Ca]o was increased in the range from 0.2 to 0.5 mM, p increased as the first power of [Ca]o whereas n increased as the 0.5 power of [Ca]o; when [Mg]o was increased in the range from 5 to 200 mM, p decreased as the first power of [Mg]o whereas n decreased as the 0.5 power of [Mg]o. 3. The increase in quantal release of a test impulse following a conditioning impulse was primarily due to an increase in n; the increase in quantal content of successive e.p.s.p.s in a short train was due to an increase in n and p, and the increase in n was quantitatively described in terms of the accumulation of a Ca-receptor complex in the nerve terminal. 4. The decrease in quantal content of successive e.p.s.p.s during long trains of impulses over several minutes was primarily due to a decrease in n. These results are discussed in terms of an hypothesis concerning the physical basis of n and p in the release process. PMID:181562

  7. Bayesian seismology of the Sun

    NASA Astrophysics Data System (ADS)

    Gruberbauer, M.; Guenther, D. B.

    2013-06-01

    We perform a Bayesian grid-based analysis of the solar l = 0, 1, 2 and 3 p modes obtained via BiSON in order to deliver the first Bayesian asteroseismic analysis of the solar composition problem. We do not find decisive evidence to prefer either of the contending chemical compositions, although the revised solar abundances (AGSS09) are more probable in general. We do find indications for systematic problems in standard stellar evolution models, unrelated to the consequences of inadequate modelling of the outer layers on the higher order modes. The seismic observables are best fitted by solar models that are several hundred million years older than the meteoritic age of the Sun. Similarly, meteoritic age calibrated models do not adequately reproduce the observed seismic observables. Our results suggest that these problems will affect any asteroseismic inference that relies on a calibration to the Sun.

  8. Bayesian estimation of turbulent motion.

    PubMed

    Héas, Patrick; Herzet, Cédric; Mémin, Etienne; Heitz, Dominique; Mininni, Pablo D

    2013-06-01

    Based on physical laws describing the multiscale structure of turbulent flows, this paper proposes a regularizer for fluid motion estimation from an image sequence. Regularization is achieved by imposing a scale invariance property between histograms of motion increments computed at different scales. By reformulating this problem from a Bayesian perspective, an algorithm is proposed to jointly estimate the motion and the regularization hyperparameters, and to select the most likely physical prior among a set of models. Hyperparameter and model inference are conducted by posterior maximization, obtained by marginalizing out non-Gaussian motion variables. The Bayesian estimator is assessed on several image sequences depicting synthetic and real turbulent fluid flows. Results obtained with the proposed approach exceed the state-of-the-art results in fluid flow estimation.

  9. Deep Learning and Bayesian Methods

    NASA Astrophysics Data System (ADS)

    Prosper, Harrison B.

    2017-03-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  10. Bayesian inference for agreement measures.

    PubMed

    Vidal, Ignacio; de Castro, Mário

    2016-08-25

    The agreement of different measurement methods is an important issue in several disciplines like, for example, Medicine, Metrology, and Engineering. In this article, some agreement measures, common in the literature, were analyzed from a Bayesian point of view. Posterior inferences for such agreement measures were obtained based on well-known Bayesian inference procedures for the bivariate normal distribution. As a consequence, a general, simple, and effective method is presented, which does not require Markov Chain Monte Carlo methods and can be applied considering a great variety of prior distributions. Illustratively, the method was exemplified using five objective priors for the bivariate normal distribution. A tool for assessing the adequacy of the model is discussed. Results from a simulation study and an application to a real dataset are also reported.

  11. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  12. Bayesian Treaty Monitoring: Preliminary Report

    DTIC Science & Technology

    2011-09-01

    [The record displays citation fragments in place of an abstract:] Arora, N. S., S. J. Russell, P. Kidwell, and E. Sudderth (2011b). Global seismic monitoring: A Bayesian approach. In Proc. AAAI-11, San Francisco. Arora, N. S., S. J. Russell, P. Kidwell, and E. Sudderth (2011a). Global seismic monitoring as probabilistic inference. In Advances in Neural Information Processing ... American Geophysical Union, 90(52), Fall Meeting Supplement, Abstract S31B-1713. Arora, N., Russell, S., de Salvo Braz, R., and Sudderth, E. (2010b ...

  13. Bayesian inference for radio observations

    NASA Astrophysics Data System (ADS)

    Lochner, Michelle; Natarajan, Iniyan; Zwart, Jonathan T. L.; Smirnov, Oleg; Bassett, Bruce A.; Oozeer, Nadeem; Kunz, Martin

    2015-06-01

    New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inadequate uncertainty estimates and biased results because any correlations between parameters are ignored. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realization of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. This enables it to derive both correlations and accurate uncertainties, making use of the flexible software MEQTREES to model the sky and telescope simultaneously. We demonstrate BIRO with two simulated sets of Westerbork Synthesis Radio Telescope data sets. In the first, we perform joint estimates of 103 scientific (flux densities of sources) and instrumental (pointing errors, beamwidth and noise) parameters. In the second example, we perform source separation with BIRO. Using the Bayesian evidence, we can accurately select between a single point source, two point sources and an extended Gaussian source, allowing for `super-resolution' on scales much smaller than the synthesized beam.

  14. Quantum Inference on Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Yoder, Theodore; Low, Guang Hao; Chuang, Isaac

    2014-03-01

    Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speed up sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values e of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on n variables with at most m parents per node in time O(nmP(e)^(-1)), depending critically on P(e), the probability the evidence might occur in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking O(n2^m P(e)^(-1/2)) time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.

  15. Building classifiers using Bayesian networks

    SciTech Connect

    Friedman, N.; Goldszmidt, M.

    1996-12-31

    Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state of the art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we examine and evaluate approaches for inducing classifiers from data, based on recent results in the theory of learning Bayesian networks. Bayesian networks are factored representations of probability distributions that generalize the naive Bayes classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness which are characteristic of naive Bayes. We experimentally tested these approaches using benchmark problems from the U. C. Irvine repository, and compared them against C4.5, naive Bayes, and wrapper-based feature selection methods.

  16. ANUBIS: artificial neuromodulation using a Bayesian inference system.

    PubMed

    Smith, Benjamin J H; Saaj, Chakravarthini M; Allouis, Elie

    2013-01-01

    Gain tuning is a crucial part of controller design and depends not only on an accurate understanding of the system in question, but also on the designer's ability to predict what disturbances and other perturbations the system will encounter throughout its operation. This letter presents ANUBIS (artificial neuromodulation using a Bayesian inference system), a novel biologically inspired technique for automatically tuning controller parameters in real time. ANUBIS is based on the Bayesian brain concept and modifies it by incorporating a model of the neuromodulatory system comprising four artificial neuromodulators. It has been applied to the controller of EchinoBot, a prototype walking rover for Martian exploration. ANUBIS has been implemented at three levels of the controller; gait generation, foot trajectory planning using Bézier curves, and foot trajectory tracking using a terminal sliding mode controller. We compare the results to a similar system that has been tuned using a multilayer perceptron. The use of Bayesian inference means that the system retains mathematical interpretability, unlike other intelligent tuning techniques, which use neural networks, fuzzy logic, or evolutionary algorithms. The simulation results show that ANUBIS provides significant improvements in efficiency and adaptability of the three controller components; it allows the robot to react to obstacles and uncertainties faster than the system tuned with the MLP, while maintaining stability and accuracy. As well as advancing rover autonomy, ANUBIS could also be applied to other situations where operating conditions are likely to change or cannot be accurately modeled in advance, such as process control. In addition, it demonstrates one way in which neuromodulation could fit into the Bayesian brain framework.

  17. Analyzing bioassay data using Bayesian methods--a primer.

    PubMed

    Miller, G; Inkret, W C; Schillaci, M E; Martz, H F; Little, T T

    2000-06-01

    The classical statistics approach used in health physics for the interpretation of measurements is deficient in that it does not take into account "needle in a haystack" effects, that is, correct identification of events that are rare in a population. This is often the case in health physics measurements, and the false positive fraction (the fraction of results measuring positive that are actually zero) is often very large using the prescriptions of classical statistics. Bayesian statistics provides a methodology to minimize the number of incorrect decisions (wrong calls): false positives and false negatives. We present the basic method and a heuristic discussion. Examples are given using numerically generated and real bioassay data for tritium. Various analytical models are used to fit the prior probability distribution in order to test the sensitivity to choice of model. Parametric studies show that for typical situations involving rare events the normalized Bayesian decision level k(alpha) = Lc/sigma0, where sigma0 is the measurement uncertainty for zero true amount, is in the range of 3 to 5 depending on the true positive rate. Four times sigma0 rather than approximately two times sigma0, as in classical statistics, would seem a better choice for the decision level in these situations.
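
    The effect of the decision level on the false positive fraction can be illustrated with a two-component population (mostly true zeros plus a small fraction of real intakes) and Gaussian measurement error; the rates and signal size below are illustrative, not the paper's bioassay models.

```python
# Sketch: false positive fraction as a function of the decision level.
import numpy as np
from scipy.stats import norm

sigma0 = 1.0                   # measurement uncertainty for zero true amount
true_positive_rate = 0.01      # only 1% of the monitored population has an intake
signal = 5.0 * sigma0          # typical size of a real intake, if present

for k in (1.645, 3.0, 4.0):    # classical Lc (~1.645*sigma0) vs higher levels
    Lc = k * sigma0
    p_fp = (1 - true_positive_rate) * norm.sf(Lc / sigma0)          # true zero
    p_tp = true_positive_rate * norm.sf((Lc - signal) / sigma0)     # real intake
    false_positive_fraction = p_fp / (p_fp + p_tp)
    print(f"k = {k:4.2f}: false positive fraction = {false_positive_fraction:.2f}")
```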

  18. Analyzing bioassay data using Bayesian methods -- A primer

    SciTech Connect

    Miller, G.; Inkret, W.C.; Schillaci, M.E.

    1997-10-16

    The classical statistics approach used in health physics for the interpretation of measurements is deficient in that it does not allow for the consideration of needle in a haystack effects, where events that are rare in a population are being detected. In fact, this is often the case in health physics measurements, and the false positive fraction is often very large using the prescriptions of classical statistics. Bayesian statistics provides an objective methodology to ensure acceptably small false positive fractions. The authors present the basic methodology and a heuristic discussion. Examples are given using numerically generated and real bioassay data (Tritium). Various analytical models are used to fit the prior probability distribution, in order to test the sensitivity to choice of model. Parametric studies show that the normalized Bayesian decision level k(alpha) = Lc/sigma0, where sigma0 is the measurement uncertainty for zero true amount, is usually in the range from 3 to 5 depending on the true positive rate. Four times sigma0 rather than approximately two times sigma0, as in classical statistics, would often seem a better choice for the decision level.

  19. Using Bayesian neural networks to classify forest scenes

    NASA Astrophysics Data System (ADS)

    Vehtari, Aki; Heikkonen, Jukka; Lampinen, Jouko; Juujarvi, Jouni

    1998-10-01

    We present results that compare the performance of Bayesian learning methods for neural networks on the task of classifying forest scenes into trees and background. The classification task is demanding due to the texture richness of the trees, occlusions of the forest scene objects and diverse lighting conditions under operation. This makes it difficult to determine which are the optimal image features for the classification. A natural way to proceed is to extract many different types of potentially suitable features and to evaluate their usefulness in later processing stages. One approach to cope with a large number of features is to use Bayesian methods to control the model complexity. Bayesian learning uses a prior on model parameters, combines this with evidence from the training data, and then integrates over the resulting posterior to make predictions. With this method, we can use large networks and many features without fear of overfitting. For this classification task we compare two Bayesian learning methods for multi-layer perceptron (MLP) neural networks: (1) The evidence framework of MacKay uses a Gaussian approximation to the posterior weight distribution and maximizes with respect to hyperparameters. (2) In a Markov Chain Monte Carlo (MCMC) method due to Neal, the posterior distribution of the network parameters is numerically integrated using the MCMC method. As baseline classifiers for comparison we use (3) an MLP early-stop committee, (4) K-nearest-neighbor and (5) Classification And Regression Tree.

  20. Statistical Challenges of Astronomy

    NASA Astrophysics Data System (ADS)

    Feigelson, Eric D.; Babu, G. Jogesh

    Digital sky surveys, data from orbiting telescopes, and advances in computation have increased the quantity and quality of astronomical data by several orders of magnitude in recent years. Making sense of this wealth of data requires sophisticated statistical and data analytic techniques. Fortunately, statistical methodologies have similarly made great strides in recent years. Powerful synergies thus emerge when astronomers and statisticians join in examining astrostatistical problems and approaches. The volume focuses on several themes: · The increasing power of Bayesian approaches to modeling astronomical data · The growth of enormous databases, leading to an emerging federated Virtual Observatory, and their impact on modern astronomical research · Statistical modeling of critical datasets, such as galaxy clustering and fluctuations in the microwave background radiation, leading to a new era of precision cosmology · Methodologies for uncovering clusters and patterns in multivariate data · The characterization of multiscale patterns in imaging and time series data. As in earlier volumes in this series, research contributions discussing topics in one field are joined with commentary from scholars in the other. Short contributed papers covering dozens of astrostatistical topics are also included.